# Modelling Errors in X-ray Fluoroscopic Imaging Systems Using Photogrammetric Bundle Adjustment with a Data-Driven Self-Calibration Approach

J. C. K. Chow, Department of Medicine, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada - [email protected]

K. D. Ang, Department of Research and Development, Vision Technologies, Calgary, Alberta, Canada, and Department of Computer Science, Faculty of Science, University of Calgary, Calgary, Alberta, Canada - [email protected]

D. D. Lichti and K. Al-Durgham, Department of Geomatics Engineering, Schulich School of Engineering, University of Calgary, Calgary, Alberta, Canada - (ddlichti, [email protected]

G. Kuntze, G. Sharma, and J. Ronsky, Department of Mechanical and Manufacturing Engineering, Schulich School of Engineering, University of Calgary, Calgary, Alberta, Canada - (gkuntze, gbsharma, jfonsky)@ucalgary.ca

## 1 Introduction

X-ray fluoroscopy is a valuable diagnostic imaging modality in gastroenterology, radiology, orthopaedics, and many other medical specialities. For example, a barium swallow under a fluoroscope is the key confirmatory tool when a physician suspects achalasia in patients presenting with progressive esophageal dysphagia pertaining to both solids and liquids. Contrast fluoroscopy allows for the study of gastrointestinal tract motility by measuring any obstructions and tracking the velocity of a fluid bolus. In order to accurately measure the diameter of the gastrointestinal tract and/or quantify the velocity of particles, the fluoroscopic imaging system needs to be calibrated to minimize the systematic errors. This article proposes a scalable data-driven approach to model the distortions experienced in X-ray fluoroscopy. Quality control is performed by comparing the reconstructed object space positions and the relative sensor positions to a reference solution.

## 2 Background

Direct Linear Transformation (DLT) type methods are often used for calibrating fluoroscopic imaging systems because of their computational simplicity (You et al., 2001). However, they assume that the systematic errors in every image frame are independent of the other image frames acquired by the same sensor. In a well-constructed and stable system, the systematic errors should be relatively constant with a very slow drift (in other words, the system has high temporal stability). Therefore, it is reasonable to hypothesize that a set of X-ray images captured by the same sensor will share the same distortion profile. By considering a set of images concurrently in the distortion modelling process, the accuracy and robustness of the calibration can be improved. To estimate a unique set of calibration parameters for a dual fluoroscopic X-ray imaging system, Lichti et al. (2015) demonstrated the use of the photogrammetric bundle adjustment method to combine image space information extracted from 300 images. The authors reported achieving up to 71% improvement in 3D reconstruction accuracy after calibration. This method was further extended in Al-Durgham et al.
(2016), where a semi-automatic target extraction and matching function was added to make the entire calibration process more efficient and user-friendly. However, up until now an expert photogrammetrist was still required to study the residuals graphically and perform statistical analysis to determine the appropriate model complexity. In this paper, a data-driven approach is proposed to help make the calibration procedure operator-independent by automatically selecting the most appropriate distortion profile based on the input data during bundle adjustment.

## 3 Mathematical Model

The fundamental basis of the proposed calibration method is that the physical arrangement of the high-speed camera, X-ray source, and image intensifier found in a fluoroscopic imaging system can be mathematically approximated by a pin-hole camera model. In this model, no lens or sensor distortions are assumed. A target point in the image space, the homologous target point in the object space, and the hypothetical perspective centre of the camera are assumed to be collinear. This allows well-established camera registration techniques (such as the photogrammetric bundle adjustment) to be applied, which relate multiple radiographs with the calibration target field at various positions and orientations (Equation 1).

\\[p_{ij}=q_{j}\\big{(}P_{i}-T_{j}\\big{)}q_{j}^{c} \\tag{1}\\]

where,
\\(p_{ij}=[x_{ij}-x_{p}-\\Delta x,\\;y_{ij}-y_{p}-\\Delta y,\\;-c]^{T}\\) is the image measurement coordinates of target \\(i\\) in exposure \\(j\\)
\\(P_{i}=[X_{i},\\,Y_{i},\\,Z_{i}]^{T}\\) is the object space coordinates of target \\(i\\) on the phantom
\\(T_{j}=[X_{j},\\,Y_{j},\\,Z_{j}]^{T}\\) is the position of the X-ray system relative to the phantom in exposure \\(j\\)
\\(q_{j}\\) is the unit quaternion representing the rotation of the X-ray system relative to the phantom in exposure \\(j\\). Superscript 'c' represents the quaternion conjugate.

This is an idealized model that requires a minimum of three non-collinear targets to solve. In practice, many more targets are observed and statistical inference techniques such as Maximum Likelihood Estimation (MLE) are used to obtain the best estimate of the unknown interior orientation parameters (IOP = \\([x_{p},\\,y_{p},\\,c]\\)), exterior orientation parameters (EOP = \\([T_{j},\\,q_{j}]\\)), and object space target coordinates. With a high redundancy, additional parameters (AP = \\([\\Delta x,\\,\\Delta y]\\)) can be included in the bundle adjustment to model any systematic errors in the device. Selecting the proper model complexity to estimate the systematic errors requires a delicate balance between bias and variance. Having too many AP will result in an over-fitting problem, while not including enough AP will result in a high bias. Previously, Lichti et al. (2015) proposed a systematic error model with up to 31 parameters. Choosing the optimal number of parameters is a lengthy process: an experienced photogrammetrist will need to perform the bundle adjustment with self-calibration many times, starting with a simple model (e.g. no AP) and gradually increasing the model complexity (e.g. adding one AP to the adjustment at a time). With each bundle adjustment, graphical and statistical analysis is performed to assess the added value of the new parameter.
If the new parameter reduces the root-mean-squared-error (RMSE) of the observation residuals, is found to be statistically significant using the Student \\(t\\)-test, and reduces any systematic trends visually seen in the residual plots, then that AP is deemed relevant in the model. In cases where not all of the above conditions are satisfied, an expert's judgement is required to determine if that AP should be added. Even for a well-trained photogrammetrist this can become a labour-intensive process since there is such a large number of potential AP to choose from. For a non-expert, the upper limit of possible models to select from can be estimated using the following expression:

\\[\\sum_{n=0}^{31}C(31,n)=\\sum_{n=0}^{31}\\frac{31!}{n!\\,(31-n)!}=2^{31}\\approx 2\\times 10^{9}\\]

To automate this model selection process and to make it operator-independent, a k-nearest-neighbour (kNN) regression approach is used to model the systematic errors in the residuals. This data-driven machine learning approach assumes that spatially nearby residuals are correlated, and can therefore be approximated by averaging the k nearest residuals. It is considered a parameter-free approach (as no parameters need to be learned using least-squares estimation) and can be highly scalable to large amounts of data if a KD-tree structure is used for organizing the data, since the residuals are only in 2D. However, one hyperparameter (k) still needs to be tuned, which indirectly defines the neighbourhood size used in the regression. To tune the k parameter using the data itself, 10-fold cross-validation was used. The weighted L2-norm is used as the error metric (Equation 2) with a grid search approach to find the optimal k.

\\[G=\\big{(}\\vec{r}-\\vec{g}(x,y)\\big{)}^{T}C_{r}^{-1}\\big{(}\\vec{r}-\\vec{g}(x,y)\\big{)} \\tag{2}\\]

where,
\\(\\vec{r}\\) is the vector of residuals
\\(C_{r}\\) is the covariance matrix of the residuals
\\(x\\) and \\(y\\) are the Cartesian coordinates in image space
\\(\\vec{g}(x,y)\\) is the vector of predicted residuals

### Proposed Methodology

The X-ray calibration approach adopted in this paper draws on the concept of grey-box system identification. By initializing \\(\\Delta x\\) and \\(\\Delta y\\) to zero, a robust photogrammetric bundle adjustment is first performed by minimizing the negative logarithm of the Student-t probability distribution (Equation 3). This is equivalent to finding the point of maximum likelihood. Since the collinearity condition is non-linear, the model is linearized using a first-order Taylor series expansion and the unknown parameters are updated iteratively. At every iteration, the step is calculated using the popular trust-region method, the Levenberg-Marquardt algorithm. Once the MLE has converged, the residuals and corresponding variance-covariance matrix are computed. This then serves as the input to the second step, which is the kNN regression.
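As an illustration of this second step, the sketch below shows how a KD-tree-backed kNN regressor with a 10-fold cross-validated grid search over k could be set up. It is a minimal sketch, not the authors' implementation: it assumes scikit-learn is available, substitutes an unweighted mean-squared error for the weighted L2-norm of Equation 2, and uses synthetic residual data with illustrative variable names.

```python
# Sketch of the kNN residual-regression step, assuming scikit-learn.
# The paper's weighted L2-norm (Equation 2) is replaced here by an
# unweighted mean-squared error for simplicity; pixel_xy and
# residuals_xy are illustrative names, not from the paper.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV

def fit_distortion_regressor(pixel_xy, residuals_xy):
    """Fit a kNN regressor mapping (x, y) pixel locations to the
    (dx, dy) bundle-adjustment residuals, tuning k by 10-fold CV."""
    param_grid = {"n_neighbors": list(range(3, 51, 2))}  # grid search over k
    knn = KNeighborsRegressor(algorithm="kd_tree")       # KD-tree: fast in 2D
    search = GridSearchCV(knn, param_grid, cv=10,
                          scoring="neg_mean_squared_error")
    search.fit(pixel_xy, residuals_xy)
    return search.best_estimator_, search.best_params_["n_neighbors"]

# Usage: predict the distortion correction at new pixel locations.
rng = np.random.default_rng(0)
pixel_xy = rng.uniform(0, 1024, size=(500, 2))           # inlier image points
residuals_xy = 0.01 * np.sin(pixel_xy / 200.0)           # synthetic residuals
regressor, best_k = fit_distortion_regressor(pixel_xy, residuals_xy)
correction = regressor.predict(np.array([[512.0, 512.0]]))
```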
\\[F=\\frac{\\Gamma\\big{(}\\frac{v+D}{2}\\big{)}}{\\Gamma\\big{(}\\frac{v}{2}\\big{)}(v\\pi)^{\\frac{D}{2}}|C|^{\\frac{1}{2}}}\\Bigg{[}1+\\frac{\\big{(}\\vec{l}-f(\\vec{\\theta})\\big{)}^{T}C^{-1}\\big{(}\\vec{l}-f(\\vec{\\theta})\\big{)}}{v}\\Bigg{]}^{-\\frac{v+D}{2}} \\tag{3}\\]

where,
\\(\\vec{l}\\) is the vector of image measurements
\\(C\\) is the covariance matrix of the observations
\\(\\vec{\\theta}\\) is the vector of unknown parameters
\\(f(\\vec{\\theta})\\) represents the collinearity condition
\\(v\\) is the degrees-of-freedom
\\(D\\) is the number of observations

During the kNN regression, the best estimate of \\(\\Delta\\)AP is determined after automatically tuning the hyperparameter \\(k\\) using cross-validation. Only the residual data that were considered inliers in the bundle adjustment are used for training the regressor. The \\(\\Delta x\\) and \\(\\Delta y\\) are then updated by the \\(\\Delta\\)AP predicted by the regressor (AP\\({}_{\\text{new}}=\\) AP\\({}_{\\text{previous}}+\\Delta\\)AP). The process of performing a robust photogrammetric bundle adjustment step followed by the kNN regression is then repeated until convergence. This iterative self-calibration adjustment converges when both the weighted cost function of the bundle adjustment (i.e. \\(F\\)) and that of the kNN regression (i.e. \\(G\\)) are minimized. After convergence, not only can the IOP, EOP, and object space coordinates be obtained together with their standard deviations, but a kNN regressor that has learned the irregularly-spaced systematic error corrections of the residuals is also available. For new radiographs that are captured, the distortion correction at every pixel location can be predicted by the kNN regressor.

## 4 Experimentation

The same data as in Lichti et al. (2015) were used, where a three-dimensional cubic target frame (i.e. a phantom with 503 targets) was imaged using two fluoroscopic imaging systems simultaneously. Each fluoroscopic system consists of an X-ray source, an image intensifier with a fluorescent screen, and a high-speed solid-state optical camera. The fluoroscopic systems were static during the entire experiment while the phantom was repositioned at various orientations within the volume of interest with the help of a height-adjustable turntable. A total of 150 image frames per fluoroscopic system were processed using bundle adjustment with self-calibration by an expert photogrammetrist. The reconstructed 3D object space and the resected virtual camera locations using all images serve as the reference solution. A subset of this data, i.e. 15 of the 150 images, was uniformly sampled from each fluoroscopic system and processed using the proposed algorithm. This subset acted as the training data; the remaining 135 images were used as testing data. It was hypothesized that if the systematic distortion profile recovered using 10% of the data is comparable to the result from using 150 images with an expert's judgement on model selection, then the proposed method has the potential to further automate the calibration process without compromising the quality of the calibration solution. The proposed calibration method (i.e. extending conventional bundle adjustment with machine learning) can be approached in various ways. For example, there is the preconception that machine learning can learn everything if given sufficient data, even depth (Sinz et al., 2004).
While this may be true, incorporating prior knowledge about the problem can strengthen the solution. Thus, for the experiments described previously, the calibration was done in two ways: (1) the kNN regressor was used to learn both the AP and IOP, and (2) the kNN regressor was used to learn the AP only (with the IOP being modelled parametrically). Unlike the AP, the IOP are expected to be present in all imaging systems and their mathematical form is known. The IOP consist of merely three unknown parameters, and they are comparable to a bias and scale factor. By solving for the IOP using the standard parametric form in the bundle adjustment, it is expected that the solution can be improved. If this is true, then a similar argument can be made about estimating the EOP and object space coordinates using the parametric form rather than learning them from data.

## 5 Results and Analyses

### Efficacy of Proposed Calibration Method

The proposed calibration method divides the numerical optimization process into two steps. In the first step, the reprojection errors are minimized using a robust bundle adjustment. A surface is then fitted to the residuals using kNN regression. Figure 1 shows the monotonic, progressive reduction in the quadratic costs; the gradient begins to diminish at around 20 iterations. The weighted average of the total cost (i.e. \\(F+G\\)) shown in Figure 2 demonstrates that a stable local minimum can be found after around 30 iterations by following the gradient.

Figure 1: Relative cost at every iteration expressed as a percentage of the initial cost. \\(F\\) is shown in blue and \\(G\\) is shown in red.

Figure 2: Weighted average cost of both the bundle adjustment and kNN regressor.

### Error Modelling of Fluoroscopic Imaging System 1

It is hypothesized that in the absence of systematic errors the distribution of residuals will follow a Gaussian probability distribution. Figures 3 and 4 show the histograms of the normalized image residuals in x and y, respectively. It can be seen that the distribution resembles a bell shape much better after modelling the AP. In addition, explicitly modelling the IOP rather than learning them from the data showed slight improvements in the reprojection errors.

Figure 3: Histogram of the x-image residuals in fluoroscopic system 1.

To analyze the correlation between the reprojection errors and their spatial distribution, the residuals in x and y are plotted as a function of their x and y pixel locations in the image (Figures 5 and 6). The residuals after calibration appear to have a smaller spread overall, with the residuals being slightly larger near the periphery of the radiograph. This is expected in kNN regression because of the lack of data points near the edges of the image, which results in all the data points being on one side of the query point. In other words, near the edge of the radiograph, kNN regression behaves more like an extrapolator than an interpolator. The object space measurement accuracy of system 2 is slightly worse and less uniform than that of system 1; this might be due to the stability and manufacturing of the system. Even though both systems used identical components, there can still be small variations in their build. Regardless, the proposed calibration method is able to improve the object space reconstruction accuracy both in-sample and out-of-sample for system 2 (Tables 5 and 6).
When looking at the out-of-sample errors, it was noticed that the estimated sensor position errors were greater with the error correction model when the IOP were learned by the kNN regression. In this dataset, solving for the IOP explicitly in the bundle adjustment gave a small improvement in the object space reconstruction quality and was able to reduce the sensor position errors rather than increase them.

\\begin{table}
\\begin{tabular}{l c c c c c}
\\hline
 & \\multicolumn{3}{c}{**RMSE [mm]**} & \\multicolumn{2}{c}{**Improvement [\\%]**} \\\\
 & Before & After & After w/ IOP & After & After w/ IOP \\\\
\\hline
Object space \\(X\\) & 0.669 & 0.150 & 0.121 & 77.561 & 81.941 \\\\
Object space \\(Y\\) & 0.510 & 0.099 & 0.055 & 80.540 & 89.232 \\\\
Object space \\(Z\\) & 0.708 & 0.131 & 0.096 & 81.506 & 86.439 \\\\
Sensor position \\(X\\) & 19.298 & 19.138 & 4.663 & 0.830 & 75.355 \\\\
Sensor position \\(Y\\) & 16.010 & 18.966 & 1.825 & -18.468 & 88.602 \\\\
Sensor position \\(Z\\) & 16.984 & 18.845 & 1.939 & -10.960 & 88.583 \\\\
\\hline
\\end{tabular}
\\end{table}
Table 6: Out-of-sample errors of fluoroscopic system 2

Figure 8: Histogram of the y-image residuals in fluoroscopic system 2.

### Simultaneous Calibration of Two Fluoroscopic Imaging Systems

In clinics with a dual-fluoroscopic imaging system for tracking 3D motions, both X-ray systems can be calibrated simultaneously. The image measurements in each radiograph are independent, but if they are observing the same phantom then they become correlated through the object space coordinates. By performing bundle adjustment for the two fluoroscopic imaging systems jointly and training a kNN regressor for each system, the following image space errors are reported (Table 7). The in-sample and out-of-sample object space errors (Tables 8 and 9) are comparable to the cases where each fluoroscopic system was calibrated independently (Tables 2, 3, 5, and 6). Merely sharing the same object space targets does not seem to provide any statistically significant benefit to the object space reconstruction quality. Therefore, it can be argued that not calibrating both X-ray systems together is preferred because of the reduced computational load. To realize benefits in object space reconstruction accuracy in a dual-fluoroscopic imaging system, the two X-ray systems should be rigidly mounted
together so that a relative position and orientation constraint can be enforced in the MLE (Lichti et al., 2015).

\\begin{table}
\\begin{tabular}{l c c c c c c}
\\hline
 & **Cost** & \\multicolumn{2}{c}{**RMSE [mm]**} & \\multicolumn{3}{c}{**Improvement [\\%]**} \\\\
 & \\(r^{T}C^{-1}r\\) & \\(x\\) & \\(y\\) & Cost & \\(x\\) & \\(y\\) \\\\
\\hline
Before & 1.136E+05 & 1.307 & 1.152 & N/A & N/A & N/A \\\\
After & 1.039E+04 & 0.451 & 0.363 & 90.878 & 65.474 & 68.508 \\\\
After w/ IOP & 6.808E+03 & 0.358 & 0.314 & 94.027 & 72.623 & 72.724 \\\\
\\hline
\\end{tabular}
\\end{table}
Table 7: Reprojection errors of fluoroscopic systems 1 and 2

Figure 7: Histogram of the x-image residuals in fluoroscopic system 2.

Figure 9: x-image residuals as a function of their column number in the image.

Figure 10: y-image residuals as a function of their row number in the image.

## 6 Conclusion and Future Work

Fluoroscopic imaging systems allow the use of low-dose radiation to look under the skin of humans non-invasively for clinical diagnoses. This modality is particularly valuable for studying dynamic data because it has a high frame rate. To use this system for quantitative analysis and to improve the geometric accuracy of the images for qualitative assessments, systematic errors of the complete system need to be removed. Previous research has already demonstrated that using photogrammetric bundle adjustment to perform a software calibration of the imaging system can improve both the precision and accuracy of the system. This paper presented an extension by adding a machine learning approach using kNN regression to automate the model selection process in the bundle adjustment, thus making it easier for a non-expert to perform the calibration. It has been shown in this paper that the proposed data-driven method not only makes the calibration process less operator-dependent, it was also able to achieve a similar level of accuracy to the parameter-driven approach. While all information can be learned from the data using machine learning, including depth and camera pose, it was shown that if a geometric relationship is expected to exist in the system, it is more effective to model it explicitly using well-established parametric models. For example, it was found that when both the IOP and AP are learned using kNN regression, the precision and accuracy (using the same input dataset) are both lower than if the IOP are modelled using the conventional approach in the bundle adjustment. At present, most hospitals only have single fluoroscopic systems available, and therefore the proposed method is applicable. Future work will investigate using relative orientation constraints with this method for dual-fluoroscopic imaging systems that have a fixed baseline. It was found in this paper that even if two fluoroscopic imaging systems are calibrated simultaneously and share the same object space targets, the calibration result is very similar to the scenario where the two fluoroscopic systems are calibrated separately. Hence, there is little benefit to performing a multi-system calibration when a relative orientation constraint cannot be enforced.

## Acknowledgements

The colour scheme for the figures is inspired by the International JK Conference 2018 held in Calgary, Canada.

## References

* Al-Durgham et al. (2016) Al-Durgham, K., Lichti, D., Kuntze, G., Sharma, G., & Ronsky, J. (2016). Toward an automatic calibration of dual fluoroscopy imaging systems. _International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, XLI-B5_, 757-764.
* Lichti et al. (2015) Lichti, D., Sharma, G., Kuntze, G., Mund, B., Beveridge, J., & Ronsky, J. (2015). Rigorous geometric self-calibrating bundle adjustment for a dual fluoroscopic imaging system. _IEEE Transactions on Medical Imaging, 34_(2), 589-598.
* Sinz et al. (2004) Sinz, F., Candela, J., Bakr, G., Rasmussen, C., & Franz, M. (2004).
Learning depth from stereo. In C. Rasmussen, H. Bülthoff, B. Schölkopf, & M. Giese (Eds.), _Pattern Recognition. Lecture Notes in Computer Science_ (Vol. 3175, pp. 245-252). Springer, Berlin, Heidelberg.
* Stamatopoulos et al. (2010) Stamatopoulos, C., Fraser, C., & Cronk, S. (2010). On the self-calibration of long focal length lenses. _International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVIII, Part 5_, 560-564.
* You et al. (2001) You, B., Siy, P., Anderst, W., & Tashman, S. (2001). In vivo measurement of 3-D skeletal kinematics from sequences of biplane radiographs: Application to knee kinematics. _IEEE Transactions on Medical Imaging, 20_(6), 514-525.
## Abstract

X-ray imaging is a fundamental tool of routine clinical diagnosis. Fluoroscopic imaging can further acquire X-ray images at video frame rates, thus enabling non-invasive in-vivo motion studies of joints, the gastrointestinal tract, etc. For both the qualitative and quantitative analysis of static and dynamic X-ray images, the data should be free of systematic biases. Besides precise fabrication of hardware, software-based calibration solutions are commonly used for modelling the distortions. In this primary research study, a robust photogrammetric bundle adjustment was used to model the projective geometry of two fluoroscopic X-ray imaging systems. However, instead of relying on an expert photogrammetrist's knowledge and judgement to decide on a parametric model for describing the systematic errors, a self-tuning data-driven approach is used to model the complex non-linear distortion profile of the sensors. Quality control from the experiment showed that 0.06 mm to 0.09 mm 3D reconstruction accuracy was achievable post-calibration using merely 15 X-ray images. As part of the bundle adjustment, the location of the virtual fluoroscopic system relative to the target field can also be spatially resected with an RMSE between 3.10 mm and 3.31 mm.

**Keywords:** Radiology, X-Ray, Fluoroscopy, Error Modelling, Calibration, Machine Learning, Bundle Adjustment, Biomedical Imaging, Biomechanics
# Image Registration Techniques: A Survey

Sayan Nag
Department of Electrical Engineering, Jadavpur University
Corresponding author mail id: [email protected]

## I Introduction

Image Registration is the process of overlaying two or more images of the same scene with respect to a particular reference image. The images may be taken at various circumstances (time-points), from various perspectives (view-points), and additionally by various sensors. The reference image is generally one of these captured images. Registration geometrically transforms different sets of data into a particular reference co-ordinate system. The discrepancies among these images are introduced owing to the disparate imaging conditions. Image acquisition devices have undergone rapid modification, and the proliferating amount and diversity of acquired images have elicited research on automatic image registration. Image Registration is one of the most significant steps in image analysis tasks. It is a necessary step for obtaining the final information from a combination of a multitude of divergent sources capturing the same information in varied circumstances and diverse manners. Essentially the objective is to detect the concealed relationship existing between the input and the reference images, which is usually indicated by a coordinate transformation matrix. Accordingly, image registration can essentially be devised as an optimization problem.

Image registration plays a crucial role in many real-world applications. It finds applications in remote sensing [1-3], involving multispectral classification, environmental monitoring, change detection, image mosaicing, weather forecasting, creating super-resolution images and integrating information into geographic information systems (GIS); in medicine [4-8], including fusion of computed tomography (CT) and NMR data to obtain more complete information about the patient, multi-modal analysis of different diseases like epilepsy where the protocols incorporate functional EEG/MEG data along with anatomical MRI, monitoring tumor evolution, treatment verification, and juxtaposition of the patient's data with anatomical atlases; in cartography for map updating; and in computer vision for target localization, automatic quality control and motion tracking.

According to the manner of image acquisition, the applications of Image Registration can be segregated into the following groups.

1. _Multi-view Analysis_: Images of the same object or scene are captured from multiple viewpoints to gain a better representation of the scanned object or scene. Examples include mosaicing of images and shape recovery from stereo.
2. _Multi-temporal Analysis_: Images of the same object/scene are captured at various times, usually under dissimilar conditions, to notice changes in the object/scene which emerged between the successive image acquisitions. Examples include motion tracking and tracking the growth of tumors.
3. _Multi-modal Analysis_: Different sensors are used to acquire images of the same object/scene to merge the information obtained from various sources and capture the minutiae of the object/scene.
Examples include integration of information from sensors with disparate characteristics providing better spatial and spectral resolutions independent of illumination (this depends upon the robustness of the registration algorithm), and combination of sensors capturing anatomical information like magnetic resonance imaging (MRI), ultrasound or CT with sensors acquiring functional information like positron emission tomography (PET), single photon emission computed tomography (SPECT) or magnetic resonance spectroscopy (MRS) to study and analyze seizure disorders, Alzheimer's disease, depression and other diseases. Figure 1 shows a MEG-MRI co-registration, an example of Multi-Modal Registration.

Section 2 presents the steps involved in Image Registration, Section 3 contains classification criteria, registration methods are presented in Section 4, Transform Model Estimation and Performance Analysis are discussed in Sections 5 and 6 respectively, while Section 7 contains the conclusion.

## II Steps Involved In Image Registration

An Image Registration task involves the following steps (a code sketch of the full pipeline follows this list):

1. _Feature detection_: This is an important task of the Image Registration process. The detection process can be manual or automatic depending upon the complexity, though automatic detection of features is preferred. Closed-boundary regions [9-16], edges, contours [17-26], line intersections, and corners [27], along with their point representatives like centers of gravity or line endings (collectively known as Control Points), can serve as features. These features, consisting of distinctive objects, must be easily detectable; that is, the features must be physically interpretable and identifiable. The feature set of the reference image must share sufficient common features with the non-aligned image(s), irrespective of any undesired occlusions or unexpected changes, for proper registration. The detection algorithm should be robust enough to detect the same features in all projections of the scene without being affected by any specific image deformation or degradation.
2. _Feature matching_: This step essentially establishes the correspondence between the features detected in the non-aligned sensed image and those detected in the reference image [28-36]. Different feature descriptors and similarity measures, besides spatial relationships among the features, are adopted to set up an accurate accordance. The feature descriptors must be formulated such that they remain unchanged in spite of any degradations, and concurrently they must be able to properly discriminate among diverse features while remaining unaffected by noise.
3. _Transform model assessment_: For alignment of the sensed image with the reference image, the parameters of the mapping functions are estimated [37-43]. These parameters are computed from the established feature correspondence obtained in the previous step. The selection of a mapping function depends on a priori knowledge regarding the acquisition process and expected image deformations. In the absence of any a priori information, the flexibility of the model must be ensured to tackle image deformations.
4. _Image transformation_: The sensed image is transformed for alignment employing the mapping functions.

The above mentioned image registration steps are generally followed. Figure 2 shows a pictorial representation of the steps involved in image registration.
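As a concrete, hedged illustration of these four steps, the following sketch uses OpenCV's ORB features with brute-force matching and a RANSAC homography. This is one common concrete choice, not the only one prescribed by the survey, and the image file names are placeholders.

```python
# A minimal sketch of the four registration steps using OpenCV's ORB
# features; "reference.png" and "sensed.png" are placeholder file names.
import cv2
import numpy as np

reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
sensed = cv2.imread("sensed.png", cv2.IMREAD_GRAYSCALE)

# 1. Feature detection: ORB keypoints and binary descriptors.
orb = cv2.ORB_create(nfeatures=2000)
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_sen, des_sen = orb.detectAndCompute(sensed, None)

# 2. Feature matching: Hamming distance with cross-checking.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_sen, des_ref), key=lambda m: m.distance)

# 3. Transform model estimation: robust homography via RANSAC.
src = np.float32([kp_sen[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 4. Image transformation: warp the sensed image onto the reference frame.
h, w = reference.shape
aligned = cv2.warpPerspective(sensed, H, (w, h))
```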
It is noteworthy, though, that it is difficult to fabricate a universal method applicable to all registration assignments, the reason being the diversity of images to be registered, obtained from a miscellany of sources, and the several types of degradations introduced in the images. Besides the geometric deformation between the images, radiometric deformations and noise corruption should be taken into account for proper registration of images.

Fig. 1: Multimodal MRI-MEG Co-registration. Top- Yellow dots represent anatomical landmarks or fiducial points in the axial view of the brain image (anatomical information). Bottom- Pink dots represent the MEG sensor locations and the Green dots represent the scalp-EEG sensor locations. These MEG and EEG data contain the functional information, and the bottom picture shows the co-registered brain image (sagittal view).

Fig. 2: Steps Involved in Image Registration

## III Classification Criteria of Image Registration Techniques

Image registration techniques can be classified based on the following criteria [44-45]:

1. _Dimensionality_: This specifies the dimensions of different possible registrations. It may be 2D-2D, 2D-3D or 3D-3D based on the requirement.
2. _Domain of transformation_: It may be global, when the entire image is to be registered, or local, when a portion of the image is taken into consideration for registration purposes.
3. _Type of transformation_: It may be rigid (translation, rotation, reflection), affine (translation, rotation, scaling, reflection, shearing), projective or non-linear.
4. _Registration Quality_: Depending on the data or the features extracted, several measures can be adopted and applied.
5. _Parameters of Registration_: These are obtained employing search-oriented methods. The optimum parameters found by a search method (e.g., a heuristic search method) determine the quality of transformation and hence the registration.
6. _Subject of Registration_: The same subject is considered for intra-subject registration. If the subjects are different then it is known as inter-subject registration.
7. _Object of Registration_: Different objects include head, abdomen, thorax, knee, etc.
8. _Nature of Registration basis_: It may be extrinsic (based on foreign objects which are easily detectable, e.g., markers glued to skin), intrinsic (based on image information) or non-image based (where the imaging coordinates of the two devices are matched).
9. _Interaction_: It may be interactive, semi-automatic or entirely automatic.
10. _Modalities involved_: It may be mono-modal (also termed intra-modal), using modalities like Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), Ultrasound (US), X-ray or Digital Subtraction Angiography (DSA), or multi-modal (also known as inter-modal), employing two or more of the modalities mentioned above.

## IV Methods Of Image Registration

Various methods of Image Registration are as follows.

### _Extrinsic Methods_

In this method, artificial foreign objects which are easily detectable are attached to the patient's body [46-53]. They serve as external features to be used for feature matching. The complexity is lessened, hence computation is fast and accuracy is also maintained. Examples are markers glued to the patient's skin or a stereo-tactic frame attached rigidly to the patient's outer skull for invasive neurosurgery-related purposes.
Fig. 3: Steps Involved in Image Registration. Top Left- Reference Image. Top Right- Non-Aligned Image. Bottom- Aligned Image. Yellow dots represent the extracted features and there are enough common features in both images. A mapping function is established which gives the bottom image as the final output.

### _Surface Methods_

Unlike landmarks, surfaces, boundaries and contours are generally distinct in medical images. For example, a surface-based approach is employed for registering multi-modality brain images. These surface matching algorithms are generally applied to rigid body registration. A collection of points, generally called a point set, is extracted from the contours in an image. If two surfaces are considered for registration then there will be two such sets. The surface covering the larger volume of the patient, or that having a higher resolution if volume coverage is comparable, is generally considered for generation of the surface model. The Iterative Closest Point Algorithm and the Correspondence Matching Algorithm have been successfully applied as registration algorithms for surface-based techniques [54, 55, 56, 57, 58, 59, 60, 61, 62, 63]. Meta-heuristics and Evolutionary Optimization have also been used to solve these high-dimensional optimization problems of surface registration.

### _Moments and Principal Axes Methods_

The orthogonal axes about which the moments of inertia are minimized are known as the principal axes. Two identical objects can be registered accurately by bringing their principal axes into concurrence without employing any rigid/affine transformations. If the objects are not identical but similar in appearance then they can be approximately registered by this technique [64, 16]. For moment-based methods, pre-segmentation is done in many cases to engender satisfactory outcomes.

### _Correlation Based Methods_

This method is essentially useful for registration of mono-modal images and for comparison of several images of the same object [65]. It has immense usage in the field of medical science for the analysis and treatment of disease. Features extracted from the images are also used to obtain the cross-correlation coefficients for image registration [66, 67, 68, 69]. Cross-correlation and phase-correlation techniques based on the Fourier domain are also used for image registration. Successful yet complex ventures have been made using a subspace-based frequency estimation approach for the Fourier-based image registration problem, employing the multiple signal classification algorithm (MUSIC) to increase robustness, eventually yielding accurate results [70]. Normalized mutual information between the images has been used for image registration purposes adopting an Entropy Correlation Coefficient (ECC) [71]. Fourier-based techniques accompanied by search algorithms have been exploited to evaluate the transformation between two input images [72].

### _Mutual Information Based Methods_

In mutual information-based registration methods, the joint probability of the intensities of comparable voxels in the images under consideration is estimated. Mutual information based measures are utilized to aid voxel-based registration. Mutual information can be fruitfully utilized for establishing the correspondence between the features of the reference and the sensed images, as mentioned in the feature-matching step. Correlation methods have proved inefficient for multi-modal registration.
But the mutual information based methods do not suffer from such a problem; rather, they are found to perform effectively in multi-modal registration tasks. Gradient descent optimization methods have been employed to maximize mutual information [73]. Window and pyramid based approaches are used to achieve image registration using mutual information [74]. Other methods used include hierarchical search strategies along with simulated annealing [35] and Powell's multi-dimensional direction set method [66]. Recently, various optimization methods and multi-resolution strategies have been adopted for mutual information maximization.

### _Wavelet Based Methods_

The Wavelet Transform was introduced to get an idea of the time instant at which a particular frequency exists. The width of the window is altered as the transform is computed for each spectral component, the most important characteristic of the multi-resolution wavelet transform. It offers both time and frequency selectivity; that is, it is able to localize properties in both the temporal and frequency domains. Wavelet-based image registration can be performed effectively as follows: after choosing several wavelet coefficients by selection rules, like the maximum absolute wavelet coefficient in the multi-spectral image and the high-resolution image for each individual band, the partial wavelet coefficients of the high-resolution image are replaced with those of the multi-spectral low-resolution image. The pyramidal approaches also use wavelet decomposition owing to its intrinsic multi-resolution properties. Different types of wavelets like the Haar, Symlet, Daubechies [75] and Coiflets are applied for finding the correspondence with different sets of wavelet coefficients. Wavelet-based feature extraction techniques, along with normalized cross-correlation matching and relaxation-based image matching techniques, are used for image registration, thereby incorporating sufficient control points to reduce the local degradations [76].

### _Soft Computing Based Methods_

These methods are comparatively recent and advanced and are successfully applied to image registration tasks. They include Artificial Neural Networks, Fuzzy Sets and several Optimization Heuristics.

_Artificial Neural Networks_: An artificial neural network (ANN) is a computational model which is formulated based on biological neural networks. It is also known as a Multi-Layer Perceptron (MLP) since it contains a number of hidden layers. These layers consist of an interconnected group of artificial neurons, and information is passed on from one layer to the next layer. Artificial Neural Networks, or simply Neural Networks, learn adaptively in the learning phase when information flows through the network, and update the neuron links accordingly by assigning various weights to them. Neural Networks can be viewed as non-linear statistical data modeling tools employed to model complex relationships between inputs and outputs or to recognize patterns in data, also called Pattern Recognition. There are two types of schemes: (1) feed-forward networks, where the links are devoid of any loop (e.g., the multilayer perceptron (MLP) and radial basis function (RBF) neural networks), and (2) recurrent networks, which include loops (e.g., self-organizing maps (SOM) and Hopfield Neural Networks). A priori information about the output is an essential requirement for training feed-forward networks; on the other hand, recurrent neural networks generally do not require any such previous knowledge regarding the expected output.
The rigorous training process in an ANN modifies and adaptively updates the network architecture alongside the connection weights or link weights, so as to learn complex non-linear input-output relationships, thereby enhancing the robustness and efficacy of performance. Multi-layer perceptrons, radial basis functions, self-organizing maps and Hopfield networks have been utilized for different computational and optimization aspects and for designing registration matrices in Image Registration problems [77]. Neural Networks have also been used for solving mono-modal and multi-modal medical image registration problems [78].

_Fuzzy Sets_: A fuzzy set is a collection of elements having a continuous sequence of membership grades or degrees. Fuzzy sets were introduced by L. A. Zadeh in 1965. Fuzzy sets follow the properties of inclusion, union, complement, intersection, etc. In classical set theory, the membership values of elements in a set are decided in binary terms depending upon whether an element belongs or does not belong to the set. In contrast, fuzzy set theory allows the grading of the membership of elements in a fuzzy set, as decided with the assistance of a membership function which assigns values residing in the interval [0, 1]. Fuzzy sets manifest the perception of partial membership of an element within the set; this permits fuzzy sets to tackle uncertainty and inaccuracies. Fuzzy Sets have been explicitly applied to image registration techniques [79-80]. They have also been utilized to choose and pre-process the extracted features to be registered. Fuzzy logic is used to enhance the precision in the transformation parameters as estimated approximately beforehand, eventually leading to accurate registration estimates [81].

_Optimization Heuristics_: Optimization problems applied in several domains of Engineering Design and Optimization have mathematical models and objective functions. They may be unconstrained (without constraints) or constrained (with constraints), having both continuous as well as discrete variables. The task of finding the optimal solutions is difficult, with numerous constraints being active at the points of global optima. Traditional methods, including Gradient Descent, Dynamic Programming and Newton methods, are computationally less efficient, whereas meta-heuristics can provide feasible solutions in a stipulated time. The list of meta-heuristics includes the Genetic Algorithm (GA) [82], Particle Swarm Optimization (PSO) [83], the Gravitational Search Algorithm (GSA) [84], Ant Colony Optimization (ACO) [85-86], Simulated Annealing (SA) [87-88], the Plant Propagation Algorithm (PPA) [89-90] and so on. GA is a relatively old, approximate search technique used in computing. These global search heuristics form an important class of evolutionary algorithms that mimic evolutionary biological processes such as mutation, selection, crossover and abandonment. Likewise, Particle Swarm Optimization and Differential Evolution, along with their existing variants, are relatively advanced heuristics that can efficiently solve Optimization problems. These optimization heuristics are applied to image registration problems for finding the optimal parameters necessary for designing a transformation model [91].

## V Transform Model Estimation

A transformation is expounded as the process of mapping a set of points to various other locations. The objective is to design a proper transformation model which transforms the sensed image with respect to the original image with maximum accuracy.
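As a concrete illustration of estimating such a model from the feature correspondences established earlier, the following sketch (not from the survey) recovers a 2-D affine transform from matched control points by linear least squares; the matched point sets are synthetic placeholders.

```python
# Estimating a 2-D affine transform from matched control points by linear
# least squares; the matched point sets below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(20, 2))            # points in the sensed image
A_true = np.array([[1.1, 0.2, 5.0], [-0.1, 0.9, -3.0]])
dst = src @ A_true[:, :2].T + A_true[:, 2]         # corresponding reference points

# Build the design matrix so that dst = [src, 1] @ params for each axis.
design = np.hstack([src, np.ones((len(src), 1))])
params, *_ = np.linalg.lstsq(design, dst, rcond=None)
A_est = params.T                                   # recovered 2x3 affine matrix
print(np.allclose(A_est, A_true))                  # True: exact fit, no noise
```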
The transformations that may be performed are translation, rotation, scaling, shearing and reflection. These are collectively known as affine transformations. There are projective and non-linear transformations as well.

### 1. Translation

Let a point \\(x\\) be translated by \\(t\\) units; then the matrix representation of this transformation is given as:

\\[\\begin{bmatrix}y_{1}\\\\ y_{2}\\end{bmatrix}=\\begin{bmatrix}x_{1}\\\\ x_{2}\\end{bmatrix}+\\begin{bmatrix}t_{1}\\\\ t_{2}\\end{bmatrix} \\tag{1}\\]

where \\(y_{1}\\), \\(y_{2}=\\) new point, \\(x_{1}\\), \\(x_{2}=\\) old point, \\(t_{1}\\), \\(t_{2}=\\) translation values.

### 2. Rotation

If a point with co-ordinates \\(P_{1}(x_{1},x_{2})\\) on a 2-D plane is rotated by an angle \\(\\theta\\) with respect to the origin, then the relationship between the final point \\(P_{2}(y_{1},y_{2})\\) and the initial point is given as:

\\[\\begin{bmatrix}y_{1}\\\\ y_{2}\\end{bmatrix}=\\begin{bmatrix}\\cos\\theta&\\sin\\theta\\\\ -\\sin\\theta&\\cos\\theta\\end{bmatrix}\\begin{bmatrix}x_{1}\\\\ x_{2}\\end{bmatrix} \\tag{2}\\]

where \\(y_{1}\\), \\(y_{2}=\\) new point, \\(x_{1}\\), \\(x_{2}=\\) old point, \\(\\theta=\\) rotational parameter.

### 3. Scaling

Scaling is required to resize an image, or to work with images whose voxel sizes differ between images. It is represented as:

\\[\\begin{bmatrix}y_{1}\\\\ y_{2}\\end{bmatrix}=\\begin{bmatrix}s_{1}&0\\\\ 0&s_{2}\\end{bmatrix}\\begin{bmatrix}x_{1}\\\\ x_{2}\\end{bmatrix} \\tag{3}\\]

where \\(y_{1}\\), \\(y_{2}=\\) new point, \\(x_{1}\\), \\(x_{2}=\\) old point, \\(s_{1}\\), \\(s_{2}=\\) scaling parameters.

### 4. Shearing

In shearing, only the parallel lines are preserved. It may be represented as:

\\[\\begin{bmatrix}y_{1}\\\\ y_{2}\\end{bmatrix}=\\begin{bmatrix}a_{11}&a_{12}\\\\ a_{21}&a_{22}\\end{bmatrix}\\begin{bmatrix}x_{1}\\\\ x_{2}\\end{bmatrix}+\\begin{bmatrix}a_{13}\\\\ a_{23}\\end{bmatrix} \\tag{4}\\]

where \\(y_{1}\\), \\(y_{2}=\\) new point, \\(x_{1}\\), \\(x_{2}=\\) old point, \\(a_{11}\\), \\(a_{12}\\), \\(a_{13}\\), \\(a_{21}\\), \\(a_{22}\\), \\(a_{23}=\\) shearing parameters. Fig. 4 shows an example of a shearing transformation. A worked numeric sketch of composing these transforms appears below.
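The individual affine transforms above compose into a single matrix when expressed in homogeneous coordinates; the following hedged sketch (NumPy, with illustrative parameter values not taken from the survey) demonstrates this composition.

```python
# A small numeric sketch of composing the affine transformations above in
# homogeneous coordinates, so that one 3x3 matrix captures translation,
# rotation, scaling and shearing at once; parameter values are illustrative.
import numpy as np

theta = np.deg2rad(30.0)                                   # rotation parameter
T = np.array([[1, 0, 10], [0, 1, 5], [0, 0, 1]], float)    # translation t = (10, 5)
R = np.array([[np.cos(theta), np.sin(theta), 0],
              [-np.sin(theta), np.cos(theta), 0],
              [0, 0, 1]])                                  # rotation, Eq. (2) convention
S = np.diag([1.5, 0.8, 1.0])                               # scaling s1 = 1.5, s2 = 0.8
Sh = np.array([[1, 0.2, 0], [0, 1, 0], [0, 0, 1]], float)  # shear a12 = 0.2

A = T @ R @ S @ Sh                                         # one combined affine transform
x = np.array([3.0, 4.0, 1.0])                              # old point in homogeneous form
y = A @ x                                                  # new point
print(y[:2])
```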
## VI Performance Analysis

It is required to estimate how accurate the registration actually is, and metrics are used to qualitatively analyze the performance of the algorithms. They also serve as the basis for improvement of the registration in each iteration. The selection of similarity measures depends on the modality of the images to be registered. Correlation based metrics like the Correlation Coefficient are applicable to mono-modal registration, and Mutual Information is utilized for multi-modal image registration purposes.

_Correlation Coefficient (CC)_: CC is essentially a similarity measure which gives an idea of how well the reference and transformed images are identical [32, 33, 34]. If two images are perfectly identical, CC gives a value equal to 1; if the two images are completely uncorrelated, the CC value is equal to 0; and a CC value equal to -1 indicates that the images are completely anti-correlated, which means one image is the negative of the other. It gives satisfactory results with mono-modal registration. It is represented as:

\\[CC=\\frac{\\sum_{i}(x_{i}-x_{m})(y_{i}-y_{m})}{\\sqrt{\\sum_{i}(x_{i}-x_{m})^{2}}\\sqrt{\\sum_{i}(y_{i}-y_{m})^{2}}} \\tag{5}\\]

where \\(x_{i}\\), \\(y_{i}=\\) intensity of the \\(i^{\\text{th}}\\) pixel in the reference and sensed image respectively, and \\(x_{m}\\), \\(y_{m}=\\) mean intensity of the reference and sensed image respectively.

_Mutual Information (MI)_: MI is yet another measure determining the degree of similarity between the image intensities of corresponding voxels in both images [35, 36]. MI is maximized when both images are accurately aligned. The values of MI are non-negative and symmetric. The range of MI values starts from zero and can vary up to a high value. A high MI value depicts a large reduction in uncertainty, whereas a zero MI value is a clear indication that the two variables are independent. It is represented as:

\\[MI(x,y)=\\sum_{y\\in Y}\\sum_{x\\in X}p(x,y)\\log\\left(\\frac{p(x,y)}{p_{1}(x)\\,p_{2}(y)}\\right) \\tag{6}\\]

where \\(p(x,y)=\\) joint distribution function and \\(p_{1}(x)\\), \\(p_{2}(y)=\\) marginal distribution functions.

Fig. 4: Example of Affine Transformations (Translation, Rotation, Scaling and Shearing)

Fig. 5: Example of Multi-modal Image registration using Mutual Information as similarity measure. Top Left- Reference MRI brain image (axial view).
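For concreteness, both similarity measures can be computed directly from Equations (5) and (6); the sketch below assumes the images are equal-sized NumPy arrays and uses a 32-bin joint histogram for MI, neither of which is specified by the survey.

```python
# A hedged sketch of the two similarity measures of Equations (5) and (6),
# computed from two equal-sized grayscale images held as NumPy arrays;
# the 32-bin joint histogram used for MI is an assumption.
import numpy as np

def correlation_coefficient(ref, sen):
    """Equation (5): +1 identical, 0 uncorrelated, -1 anti-correlated."""
    x, y = ref.astype(float).ravel(), sen.astype(float).ravel()
    xm, ym = x - x.mean(), y - y.mean()
    return (xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym))

def mutual_information(ref, sen, bins=32):
    """Equation (6): MI from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(ref.ravel(), sen.ravel(), bins=bins)
    p = joint / joint.sum()                  # joint distribution p(x, y)
    p1, p2 = p.sum(axis=1), p.sum(axis=0)    # marginal distributions
    nz = p > 0                               # avoid log(0)
    return np.sum(p[nz] * np.log(p[nz] / np.outer(p1, p2)[nz]))
```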
## VII Conclusion

This paper presents a survey of the registration methods along with a detailed classification of the various approaches. Image registration is an essential step for integrating (or fusing) and analyzing information from various sensors (sources). It has immense applications in the fields of medical science, computer vision and remote sensing. Image registration with complex nonlinear distortions, multi-modal registration, and registration of occluded images affected by illumination factors among others, all of which test the robustness of the approaches, belong to the most challenging tasks at the present time. Generation of features or control points and of the mapping or transformation functions are essential steps, and a lot of research work needs to be done to enhance the accuracy. For multi-modal registration the MI technique in particular has gained popularity, whereas for mono-modal images correlation based similarity metrics are preferred. Robustness and reliability can be increased by hybrid approaches combining MI based techniques with feature-based measures. Several soft computing methods, including optimization heuristics, are applied to find the optimum parameters, mostly in the case of affine transformation based registration. No gold standard algorithms or approaches can be developed for image registration purposes because of the dependency on the images under consideration. Thus, although a lot of work has been done, automatic image registration is still considered an open problem. Future work will introduce new feature-based methods, where apt modality-insensitive features can provide robust as well as accurate outcomes for the registration.

## Acknowledgment

I would like to extend my sincere gratitude to Professor Sugata Munshi, Professor Amitava Chatterjee and Professor Mita Dutta for their support and guidance.

## References

* [1] L.M.G. Fonseca, B.S. Manjunath, Registration techniques for multisensor remotely sensed imagery, Photogrammetric Engineering and Remote Sensing 62 (1996) 1049-1056.
* [2] E. Gülch, Results of test on image matching of ISPRS WG, ISPRS Journal of Photogrammetry and Remote Sensing 46 (1991) 1-18.
* [3] J. le Moigne, First evaluation of automatic image registration methods, Proceedings of the International Geoscience and Remote Sensing Symposium IGARSS'98, Seattle, Washington, 1998, pp. 315-317.
* [4] D.L.G. Hill, P.G. Batchelor, M. Holden, D.J. Hawkes, Medical image registration, Physics in Medicine and Biology 46 (2001) R1-R45.
* [5] H. Lester, S.R. Arridge, A survey of hierarchical non-linear medical image registration, Pattern Recognition 32 (1999) 129-149.
* [6] P.A. van den Elsen, E.-J.D. Pol, M.A. Viergever, Medical image matching: a review with classification, IEEE Engineering in Medicine and Biology 12 (1993) 26-39.
* [7] J.B.A. Maintz, M.A. Viergever, A survey of medical image registration, Medical Image Analysis 2 (1998) 1-36.
* [8] S. Damas, O. Cordon, J. Santamaria, Medical image registration using evolutionary computation: An experimental survey, IEEE Computational Intelligence Magazine 6 (4) (2011) 26-42.
* [9] A. Goshtasby, G.C. Stockman, C.V. Page, A region-based approach to digital image registration with subpixel accuracy, IEEE Transactions on Geoscience and Remote Sensing 24 (1986) 390-399.
* [10] A. Goshtasby, G.C. Stockman, Point pattern matching using convex hull edges, IEEE Transactions on Systems, Man and Cybernetics 15 (1985) 631-637.
* [11] Y.C. Hsieh, D.M. McKeown, F.P. Perlant, Performance evaluation of scene registration and stereo matching for cartographic feature extraction, IEEE Transactions on Pattern Analysis and Machine Intelligence 14 (1992) 214-237.
* [12] M. Holm, Towards automatic rectification of satellite images using feature based matching, Proceedings of the International Geoscience and Remote Sensing Symposium IGARSS'91, Espoo, Finland, 1991, pp. 2439-2442.
* [13] P.A. Brivio, A.D. Ventura, A. Rampini, R. Schettini, Automatic selection of control points from shadow structures, International Journal of Remote Sensing 13 (1992) 1853-1860.
* [14] M. Sester, H. Hild, D. Fritsch, Definition of ground control features for image registration using GIS data, Proceedings of the Symposium on Object Recognition and Scene Classification from Multispectral and Multisensor Pixels, CD-ROM, Columbus, Ohio, 1998, 7 pp.
* [15] M. Roux, Automatic registration of SPOT images and digitized maps, Proceedings of the IEEE International Conference on Image Processing ICIP'96, Lausanne, Switzerland, 1996, pp. 625-628.
* [16] J. Flusser, T. Suk, A moment-based approach to registration of images with affine geometric distortion, IEEE Transactions on Geoscience and Remote Sensing 32 (1994) 382-387.
* [17] Y.C. Hsieh, D.M. McKeown, F.P. Perlant, Performance evaluation of scene registration and stereo matching for cartographic feature extraction, IEEE Transactions on Pattern Analysis and Machine Intelligence 14 (1992) 214-237.
* [18] S. Moss, E.R. Hancock, Multiple line-template matching with EM algorithm, Pattern Recognition Letters 18 (1997) 1283-1292.
* [19] W.H. Wang, Y.C. Chen, Image registration by control points pairing using the invariant properties of line segments, Pattern Recognition Letters 18 (1997) 269-281.
* [20] X. Dai, S.
Khorram, Development of a feature-based approach to automated image registration for multitemporal and multisensor remotely sensed imagery, International Geoscience and Remote Sensing Symposium IGARSS'97, Singapore, 1997, pp. 243-245. * [21] V. Govindu, C. Shekhar, R. Chellapa, Using geometric properties for correspondence-less image alignment, Proceedings of the International Conference on Pattern Recognition ICPR'98, Brisbane, Australia, 1998, pp. 37-41. * [22] G.P. Penney, J. Weese, J.A. Little, P. Desmedt, D.L.G. Hill, D.J. Hawkes, A comparison of similarity measures for use in 2D-3Dmedical image registration, IEEE Transactions on Medical Imaging 17 (1998) 586-595. * [23] G. Medioni, R. Nevatia, Matching images using linear features, IEEE Transactions on Pattern Analysis and Machine Intelligence 6 (1984) 675-685. * [24] D. Shin, J.K. Pollard, J.P. Muller, Accurate geometric correction of ATSR images, IEEE Transactions on Geoscience and Remote Sensing 35 (1997) 997-1006 * [25] S.Z. Li, J. Kittler, M. Petrou, Matching and recognition of road networks from aerial images, Proceedings of the Second European Conference on Computer Vision ECCV'92, St Margherika, Italy, 1992, pp. 857-861 * [26] N. Vujovic, D. Brzakovic, Establishing the correspondence between control points in pairs of mammographic images, IEEE Transactions on Image Processing 6 (1997) 1388-1399. * [27] A. Noble, Finding corners, Image and Vision Computing 6 (1988) 121-128. * [28] R.J. Althof, M.G.J. Wind, J.T. Dobbins, A rapid and automatic image registration algorithm with subpixel accuracy, IEEE Transactions on Medical Imaging 16 (1997) 308-316. * [29] D.I. Barnea, H.F. Silverman, A class of algorithms for fast digital image registration, IEEE Transactions on Computing 21 (1972) 179-186. * [30] W.K. Pratt, Correlation techniques of image registration, IEEE Transactions on Aerospace and Electronic Systems 10 (1974) 353-358. * [31] H. Hanaizumi, S. Fujimura, An automated method for registration of satellite remote sensing images, Proceedings of the International Geoscience and Remote Sensing Symposium IGARSS'93, Tokyo, Japan, 1993, pp. 1348-1350 * [32] R. Berthilsson, Affine correlation. Proceedings of the International Conference on Pattern Recognition ICPR'98, Brisbane, Australia, 1998, p. 1458-1461. * [33] A. Simper, Correcting general band-to-band misregistrations, Proceedings of the IEEE International Conference on Image Processing ICIP'96, Lausanne, Switzerland, 1996, 2, pp. 597-600 * [34] W.K. Pratt, Digital Image Processing, 2nd ed., Wiley, New York, 1991. * [35] N. Ritter, R. Owens, J. Cooper, R.H. Eikelboom, P.P. van Saarloos, Registration of stereo and temporal images of the retina, IEEE Transactions on Medical Imaging 18 (1999) 404-418. * [36] C. Studholme, D.L.G. Hill, D.J. Hawkes, An overlap invariant entropy measure of 3D medical image alignment, Pattern Recognition 32 (1999) 71-86. * [37] J.M. Fitzpatrik, J.B. West, The distribution of target registration error in rigid-body point-based registration, IEEE Transactions on Medical Imaging 20 (2001) 917-927 * [38] J. Flusser, An adaptive method for image registration, Pattern Recognition 25 (1992) 45-54. * [39] R.B. Huseby, O.M. Halck, R. Solberg, A model-based approach for geometrical correction of optical satellite images, Proceedings of the International Geoscience Remote Sensing Symposium IGARSS'99, Hamburg, Germany, 1999, pp. 330-332. * [40] O. Thepaut, K. Kpalma, J. 
Ronsin, Automatic registration of ERS and SPOT multisensor images in a data fusion context, Forest Ecology and Management 128 (2000) 93-100. * [41] A. Goshtasby, Piecewise linear mapping functions for image registration, Pattern Recognition 19 (1986) 459-466. * [42] A. Goshtasby, Piecewise cubic mapping functions for image registration, Pattern Recognition 20 (1987) 525-533. * [43] A. Goshtasby, Image registration by local approximation methods, Image and Vision Computing 6 (1988) 255-261. * [44] P. A. Van Den Elsen, E. J. D. Pol and M. A. Viergever, "Medical image matching: a review with classification", IEEE Engineering in Medicine and Biology, 12(1) (1993) 26-39. * [45] J. B. Antoine Maintz and Max A. Viergever, "A Survey of Medical Image Registration", Medical Image Analysis (1998), Volume 2, number 1, pp. 1-36, Oxford University Press. * [46] L. Lemieux, N. D. Kitchen, S. W. Hughes and D. G. T. Thomas, "Voxel-based localization in frame-based and frameless stereotaxy and its accuracy", Medical Physics, 21(8):1301-1310, 1994. * [47] L. Lemieux and R. Jagoe, "Effect of fiducial marker localization on stereotactic target coordinate calculation in CT slices and radiographs", Physics in Medicine and Biology, 39:1915-1928, 1994. * [48] K. P. Gall and L. J. Verhey, "Computer-assisted positioning of radiotherapy patients using implanted radiopaque fiducials", Medical Physics, 1993, 1152-1159. * [49] C. R. Maurer, G. B. Aboutanos, B. M. Dawant, R. A. Margolin, R. J. Maciunas and J. M. Fitzpatrick, "Registration of CT and MR brain images using a combination of points and surfaces", Medical Imaging: Image Processing, volume 2434, Bellingham, WA, 1995, SPIE Press, 109-123. * [50] International congress series, Amsterdam, 1996, 693-698. * [51] C. R. Maurer, R. Calvin, J. J. McCrory, and J. M. Fitzpatrick, "Estimation of accuracy in localizing externally attached markers in multimodal volume head images", in M. H. Loew, editor, Medical Imaging: Image Processing, volume 1898, 1993, SPIE Press, 43-54. * [52] A. C. Evans, S. Marrett, J. Torrescorzo, S. Ku, and L. Collins, "MRI-PET correlation in three dimensions using a volume of interest (VOI) atlas", Journal of Cerebral Blood Flow and Metabolism, 11, A69-A78, 1991. * [53] W. D. Leslie, A. Borys, D. McDonald, J. O. Dupont and A. E. Peterdy, "External reference markers for the correction of head rotation in brain single-photon emission tomography", European Journal of Nuclear Medicine, 22(4):351-355, 1995. * [54] C. Dorai, G. Wang, A.K. Jain, C. Mercer, "Registration and integration of multiple object views for 3D model construction", IEEE Trans. Pattern Anal. Machine Intelligence 20 (1) (1998), 83-89. * [55] T. Masuda, K. Sakaue, N. Yokoya, "Registration and integration of multiple range images for 3-D model construction", Proceedings of the 13th International Conference on Pattern Recognition, Vol. 1, 1996, 879-883. * [56] P.J. Besl, N.D. McKay, "A method for registration of 3-D shapes", IEEE Trans. Pattern Analysis Machine Intelligence 14 (2) (1992), 239-256. * [57] G. Blais, M.D. Levine, "Registering multiview range data to create 3D computer objects", IEEE Trans. Pattern Analysis and Machine Intelligence 17 (8) (1995), 820-824. * [58] T. Masuda, N. Yokoya, "A robust method for registration and segmentation of multiple range images", Proceedings of the Second Workshop on CAD-Based Vision, 1994, 106-113. * [59] S.M. 
Yamany, A.A. Farag, \"Free-form surface registration using surface signatures\" Proceedings of the Seventh IEEE International Conference on Computer Vision, Vol. 2, 1999, 1098-1104. * [60] C. Schutz, T. Jost, H. Hugli, \"Multi-feature matching algorithm for free-form 3D surface registration\", Proceedings of the 14th International Conference on Pattern Recognition, Vol. 2, 1998, 982-984. * [61] A.E. Johnson, M. Hebert, \"Surface registration by matching oriented points\", Proceedings of the International Conference on Recent Advances in 3-D Digital Imaging and Modeling, 1997, 121-128. * [62] W.R. Fright, A.D. Linney, \"Registration of3-D head surfaces using multiple landmarks\", IEEE Trans. on Medical Imaging 12 (3) (1993), 515-520. * [63] Chi Kin Chow, Hung Tat Tsui, Tong Lee, \"Surface registration using a dynamic genetic algorithm\", Pattern Recognition 37, (2004), 105-117 * [64] J. Flusser, T. Suk, \"Pattern recognition by affine moment invariants\", Pattern Recognition 26, (1993), 167- 174 * [65] B.K. Ghaffary, A.A. Sawchuk, \"A survey of new techniques for image registration and mapping\", Proceedings of the SPIE: Applications of Digital Image Processing 432 (1983) 222-239. * [66] F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, and Suetens, \"Multimodality image registration by maximization of mutual information,\" IEEE Transactions on Medical Imaging, vol. 16, no. 2, (1997), 187-198 * [67] S. Sanjay-Gopal, H. P. Chan, T. Wilson, M. Helvie, N. Petrick and B. Sahiner, \"A regional registration technique for automated interval change analysis of breast lesions on mammograms,\" Medical Physics, vol. 26, no. 12, (1999), 2669-2679 * [68] L. Junck, J. G. Moen, G. D. Hutchins, M. B. Brown, and D. E. Kuhl, \"Correlation methods for the centering, rotation, and alignment of functional brain images\", Journal of nuclear medicine, 31,(1990), 1220-1276 * [69] S. L. Bacharach, M. A. Douglas, R. E. Carson, P. J. Kalkowski, N. M. T. Freedman, P. Perrone-Filardi and R.O.Bonow, \"Three-dimensional registration of cardiac positron emission tomography attenuation scans\", Journal of nuclear medicine, 34(2), (1993), 311-321 * [70] Min Xu Varshney P.K., \" A Subspace Method for Fourier-Based Image Registration\", Geoscience and Remote Sensing Letters, IEEE 33, Volume 6, Issue 3 (July 2009), 491-494 * [71] F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, and Suetens, \"Multimodality image registration by maximization of mutual information,\" IEEE Transactions on Medical Imaging, vol. 16, no. 2, (1997), 187-198 * [72] Samritijarapon O. Chitschbuk O., \" An FFT-Based Technique and Best-first Search for Image Registration\", International Symposium on Communications and Information Technologies, ISCIT 2008. * [73] P.Viola, W.M. Wells, \"Alignment by maximization of mutual information\", International Journal of Computer Vision 24, (1997), 137-154. * [74] P. Thevenaz, M. Unser, Spline pyramids for inter-modal image registration using mutual information, Proceedings of SPIE: Wavelet Applications in Signal and Image Processing, San Diego, CA, (1997), 236- 247. * [75] J. 
le Moigne, "Parallel registration of multi-sensor remotely sensed imagery using wavelet coefficients", Proceedings of the SPIE: Wavelet Applications, Orlando, Florida, 2242, (1994), 432-443. * [76] G. Hong, Y. Zhang, "Combination of feature-based and area-based image registration technique for high resolution remote sensing image", IEEE International Symposium on Geoscience and Remote Sensing, IGARSS 2007, (July 2007), 377-380. * [77] Heng Liu, Jingqi Yan, David Zhang, "Three-dimensional surface registration: A neural network strategy", Neurocomputing 70, (2006), 597-602. * [78] Lifeng Shang, Jian Cheng Lv, Zhang Yi, "Rigid medical image registration using PCA neural network", Neurocomputing 69 (2006), 1717-1722. * [79] G. Berks, A. Ghassemi, and D. G. von Keyserlingk, "Spatial registration of digital brain atlases based on fuzzy set theory," Comput. Med. Imaging Graph., vol. 25, (Jan. 2001), 1-10. * [80] Y. Hata, S. Kobashi, S. Hirano, and M. Ishikawa, "Registration of multi-modality medical images by soft computing approach," in Proc. ICONIP'99, (1999), 878-883. * [81] L. Ramirez, N.G. Durdle, V.J. Raso, "A Parameters Selection Scheme for Medical Image Registration", Fuzzy Information Processing Society, NAFIPS 2006, Annual Meeting of the North American, (June 2006), 505-510. * [82] S.-J. Wu and P.-T. Chow, Genetic algorithms for nonlinear mixed discrete-integer optimization problems via meta-genetic parameter optimization, Engineering Optimization, vol. 24, no. 2, pp. 137-159, 1995. * [83] R. Eberhart and J. Kennedy, A new optimizer using particle swarm theory, in Proceedings of the 6th International Symposium on Micro Machine and Human Science (MHS '95), pp. 39-43, IEEE, Nagoya, Japan, October 1995. * [84] E. Rashedi, H. Nezamabadi-pour, S. Saryazdi, GSA: A Gravitational Search Algorithm, Information Sciences 179(13), 2232-2248 (2009). * [85] M. Dorigo and G. D. Caro, Ant algorithms for discrete optimization, Artificial Life, vol. 5, no. 3, (1999), pp. 137-172. * [86] M. Dorigo and L. M. Gambardella, Ant colony system: a cooperative learning approach to the traveling salesman problem, IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, (1997), pp. 53-66. * [87] C. Zhang and H.-P. Wang, Mixed-discrete nonlinear optimization with simulated annealing, Engineering Optimization, vol. 21, no. 4, pp. 277-291, 1993. * [88] E. H. L. Aarts, J. H. M. Korst, and P. J. M. van Laarhoven, Simulated annealing, in Local Search in Combinatorial Optimization, pp. 91-120, 1997. * [89] S. Nag, Adaptive Plant Propagation Algorithm for Solving Economic Load Dispatch Problem, arXiv preprint arXiv:1708.07040, 2017. * [90] S. Nag, A Type II Fuzzy Based Multi-Level Image Thresholding Using Adaptive Plant Propagation Algorithm, arXiv preprint arXiv:1708.09461, 2017. * [91] J.M. Rouet, J.J. Jacq, and C. Roux, "Genetic algorithms for a robust 3-D MR-CT registration," IEEE Trans. Inform. Technol. Biomed., vol. 4, (Jun. 2000), 126-136.
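To make the two area-based similarity measures of Eqs. (5) and (6) concrete, the following is a minimal NumPy sketch. It is our own illustration rather than an implementation from the surveyed literature; in particular, the histogram-based MI estimator, the choice of 32 intensity bins, and the toy images are assumptions.

```python
# Minimal sketch of the similarity measures in Eqs. (5) and (6).
import numpy as np

def correlation_coefficient(ref, sensed):
    """Normalized cross-correlation (Eq. 5) of two equally sized images."""
    x, y = ref.ravel().astype(float), sensed.ravel().astype(float)
    xc, yc = x - x.mean(), y - y.mean()
    return np.sum(xc * yc) / (np.sqrt(np.sum(xc**2)) * np.sqrt(np.sum(yc**2)))

def mutual_information(ref, sensed, bins=32):
    """Mutual information (Eq. 6), estimated from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(ref.ravel(), sensed.ravel(), bins=bins)
    p_xy = joint / joint.sum()                  # joint distribution p(x, y)
    p_x = p_xy.sum(axis=1, keepdims=True)       # marginal p1(x)
    p_y = p_xy.sum(axis=0, keepdims=True)       # marginal p2(y)
    nz = p_xy > 0                               # zero cells contribute nothing
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
print(correlation_coefficient(img, img))        # 1.0 for identical images
print(mutual_information(img, img))             # large: perfectly aligned
print(mutual_information(img, rng.integers(0, 256, (64, 64))))  # near 0: independent
```

On the toy data, MI is large for the self-comparison and close to zero for independent images, mirroring the behaviour described above.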
Image Registration is the process of aligning two or more images of the same scene with reference to a particular image. The images are captured from various sensors at different times and at multiple view-points. Thus, to get a better picture of any change of a scene or object over a considerable period of time, image registration is important. Image registration finds application in medical science, remote sensing and computer vision. This paper presents a detailed review of several registration approaches, classified accordingly, along with their contributions and drawbacks. The main steps of an image registration procedure are also discussed. Different performance measures are presented that determine the registration quality and accuracy. The scope for future research is presented as well. Keywords: image registration, classification, contribution, drawback, performance measures, registration quality, accuracy, future research.
Predicting with Limited Data - Increasing the Accuracy in VIS-NIR Diffuse Reflectance Spectroscopy by SMOTE ## 1 Introduction Diffuse reflectance spectroscopy in the visible and near-infrared range (VIS-NIR DRS) has proved to be useful to assess various soil properties [1]. It can be employed to provide more data rapidly and inexpensively compared to classical laboratory analysis. Therefore, DRS is increasingly used for vast soil surveys in agriculture and environmental research [2, 3]. Recently, several studies have shown the applicability of VIS-NIR DRS _in situ_ as a proximal soil sensing technique [4, 5]. To predict soil properties from soil spectra, a model is calibrated, often using partial least squares (PLS) regression. However, when calibration is based on air-dried spectra collected under laboratory conditions, predictions of soil properties from field spectra tend to be less accurate [4]. Usually, this decrease in accuracy is attributed to varying moisture between air-dried calibration samples and field spectra recorded with a variable moisture content. Different remediation techniques have been proposed, ranging from advanced preprocessing of the spectra [6] to "spiking" the calibration set with field spectra [4]. In our study, we adopt a slightly different view on the calibration problem. It does not only apply to the varying moisture conditions between the calibration data set and the field spectra. Indeed, it is also valid if we want to predict soil properties in a range where calibration samples are rare. Mining with rarity or learning from imbalanced data is an ongoing research topic in Machine Learning [7]. In imbalanced data sets, frequent samples outnumber the rare ones. Therefore, a model will be better at predicting the former and might fail for the latter. Two different approaches exist to take care of the data imbalance: we can either adjust the model or "balance" the data. The latter approach has the advantage that we can use the usual modelling framework. The synthetic minority oversampling technique (SMOTE) is one way to balance the data. It was first proposed for classification [8] and recently for regression [9]. SMOTE oversamples the rare data by generating synthetic points and thus helps to equalize the distribution. In this study, we propose a strategy to increase the prediction accuracy of soil properties from field spectra when they are rare in calibration. The goal of this study is to build a calibration model to predict soil organic carbon content (SOCC) from field spectra by air-dried samples spiked with synthetic field spectra. ## 2 Material and Methods ### Data acquisition The studied soil was sampled at the southern slopes of Mt. Kilimanjaro, Tanzania (3\\({}^{\\circ}\\) 4\\({}^{\\prime}\\) 33\\({}^{\\prime\\prime}\\) S, 37\\({}^{\\circ}\\) 21\\({}^{\\prime}\\) 12\\({}^{\\prime\\prime}\\) E) in coffee plantations. Due to favourable soil and climate in this region, extensive coffee plantations constitute a frequent form of land use. We took 31 samples for calibration at 4 different study sites. For validation, we scanned 12 field spectra at a wall of a soil pit and sampled soil material for chemical analysis at the scanned spots. We call these validation field spectra F. After collection, the calibration samples were dried in an oven at 45\\({}^{\\circ}\\)C and sieved \\(<\\) 2 mm. 
Subsequently, they were scanned with an AgriSpec portable spectrophotometer equipped with a Contact Probe (Analytical Spectral Devices, Boulder, Colorado) in the range 350-2500 nm with 1 nm intervals. The same spectrometer was used in the field. The instrument was calibrated with a Spectralon white tile before scanning the soil samples. For the measurement, a thoroughly mixed aliquot of the sample was placed in a small cup and the surface was smoothed with a spatula. Each sample was scanned 30 times and the signal averaged to reduce the noise. In the following, we call this calibration data set L. SOCC was measured in a CNS-Analyser by high temperature combustion with conductivity detectors. ### Generating data by synthetic minority oversampling To generate new data to spike the calibration data set L, we used SMOTE [8] and its extension for regression [9]. This algorithm consists of generating new synthetic data using existing data and is summarized below. In our case, we generated new spectra and the related SOCC using the field spectra F. The new spectra are created by calculating the difference between a field spectrum and one of its nearest neighbours and adding this difference (weighted by a random number between 0 and 1) to the field spectrum. The SOCC of the synthetic spectrum is then a weighted average between the SOCC of the field spectrum and the used nearest neighbour. SMOTE has two parameters, namely \\(N\\), the number of points to generate for each existing point (given in percent of the whole data set) and \\(k\\), the number of nearest neighbours. To study the influence of these parameters we generated six different synthetic data sets S1 through S6, varying \\(N=100,200,300\\) and \\(k=3,5\\). ### Data pretreatment and explorative analysis We corrected each spectrum (calibration, validation and synthetic) for the offset at 1000 and 1830 nm and kept only parts with a high signal-to-noise ratio (450-2400 nm). Then, we transformed the spectra to absorbance \\((\\log_{10}(1/\\mathrm{reflectance}))\\) and smoothed them using Singular Spectrum Analysis (SSA). SSA is a non-parametric technique to decompose a signal into additive components that can be identified as the signal itself or as noise [10]. Finally, we divided each spectrum by its maximum and calculated the first derivative. In order to assess similarities between the calibration, validation and synthetic data sets, we calculated the Principal Component Analysis (PCA) of the (uncorrected original) spectra L and F and projected the synthetic data into the space spanned by the principal components. ### Partial least squares regression We calibrated seven different PLS models. For model I we used the data set L, the spectra scanned under laboratory conditions. Models II through VII were calibrated on L spiked with synthetic spectra S1 through S6. To find the best model I through VII, we varied the number of PLS components between 1 and 15. Based on the predictions in the leave-one-out cross-validation (LOOCV) we calculated the corrected Akaike Information Criterion [11], \\(\\text{AIC}_{\\mathrm{c}}=n\\ln(\\mathrm{RMSE}^{2})+2p+\\frac{2p(p+1)}{n-p-1}\\), where \\(n\\) is the number of calibration samples, \\(p\\) the number of PLS components and \\(\\mathrm{RMSE}\\) the root mean-squared error. The latter is defined as \\(\\mathrm{RMSE}=\\sqrt{\\frac{1}{n}\\sum_{i=1}^{n}(\\hat{y}_{i}-y_{i})^{2}}\\), where \\(\\hat{y}_{i}\\) are the predicted and \\(y_{i}\\) the measured SOCCs. 
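As an aside, the spiking step of Sec. 2.2 can be sketched in a few lines. This is a plausible reading of the described procedure, not the authors' code: the array names are ours, scikit-learn's nearest-neighbour search is an assumed convenience, and we reuse the random interpolation weight for the weighted average of the SOCC values.

```python
# Sketch of SMOTE for spectra (Sec. 2.2): interpolate between a field spectrum
# and one of its k nearest neighbours; the target is interpolated accordingly.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_spectra(spectra, socc, N=300, k=5, seed=42):
    """spectra: (n, n_wavelengths) field spectra F; socc: (n,) target values."""
    rng = np.random.default_rng(seed)
    n = len(spectra)
    n_new = int(N / 100 * n)                    # N is given in percent
    nn = NearestNeighbors(n_neighbors=k + 1).fit(spectra)
    _, idx = nn.kneighbors(spectra)             # idx[:, 0] is the point itself
    new_X, new_y = [], []
    for _ in range(n_new):
        i = rng.integers(n)                     # pick a field spectrum (with replacement)
        j = idx[i, rng.integers(1, k + 1)]      # one of its k nearest neighbours
        w = rng.random()                        # random weight in [0, 1)
        new_X.append(spectra[i] + w * (spectra[j] - spectra[i]))
        new_y.append((1 - w) * socc[i] + w * socc[j])
    return np.asarray(new_X), np.asarray(new_y)
```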
We selected the model with the smallest \\(\\text{AIC}_{\\mathrm{c}}\\) as the most plausible. To assess the model quality, we used the \\(\\mathrm{RMSE}\\), the mean error \\(\\mathrm{ME}=\\frac{1}{n}\\sum_{i=1}^{n}(\\hat{y}_{i}-y_{i})\\) and the coefficient of determination \\(R^{2}=1-\\sum_{i=1}^{n}(y_{i}-\\hat{y}_{i})^{2}/\\sum_{i=1}^{n}(y_{i}-\\bar{y})^{2}\\), where \\(\\bar{y}\\) is the mean SOCC. ### Monte Carlo simulations SMOTE has two random components because it selects spectra randomly (with replacement) among the nearest neighbours and weights the difference between spectra by a random number (between 0 and 1). To study the influence of these random steps, we generated 100 different data sets S1 through S6. Each data set was then used to spike the calibration data set L, to build a new PLS model and to predict the data set F. ## 3 Results and Discussion The first two principal components (PCs) explain 85.4% and 11.2% of variance, respectively. We can clearly identify two distinct groups of samples: the calibration data set L and the field spectra F (Fig. 1). In other words, the data sets L and F differ. The synthetic points that were projected into the space spanned by the PCs resemble the field spectra, as expected. The distinct characteristics of the data sets L and F accord well with the difficulties to predict the data set F by using the laboratory spectra L only (Table 1 and Table 2). Although the LOOCV of model I yields a moderate \\(\\mathrm{RMSE}\\) and a large \\(R^{2}\\), the validation on the data set F fails. Spiking the calibration data set L with synthetic spectra increases the prediction accuracy of the SOCC in the data set F. Indeed, the \\(\\mathrm{RMSE}\\) decreases and \\(R^{2}\\) increases with increasing number of synthetic points, both for the LOOCV and the validation (Table 1 and Table 2). However, the number of model parameters also increases from 2 to 7. The Monte Carlo results show only a small variability in the interquartile range. However, some synthetic data sets in model V produced \\(R^{2}\\) values smaller than \\(-\\)0.53, the value we obtain in model I on air-dried samples only. This might be due to the combination of neighbours during smoting. In general, models with 5 neighbours were more accurate than those with 3 neighbours. However, the number of neighbours had a smaller influence on the prediction accuracy than the number of synthetic points. It is difficult to decide _a priori_ how many synthetic points should be included in the calibration. Indeed, in a classification problem the goal is to approximate an equal distribution of different classes such that the rare class becomes an ordinary one. In regression, however, we do not know which features of the data make them rare. For our data, the range of SOCC in the data set L is larger than in the validation data set F. ## 4 Conclusions We proposed to increase the prediction accuracy for field spectra that are rare in calibration by spiking the calibration data with synthetic spectra generated from these field spectra. In general, the prediction accuracy increases when a sufficient number of synthetic points is included in the calibration. However, because it is difficult to determine this number _a priori_, we recommend generating several synthetic data sets to find an appropriate model. ## Acknowledgements This study is part of the project DFG FOR 1246 "Kilimanjaro ecosystems under global change: Linking biodiversity, biotic interactions and biogeochemical ecosystem processes" and was supported by the Deutsche Forschungsgemeinschaft. ## References * [1] B. Stenberg and R. A. 
Viscarra Rossel, \"Diffuse reflectance spectroscopy for high-resolution soil sensing,\" in _Proximal Soil Sensing_, R. A. Viscarra Rossel, A. B. McBratney, and B. Minasny, Eds., pp. 29-47. Springer, 2010. * enabling an evidence-based diagnostic surveillance approach to agricultural and environmental management in developing countries,\" _Journal of near Infrared Spectroscopy_, vol. 15, no. 1, pp. 1-19, 2007. * [3] T.-G. Vagen, K. D. Shepherd, and M. G. Walsh, \"Sensing landscape level change in soil fertility following deforestation and conversion in the highlands of Madagascar using Vis-NIR spectroscopy,\" _Geoderma_, vol. 133, no. 3, pp. 281-294, 2006. * [4] R. A. Viscarra Rossel, S. R. Cattle, A. Ortega, and Y. Fouad, \"In situ measurements of soil colour, mineral composition and clay content by vis-nir spectroscopy,\" _Geoderma_, vol. 150, no. 3, pp. 253-266, 2009. * [5] T. H. Waiser, C. L. S. Morgan, D. J. Brown, and C. T. Hallmark, \"In situ characterization of soil clay content with visible near-infrared diffuse reflectance spectroscopy,\" _Soil Science Society of America Journal_, vol. 71, no. 2, pp. 389-396, 2007. * [6] B. Minasny, A. B. McBratney, V. Bellon-Maurel, J.-M. Roger, A. Gobrecht, L. Ferrand, and S. Joalland, \"Removing the effect of soil moisture from nir diffuse reflectance spectra for the prediction of soil organic carbon,\" _Geoderma_, vol. 167, pp. 118-124, 2011. * [7] G. M. Weiss, \"Mining with rarity: A unifying framework,\" _Sigkdd Explorations_, vol. 6, pp. 1-19, 2004. * [8] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, \"Smote: Synthetic minority over-sampling technique,\" _Journal of Artificial Intelligence Research_, vol. 16, pp. 321-357, 2002. * [9] Luis Torgo, Rita P Ribeiro, Bernhard Pfahringer, and Paula Branco, \"Smote for regression,\" in _Progress in Artificial Intelligence_, pp. 378-389. Springer, 2013. * [10] Nina Golyandina and Anatoly Zhigljavsky, _Singular spectrum analysis for time series_, Springer, 2013. * [11] N. Sugiura, \"Further analysts of the data by akaike's information criterion and the finite corrections,\" _Communications in Statistics-Theory and Methods_, vol. 7, no. 1, pp. 13-26, 1978. \\begin{table} \\begin{tabular}{l l r r r r r r} \\hline \\hline Model & Data set(s) & \\(N(\\%)\\) & \\(k\\) & \\(p\\) & \\(\\mathrm{RMSE}\\) (mg g\\({}^{-1}\\)) & \\(R^{2}\\) & ME (mg g\\({}^{-1}\\)) \\\\ \\hline I & L & – & – & 2 & 6.25 & 0.77 & –0.20 & \\\\ II & L and S1 & 100 & 3 & 5 (4; 5) & 5.29 (5.18; 5.47) & 0.80 (0.79; 0.81) & –0.06 (\\(-\\)0.10; \\(-\\)0.01) & \\\\ III & L and S2 & 200 & 3 & 6 (6; 6) & 4.51 (4.47; 4.56) & 0.83 (0.83; 0.84) & 0.07 ( 0.03; 0.11) & \\\\ IV & L and S3 & 300 & 3 & 7 (6; 7) & 4.01 (3.98; 4.06) & 0.85 (0.84; 0.85) & 0.08 ( 0.05; 0.11) & \\\\ V & L and S4 & 100 & 5 & 4 (3; 5) & 5.31 (5.16; 5.55) & 0.80 (0.78; 0.81) & –0.02 (\\(-\\)0.10; 0.04) & \\\\ VI & L and S5 & 200 & 5 & 6 (6; 6) & 4.51 (4.45; 4.55) & 0.83 (0.83; 0.84) & 0.06 ( 0.01; 0.10) & \\\\ VII & L and S6 & 300 & 5 & 6 (6; 7) & 4.05 (4.02; 4.08) & 0.84 (0.84; 0.85) & 0.07 ( 0.05; 0.09) & \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: Statistics of the PLS calibration. Median values and 25% and 75% quantiles in parenthesis. 
\\begin{table} \\begin{tabular}{l l l l} \\hline \\hline Model & \\(\\mathrm{RMSE}\\) (mg g\\({}^{-1}\\)) & \\(R^{2}\\) & ME (mg g\\({}^{-1}\\)) \\\\ \\hline I & 6.18 & \\(-\\)0.53 & \\(-\\)3.88 \\\\ II & 3.09 (2.82; 3.58) & 0.62 (0.49; 0.68) & \\(-\\)0.03 (\\(-\\)0.53; 0.79) \\\\ III & 2.00 (1.79; 2.40) & 0.84 (0.77; 0.87) & 0.14 (\\(-\\)0.01; 0.36) \\\\ IV & 1.31 (1.08; 1.58) & 0.93 (0.90; 0.95) & 0.16 ( 0.06; 0.27) \\\\ V & 3.06 (2.79; 3.56) & 0.62 (0.49; 0.69) & \\(-\\)0.28 (\\(-\\)0.70; 0.79) \\\\ VI & 2.12 (1.81; 2.39) & 0.82 (0.77; 0.87) & 0.24 (\\(-\\)0.04; 0.48) \\\\ VII & 1.62 (1.29; 2.07) & 0.89 (0.83; 0.93) & 0.18 ( 0.02; 0.37) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: Statistics of the PLS validation. Median values and 25% and 75% quantiles in parenthesis.
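For concreteness, the \\(\\text{AIC}_{\\mathrm{c}}\\)-based model selection of Sec. 2.4 can be sketched as follows. Apart from \\(n=31\\) calibration samples and the LOOCV RMSE of model I from Table 1, the error values below are hypothetical placeholders.

```python
# Sketch of the model selection in Sec. 2.4: compute the corrected AIC from the
# LOOCV RMSE for each candidate number of PLS components and keep the smallest.
import numpy as np

def aicc(rmse, n, p):
    """Corrected Akaike Information Criterion for n samples, p PLS components."""
    return n * np.log(rmse**2) + 2 * p + 2 * p * (p + 1) / (n - p - 1)

n = 31                                        # calibration samples in data set L
loocv_rmse = {1: 8.9, 2: 6.25, 3: 6.4, 4: 6.8}  # hypothetical, except p=2 (model I)
best_p = min(loocv_rmse, key=lambda p: aicc(loocv_rmse[p], n, p))
print(best_p)
```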
Diffuse reflectance spectroscopy is a powerful technique to predict soil properties. It can be used _in situ_ to provide data inexpensively and rapidly compared to the standard laboratory measurements. Because most spectral databases contain air-dried samples scanned in the laboratory, field spectra acquired _in situ_ are either absent or rare in calibration data sets. However, when models are calibrated on air-dried spectra, predictions using field spectra are often inaccurate. We propose a framework to calibrate partial least squares models when field spectra are rare using the synthetic minority oversampling technique (SMOTE). We calibrated a model to predict soil organic carbon content using air-dried spectra spiked with synthetic field spectra. The root mean-squared error of prediction decreased from 6.18 to 2.12 mg g\\({}^{-1}\\) and \\(R^{2}\\) increased from \\(-0.53\\) to 0.82 compared to the model calibrated on air-dried spectra only. Christina Bogner, Ecological Modelling, BayCEER, University of Bayreuth, Dr.-Hans-Frisch-Str. 1-3, 95445 Bayreuth, Germany. Anna Kühnel, Bernd Huwe, Soil Physics Group, BayCEER, University of Bayreuth, Universitätsstr. 30, 95447 Bayreuth, Germany. Keywords: diffuse reflectance spectroscopy, soil, partial least squares, calibration, SMOTE.
Evidence for positive long- and short-term effects of vaccinations against COVID-19 in wearable sensor metrics -- Insights from the German Corona Data Donation Project Marc Wiedermann [email protected] Institute for Theoretical Biology and Integrated Research Institute for the Life Sciences, Humboldt University of Berlin, Philippstr. 13, 10115 Berlin, Germany Annika H. Rose Institute for Theoretical Biology and Integrated Research Institute for the Life Sciences, Humboldt University of Berlin, Philippstr. 13, 10115 Berlin, Germany Robert Koch Institute, Nordufer 20, 13353 Berlin, Germany Benjamin F. Maier Institute for Theoretical Biology and Integrated Research Institute for the Life Sciences, Humboldt University of Berlin, Philippstr. 13, 10115 Berlin, Germany Robert Koch Institute, Nordufer 20, 13353 Berlin, Germany Jakob J. Kolb Institute for Theoretical Biology and Integrated Research Institute for the Life Sciences, Humboldt University of Berlin, Philippstr. 13, 10115 Berlin, Germany Robert Koch Institute, Nordufer 20, 13353 Berlin, Germany David Hinrichs Institute for Theoretical Biology and Integrated Research Institute for the Life Sciences, Humboldt University of Berlin, Philippstr. 13, 10115 Berlin, Germany Robert Koch Institute, Nordufer 20, 13353 Berlin, Germany Dirk Brockmann Institute for Theoretical Biology and Integrated Research Institute for the Life Sciences, Humboldt University of Berlin, Philippstr. 13, 10115 Berlin, Germany May 19, 2022 ## I Introduction COVID-19, the disease caused by infection with SARS-CoV-2, is usually accompanied by symptoms such as fever, cough, sore throat, shortness of breath, and fatigue [1]. These symptoms are most prevalent during the acute phase of the disease, commonly defined as the four weeks following symptom onset [2]. However, in some instances, these symptoms, along with a wide range of others, can persist, develop, or recur for weeks to months beyond this acute phase [3; 4], which is known as post-acute sequelae of SARS-CoV-2 infection (PASC) or, more commonly, Long COVID [2; 5; 6; 7; 8]. For instance, results from the UK-based REACT-2 study indicate that 15% of surveyed people reported at least three COVID-19 related symptoms lasting for 12 weeks or more, while 38% reported at least one [9; 10]. A follow-up study of COVID-19 patients discharged from a hospital in Wuhan, China revealed that 76% of people reported at least one symptom 6 months after infection, 63% of whom specifically reported fatigue [11], one of the most prevalent Long COVID symptoms [2; 12; 13]. Other Long COVID symptoms include cognitive dysfunction, confusion, and brain fog as well as chest pain, shortness of breath, head- and muscle aches, dizziness, and heart palpitations [12; 14]. They can be experienced by all age groups and are not exclusive to people with a severe course of the disease in its acute phase [10], though some indication exists that the risk of contracting Long COVID can be linked to the number and severity of symptoms experienced at the start of the illness [14]. Generally, such long-term effects are not unique to COVID-19, but have also been reported for Middle East Respiratory Syndrome (MERS) and Severe Acute Respiratory Syndrome (SARS) [15; 16; 17; 18]. Vaccinations are effective against infection with SARS-CoV-2, hospitalization, ICU admission, and death following COVID-19 illness [19; 20; 21]. 
However, numerous reports of breakthrough infections after vaccination, especially for the two variants of concern B.1.617.2 (Delta) and B.1.1.529 (Omicron), have raised public concern [22]. Still, a large UK-based cohort study reported reduced risk of hospitalization or having more than five symptoms in the first week of illness after receiving at least one vaccination dose [23]. Almost all symptoms were reported less frequently for breakthrough infections compared to infected unvaccinated individuals, and vaccinated individuals were more likely to be fully asymptomatic [23]. Vaccinations have also been shown to reduce the need for emergency care/hospitalization following breakthrough infection nearly ten-fold in a US population in Michigan [24]. Similar results were obtained from studies in Qatar [25], Spain [26], Italy [27] and Israel [28], which consistently showed substantial reductions in severe cases between 38% and 91%. Likewise, high vaccine efficacies between 72% against the B.1.351 variant (Beta) [29] and over 90% for the B.1.617.2 variant (Delta) [28; 30] were reported, even though immunity waned across age groups a few months after the second vaccine dose [31; 32]. Although vaccines lower the risk of symptomatic and severe cases of breakthrough COVID-19 infection, it remains a subject of contention in COVID research whether they also attenuate or prevent symptoms associated with PASC/Long COVID [33]. For instance, in an unrepresentative sample of 1,949 participants in a Long COVID Facebook group poll, 24 people reported Long COVID symptoms after symptomatic breakthrough infections [34]. A study of 1,500 people in Israel found that 19% of recorded breakthrough cases resulted in symptoms that lasted longer than 6 months [35] and a study of around 6,000 adolescents found increased risk of prolonged symptoms even when people were fully vaccinated [36]. However, the comparatively small sample sizes in these studies effectively prevent general conclusions from being drawn [33]. Among studies with larger sample sizes, an analysis of 1,240,009 users of the UK-based _COVID Symptom Study app_ in fact revealed a reduced risk of developing symptoms that last more than 28 days following a second vaccination dose [23]. This finding is contradicted by another retrospective study of 10,024 vaccinated COVID-19 positive individuals in the UK that found no significant reduction in symptoms related to Long COVID [37]. Here, we provide evidence that vaccination against COVID-19 may significantly reduce the likelihood of developing long-term symptoms following an infection with SARS-CoV-2. We use large-scale daily data on resting heart rate, physical activity and sleep collected over a period of more than two years, see Fig. 1. This data was collected as part of the Corona Data Donation Project (Corona-Datenspende-App (CDA), [4]), a smartphone app that allows users to submit such vital data by linking with consumer-grade wearable sensors, such as smartwatches and fitness trackers. The app was developed at the Robert Koch Institute, Germany's federal public health institute, and released in Germany on April 12, 2020 to participants of age 16 and older. As of April 3, 2022, a total of 1,177,636 people installed the CDA and 524,761 users submitted at least one data point. In March 2022, the app had approximately 190,000 monthly active users, and more than 120,000 people have submitted daily data over more than 600 days. 
Additionally, users can optionally participate in in-app surveys, which include questions regarding diagnostic test results and vaccination data. The large sample size, high temporal resolution and comparatively long observation period permit continuous tracking of an individual's biometrics and enable a fine-grained analysis of vital signs prior to a SARS-CoV-2 infection, throughout the acute phase of the disease, and during the recovery process. As such, the combination of survey and vital data enables analysis of long-term physiological and behavioral changes in COVID-19 positive and -negative individuals stratified by vaccination status. The systematic use of large-scale data collected via commercially available fitness trackers and smartwatches is a rapidly growing line of data-driven medical research [39; 40; 41]. Wearable sensor data has been applied to study physiological markers of depression [42; 43], characterize daily physiology and circadian rhythms [44], and improve surveillance of influenza-like illness [45]. In the COVID-19 context, this approach has been applied to the early detection of COVID-19 in individuals [46], predicting overall case numbers and changes in trends [47], and discriminating COVID-19 positive from negative individuals [48; 49]. Moreover, studies conducted prior to sufficient availability of vaccination data were able to link infections with SARS-CoV-2 to elevated resting heart rate that only returned to baseline levels an average of 79 days after symptom onset [50].

Figure 1: Exemplary time series of daily resting heart rate (RHR, top), physical activity (middle) and sleep duration (bottom) for a time window of 150 days in a representative individual who was unvaccinated (left) and vaccinated (right) at the time of taking a positive PCR-test (grey shading). Dashed lines denote the user's baseline, i.e., the average of the 60 days prior to the test. We observe a strong peak in RHR, a drop in physical activity and increased sleep duration for the unvaccinated individual around the time of the test. Similar patterns are observed for the vaccinated individual with respect to physical activity and sleep duration, the latter change being less pronounced. A visible change of RHR is absent for the vaccinated individual around the week of the test.

## II Results

We computed weekly changes in resting heart rate (RHR), daily steps, and sleep duration around the date of a COVID-19 PCR-test for a total of 8,097 individuals, of which 2,240 experienced a breakthrough infection, 318 were infected prior to achieving immunity, and 5,539 reported negative PCR-tests, thus serving as a control group (see SI, Fig. S6). The per-user baseline against which changes are computed is given by the respective average of each variable over the 8 weeks preceding a confirmed infection with SARS-CoV-2, indicated by the approximate date of a PCR-test (given at a weekly temporal resolution). Signals were normalized by subtracting the daily average from all individual time series to account for seasonal effects, e.g. naturally prolonged sleep duration in winter and increased activity in summer. Thus, individual vital data is always measured relative to the population-wide average on a given day. Further details on the characteristics, pre-processing, and analysis of the data are provided in the Materials & Methods Section. 
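The preprocessing summarized above (and detailed in Materials & Methods) can be sketched as follows, assuming a long-format table of daily values. The column names, DataFrame layout, and pandas-based implementation are our assumptions, not the project's actual pipeline.

```python
# Sketch: seasonal de-trending, weekly binning, and baseline subtraction for RHR.
import pandas as pd

def weekly_rhr_deviations(df: pd.DataFrame, tests: pd.Series) -> pd.DataFrame:
    """df: daily records with columns ['user_id', 'date', 'rhr'] (date as datetime);
    tests: per-user PCR-test dates, indexed by user_id."""
    df = df.copy()
    # 1. Subtract the population-wide average of each calendar day (seasonality).
    df['anom'] = df['rhr'] - df.groupby('date')['rhr'].transform('mean')
    # 2. Weekly bins; keep a week only if at least six daily values are present.
    df['week'] = df['date'].dt.to_period('W')
    weekly = (df.groupby(['user_id', 'week'])['anom']
                .agg(['mean', 'count'])
                .query('count >= 6')['mean']
                .unstack('week'))
    # 3. Per-user baseline: average over the eight weeks preceding the test week.
    out = {}
    for uid, row in weekly.iterrows():
        week0 = tests.loc[uid].to_period('W')
        baseline = row.loc[week0 - 8: week0 - 1].mean()
        out[uid] = row - baseline
    return pd.DataFrame(out).T   # users x weeks, deviations from baseline
```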
### Vaccinations mitigate long-term vital changes in COVID-19 positive individuals

We first evaluated the evolution of average changes in weekly RHR, step count, and sleep duration in the weeks following a PCR-test separately for each user cohort, depicted in Fig. 2. On average, the RHR of unvaccinated users with SARS-CoV-2 infection increased by \\(\\sim\\)1.7 beats per minute in the week of the PCR-test and only returned to baseline levels after 11 weeks (Fig. 2A). This finding qualitatively confirms similar results obtained in earlier studies [50; 51] that did not specifically differentiate by vaccination status. We found a pronounced drop in RHR at around one week after a PCR-test with values that decreased even below baseline for vaccinated users. This aligns with earlier studies [50] and potentially indicates transient bradycardia following infection [52]. RHR of unvaccinated individuals already increased significantly in the week preceding a PCR-test at values of \\(\\sim\\)1.1 beats per minute above normal. We found weaker average deviations in RHR for vaccinated individuals with a maximum value of \\(\\sim\\)1.2 beats per minute in the week of the PCR-test. These deviations were accompanied by a swifter return to baseline levels after approximately 3 to 6 weeks. Except for the two weeks following a PCR-test, the average RHR-change for vaccinated individuals was approximately two to three times lower than for those that were unvaccinated (Fig. 2A), potentially indicating a milder course of the disease on average. The average daily activity (Fig. 2B) decreased in the week of the PCR-test by \\(\\sim\\)2,000 and \\(\\sim\\)3,000 steps per day for vaccinated and unvaccinated individuals, respectively. For both groups, the reduction usually began the week of a positive PCR-test, and thus might have been partially modulated by changes in behavior, i.e. self-isolation. A return to baseline activity among vaccinated individuals occurred after only 4 weeks compared to around 6 to 11 weeks for a small subset of unvaccinated users, indicating that at least some suffered from a prolonged reduction in activity, thereby skewing the mean towards negative values. Finally, the average sleep duration of unvaccinated individuals increased abruptly by \\(\\sim\\)37 minutes per day during the week of the PCR-test. For vaccinated individuals, this effect was reduced by more than half to an average of only \\(\\sim\\)15 minutes per day. Similar magnitudes were observed in the first week following a PCR-test, where sleep duration was increased by \\(\\sim\\)46 and \\(\\sim\\)24 minutes per day for unvaccinated and vaccinated users, respectively. Sleep duration returned to baseline values quickly for both user groups as compared to activity or RHR. By the third week, anomalies in sleep duration were comparable with that of the COVID-19 negative control group.

Figure 2: Changes in RHR, activity, and sleep duration in unvaccinated infected (red) and vaccinated infected (purple) as well as negative controls (blue). Changes are measured relative to the two months preceding the test. Error bars indicate standard error. Filled (empty) red stars indicate periods where the average vital change of unvaccinated individuals is stronger than that of vaccinated (negative) individuals using a one-sided Welch t-test with a significance level of \\(\\alpha=0.01\\). Purple stars indicate significant differences between vaccinated and COVID-19 negative individuals. 
Still, we found that the average increase in the need for rest during the acute phase of the disease was approximately twice as large for unvaccinated as for vaccinated individuals. We also found small variations in RHR, activity, and sleep duration in COVID-19 negative individuals around the test period, which might be caused by other diseases, such as influenza or a common cold, that might have caused people to take a PCR-test in the first place. ### Distribution of extreme vital changes in the acute phase As the most prominent changes were observed during the acute phase of the disease, cf. Fig. 2, we next investigated the distributions of weekly increases in RHR and sleep duration and decreases in activity for each cohort in the four weeks following a PCR-test, Fig. 3. For all metrics, the frequencies of changes decayed continuously with increasing values. As expected, the smallest changes (less than 1 bpm/day of RHR change, less than 1,000 steps reduction in activity/day, and less than 15 minutes of additional sleep/day) were most commonly observed in the COVID-19 negative cohort. Likewise, between 40% and 50% of all COVID-19 positive individuals experienced only small changes in all three metrics, likely indicating comparatively mild courses. Note that these numbers are well below the commonly reported percentage of mild and moderate cases of at least 80% [53; 54] while exceeding rough estimates (\\(\\sim\\)41%) for asymptomatic infections in confirmed COVID-19 cases. Hence, we found a substantial number of individuals with little or no changes in their vital data during SARS-CoV-2 infection. The observed frequencies of larger and more extreme vital changes in the unvaccinated cohort were consistently higher than those measured for vaccinated and COVID-19 negative individuals, again indicating increased likelihood for a severe course in unvaccinated individuals, Fig. 3. \\begin{table} \\begin{tabular}{l|c c c|c} & Vaccinated & Unvaccinated & Negative & Total \\\\ \\hline \\hline Female & 901 (40.22\\%) & 130 (40.88\\%) & 2146 (38.74\\%) & 3177 (39.24\\%) \\\\ Male & 1330 (59.38\\%) & 185 (58.18\\%) & 3349 (60.46\\%) & 4864 (60.07\\%) \\\\ Other & 9 (0.40\\%) & 3 (0.94\\%) & 44 (0.79\\%) & 56 (0.69\\%) \\\\ Age (mean) & 47.09yr & 47.97yr & 50.09yr & 49.17yr \\\\ Age (std) & 11.37yr & 11.73yr & 12.37yr & 12.15yr \\\\ \\end{tabular} \\end{table} Table 1: Number of users per gender as well as mean and standard deviation of age in the respective cohorts under study. Percentages in parentheses indicate the respective share of users within the specific cohort. Figure 3: Relative frequency of weekly average vital changes in weeks zero to four following a PCR-test for vaccinated and unvaccinated COVID-19 positive individuals as well as COVID-19 negative individuals. Numbers on the vertical axis indicate the center of each bin. Values outside the specific bins are added to the smallest and largest bin, respectively. 
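The binning underlying Fig. 3 amounts to pooling each cohort's weekly changes and normalizing histogram counts, with out-of-range values collected in the outermost bins. A minimal sketch, with hypothetical bin edges and stand-in data:

```python
# Sketch of the relative-frequency binning behind Fig. 3.
import numpy as np

def relative_frequencies(changes, edges):
    clipped = np.clip(changes, edges[0], edges[-1])  # pool out-of-range values
    counts, _ = np.histogram(clipped, bins=edges)
    return counts / counts.sum()

edges = np.arange(-2.5, 12.6, 1.0)  # hypothetical 1 bpm-wide bins
rhr_changes = np.random.default_rng(1).normal(1.0, 2.0, size=500)  # stand-in data
print(relative_frequencies(rhr_changes, edges))
```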
Specifically, we counted individuals whose change in RHR exceeded 5 bpm/day, which is indicative of an increase of half a degree in body temperature [55]. Moreover, we chose thresholds to define extreme activity reduction of 5,000 steps/day as well as pronounced sleep prolongation of more than 1 hour per day, both of which well exceed the maximum observed average change in Fig. 2 while still yielding reasonable frequencies in Fig. 3. In the unvaccinated group, the frequency of more than 5 bpm/day RHR change varied between 17.5% in the week of and 10% in the fourth week after a positive PCR-test, Fig. 4A. For weeks zero, one, and three those frequencies were significantly larger than those of the vaccinated cohort. The same held true for weeks two and four when comparing the unvaccinated cohort with the COVID-19 negative control group. Only from week five onward were extreme RHR changes equally likely in COVID-19 positive individuals as in the negative control cohort. In contrast, in the vaccinated cohort, extreme RHR changes were only significantly more common than controls during the week of the PCR-test. However, we note that, even in the week of the PCR-test, the respective frequency was still only approximately half as large as for the unvaccinated cohort. Likewise, we found significantly increased prevalence of drastic reduction in activity for both the vaccinated and unvaccinated cohorts in the first 2-3 weeks following a PCR-test when compared to the COVID-19 negative control group, Fig. 4B. In addition, the prevalence in the unvaccinated cohort was significantly larger compared to the vaccinated group in all significant weeks, again indicating a reduced risk of severe illness following full vaccination. Moreover, a small share (\\(\\sim\\)5%) of unvaccinated individuals already showed substantial activity reduction in the week prior to taking a PCR-test, indicating a potential precursor for the developing disease. No significant extreme reductions in activity were observed after the third week, Fig. 4B. Finally, we considered the frequency of individuals with an increased sleep duration of 1 hour/day, indicative of a strongly increased need for rest in the acute phase of COVID-19, Fig. 4C. Both COVID-19 positive cohorts showed greatly increased frequency in sleep prolongation during weeks zero and one following a PCR-test, with more than 30% of cases in the unvaccinated group and 10-20% of cases in the vaccinated cohort. Extended sleep duration was also significantly prominent in the unvaccinated cohort in the second week after a positive PCR-test with a prevalence of more than 20% compared to less than 10% in the vaccinated and the control cohorts. Hence, roughly 3 to 4 out of 10 unvaccinated individuals experienced an increased sleep duration of more than 1 hour/day for an extended period of 2 to 3 weeks. Figure 4: Share of donors in each cohort whose weekly average vital data exceeded a specified threshold. (A) Share of donors with more than 5 bpm/day RHR increase. (B) Share of donors with an activity reduction of more than 5,000 steps/day. (C) Share of donors with an increased sleep duration of more than 1 hour/day. Red, purple and blue bars indicate unvaccinated, vaccinated and COVID-19 negative donors, respectively. Error bars indicate the standard error of a binomial distribution. 
Filled (empty) red stars indicate periods where the respective prevalence in unvaccinated individuals was stronger than that in vaccinated (negative) individuals using a one-sided two proportion z-test with a significance level of \\(\\alpha=0.01\\). Purple stars indicate significant differences between vaccinated and COVID-19 negative individuals. ## III Discussion & Conclusion We analyzed changes in resting heart rate (RHR), physical activity, and sleep duration around the time of a PCR-test for 2,240 vaccinated and 318 unvaccinated COVID-19 positive individuals as well as 5,539 individuals in a COVID-19 negative control group. Participants in this study were self-recruited, often following media announcements. They submitted their vital data and meta-information, i.e., socio-demographics, PCR-test dates and results, and vaccination status, via the Robert Koch Institute's Corona-Datenspende smartphone app (CDA), downloadable free of charge for German residents over the age of 16. We found that average deviations and subsequent stabilizations in vital signals were most pronounced for unvaccinated individuals, with the longest normalization period spanning an average of 11 weeks post PCR-test week for both RHR and activity. Similar findings have been obtained in other studies that, although not explicitly stated, likely mostly considered unvaccinated individuals due to the scarcity of vaccines at the time [50, 51]. Average vital changes for vaccinated persons were less pronounced, albeit at times still significantly different from the COVID-19 negative control group. In addition to prolonged average changes, we found that extreme values were more likely to be observed for unvaccinated individuals in the acute phase of the disease when compared to vaccinated individuals or the negative cohort. Finally, we observed that both RHR as well as the step count of unvaccinated COVID-19 positive individuals, already differed significantly from the negative control group in the week prior to taking a PCR-test, hinting at its potential to serve as an early warning indicator of a coming illness [45, 46]. Our results provide further evidence that vaccinations can not only mitigate severe cases of acute COVID-19, which is in line with the broader literature [19, 20, 21, 23, 24, 25, 26, 27, 29], but also highlight their potential for attenuating long-term physiological and behavioral changes [33, 34, 36, 37]. Our results exemplify the great potential that lies in passive sensing for public health research as it fosters robust and large-scale analysis of long-term, high-resolution longitudinal data with interpretable metrics that can be linked to an individual's physiology [56, 57]. Our analysis comes with some limitations that need to be considered when contextualizing the above results. First and foremost, our analysis did not discriminate infections by the respective variant of concern (VOC) that was predominant at the time a PCR-test was taken. Furthermore, individuals that were designated as unvaccinated were primarily assigned to this cohort because, at the time of infection, vaccine availability was limited (see SI Sec. I). Hence, all unvaccinated donors were likely infected with B.1.1.7 (Alpha) or the wild-type which might have triggered different physiological responses than the later emergent variants, specifically B.1.617.2 (Delta) and, more recently, B.1.1.529 (Omicron). 
The majority of breakthrough infections in our data set were reported when these latter two variants were prevalent, meaning that the observed effects could partially also be explained by weaker physiological responses to Omicron [58]. To account for this effect, we performed an additional sensitivity analysis (see SI Sec. IIA) that only considered infections reported prior to December 15, 2021 and thus likely caused by Delta, which has been reported to cause more severe cases than Alpha [59, 60, 61]. When only considering this subset of users in the vaccinated cohort, we found that the results presented in Fig. 2 still held, indicating that vaccinated individuals likely infected with the Delta variant exhibited significantly weaker average changes in vital data compared to the unvaccinated group. We also compared average vital changes in that same cohort of vaccinated individuals infected before December 15, 2021 to individuals who reported infections after that date and, therefore, were likely infected with the Omicron VOC. We found hardly any significant differences in the temporal evolution of vital changes between the cohorts (see SI Sec. IIB). Hence, in the context of our analysis, it is reasonable to combine all recorded breakthrough infections into a single cohort for ease of interpretability and without having results skewed by the influence of a single variant of concern. By now, almost all users of the Corona Data Donation Project are at least fully vaccinated or have received a booster vaccination. Physiological responses to more recent variants of concern in unvaccinated individuals could, therefore, only be recorded if unvaccinated individuals were specifically recruited. In addition, we did not explicitly account for the time between the receipt of the latest vaccination dose and the date of breakthrough infection, which ignores the potential effects of waning immunity [62, 63]. However, most breakthrough infections in our data set were recorded in the first four months after achieving immunity (see SI Sec. III), likely indicating sufficient protection in the majority of the vaccinated cohort. We also note that none of our three cohorts is representative of the German population. Our sample shows a large over-representation of male individuals (see Table 1), as well as an under-representation of adolescent and elderly (65+) persons, see SI Sec. IV for the distribution of age groups. Moreover, there is good reason to assume that our study population is more health-conscious than the average population since the usage of fitness trackers is partially correlated with or, at least, facilitates awareness of health-related behavior [64]. Likewise, the cohorts might not be fully representative of one another, even though the basic proportions of gender and age match well across them, Table 1. There are many additional confounding factors that might influence the observed vital changes, such as pre-existing conditions, self-reported symptoms during the disease, socio-economic status, as well as demographics including age, sex, and body mass index. Due to the limited sample size, we refrained from performing any further discrimination along these potentially confounding factors, but such analyses could be performed in the future if the recorded cohorts increased in size. However, despite the above limitations, our analysis provides relevant insights regarding the efficacy of vaccines against long-term effects of COVID-19. 
Because vulnerable groups, such as the elderly, are under-represented, the results for unvaccinated individuals likely indicate a lower bound for the expected vital changes; a more representative cohort might therefore show even larger differences between our cohorts. Furthermore, we acknowledge that we cannot disentangle whether changes in vital data, especially activity and sleep duration, were caused by altered behavior in response to a positive diagnosis or whether those changes were an actual physiological imprint of an acute infection. While we likely observed a combination of both effects, it remains impossible to quantify their individual influence. However, we may assume that the mere effect of isolation reduces the opportunity for physical activity equally for unvaccinated and vaccinated individuals, making it an unlikely explanation for all the observed changes in daily activity. Still, across all metrics, the average deviations in step counts were most similar between the COVID-19 positive cohorts, indicating that these changes are, in fact, partially driven by self-isolation.

Future work should aim to reproduce and validate the results obtained in this study, ideally with data collected in a similar fashion, such as through the _DETECT_ [65] or _Evidation_ [66] systems in the US, as well as _TemPredict_, which covers a broad range of international users [67, 68]. We further propose to investigate and improve the representativeness of the user sample, e.g., by comparison with common health survey programs such as GEDA in Germany [69] or NHIS in the US [70]. One should then aim to specifically advertise for increased participation of currently under-represented groups, potentially by providing wearable devices to users who could otherwise not afford such devices and are therefore missing from the data set. We also suggest incorporating higher-frequency data recorded with a temporal resolution on the scale of minutes [44]. Such data would allow for the quantification of more subtle changes in physiology, such as postural orthostatic tachycardia syndrome (POTS) [71], another typical condition associated with Long COVID [72, 73]. After all, it is a unique advantage of wearable sensors that data can be measured over extended periods at high resolution and with minimal burden to the individual, making them a promising tool to complement traditional clinical methods for a data-driven approach to public health research [57, 65].

###### Acknowledgements.

We thank Michael Hallek, Christian Karagiannidis, and Christa Scheidt-Nave for helpful discussions during the preparation of this manuscript. Paul Burggraf and Hannes Schenk are gratefully acknowledged for their technical support in the data collection process. We also thank Claudia Enge and Lorenz Wascher for their continuous assistance regarding data privacy and data protection.

## Material & Methods

### Data Characteristics

Between April 12, 2020 and April 3, 2022, a total of 524,761 people installed the Corona-Datenspende App and submitted at least one vital data point. Of these users, 38,853 people agreed to also participate in regular surveys about COVID-19 test results, vaccination status, and other information relevant for pandemic research.
29,323 users submitted their vaccination status and the months of receiving doses, and 20,461 provided the week and result of their first positive PCR-test, or their first ever PCR-test if all results were negative. The overlap of users who submitted both vaccination status and test results comprised 16,693 people.

### Data Preprocessing

Due to inconsistent measurements in sleep duration for Apple devices following a manufacturer update on October 10, 2022, 325 affected users were removed from the data set. In addition, users who achieved immunity from a single dose of the vaccine Ad26.COV2.S (Janssen) were excluded from the analysis (338 users). Of this subset, we kept users who donated at least one vital data point between eight weeks preceding and twenty weeks following their PCR-test (12,198 users). We computed weekly averages if at least six data points were present in a given week. Users with fewer than three weeks of sufficient data preceding their test were dropped since no reliable baseline could be computed, leaving a total of 8,097 users. We considered all users who received at least two vaccination doses prior to their positive PCR-test as _vaccinated_ and all others _unvaccinated_, such that 2,240 individuals experienced a breakthrough infection, 318 were infected prior to achieving immunity, and 5,539 only reported negative PCR-tests. A detailed cohort diagram can be found in the SI, Fig. S6.

### Data Analysis

As a first step, we computed per-user anomalies of all vital data. For this purpose, we subtracted daily population-wide averages of resting heart rate, step count, and sleep duration from each user's time series in order to account for seasonal effects, such as increased activity in summer or prolonged sleep duration in winter. For increased accuracy, we used the data of all 16,368 users remaining in the study cohort after removing the Apple users with inconsistent sleep data (see above). After this transformation, a value of zero indicated that the user's vital measurement was on par with the population-wide average on a given day, while positive and negative values indicated above- and below-average values, respectively. Next, user data was down-sampled into weekly bins, matching the temporal resolution of the reported PCR-test dates. We then computed, for each user and vital metric, an individual baseline as the average over the 8 weeks prior to the PCR-test. We subtracted this baseline from each time series to obtain the corresponding deviations in vital signals. For all users, the time series were then aligned with the week of a PCR-test and averaged (for the results in Sec. II.1) or thresholded (for the results in Sec. II.3), depending on the desired analysis.

### Statistical analysis

For all discussions of differences in average vitals, we used a one-sided Welch t-test. For analyzing the prevalence of extreme vital changes, we applied a one-sided two-proportion z-test. For both tests, we used a significance level of \(\alpha=0.01\). One-sided tests were used since we focused on whether vital changes of unvaccinated individuals _exceeded_ those of the vaccinated or negative cohorts. Likewise, we were only interested in vital changes of vaccinated individuals if they _exceeded_ those measures observed for the negative control cohort.
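For concreteness, the following Python sketch outlines the per-user pipeline described above: seasonal de-trending against the population mean, weekly binning with the six-data-point requirement, baseline subtraction over the eight pre-test weeks, and the two hypothesis tests. It is a minimal illustration under our own assumptions (hypothetical column names such as `user_id`, `date`, `rhr`, and `test_date`; week alignment approximated with pandas' weekly resampling), not the project's actual code.

```python
import pandas as pd
from scipy.stats import ttest_ind
from statsmodels.stats.proportion import proportions_ztest

def weekly_deviations(df: pd.DataFrame, n_baseline_weeks: int = 8) -> pd.DataFrame:
    """df: one row per user and day with columns user_id, date, rhr, test_date."""
    # 1) Remove seasonal effects: subtract the population-wide daily mean.
    df["anomaly"] = df["rhr"] - df.groupby("date")["rhr"].transform("mean")

    rows = []
    for uid, g in df.groupby("user_id"):
        g = g.set_index("date").sort_index()
        # 2) Down-sample to weekly bins; require >= 6 daily values per week.
        weekly = g["anomaly"].resample("W").agg(["mean", "count"])
        weekly = weekly.loc[weekly["count"] >= 6, "mean"]
        # 3) Align weeks relative to the (approximate) PCR-test week.
        test_week = g["test_date"].iloc[0].to_period("W").start_time
        weekly.index = (weekly.index - test_week).days // 7
        # 4) Per-user baseline: mean over the 8 weeks before the test;
        #    drop users with fewer than 3 pre-test weeks of data.
        pre = weekly.loc[(weekly.index < 0) & (weekly.index >= -n_baseline_weeks)]
        if len(pre) < 3:
            continue
        rows.append((weekly - pre.mean()).rename(uid))
    return pd.DataFrame(rows)  # rows: users, columns: weeks relative to test

def mean_test(dev_a: pd.Series, dev_b: pd.Series):
    """One-sided Welch t-test: do deviations in cohort a exceed cohort b?"""
    return ttest_ind(dev_a.dropna(), dev_b.dropna(),
                     equal_var=False, alternative="greater")

def prevalence_test(k, n):
    """One-sided two-proportion z-test for extreme-value prevalence,
    e.g. k = [k_unvacc, k_vacc], n = [n_unvacc, n_vacc] (hypothetical counts)."""
    return proportions_ztest(count=k, nobs=n, alternative="larger")
```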
### Ethical considerations

All individuals participating in the Corona Data Donation Project provided informed consent electronically via the app. The study was reviewed and approved by the Data Privacy Officer at the Robert Koch-Institute (2021-009) in agreement with the Federal Commissioner for Data Protection and Freedom of Information (BfDI).

## References

* [1] A. Nalbandian, K. Sehgal, A. Gupta, M. V. Madhavan, C. McGroder, J. S. Stevens, J. R. Cook, A. S. Nordvig, D. Shalev, T. S. Sehrawat, N. Ahluwalia, B. Bikdeli, D. Dietz, C. Der-Nigoghossian, N. Liyanage-Don, G. F. Rosner, E. J. Bernstein, S. Mohan, A. A. Beckley, D. S. Seres, T. K. Choueiri, N. Uriel, J. C. Ausiello, D. Accili, D. E. Freedberg, M. Baldwin, A. Schwartz, D. Brodie, C. K. Garcia, M. S. V. Elkind, J. M. Connors, J. P. Bilezikian, D. W. Landry, and E. Y. Wan, Post-acute COVID-19 syndrome, Nature Medicine **27**, 601 (2021).
* [3] E. Wynberg, H. D. G. van Willigen, M. Dijkstra, A. Boyd, N. A. Kootstra, J. G. van den Aardweg, M. J. van Gils, A. Matser, M. R. de Wit, T. Leenstra, G. de Bree, M. D. de Jong, M. Prins, and the RECoVERED Study Group, Evolution of Coronavirus Disease 2019 (COVID-19) Symptoms During the First 12 Months After Illness Onset, Clinical Infectious Diseases, ciab759 (2021).
* [4] M. Michelen, L. Manoharan, N. Elkheir, V. Cheng, A. Dagens, C. Hastie, M. O'Hara, J. Suett, D. Dahmash, P. Bugaeva, I. Rigby, D. Munblit, E. Harriss, A. Burls, C. Foote, J. Scott, G. Carson, P. Olliaro, L. Sigfrid, and C. Stavropoulou, Characterising long COVID: a living systematic review, BMJ Global Health **6**, e005427 (2021).
* [5] E. Wynberg, A. X. Han, A. Boyd, H. D. G. van Willigen, A. Verveen, R. Lebbink, K. van der Straten, N. A. Kootstra, M. J. van Gils, C. Russell, T. Leenstra, M. D. de Jong, G. J. de Bree, M. Prins, and the RECoVERED Study Group, The Effect of SARS-CoV-2 Vaccination on Post-Acute COVID-19 Syndrome (PACS): A Prospective Cohort Study, SSRN, doi:10.2139/ssrn.4022243 (2022).
* [8] J. B. Soriano, S. Murthy, J. C. Marshall, P. Relan, and J. V. Diaz, A clinical case definition of post-COVID-19 condition by a Delphi consensus, The Lancet Infectious Diseases (2021).
* [9] T. Greenhalgh, M. Knight, C. A'Court, M. Buxton, and L. Husain, Management of post-acute covid-19 in primary care, BMJ **370**, m3026 (2020).
* [11] N. A. Alwan, The road to addressing long covid, Science **373**, 491 (2021).
* [12] C. H. Sudre, B. Murray, T. Varsavsky, M. S. Graham, R. S. Penfold, R. C. Bowyer, J. C. Pujol, K. Klaser, M. Antonelli, L. S. Canas, E. Molteni, M. Modat, M. Jorge Cardoso, A. May, S. Ganesh, R. Davies, L. H. Nguyen, D. A. Drew, C. M. Astley, A. D. Joshi, J. Merino, N. Tsereteli, T. Fall, M. F. Gomez, E. L. Duncan, C. Menni, F. M. K. Williams, P. W. Franks, A. T. Chan, J. Wolf, S. Ourselin, T. Spector, and C. J. Steves, Attributes and predictors of long COVID, Nature Medicine **27**, 626 (2021).
* [13] A. Carfi, R. Bernabei, F. Landi, and the Gemelli Against COVID-19 Post-Acute Care Study Group, Persistent Symptoms in Patients After Acute COVID-19, JAMA **324**, 603 (2020).
* [14] N. Ziauddeen, D. Gurdasani, M. E. O'Hara, C. Hastie, P. Roderick, G. Yao, and N. A. Alwan, Characteristics and impact of long covid: Findings from an online survey, PLOS ONE **17**, e0264331 (2022).
* [15] K.-C. Ong, A. W.-K. Ng, L. S.-U. Lee, G. Kaw, S.-K. Kwek, M. K.-S. Leow, and A. Earnest, Pulmonary function and exercise capacity in survivors of severe acute respiratory syndrome, European Respiratory Journal **24**, 436 (2004).
* [16] H. Moldofsky and J. Patcai, Chronic widespread musculoskeletal pain, fatigue, depression and disordered sleep in chronic post-SARS syndrome; a case-controlled study, BMC Neurology **11**, 37 (2011).
* [17] D. S. Hui, G. M. Joynt, K. T. Wong, C. D. Gomersall, T. S. Li, G. Antonio, F. W. Ko, M. C. Chan, D. P. Chan, M. W. Tong, T. H. Rainer, A. T. Ahuja, C. S. Cockram, and J. J. Y. Sung, Impact of severe acute respiratory syndrome (SARS) on pulmonary function, functional capacity and quality of life in a cohort of survivors, Thorax **60**, 401 (2005).
* [18] H. Ahmed, K. Patel, D. C. Greenwood, S. Halpin, P. Lewthwaite, A. Salawu, L. Eyre, A. Breen, R. O'Connor, A. Jones, and M. Sivan, Long-term clinical outcomes in survivors of severe acute respiratory syndrome and Middle East respiratory syndrome coronavirus outbreaks after hospitalisation or ICU admission: a systematic review and meta-analysis, Journal of Rehabilitation Medicine **52** (2020).
* [19] J. S. Tregoning, K. E. Flight, S. L. Higham, Z. Wang, and B. F. Pierce, Progress of the COVID-19 vaccine effort: viruses, vaccines and variants versus efficacy, effectiveness and escape, Nature Reviews Immunology **21**, 626 (2021).
* [20] M. M. Higdon, B. Wahl, C. B. Jones, J. G. Rosen, S. A. Truelove, A. Baidya, A. A. Nande, P. A. ShamaeiZadeh, K. K. Walter, D. R. Feikin, M. K. Patel, M. D. Knoll, and A. L. Hill, A systematic review of COVID-19 vaccine efficacy and effectiveness against SARS-CoV-2 infection and disease, medRxiv, 10.1101/2021.09.17.2163549 (2022).
* [21] B. F. Maier, M. Wiedermann, A. Burdinski, P. Klamser, M. A. Jenny, C. Betsch, and D. Brockmann, Germany's current COVID-19 crisis is mainly driven by the unvaccinated, medRxiv, 2021.11.24.21266831 (2021).
* [22] D. F. Nixon and L. C. Ndhlovu, Vaccine breakthrough infections with SARS-CoV-2 variants, New England Journal of Medicine **385**, e7 (2021).
* [23] M. Antonelli, R. S. Penfold, J. Merino, C. H. Sudre, E. Molteni, S. Berry, L. S. Canas, M. S. Graham, K. Klaser, M. Modat, B. Murray, E. Kerfoot, L. Chen, J. Deng, M. F. Osterdahl, N. J. Cheetham, D. A. Drew, L. H. Nguyen, J. C. Pujol, C. Hu, S. Selvachandran, L. Polidori, A. May, J. Wolf, A. T. Chan, A. Hammers, E. L. Duncan, T. D. Spector, S. Ourselin, and C. J. Steves, Risk factors and disease profile of post-vaccination SARS-CoV-2 infection in UK users of the COVID Symptom Study app: a prospective, community-based, nested, case-control study, The Lancet Infectious Diseases **22**, 43 (2022).
* [24] The Lancet Regional Health - Americas **4**, 100065 (2021).
* [25] A. A. Butt, H. Nafady-Hego, H. Chemaitelly, A.-B. Abou-Samra, A. A. Khal, P. V. Coyle, Z. A. Kanaani, A. H. Kaleeckal, A. N. Latif, Y. A. Masalmani, R. Bertollini, and L. J. A. Raddad, Outcomes Among Patients with Breakthrough SARS-CoV-2 Infection After Vaccination, International Journal of Infectious Diseases **110**, 353 (2021).
* [26] C. Cabezas, E. Coma, N. Mora-Fernandez, X. Li, M. Martinez-Marcos, F. Fina, M. Fabregas, E. Hermosilla, A. Jover, J. C. Contel, Y. Lejardi, B. Enfedaque, J. M. Argimon, M. Medina-Peralta, and D. Prieto-Alhambra, Associations of BNT162b2 vaccination with SARS-CoV-2 infection and hospital admission and death with covid-19 in nursing homes and healthcare workers in Catalonia: prospective cohort study, BMJ **374**, n1868 (2021).
* [27] A. Mateo-Urdiales, M. Del Manso, X. Andrianou, M. Spuri, F. D'Ancona, A. Filia, M. C. Rota, D. Petrone, M. F. Vescio, F. Riccardo, A. Bella, P. Pezzotti, and M. Fabiani, Initial impact of SARS-Cov-2 vaccination on healthcare workers in Italy - Update on the 28th of March 2021, Vaccine **39**, 4788 (2021).
* [28] A. Glatman-Freedman, Y. Hershkovitz, Z. Kaufman, R. Dichtiar, L. Keinan-Boker, and M. Bromberg, Effectiveness of BNT162b2 Vaccine in Adolescents during Outbreak of SARS-CoV-2 Delta Variant Infection, Israel, 2021, Emerging Infectious Diseases **27**, 2919 (2021).
* [29] L. J. Abu-Raddad, H. Chemaitelly, and A. A. Butt, Effectiveness of the BNT162b2 Covid-19 Vaccine against the B.1.1.7 and B.1.351 Variants, New England Journal of Medicine **385**, 187 (2021).
* [30] J. Lopez Bernal, N. Andrews, C. Gower, E. Gallagher, R. Simmons, S. Thelwall, J. Stowe, E. Tessier, N. Groves, G. Dabrera, R. Myers, C. N. J. Campbell, G. Amirthalingam, M. Edmunds, M. Zambon, K. E. Brown, S. Hopkins, M. Chand, and M. Ramsay, Effectiveness of Covid-19 Vaccines against the B.1.617.2 (Delta) Variant, New England Journal of Medicine **385**, 585 (2021).
* [31] Y. Goldberg, M. Mandel, Y. M. Bar-On, O. Bodenheimer, L. Freedman, E. J. Haas, R. Milo, S. Alroy-Preis, N. Ash, and A. Huppert, Waning Immunity after the BNT162b2 Vaccine in Israel, New England Journal of Medicine **385**, e85 (2021).
* [32] E. J. Haas, F. J. Angulo, J. M. McLaughlin, E. Anis, S. R. Singer, F. Khan, N. Brooks, M. Smaja, G. Mircus, K. Pan, J. Southern, D. L. Swerdlow, L. Jodar, Y. Levy, and S. Alroy-Preis, Impact and effectiveness of mRNA BNT162b2 vaccine against SARS-CoV-2 infections and COVID-19 cases, hospitalisations, and deaths following a nationwide vaccination campaign in Israel: an observational study using national surveillance data, The Lancet **397**, 1819 (2021).
* [33] H. Ledford, Do vaccines protect against long COVID? What the data say, Nature **599**, 546 (2021).
* [34] D. Massey, D. Berrent, and H. Krumholz, Breakthrough Symptomatic COVID-19 Infections Leading to Long Covid: Report from Long Covid Facebook Group Poll, medRxiv, 2021.07.23.21261030 (2021).
* [35] M. Bergwerk, T. Gonen, Y. Lustig, S. Amit, M. Lipsitch, C. Cohen, M. Mandelboim, E. G. Levin, C. Rubin, V. Indenbaum, et al., Covid-19 breakthrough infections in vaccinated health care workers, New England Journal of Medicine **385**, 1474 (2021).
* [36] The physical and mental health of children and non-hospitalised young people 3 months after SARS-CoV-2 infection; a national matched cohort study (The CLoCK Study), Research Square, 10.21203/rs.3.rs-798316/v1 (2022).
* [37] M. Taquet, Q. Dercon, and P. J. Harrison, Six-month sequelae of post-vaccination SARS-CoV-2 infection: a retrospective cohort study of 10,024 breakthrough infections, medRxiv, 2021.10.26.21265508 (2021).
* [38] [https://corona-datenspende.de](https://corona-datenspende.de).
* [39] D. Colombo, J. Fernandez-Alvarez, A. Patane, M. Semonella, M. Kwiatkowska, A. Garcia-Palacios, P. Cipresso, G. Riva, and C. Botella, Current State and Future Directions of Technology-Based Ecological Momentary Assessment and Intervention for Major Depressive Disorder: A Systematic Review, Journal of Clinical Medicine **8**, 465 (2019).
* [40] S. J. Jaiswal, G. Quer, M. Galarnyk, S. R. Steinhubl, E. J. Topol, and R. L. Owens, Association of Sleep Duration and Variability With Body Mass Index: Sleep Measurements in a Large US Population of Wearable Sensor Users, JAMA Internal Medicine **180**, 1694 (2020).
* [41] G. Quer, P. Gouda, M. Galarnyk, E. J. Topol, and S. R. Steinhubl, Inter- and intraindividual variability in daily resting heart rate and its associations with age, sex, sleep, BMI, and time of year: Retrospective, longitudinal cohort study of 92,457 adults, PLOS ONE **15**, e0227709 (2020).
* [42] A. Ghandeharioun, S. Fedor, L. Sangermano, D. Ionescu, J. Alpert, C. Dale, D. Sontag, and R. Picard, Objective assessment of depressive symptoms with machine learning and wearable sensors data, in _2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII)_ (2017), pp. 325-332.
* [43] R. Wang, W. Wang, A. daSilva, J. F. Huckins, W. M. Kelley, T. F. Heatherton, and A. T. Campbell, Tracking Depression Dynamics in College Students Using Mobile Phone and Wearable Sensing, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies **2**, 43:1 (2018).
* [44] C. Bowman, Y. Huang, O. J. Walch, Y. Fang, E. Frank, J. Tyler, C. Mayer, C. Stockbridge, C. Goldstein, S. Sen, and D. B. Forger, A method for characterizing daily physiology from widely used wearables, Cell Reports Methods **1**, 100058 (2021).
* [45] J. M. Radin, N. E. Wineinger, E. J. Topol, and S. R. Steinhubl, Harnessing wearable device data to improve state-level real-time surveillance of influenza-like illness in the USA: a population-based study, The Lancet Digital Health **2**, e85 (2020).
* [46] T. Mishra, M. Wang, A. A. Metwally, G. K. Bogu, A. W. Brooks, A. Bahmani, A. Alavi, A. Celli, E. Higgs, O. Dagan-Rosenfeld, B. Fay, S. Kirkpatrick, R. Kellogg, M. Gibson, T. Wang, B. Rolnik, A. B. Ganz, X. Li, and M. P. Snyder, Early Detection Of COVID-19 Using A Smartwatch, medRxiv, 2020.07.06.20147512 (2020).
* [47] G. Zhu, J. Li, Z. Meng, Y. Yu, Y. Li, X. Tang, Y. Dong, G. Sun, R. Zhou, H. Wang, K. Wang, and W. Huang, Learning from Large-Scale Wearable Device Data for Predicting the Epidemic Trend of COVID-19, Discrete Dynamics in Nature and Society **2020**, e6152041 (2020).
* [48] G. Quer, J. M. Radin, M. Gadaleta, K. Baca-Motes, L. Ariniello, E. Ramos, V. Kheterpal, E. J. Topol, and S. R. Steinhubl, Wearable sensor data and self-reported symptoms for COVID-19 detection, Nature Medicine **27**, 73 (2021).
* [49] M. Gadaleta, J. M. Radin, K. Baca-Motes, E. Ramos, V. Kheterpal, E. J. Topol, S. R. Steinhubl, and G. Quer, Passive detection of COVID-19 with wearable sensors and explainable machine learning algorithms, npj Digital Medicine **4**, 1 (2021).
* [50] J. M. Radin, G. Quer, E. Ramos, K. Baca-Motes, M. Gadaleta, E. J. Topol, and S. R. Steinhubl, Assessment of Prolonged Physiological and Behavioral Changes Associated With COVID-19 Infection, JAMA Network Open **4**, e2115959 (2021).
* [51] A. Natarajan, H.-W. Su, and C. Heneghan, Occurrence of relative bradycardia and relative tachycardia in individuals diagnosed with COVID-19, medRxiv, 2022.02.02.22270342 (2022).
* [52] E. A. Amaratunga, D. S. Corwin, L. Moran, and R. Snyder, Bradycardia in Patients With COVID-19: A Calm Before the Storm?, Cureus **12**, e8599 (2020).
* [53] Z. Wu and J. M. McGoogan, Characteristics of and Important Lessons From the Coronavirus Disease 2019 (COVID-19) Outbreak in China: Summary of a Report of 72,314 Cases From the Chinese Center for Disease Control and Prevention, JAMA **323**, 1239 (2020).
* [54] J. Schilling, K. Tolksdorf, A. Marquis, M. Faber, T. Pfoch, S. Buda, W. Haas, E. Schuler, D. Altmann, U. Grote, et al., Die verschiedenen Phasen der COVID-19-Pandemie in Deutschland: Eine deskriptive Analyse von Januar 2020 bis Februar 2021, Bundesgesundheitsblatt - Gesundheitsforschung - Gesundheitsschutz **64**, 1093 (2021).
* [55] J. Karjalainen and M. Viitasalo, Fever and Cardiac Rhythm, Archives of Internal Medicine **146**, 1169 (1986).
* [56] V. P. Cornet and R. J. Holden, Systematic review of smartphone-based passive sensing for health and wellbeing, Journal of Biomedical Informatics **77**, 120 (2018).
* [57] A. Trifan, M. Oliveira, and J. L. Oliveira, Passive Sensing of Health Outcomes Through Smartphones: Systematic Review of Current Solutions and Possible Limitations, JMIR mHealth and uHealth **7**, e12649 (2019).
* [58] J. Chen, R. Wang, N. B. Gilby, and G.-W. Wei, Omicron Variant (B.1.1.529): Infectivity, Vaccine Breakthrough, and Antibody Resistance, Journal of Chemical Information and Modeling **62**, 412 (2022).
* [59] S. W. X. Ong, C. J. Chiew, L. W. Ang, T.-M. Mak, L. Cui, M. P. H. S. Toh, Y. D. Lim, P. H. Lee, T. H. Lee, P. Y. Chia, S. Maurer-Stroh, R. T. P. Lin, Y.-S. Leo, V. J. Lee, D. C. Lye, and B. E. Young, Clinical and Virological Features of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) Variants of Concern: A Retrospective Cohort Study Comparing B.1.1.7 (Alpha), B.1.351 (Beta), and B.1.617.2 (Delta), Clinical Infectious Diseases, ciab721 (2021).
* [60] D. N. Fisman and A. R. Tuite, Evaluation of the relative virulence of novel SARS-CoV-2 variants: a retrospective cohort study in Ontario, Canada, CMAJ **193**, E1619 (2021).
* [61] A. Sheikh, J. McMenamin, B. Taylor, and C. Robertson, SARS-CoV-2 Delta VOC in Scotland: demographics, risk of hospital admission, and vaccine effectiveness, The Lancet **397**, 2461 (2021).
* [62] B. de Gier, M. Kooijman, J. Kemmeren, N. de Keizer, D. Dongelmans, S. C. J. L. van Iersel, J. van de Kassteele, S. P. Andeweg, the RIVM COVID-19 epidemiology and surveillance team, H. E. de Melker, S. J. M. Hahne, M. J. Knol, and S. van den Hof, COVID-19 vaccine effectiveness against hospitalizations and ICU admissions in the Netherlands, April-August 2021, medRxiv, 2021.09.15.21263613 (2021).
* [63] S. Y. Tartof, J. M. Slezak, H. Fischer, V. Hong, B. K. Ackerson, O. N. Ranasinghe, T. B. Frankland, O. A. Ogun, J. M. Zamparo, S. Gray, S. R. Valluri, K. Pan, F. J. Angulo, L. Jodar, and J. M. McLaughlin, Effectiveness of mRNA BNT162b2 COVID-19 vaccine up to 6 months in a large integrated health system in the USA: a retrospective cohort study, The Lancet **398**, 1407 (2021).
* [64] Q. Wu, K. Sum, and D. Nathan-Roberts, How Fitness Trackers Facilitate Health Behavior Change, Proceedings of the Human Factors and Ergonomics Society Annual Meeting **60**, 1068 (2016).
* [65] J. M. Radin, G. Quer, M. Jalili, D. Hamideh, and S. R. Steinhubl, The hopes and hazards of using personal health technologies in the diagnosis and prognosis of infections, The Lancet Digital Health **3**, e455 (2021).
* [66] A. Shapiro, N. Marinsek, I. Clay, B. Bradshaw, E. Ramirez, J. Min, A. Trister, Y. Wang, T. Althoff, and L. Foschini, Characterizing COVID-19 and Influenza Illness in the Real World via Person-Generated Health Data, Patterns **2**, 100188 (2021).
* [67] A. E. Mason, F. M. Hecht, S. K. Davis, J. L. Natale, W. Hartogensis, N. Damaso, K. T. Claypool, S. Dilchert, S. Dasgupta, S. Purawat, V. K. Viswanath, A. Klein, A. Chowdhary, S. M. Fisher, C. Anglo, K. Y. Puldon, D. Veasna, J. G. Prather, L. S. Pandya, L. M. Fox, M. Busch, C. Giordano, B. K. Mercado, J. Song, R. Jaimes, B. S. Baum, B. A. Telfer, C. W. Philipson, P. P. Collins, A. A. Rao, E. J. Wang, R. H. Bandi, B. J. Choe, E. S. Epel, S. K. Epstein, J. B. Krasnoff, M. B. Lee, S.-W. Lee, G. M. Lopez, A. Mehta, L. D. Melville, T. S. Moon, L. R. Mujica-Parodi, K. M. Noel, M. A. Orosco, J. M. Rideout, J. D. Robishaw, R. M. Rodriguez, K. H. Shah, J. H. Siegal, A. Gupta, I. Altintas, and B. L. Smarr, Detection of COVID-19 using multimodal data from a wearable device: results from the first TemPredict Study, Scientific Reports **12**, 3463 (2022).
* [68] B. L. Smarr, K. Aschbacher, S. M. Fisher, A. Chowdhary, S. Dilchert, K. Puldon, A. Rao, F. M. Hecht, and A. E. Mason, Feasibility of continuous fever monitoring using wearable devices, Scientific Reports **10**, 21640 (2020).
* [69] S. Damerow, A. Rommel, F. Prutz, A.-K. Beyer, U. Hapke, A. Schienkiewitz, A. Starker, A. Richter, J. Baumert, J. Fuchs, B. Gaertner, S. Muters, J. Lemcke, and J. Allen, Developments in the health situation in Germany during the initial stage of the COVID-19 pandemic for selected indicators of GEDA 2019/2020-EHIS, Journal of Health Monitoring **5**, 3 (2020).
* [70] National Center for Health Statistics (US), Division of Health Interview Statistics, National Health Interview Survey (US Public Health Service, National Center for Health Statistics, 1986).
* [71] Postural Tachycardia Syndrome (POTS), in _Primer on the Autonomic Nervous System (Third Edition)_, edited by D. Robertson, I. Biaggioni, G. Burnstock, P. A. Low, and J. F. R. Paton (Academic Press, San Diego, 2012), pp. 517-519.
* [72] S. R. Raj, A. C. Arnold, A. Barboi, V. E. Claydon, J. K. Limberg, V.-E. M. Lucci, M. Numan, A. Peltier, H. Snapper, S. Vernino, and the American Autonomic Society, Long-COVID postural tachycardia syndrome: an American Autonomic Society statement, Clinical Autonomic Research **31**, 365 (2021).
* [73] M. G. Miglis, T. Prieto, R. Shaik, S. Muppidi, D.-I. Sinn, and S. Jaradeh, A case report of postural tachycardia syndrome after COVID-19, Clinical Autonomic Research **30**, 449 (2020).

**Evidence for positive long- and short-term effects of vaccinations against COVID-19 in wearable sensor metrics -- Supplementary information**

## I Distribution of positive PCR-tests over time

As mentioned in the main manuscript, most infections in unvaccinated individuals took place during the first three waves of the pandemic, while breakthrough infections were mostly recorded during the Delta and Omicron waves in late 2021 and 2022 (Fig. S1). In fact, the latest infection of an unvaccinated person was reported in June 2021, and only 4 vaccinated individuals reported an infection prior to that month.

## II Influence of variant-specific breakthrough infections on the main results

### Vital changes after breakthrough infections with B.1.617.2 compared to unvaccinated individuals

To account for the fact that the analysis in the main manuscript does not discriminate breakthrough infections by the respective variant of concern, we repeat the analysis in Sec. 2C only for cases that were reported before December 15, 2021. During that time, only the more severe B.1.617.2 (Delta) variant was predominant (Fig. S2) [1]. Recall from the main manuscript that we found significant differences in resting heart rate (RHR) between unvaccinated and vaccinated individuals at a high significance level (\(\alpha=0.01\)) in almost all weeks, except the two weeks following a positive PCR-test. Likewise, vaccinated individuals differed significantly from the COVID-19 negative control group at weeks -1, 0, 3 and 5-6. In addition, we found that RHR-changes of vaccinated individuals consistently fall below those of unvaccinated individuals. If we now restrict our analysis to infections in vaccinated individuals that were likely caused by Delta, we find that this general pattern still holds (Fig. S2A). Due to the smaller sample size of Delta infections we do, however, adjust the significance level to \(\alpha=0.05\). Average RHR-changes for unvaccinated individuals still significantly exceed those of vaccinated individuals in the week preceding a positive PCR-test, as well as in weeks 2-4, 8 and 10-11 after the test. Hence, the general trend towards lower expected RHR-changes for vaccinated individuals also holds if the analysis is restricted to the Delta variant of concern. This is particularly important in the context of our work since Delta is generally considered the variant that causes the most severe cases. Hence, it is reasonable to conclude that an infection of a vaccinated person with Delta likely still causes less pronounced RHR changes than an infection with the B.1.1.7 (Alpha) variant or the wild-type of SARS-CoV-2, even though the latter two are associated with less severe courses of the disease. We find similar patterns for physical activity, i.e.,
the weekly averaged number of daily steps taken, even though vaccinated individuals show significantly reduced values for weeks 0 to 5 (again using a significance level of \(\alpha=0.05\)), compared to only the first three weeks after a PCR-test for the entire cohort (cf. Fig. 2B in the main manuscript). Still, activity reduction in unvaccinated individuals takes much longer (up to 12 weeks) to return to normal values, again indicating that vaccines also mitigate the risk of long-term activity reduction after an infection with Delta. Ultimately, sleep duration (Fig. S2C) of vaccinated individuals after an infection with Delta is only significantly increased for the two weeks following a positive PCR-test, which is on par with the observations for the entire cohort (compare again Fig. 2C in the main manuscript). Hence, also with respect to this vital type, we see no significant qualitative differences in the vaccinated cohort if we restrict our analysis to Delta infections.

### Vital changes for breakthrough infections with B.1.617.2 compared to B.1.1.529

We perform an additional analysis to compare the observed vital changes in vaccinated individuals between the two pandemic waves for which respective test-dates are available, see also Fig. S1. In particular, we split the vaccinated user cohort into one that reports an infection prior to December 15, 2021 and one that reports PCR-tests after that date. We assume that for the former the infection was likely caused by B.1.617.2 (Delta) and for the latter it was caused by B.1.1.529 (Omicron). In analogy to Fig. 2 in the main manuscript and Fig. S2, we compute average changes in RHR, step count and sleep duration for both cohorts (Fig. S3). We further use a two-sided Welch t-test at a significance level of \(\alpha=0.01\) to assess whether the respective averages have to be considered different. For all three vital signs, we find similar qualitative temporal evolutions as well as magnitudes in the respective changes. Especially for RHR, we find a large similarity in the weeks around a positive PCR-test when comparing breakthrough infections in the two respective cohorts (Fig. S3A). Only from week two onwards does the return to baseline take a slightly different shape depending on the considered variant. However, at no point in time can the two averages be considered statistically different at a significance level of \(\alpha=0.01\), indicating a likely similar imprint of an infection with either variant of concern on RHR. Likewise, we find almost the same maximum reduction of \(\sim 3{,}000\) steps per day in the week after a positive PCR-test regardless of the variant that caused the breakthrough infection (Fig. S3B). Moreover, the respective averages only differ significantly during the week of the PCR-test and the second week after (Fig. S3B). A visual inspection of Fig. S3B suggests that the return to baseline takes slightly longer for a breakthrough infection with Delta, which aligns with the common observation of a generally milder course of COVID-19 after an Omicron infection [2, 3]. Ultimately, we again find similar patterns across variants when considering average changes in sleep duration (Fig. S3C). Average sleep duration increases by \(\sim 24\) minutes per day for breakthrough infections with either B.1.617.2 or B.1.1.529. A visual inspection again suggests a somewhat slower return to pre-disease values for a Delta infection compared to Omicron.
This observation is underlined by a statistically significant difference in the expected sleep duration in the second week after the PCR-test (Fig. S3). Taken together, we conclude that it is reasonable to combine all recorded breakthrough infections into a single user cohort, since differences in vital changes between the two major variants B.1.617.2 and B.1.1.529 are small and hardly significant. Moreover, the analysis in Sec. II.1 underlines that the results presented in the main manuscript do not change substantially on a qualitative level if only breakthrough infections before December 15, 2021 are considered in the vaccinated user cohort. Since B.1.617.2 is considered to be the variant of concern that most often causes severe courses of COVID-19, our analysis implies that the average vital changes in vaccinated users who suffer from such an infection are still lower than those of unvaccinated individuals who went through an infection with B.1.1.7 (Alpha) or the wild-type.

## III Observed time differences between vaccinations and infection

For all recorded breakthrough infections, we compute the approximate time difference between receiving the last vaccination dose and the date of the PCR-test (Fig. S4). Note that vaccination dates are only available with an accuracy of one month and PCR-test dates only with an accuracy of one week. Hence, we cannot rule out that individuals contracted COVID-19 in the first two weeks after receiving the second vaccination dose, potentially misclassifying their case as a breakthrough infection since immunity might not have been achieved. However, only \(\sim 2.3\%\) of all breakthrough cases are recorded in the month of the last vaccination dose. Hence, this effect can be considered negligible.

## IV Representativeness of study cohort with respect to age

As already discussed in the main manuscript, our study cohort is not representative of the overall German population. However, a comparison with official census data from 2020 [4] reveals that at least the frequencies of the commonly defined age groups 25-39 and 60-64 are well recaptured in our study cohort (Fig. S5). We note a large over-representation of people aged 40-59 and consequently an under-representation of the elderly (aged 65 and more) and children/adolescents (aged 20 or younger). The latter group is by definition mostly excluded from our study since participation is only possible for citizens aged 16 or older. The elderly group is likely under-represented due to a generally lower adoption of new technologies among older people.

## References

* [1] Robert Koch-Institut, SARS-CoV-2 Sequenzdaten aus Deutschland (2022).
* [2] J. Nealon and B. J. Cowling, Omicron severity: milder but not mild, The Lancet **399**, 412 (2022).
* [3] E. Callaway, H. Ledford, et al., How bad is omicron? What scientists know so far, Nature **600**, 197 (2021).
* [4] Obtained from [https://www-genesis.destatis.de/genesis/online](https://www-genesis.destatis.de/genesis/online).

## Supplementary Figures
Vaccines are among the most powerful tools used to combat the COVID-19 pandemic. They are highly effective against infection and substantially reduce the risk of severe disease, hospitalization, ICU admission, and death. However, their potential for attenuating long-term effects of a SARS-CoV-2 infection, commonly denoted as Long COVID, remains elusive and is still a subject of debate. Such long-term effects can be effectively monitored at the individual level by analyzing physiological data collected by consumer-grade wearable sensors. Here, we investigate changes in resting heart rate, daily physical activity, and sleep duration in response to a SARS-CoV-2 infection, stratified by vaccination status. Data was collected over a period of two years in the context of the German Corona Data Donation Project, with currently around 190,000 monthly active donors. Compared to their unvaccinated counterparts, we find that vaccinated individuals on average experience smaller changes in their vital data that also return to normal levels more quickly. Likewise, extreme changes in vitals during the acute phase of the disease occur less frequently in vaccinated individuals. Our results solidify evidence that vaccines can mitigate long-term detrimental effects of SARS-CoV-2 infections both in terms of duration and magnitude. Furthermore, they demonstrate the value of large-scale, high-resolution wearable sensor data in public health research.
HYPER-SNN: Towards Energy-efficient Quantized Deep Spiking Neural Networks for Hyperspectral Image Classification

Gourav Datta, Souvik Kundu, Akhilesh R. Jaiswal, Peter A. Beerel

G. Datta, S. Kundu, A. R. Jaiswal and P. A. Beerel are with the Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, 90089 USA e-mail: {gdatta, souvikkah, ahilesh, pabeerel}@usc.edu.

## I Introduction

Hyperspectral imaging, which extracts rich spatial-spectral information about the ground surface, has shown immense promise in remote sensing [1]. It is currently used in several applications ranging from geological surveys [2] to the detection of camouflaged vehicles [3]. In hyperspectral images (HSIs), each pixel can be considered a high-dimensional vector where each entry corresponds to the spectral reflectivity [1] of a particular wavelength. The goal of the classification task is to assign a unique semantic label to each pixel [4].

For HSI classification, several spectral feature-based methods have been proposed, including support vector machines [5], random forests [6], canonical correlation forests [7], and multinomial logistic regression [8]. To improve the accuracy of HSI classification, researchers have integrated spatial features into existing learning methods [9]. Some spectral-spatial methods for classifying HSIs include fusing correlation coefficients and sparse representation [10], Boltzmann entropy-based band selection [11], joint sparse models and discontinuity-preserving relaxation [12], and extended morphological profiles [13, 14]. Some of these methods have also been proposed to exploit the spatial context with various morphological operations for HSI classification. However, these spectral-spatial feature extraction methods rely on hand-designed descriptors, prior information, and empirical hyperparameters [1].

Lately, convolutional neural networks (CNNs) have yielded higher accuracy than some hand-designed features [15]. CNNs have shown promise in multiple applications where visual information processing is required, including image classification [16], object detection [17], semantic segmentation [18], and depth estimation [19]. In particular, CNN-based methods act as end-to-end feature extractors that consist of a series of hierarchical filtering layers for global optimization. The 2-D CNN stacked autoencoder [1] was the first attempt to extract deep features from a compressed latent space to classify HSIs. Following this work, [20] employed a 2-D CNN model to extract the spatial information in a supervised manner and classify the raw hyperspectral images. The multibranch selective kernel network with attention [21] and pixel-block pair based data augmentation techniques [22] were developed to address the gradient vanishing and overfitting problems. To extract the spatial-spectral features jointly from the raw HSI, researchers proposed a 3-D CNN architecture [23], which achieves even better classification results. The authors in [24, 25, 26] created multiscale spatiospectral relationships using 3-D CNNs and fused the features using a 2-D CNN to extract a more robust representation of spectral-spatial information. However, the performance and success of multi-layer CNNs are generally associated with high power and energy costs [27].
A typical hyperspectral image cube consists of several hundred spectral frequency bands, and hence, classifying these images using traditional CNNs requires a large amount of computational power, especially when real-time processing is necessary, as in target tracking or identification [28]. The high energy cost and the demand for deployment of HSI sensors in battery-powered edge devices motivate exploring alternative lightweight, energy-efficient HSI classification models.

In particular, low-latency spiking neural networks (SNNs) [29] have gained attention because they can be more computationally efficient than CNNs for a variety of applications, including image analysis. To achieve this goal, analog inputs are first encoded into a sequence of spikes using one of a variety of proposed encoding methods, including rate coding [30, 31], direct coding [32], temporal coding [33], rank-order coding [34], phase coding [35], and other exotic coding schemes [36, 37]. Among these, rate and direct coding have shown competitive performance on complex tasks [30, 31], while others are either limited to simpler tasks, such as learning the XOR function and classifying MNIST images, or require a large number of spikes for inference. In particular, for rate coding, the analog value is converted to a spike train using a Poisson generator function with a rate proportional to the input pixel value. The number of timesteps \(T\) in each train is inversely proportional to the quantization error in the representation, as illustrated in Fig. 1(b) [31]. In contrast, in direct-input encoding, the analog pixel values are fed into the first convolutional layer as multi-bit values that are fixed for all \(T\) timesteps [32].

In addition to accommodating various forms of encoding inputs, supervised learning algorithms for SNNs have overcome various roadblocks associated with the discontinuous derivative of the spike activation function [38, 39]. In particular, recent works have shown that SNNs can be efficiently converted from artificial neural networks (ANNs) by approximating the activation value of ReLU neurons with the firing rate of spiking neurons [31]. Low-latency SNNs trained using ANN-SNN conversion, coupled with supervised training, have been able to perform on par with ANNs in terms of classification accuracy in traditional image classification tasks [32]. This motivates the present work, which explores the effectiveness of SNNs for HSI classification. More specifically, this paper provides the following contributions:

* We propose two convolutional architectures for HSI classification that can yield classification accuracies similar to state-of-the-art (SOTA) and are compatible with our ANN-SNN conversion framework.
* We propose a hybrid training algorithm that first converts an ANN for HSI classification to an iso-architecture SNN, and then trains the latter using a novel quantization-aware spike timing dependent backpropagation (Q-STDB) algorithm.
* We evaluate and compare the energy-efficiency of the SNNs obtained by our training framework with standard ANNs, using appropriate energy models, which reveal that our SNNs trained for HSI classification can offer significant improvements in compute efficiency.

The remainder of this paper is structured as follows. In Section II, we present the necessary background and related work. Sections III and IV describe our quantization-aware SNN training method and network architectures, respectively. We present the detailed experimental evaluations of our proposal in Section V.
We show the improvement in energy-efficiency of our proposed SNNs for all the HSI classification tasks in Section VI. Finally, the paper concludes in Section VII.

## II Background and Related Work

### _SNN Modeling_

An SNN consists of a network of neurons that communicate via a sequence of spikes modulated by synaptic weights. The activity of pre-synaptic neurons modulates the membrane potential of post-synaptic neurons, generating an action potential or spike when the membrane potential crosses a firing threshold. The spiking dynamics of a neuron are generally modeled using either the Integrate-and-Fire (IF) [40] or Leaky-Integrate-and-Fire (LIF) model [41]. Both IF and LIF neurons accumulate the input current into their respective states or membrane potentials. The difference between the two models is that the membrane potential of an IF neuron does not change during the time period between successive input spikes, while the LIF neuronal membrane potential leaks at a constant rate. In this work, we use the LIF model to convert ANNs trained with ReLU activations to SNNs, because the leak term provides a tunable control knob that can reduce inference latency and spiking activity. The IF model can be characterized by the following differential equation

\[C\frac{dU_{i}(t)}{dt}=I_{i}(t)=\sum_{j}W_{ij}\cdot S_{j}(t) \tag{1}\]

where \(C\) is the membrane capacitance, and \(U_{i}(t)\) and \(I_{i}(t)\) are the membrane potential and input synaptic current of the \(i^{th}\) neuron at time \(t\). As illustrated in Fig. 1(a), \(U_{i}(t)\) integrates the incoming (pre-neuron) binary spikes \(S_{j}(t)\) multiplied by weights \(W_{ij}\). The neuron generates an output spike when \(U_{i}\) exceeds the firing threshold \(V\).

Fig. 1: (a) Feedforward fully-connected SNN architecture with integrate-and-fire (IF) spiking dynamics. (b) The spike input generated over several timesteps through a Poisson generator. The larger the number of timesteps, the better the accumulated input spikes approximate the original input image.

However, because of its continuous-time representation, Eq. 1 is incompatible with implementation in common Machine Learning (ML) frameworks (e.g., PyTorch). Hence, we follow an iterative version evaluated in discrete time, within which spikes are characterized as binary values (1 represents the presence of a spike) [42].

\[U_{i}(t)=U_{i}(t-1)+\sum_{j}W_{ij}S_{j}(t)-V\cdot O_{i}(t-1) \tag{2}\]

\[O_{i}(t-1)=\begin{cases}1,&\text{if }U_{i}(t-1)>V\\ 0,&\text{otherwise}\end{cases} \tag{3}\]

\(O_{i}(t)\) is the output spike at time step \(t\). Note that the third term in Eq. 2 exhibits soft reset by reducing the membrane potential \(U_{i}\) by the threshold \(V\) at time step \(t\) if an output spike is generated at the \((t-1)^{th}\) time step. Alternatively, hard reset implies resetting \(U_{i}\) to 0. Soft reset minimizes the information loss by allowing the spiking neuron to carry forward the surplus potential above the firing threshold to the next time step [42].
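The discrete-time dynamics of Eqs. 2 and 3 map directly onto a few lines of array code. The following PyTorch sketch (a minimal illustration under our own assumptions about tensor shapes and the Poisson encoder, not the authors' implementation) unrolls one fully-connected spiking layer over \(T\) timesteps with soft reset; setting `leak=1.0` recovers the IF neuron of Eq. 2, while `leak<1.0` gives the LIF variant used later in Eq. 11.

```python
import torch

def rate_encode(x: torch.Tensor, T: int) -> torch.Tensor:
    """Poisson rate coding: at each of T timesteps, emit a spike with
    probability proportional to the (normalized) input intensity."""
    return (torch.rand(T, *x.shape) < x).float()  # binary spikes, shape (T, ...)

def spiking_layer(spikes, w, v_th=1.0, leak=1.0):
    """Unroll one fully-connected spiking layer over T timesteps (Eqs. 2-3).
    spikes: (T, batch, n_in) binary inputs; w: (n_in, n_out) weights."""
    T, batch, _ = spikes.shape
    u = torch.zeros(batch, w.shape[1])           # membrane potentials
    out = torch.zeros(T, batch, w.shape[1])      # output spike trains
    for t in range(T):
        u = leak * u + spikes[t] @ w             # integrate weighted input spikes
        out[t] = (u > v_th).float()              # fire when potential crosses threshold
        u = u - v_th * out[t]                    # soft reset: subtract the threshold
    return out

# Example: encode a normalized input and pass it through one layer.
x = torch.rand(4, 64)                            # batch of 4 inputs in [0, 1]
s = rate_encode(x, T=20)
w = 0.1 * torch.randn(64, 128)
o = spiking_layer(s, w, v_th=1.0, leak=0.95)
```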
### _SNN Training Techniques_

Recent research on training supervised deep SNNs can be broadly divided into three categories: 1) ANN-to-SNN conversion-based training, 2) spike timing dependent backpropagation (STDB), and 3) hybrid training.

#### II-B1 ANN-to-SNN Conversion

Recent works have demonstrated that SNNs can be efficiently converted from ANNs by approximating the activation value of ReLU neurons with the firing rate of spiking neurons [43, 44, 45, 31, 46]. This technique uses standard backpropagation-based training for the ANN models and helps an iso-architecture SNN achieve superior classification accuracy in image recognition tasks [44, 31]. However, the SNNs resulting from these ANN-SNN conversion algorithms require an order of magnitude higher latency compared to other training techniques [31]. In this work, we use ANN-SNN conversion as an initial step in Q-STDB because it is of relatively low complexity and yields high classification accuracy on deep networks.

#### II-B2 STDB

The threshold comparator in the IF neuronal model yields a discontinuous and thus non-differentiable function, making it incompatible with powerful gradient-descent-based learning methods. Consequently, several approximate training methodologies have been proposed to overcome the challenges associated with non-differentiability [47, 48, 38, 49]. The key idea of these works is to approximate the spiking neuron functionality with a continuous differentiable model or to use surrogate gradients as an approximate version of the real gradients to perform gradient-descent-based training. Unfortunately, SNNs trained using this approach generally require a large number of time steps, on the order of a few hundred, to process an input. As a result, the backpropagation step requires the gradients of the unrolled SNN to be integrated over all these time steps. This multiple-iteration backpropagation-through-time (BPTT), coupled with the exploding memory complexity, has hindered the applicability of surrogate gradient based learning methods to deep convolutional architectures.

#### II-B3 Hybrid Training

A recent paper [42] proposed a hybrid training methodology where the ANN-SNN conversion is performed as an initialization step and is followed by an approximate gradient descent algorithm. The authors observed that combining the two training techniques helps the SNNs converge within a few epochs while requiring fewer time steps. Another recent paper [32] proposed a training scheme for deep SNNs in which the membrane leak and the firing threshold, along with the other network parameters (weights), are updated at the end of every batch via gradient descent after ANN-SNN conversion. Moreover, [32] applied direct-input encoding, where the pixel intensities of an image are fed into the SNN input layer as fixed multi-bit values at each timestep, to reduce the number of time steps needed to achieve SOTA accuracy. Thus, the first convolutional layer, composed of LIF neurons, acts as both a feature extractor and spike-generator. This is similar to rate coding, except that the spike-rate of the first hidden layer is a function of its weights, membrane leak, and threshold parameters, which are all learned by gradient descent. This work extends these hybrid learning techniques by incorporating weight quantization, as defined below.

## III Proposed Quantized SNN Training Method

In this section, we evaluate and compare the different choices for SNN quantization in terms of compute efficiency and model accuracy. We then incorporate the chosen quantization technique into STDB, which we refer to as Q-STDB.

### _Study of Quantization Choice_

Uniform quantization transforms a weight element \(w\in[w_{min},w_{max}]\) to the range \([-2^{b-1},2^{b-1}-1]\), where \(b\) is the bit-width of the quantized integer representation. There are primarily two choices for this transformation, known as _affine_ and _scale_ quantization.
Detailed descriptions of these two types of quantization can be found in [50]. Our key motivation for SNN weight quantization is the hardware acceleration of inference using energy-efficient integer or fixed-point computational units implemented as crossbar array based processing-in-memory (PIM) accelerators. Note that six-transistor SRAM array based in-memory computing requires low-precision weights for multiply-and-accumulate (MAC) operations due to the low density of the bit-cells. Previous research [51, 52] has proposed post-training SNN quantization tailored towards unsupervised learning, which may not scale to complex vision tasks without requiring high precision (\(\geq 8\) bits). In contrast, in this work, we propose quantization-aware training, where the weights are fake-quantized (see [50]) in the forward path computations, while the gradients and weight updates are calculated using the full-precision weights.

There are several choices for sharing quantization parameters among the tensor elements in an SNN. We refer to this choice as quantization granularity. We employ per-tensor (or per-layer) granularity, where the same quantization parameters are shared by all elements in the tensor, because this reduces the computational cost compared to finer granularity choices with no impact on model accuracy. Activations are similarly quantized, but only in the SNN input layer, since they are binary spikes in the remaining layers.

To evaluate the compute cost, let us consider a 3-D convolutional layer \(l\), the dominant layer in HSI classification models, that performs a tensor operation \(O_{l}=X_{l}\circledast W_{l}\), where \(X_{l}\) and \(W_{l}\) are the input activation and weight tensors and \(\circledast\) denotes convolution. With scale quantization, this can be approximated using integer arithmetic as

\[O_{l}\approx\frac{X_{l}^{Q}}{s_{s}^{X}}\circledast\frac{W_{l}^{Q}}{s_{s}^{W}}=\frac{X_{l}^{Q}\circledast W_{l}^{Q}}{s_{s}^{X}\cdot s_{s}^{W}} \tag{4}\]

where \(s_{s}^{X}\) and \(s_{s}^{W}\) are the scalar scale-quantization values of the input and weight tensors, respectively. Hence, scale quantization results in an integer convolution, followed by a point-wise floating-point multiplication for each output element. Given that a typical convolution operation involves a few hundred MAC operations (accumulate for binary spike inputs) to compute an output element, a single floating-point operation for the scaling shown in Eq. 4 is a negligible computational cost. Note that \(X_{l}\) only needs to be quantized if \(l\) is the input layer. In all other cases, \(X_{l}^{Q}=X_{l}\) and \(s_{s}^{X}=1\). Although both affine and scale quantization enable the use of low-precision arithmetic, affine quantization results in more computationally expensive inference, as shown below.

\[O_{l}\approx\frac{X_{l}^{Q}-z_{a}^{X}}{s_{a}^{X}}\circledast\frac{W_{l}^{Q}-z_{a}^{W}}{s_{a}^{W}}=\frac{X_{l}^{Q}\circledast W_{l}^{Q}-z_{a}^{X}\circledast(W_{l}^{Q}-z_{a}^{W})-X_{l}^{Q}\circledast z_{a}^{W}}{s_{a}^{X}\cdot s_{a}^{W}} \tag{5}\]

where \(z_{a}^{X}\) and \(z_{a}^{W}\) are tensors of sizes equal to those of \(X_{l}^{Q}\) and \(W_{l}^{Q}\), respectively, that consist of repeated elements of the scalar zero-values of the input activation and weight tensor, respectively. On the other hand, \(s_{a}^{X}\) and \(s_{a}^{W}\) are the corresponding scale values. The first term in the numerator of Eq. 5 is the integer convolution operation, similar to the one performed in scale quantization shown in Eq. 4.
The second term contains integer weights and zero-points, which can be computed offline, and adds an element-wise addition during inference. The third term, however, involves the quantized activation \(X_{l}^{Q}\), which cannot be computed offline. This extra computation, depending on the implementation, can introduce considerable overhead, reducing or even eliminating the throughput and energy advantage that low-precision PIM accelerators offer over floating-point MAC units. Hence, we use scale quantization during inference. Note, however, that our experiments detailed in Section V show that using scale quantization during SNN training degrades the test accuracy significantly. Hence, we propose that training should use affine quantization of both the weights and the input layer activations.

Note that for an integer math unit or PIM accelerator, we do not necessarily need to quantize the SNN membrane potentials, which are obtained as the results of the accumulate operations over the weight elements. This is because each membrane potential only needs to be compared with the threshold voltage once per time step, which consumes negligible energy and can be performed using high-precision fixed-point comparators (in the periphery of the crossbar array for PIM accelerators). However, quantizing the potentials can reduce the data movement cost, as discussed in Section VI-B.

### _Q-STDB based Training_

In this subsection, we derive the expressions to compute the gradients of the parameters at all layers of our training framework. Our framework, which is illustrated in Fig. 2, incorporates the quantization methodology described above into the STDB technique used to train SNNs [32], where the spatial and temporal credit assignment is performed by unrolling the network in time and employing BPTT.

_Output Layer:_ The neuron model in the output layer \(L\) only accumulates the incoming inputs without any leakage, does not generate an output spike, and is described by

\[\mathbf{u}_{L}^{t}=\mathbf{u}_{L}^{t-1}+\hat{\mathbf{w}}_{L}\mathbf{o}_{L-1}^{t} \tag{6}\]

where \(\mathbf{u}_{L}\) is a vector containing the membrane potentials of the \(N\) output neurons (\(N\) being the number of output labels), \(\hat{\mathbf{w}}_{L}\) is the fake quantized weight matrix connecting the last two layers (\(L\) and \(L-1\)), and \(\mathbf{o}_{L-1}\) is a vector containing the spike signals from layer \(L-1\). The loss function is defined on \(\mathbf{u}_{L}\) at the last time step \(T\): the output \(\mathbf{u}_{L}^{T}\) is passed through a softmax layer that yields a probability distribution, and the loss function \(\mathcal{L}\) is the cross-entropy between the true output (\(y\)) and the SNN's predicted distribution (\(p\)),

\[\mathcal{L}=-\sum_{i=1}^{N}y_{i}\log(p_{i}),\quad p_{i}=\frac{e^{u_{i}^{T}}}{\sum_{j=1}^{N}e^{u_{j}^{T}}}. \tag{7}\]

The derivative of the loss function with respect to the membrane potentials of the neurons in the final layer is \(\frac{\partial\mathcal{L}}{\partial\mathbf{u}_{L}^{T}}=(\mathbf{p}-\mathbf{y})\), where \(\mathbf{p}\) and \(\mathbf{y}\) are vectors containing the softmax outputs and the one-hot encoded values of the true label respectively. To compute the gradient at the current time step, the membrane potential at the previous step is considered as an input quantity [32].
With the weights being fake quantized, gradient descent updates the network parameters \(\mathbf{w}_{L}\) of the output layer as

\[\mathbf{w}_{L}=\mathbf{w}_{L}-\eta\Delta\mathbf{w}_{L} \tag{8}\]

\[\Delta\mathbf{w}_{L}=\sum_{t}\frac{\partial\mathcal{L}}{\partial\mathbf{w}_{L}}=\sum_{t}\frac{\partial\mathcal{L}}{\partial\mathbf{u}_{L}^{t}}\frac{\partial\mathbf{u}_{L}^{t}}{\partial\hat{\mathbf{w}}_{L}}\frac{\partial\hat{\mathbf{w}}_{L}}{\partial\mathbf{w}_{L}}=\frac{\partial\mathcal{L}}{\partial\mathbf{u}_{L}^{t}}\sum_{t}\frac{\partial\mathbf{u}_{L}^{t}}{\partial\hat{\mathbf{w}}_{L}}\frac{\partial\hat{\mathbf{w}}_{L}}{\partial\mathbf{w}_{L}}\approx(\mathbf{p}-\mathbf{y})\sum_{t}\mathbf{o}_{L-1}^{t} \tag{9}\]

\[\frac{\partial\mathcal{L}}{\partial\mathbf{o}_{L-1}^{t}}=\frac{\partial\mathcal{L}}{\partial\mathbf{u}_{L}^{t}}\frac{\partial\mathbf{u}_{L}^{t}}{\partial\mathbf{o}_{L-1}^{t}}=(\mathbf{p}-\mathbf{y})\hat{\mathbf{w}}_{L} \tag{10}\]

where \(\eta\) is the learning rate (LR).

Fig. 2: Proposed SNN training framework details with 3-D convolutions.

Fig. 3: Fake quantization forward and backward pass with straight-through estimator (STE) approximation.

Note that the derivative of the fake quantization function of the weights \((\frac{\partial\hat{\mathbf{w}}_{L}}{\partial\mathbf{w}_{L}})\) is undefined at the step boundaries and zero everywhere else, as shown in Fig. 3(a). Our training framework addresses this challenge by using the straight-through estimator (STE) [53], which approximates the derivative as equal to 1 for inputs in the range \([w_{min},w_{max}]\), as shown in Fig. 3(b), where \(w_{min}\) and \(w_{max}\) are the minimum and maximum values of the weights in a particular layer. Note that \(w_{min}\) and \(w_{max}\) are updated at the end of every mini-batch so that all the weights lie between \(w_{min}\) and \(w_{max}\) during the forward and backward computations of each training iteration. Hence, we use \(\frac{\partial\hat{\mathbf{w}}_{L}}{\partial\mathbf{w}_{L}}\approx 1\) to compute the loss gradients in Eq. 9.

_Hidden layers_: The neurons in the hidden convolutional and fully-connected layers are defined by the quantized LIF model as

\[\mathbf{u}_{l}^{t}=\lambda_{l}\mathbf{u}_{l}^{t-1}+\hat{\mathbf{w}}_{l}\mathbf{o}_{l-1}^{t}-v_{l}\mathbf{o}_{l}^{t-1} \tag{11}\]

\[\mathbf{z}_{l}^{t}=\frac{\mathbf{u}_{l}^{t}}{v_{l}}-1,\quad\mathbf{o}_{l}^{t}=\begin{cases}1,&\text{if }\mathbf{z}_{l}^{t}>0\\ 0,&\text{otherwise}\end{cases} \tag{12}\]

where \(\lambda_{l}\) and \(v_{l}\) represent the leak and threshold potential for all neurons in layer \(l\). All neurons in a layer share the same leak and threshold value. This reduces the number of trainable parameters, and we did not observe any significant improvement from assigning an individual threshold/leak to each neuron. Given that the threshold is the same for all neurons in a particular layer, it may seem redundant to train both the weights and the threshold together. However, we observe that the number of time steps required to obtain state-of-the-art classification accuracy decreases with this joint optimization. We hypothesize that this is because the optimizer is able to reach an improved local minimum when both parameters are tunable.
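The fake quantization and STE described above can be expressed compactly as a custom autograd function. The following PyTorch sketch reflects our reading of Fig. 3 (affine fake quantization in the forward pass, a pass-through gradient in the backward pass); the class and variable names are illustrative assumptions, not the authors' released code.

```python
# Sketch of fake quantization with the straight-through estimator (STE).
import torch

class FakeQuantSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w, b):
        w_min, w_max = w.min(), w.max()
        s = (2 ** b - 1) / (w_max - w_min)           # affine scale
        z = torch.round(-w_min * s) - 2 ** (b - 1)   # integer zero-point
        w_q = torch.clamp(torch.round(w * s + z),
                          -2 ** (b - 1), 2 ** (b - 1) - 1)
        ctx.save_for_backward(w, w_min, w_max)
        return (w_q - z) / s                         # dequantized ("fake") weights

    @staticmethod
    def backward(ctx, grad_output):
        w, w_min, w_max = ctx.saved_tensors
        # STE: pass the gradient through unchanged for w in [w_min, w_max],
        # i.e. approximate d(w_hat)/dw ~= 1; zero it outside that range.
        inside = (w >= w_min) & (w <= w_max)
        return grad_output * inside.to(grad_output.dtype), None

w = torch.randn(128, requires_grad=True)
w_hat = FakeQuantSTE.apply(w, 6)   # used in the forward passes of Eqs. 6 and 11
w_hat.sum().backward()             # w.grad ~= all ones under the STE
```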
The weight update in Q-STDB is calculated as

\[\Delta w_{l}=\sum_{t}\frac{\partial\mathcal{L}}{\partial w_{l}}=\sum_{t}\frac{\partial\mathcal{L}}{\partial\mathbf{o}_{l}^{t}}\frac{\partial\mathbf{o}_{l}^{t}}{\partial\mathbf{z}_{l}^{t}}\frac{\partial\mathbf{z}_{l}^{t}}{\partial\mathbf{u}_{l}^{t}}\frac{\partial\mathbf{u}_{l}^{t}}{\partial\hat{\mathbf{w}}_{l}}\frac{\partial\hat{\mathbf{w}}_{l}}{\partial\mathbf{w}_{l}}\approx\sum_{t}\frac{\partial\mathcal{L}}{\partial\mathbf{o}_{l}^{t}}\frac{\partial\mathbf{o}_{l}^{t}}{\partial\mathbf{z}_{l}^{t}}\frac{1}{v_{l}}\mathbf{o}_{l-1}^{t}\cdot 1 \tag{13}\]

where \(\frac{\partial\hat{\mathbf{w}}_{l}}{\partial\mathbf{w}_{l}}\) and \(\frac{\partial\mathbf{o}_{l}^{t}}{\partial\mathbf{z}_{l}^{t}}\) are the two discontinuous gradients. We calculate the former using the STE described above, while the latter is approximated using the surrogate gradient [48] shown below.

\[\frac{\partial\mathbf{o}_{l}^{t}}{\partial\mathbf{z}_{l}^{t}}=\gamma\cdot max(0,1-|\mathbf{z}_{l}^{t}|) \tag{14}\]

Note that \(\gamma\) is a hyperparameter denoting the maximum value of the gradient. The threshold and leak updates are computed similarly using BPTT [32].

## IV Proposed Architectures

We developed two models, a 3-D and a hybrid fusion of 3-D and 2-D convolutional architectures, that are inspired by the recently proposed CNN models [23, 26, 25] used for HSI classification and are compatible with our ANN-SNN conversion framework. We refer to the two models as CNN-3D and CNN-32H. The models are trained without the bias term because the bias complicates the parameter space exploration, which increases the conversion difficulty and tends to increase the conversion loss. The absence of the bias term implies that batch normalization [54] cannot be used as a regularizer during the training process. Instead, we use dropout [55] as the regularizer for both ANN and SNN training. Moreover, our models employ the ReLU nonlinearity after each convolutional and linear layer (except the classifier layer) to further decrease the conversion loss, owing to the similarity between ReLU and LIF neurons. Also, our pooling operations use average pooling because, for binary spike based activation layers, max pooling incurs significant information loss. Additionally, we modified the number of channels and convolutional layers to obtain a reasonable trade-off between accuracy and compute efficiency. 2-D patches of sizes \(5\times 5\) and \(3\times 3\) were extracted for CNN-3D and CNN-32H respectively, without any dimensionality reduction of each dataset. Larger patches increase the computational complexity without any significant improvement in test accuracy. Our model architectures are described in detail in Table I.

## V Experiments

### _Datasets_

We used three publicly available datasets, namely Indian Pines, Pavia University, and Salinas Scene. A brief description of each follows [56].

_Indian Pines_: The Indian Pines (IP) dataset consists of \(145\times 145\) spatial pixels and \(220\) spectral bands in the range of \(400-2500\) nm. It was captured using the AVIRIS sensor over North-Western Indiana, USA, with a ground sample distance (GSD) of \(20\) m, and has \(16\) vegetation classes.

_Pavia University_: The Pavia University (PU) dataset consists of hyperspectral images with \(610\times 340\) pixels in the spatial dimension and \(103\) spectral bands, ranging from \(430\) to \(860\) nm in wavelength.
It was captured with the ROSIS sensor, with a GSD of \(1.3\) m, over the University of Pavia, Italy. It has a total of 9 urban land-cover classes.

_Salinas Scene_: The Salinas Scene (SA) dataset contains images with a \(512\times 217\) spatial dimension and \(224\) spectral bands in the wavelength range of \(360\) to \(2500\) nm. The \(20\) water-absorbing spectral bands have been discarded. It was captured with the AVIRIS sensor over Salinas Valley, California, with a GSD of \(3.7\) m. In total, \(16\) classes are present in this dataset.

For preprocessing, the images in all the datasets are normalized to have zero mean and unit variance. For our experiments, all the samples are randomly divided into two disjoint training and test sets. A limited 40% of the samples are used for training and the remaining 60% for performance evaluation.

### _Experimental Setup_

#### V-B1 ANN Training

We performed full-precision ANN training for \(100\) epochs using the standard SGD optimizer with an initial learning rate (LR) of \(0.01\) that decayed by a factor of 0.1 after \(60\), \(80\), and \(90\) epochs.

#### V-B2 Conversion and SNN Training

We first examine the distribution of the neuron input values over the total number of time steps across all neurons of the first layer for a small batch of HSI images (of size \(50\) in our case) and set the layer threshold to the \(99.7\)th percentile of the scaled value of the evaluated threshold [32]. In our experiments, we scale the initial thresholds by 0.8. Similarly, we then compute the thresholds of the subsequent layers sequentially by examining the distributions of their input values. Note that we use \(100\) time steps to evaluate the thresholds, while SNN training and inference are performed with only \(5\) time steps. We keep the leak of each layer set to unity while evaluating the initial thresholds. At the start of SNN training, we initialize the weights with those from the trained ANN and initialize the leak parameters to \(1.0\). We then perform the quantization-aware SNN training described in Section III for another \(100\) epochs. We set \(\gamma=0.3\) [48] and used the ADAM optimizer with a starting LR of \(10^{-4}\), which decays by a factor of \(0.5\) after \(60\), \(80\), and \(90\) epochs.

### _ANN & SNN Inference Results_

We used the Overall Accuracy (OA), Average Accuracy (AA), and Kappa Coefficient evaluation measures to assess the HSI classification performance of our proposed architectures, similar to [23]. Here, OA represents the fraction of correctly classified samples out of the total test samples, AA represents the average of the class-wise classification accuracies, and Kappa is a statistical metric used to assess the mutual agreement between the ground truth map and the classification map. Column 2 in Table II shows the ANN accuracies, and column 3 shows the accuracy after ANN-SNN conversion with \(50\) time steps. Column 4 shows the accuracy when we perform our proposed training without quantization, while columns 5 to 7 show the SNN test accuracies obtained with Q-STDB for different bit precisions (4 to 6 bits) of the weights. We observe that, for all the datasets, SNNs trained with 6-bit weights achieve a \(5.33\times\) reduction in bit-precision compared to full-precision (32-bit) models and perform almost on par with the full-precision ANNs on both architectures.
4-bit weights also do not incur a significant accuracy drop, and can be used for applications demanding high energy-efficiency and low latency. Fig. 5 shows the confusion matrices for the HSI classification performance of the ANN and the proposed SNN on the IP dataset for both architectures. Although the membrane potentials do not need to be quantized, as described in Section III, we observed that the model accuracy does not drop significantly even if we quantize them; hence, the SNN results shown in Table II correspond to 6-bit membrane potentials. Moreover, quantized membrane potentials can reduce the data movement cost, as discussed in Section VI-B. The performance of our ANNs and SNNs trained via Q-STDB is compared with the current state-of-the-art ANNs used for HSI classification in Table III. Note that merely porting the ANN architectures used in [23, 26] to SNNs and performing 6-bit Q-STDB results in a significant accuracy drop, which demonstrates the efficacy of our proposed architectures.

#### V-C1 Q-STDB vs Post-Training Quantization (PTQ)

PTQ cannot always yield ultra-low-precision SNNs with SOTA test accuracy. For example, for the IP dataset and the CNN-32H architecture, the lowest weight bit precision at which SNNs can be trained with PTQ with no more than a \(1\%\) reduction in SOTA test accuracy is \(12\), if we limit the total number of time steps to \(5\). Fig. 4(b) shows the test accuracies for different weight bit precisions (\(\leq 12\)) with PTQ on the IP dataset. The weights can be further quantized to \(8\) bits if we increase the number of time steps to \(10\), which increases the latency. On the other hand, Q-STDB yields accurate (\(\leq 1\%\) deviation from the ANN accuracy) \(6\)-bit SNNs with only \(5\) time steps, which improves both the energy-efficiency and the latency. The energy-efficiency of our proposed architectures trained with Q-STDB is quantified in Section VI.

#### V-C2 Affine vs Scale Quantization during Training

As discussed in Section III, performing scale quantization in the forward path during training degrades the SNN accuracy significantly. Fig. 4(a) shows the test accuracies for affine and scale quantization during training with the CNN-3D architecture on the IP dataset.

Fig. 4: (a) Test accuracies for affine and scale quantization with CNN-3D over the IP dataset. (b) Test accuracies with 6, 9 and 12-bit weight precisions for post-training quantization with CNN-32H on the IP dataset.

## VI Improvement in Energy-Delay Product

### _Spiking Activity_

To model energy consumption, we assume a generated SNN spike consumes a fixed amount of energy [43]. Based on this assumption, earlier works [42, 31] have adopted the average spiking activity (also known as the average spike count) of an SNN layer \(l\), denoted \(\zeta_{l}\), as a measure of the compute-energy of the model. In particular, \(\zeta_{l}\) is computed as the ratio of the total spike count over \(T\) steps across all the neurons of layer \(l\) to the total number of neurons in that layer. Thus, the lower the spiking activity, the better the energy efficiency. Fig. 6 shows the average number of spikes in each layer with Q-STDB when evaluated on \(200\) samples from the IP test set for the CNN-3D architecture, where \(\zeta_{l}\) is computed by summing all the spikes in a layer over the \(5\) time steps and dividing by the number of neurons in that layer.
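A sketch of this spiking activity metric is given below; the tensor layout ([T, N]: time steps by neurons) and names are illustrative assumptions.

```python
# Average spiking activity (spike count) of a layer, per the definition above.
import torch

def avg_spiking_activity(spikes):
    """spikes: binary tensor of shape [T, N] (time steps x neurons of layer l)."""
    T, N = spikes.shape
    return spikes.sum().item() / N   # total spikes over T steps, per neuron

# Example: 5 time steps, 1000 neurons, ~11% firing probability per step
spikes = (torch.rand(5, 1000) < 0.11).float()
print(avg_spiking_activity(spikes))  # ~0.55 spikes per neuron over 5 steps
```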
For example, the average spike count of the \(3^{rd}\) convolutional layer of the SNN is \(0.568\), which implies that over a \(5\) time step period each neuron in that layer spikes \(0.568\) times on average over all input samples.

Fig. 5: Confusion matrices for the HSI test performance of the ANN and the proposed 6-bit SNN over the IP dataset for both CNN-3D and CNN-32H. The ANN and SNN confusion matrices look similar for both network architectures. CNN-32H incurs a small drop in accuracy compared to CNN-3D due to its shallow architecture.

Fig. 6: Layerwise spiking activity plots for CNN-3D on the Indian Pines, Salinas Scene and Pavia University datasets.

### _Floating point operations count (FLOPs) & Total Energy_

Let us consider a 3-D convolutional layer \(l\) having a weight tensor \(\mathbf{W}^{l}\) that operates on an input activation tensor \(\mathbf{I}^{l}\in\mathbb{R}^{H_{i}^{l}\times W_{i}^{l}\times C_{i}^{l}\times D_{i}^{l}}\), where the notation is similar to the one used in Section III. We now quantify the energy consumed to produce the corresponding output activation tensor \(\mathbf{O}^{l}\in\mathbb{R}^{H_{o}^{l}\times W_{o}^{l}\times C_{o}^{l}}\) for an ANN and an SNN, respectively. Our model can be extended to fully-connected layers, with \(f_{l}^{i}\) and \(f_{l}^{o}\) as the number of input and output features respectively, and to 2-D convolutional layers by shrinking a dimension of the feature maps. In particular, for an ANN, the total number of FLOPs for layer \(l\), denoted \(F_{l}^{ANN}\), is shown in row \(1\) of Table IV [58, 59]. The formula can be easily adjusted for an SNN, in which the number of FLOPs at layer \(l\) is a function of the average spiking activity of the layer \((\zeta_{l})\), denoted \(F_{l}^{SNN}\) in Table IV. Thus, as the activation output gets sparser, the compute energy decreases. For ANNs, the FLOPs primarily consist of the multiply-accumulate (MAC) operations of the convolutional and linear layers. On the contrary, for SNNs, except in the first and last layers, the FLOPs are limited to accumulates (ACs), as the spikes are binary and thus simply indicate which weights need to be accumulated at the post-synaptic neurons. For the first layer, we need to use MAC units as we consume the analog input directly at time step one (see Footnote 1). Hence, the compute energy for an ANN \((E^{ANN})\) and an iso-architecture SNN model \((E^{SNN})\) can be written as

\[E^{ANN}=\Big(\sum_{l=1}^{L}F_{l}^{ANN}\Big)E_{MAC} \tag{15}\]

\[E^{SNN}=(F_{1}^{ANN})E_{MAC}+\Big(\sum_{l=2}^{L}F_{l}^{SNN}\Big)E_{AC} \tag{16}\]

Footnote 1: Note that for a hybrid coded data input we would need to perform MACs at the first layer at \(t=1\) and AC operations during the remaining time steps at that layer. For the direct coded input, only MACs during the \(1^{st}\) time step are sufficient, as neither the inputs nor the weights change during the remaining time steps (i.e. \(2\leq t\leq 5\)).

where \(L\) is the total number of layers, and \(E_{MAC}\) and \(E_{AC}\) are the energy consumption of a MAC and an AC operation respectively. As shown in Table V, \(E_{AC}\) is \(\sim\)\(32\times\) lower than \(E_{MAC}\) [60] in \(45\) nm CMOS technology at 32-bit precision. To compute \(E_{MAC}\) and \(E_{AC}\) for an arbitrary bit precision \(Q\) (6 bits in our work), we use \(E_{MAC}\propto Q^{1.25}\) [61] and \(E_{AC}\propto Q\) [62].
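The following back-of-the-envelope sketch instantiates Eqs. 15-16 together with this bit-precision scaling; the 45 nm per-operation energies follow Table V / [60], and all per-layer FLOPs and spiking activities below are hypothetical placeholders, not measured values from the paper.

```python
# Compute-energy model of Eqs. 15-16 with bit-precision scaling [61, 62].
E_MAC_32, E_AC_32 = 3.2e-12, 0.1e-12   # joules per 32-bit MAC / AC (Table V, [60])

def e_mac(q):                # E_MAC scales ~Q^1.25 with bit precision Q [61]
    return E_MAC_32 * (q / 32) ** 1.25

def e_ac(q):                 # E_AC scales ~linearly with Q [62]
    return E_AC_32 * (q / 32)

def ann_energy(flops_ann, q=32):                     # Eq. 15
    return sum(flops_ann) * e_mac(q)

def snn_energy(flops_ann, flops_snn, q=6):           # Eq. 16
    # First layer: MACs on the direct-coded analog input (at t = 1);
    # remaining layers: ACs, already weighted by the spiking activity zeta_l.
    return flops_ann[0] * e_mac(q) + sum(flops_snn[1:]) * e_ac(q)

flops_ann = [1e8, 5e8, 5e8, 1e8]          # hypothetical per-layer ANN FLOPs
zeta = [1.0, 0.60, 0.57, 0.40]            # hypothetical spiking activities
flops_snn = [f * z for f, z in zip(flops_ann, zeta)]
print(ann_energy(flops_ann) / snn_energy(flops_ann, flops_snn))  # energy ratio
```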
These numbers may vary across technologies but, in most technologies, an AC operation is significantly less expensive than a MAC operation and its energy scales close to linearly with bit precision. Fig. 7 illustrates the compute energy consumption and FLOPs of the full-precision ANN and 6-bit quantized SNN models of the two proposed architectures while classifying the IP, PU, and SA datasets, where the energy is normalized to that of the equivalent ANN. We also consider 6-bit ANN models to compare the energy-efficiency of low-precision ANNs and SNNs. As seen in Fig. 7, 6-bit ANN models are \(12.5\times\) more energy-efficient than 32-bit ANN models due to the similar factor of improvement in MAC energy (see Table V). Note that we can achieve the HSI test accuracies shown in Table II with quantized ANNs as well. The FLOPs of the SNNs obtained by our proposed training framework are smaller than those of an ANN with a similar number of parameters, due to the low spiking activity. Moreover, because ACs consume significantly less energy than MACs at all bit precisions, SNNs are significantly more compute efficient. In particular, for CNN-3D on IP, our proposed SNN consumes \(\sim\)\(199.3\times\) and \(\sim\)\(15.9\times\) less compute energy than an iso-architecture full-precision and 6-bit ANN with similar parameters, respectively. The improvements become \(\sim\)\(560.6\times\) and \(\sim\)\(44.8\times\) respectively when averaging across the two network architectures and three datasets.

Note that we did not consider the memory access energy in our evaluation because it is dependent on the underlying system architecture. In general, SNNs incur significant data movement because both the membrane potentials and the weights need to be fetched from the on-chip memory. Q-STDB addresses this memory cost by reducing their bit precisions by \(5.33\times\) (see Section V-C) compared to full-precision models. Moreover, there have been many proposals to reduce the memory cost by data buffering [63], computing in non-volatile crossbar memory arrays [64], and data reuse with energy-efficient dataflows [65]. All these techniques can be combined with Q-STDB to further decrease the memory cost.

## VII Conclusions and Broader Impact

In this paper, we propose spiking versions of a 3-D and a hybrid 3-D/2-D convolutional architecture for HSI classification. We present a quantization-aware training technique that yields highly accurate low-precision SNNs, which can be accelerated by integer math units or PIM accelerators. Our quantized SNNs offer significant improvements in energy consumption compared to both full- and low-precision ANNs for HSI classification. The resulting energy-efficient SNN models can be readily deployed in HSI sensors, thereby eliminating the bandwidth and privacy concerns of offloading the computation to the cloud. Since the commercial applications of HSI analysis are broadly expanding and the models required for HSI classification are becoming deeper, energy-efficiency is becoming a key concern, as it already is in traditional computer vision tasks. To the best of our knowledge, this work is the first to address the energy-efficiency of HSI models, and we hope it can inspire more research in low-power algorithm-hardware co-design of neural network models for HSI classification.

## References

* [1] Y. Chen, Z. Lin, X. Zhao, G. Wang, and Y.
Gu, \"Deep learning-based classification of hyperspectral data,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 7, no. 6, pp. 2094-2107, 2014. * [2] Y. Wan, Y. Fan, and M. Jin, \"Application of hyperspectral remote sensing for supplementary investigation of polymetallic deposits in huaniushan ore region, northwestern china,\" _Scientific Reports_, vol. 11, p. 440, 01 2021. * [3] A. Papp, J. Pegoraro, D. Bauer, P. Taupe, C. Wiesmeyr, and A. Kricchbaum-Zabini, \"Automatic annotation of hyperspectral images and spectral signal classification of people and vehicles in areas of dense vegetation with deep learning,\" _Remote Sensing_, vol. 12, no. 13, 2020. * [4] Z. Zheng, Y. Zhong, A. Ma, and L. Zhang, \"FPGA: Fast patch-free global learning framework for fully end-to-end hyperspectral image classification,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 58, no. 8, pp. 5612-5626, 2020. * [5] F. Melgani and L. Bruzzone, \"Classification of hyperspectral remote sensing images with support vector machines,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 42, no. 8, pp. 1778-1790, 2004. * [6] M. Pal, \"Random forests for land cover classification,\" in _IGARSS 2003. 2003 IEEE International Geoscience and Remote Sensing Symposium. Proceedings (IEEE Cat. No.03CH37477)_, vol. 6, no. 1, 2003, pp. 3510-3512 vol.6. * [7] J. Xia, N. Yokoya, and A. Iwasaki, \"Hyperspectral image classification with canonical correlation forests,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 55, no. 1, pp. 421-431, 2017. * [8] B. Krishnapuram, L. Carin, M. A. T. Figueiredo, and A. J. Hartemink, \"Sparse multinomial logistic regression: fast algorithms and generalization bounds,\" _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 27, no. 6, pp. 957-968, 2005. * [9] G. Camps-Valls, L. Gomez-Chova, J. Munoz-Mari, J. Vila-Frances, and J. Calpe-Mavilla, \"Composite kernels for hyperspectral image classification,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 3, no. 1, pp. 93-97, 2006. * [10] B. Tu, X. Zhang, X. Kang, G. Zhang, J. Wang, and J. Wu, \"Hyperspectral image classification via fusing correlation coefficient and joint sparse representation,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 15, no. 3, pp. 340-344, 2018. * [11] P. Gao, J. Wang, H. Zhang, and Z. Li, \"Boltzmann entropy-based unsupervised band selection for hyperspectral image classification,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 16, no. 3, pp. 462-466, 2019. * [12] Q. Gao, S. Lim, and X. Jia, \"Hyperspectral image classification using joint sparse model and discontinuity preserving relaxation,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 15, no. 1, pp. 78-82, 2018. * [13] J. A. Benediktsson, J. A. Palmason, and J. R. Svension, \"Classification of hyperspectral data from urban areas based on extended morphological profiles,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 43, no. 3, pp. 480-491, 2005. * [14] J. Li, P. R. Marpu, A. Plaza, J. M. Bioucas-Dias, and J. A. Benediktsson, \"Generalized composite kernel framework for hyperspectral image classification,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 51, no. 9, pp. 4816-4829, 2013. * [15] A. Krizhevsky _et al._, \"ImageNet classification with deep convolutional neural networks,\" in _Advances in Neural Information Processing Systems_, 2012, pp. 1097-1105. * [16] K. He, X. Zhang, S. Ren, and J. 
Sun, \"Deep residual learning for image recognition,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2016, pp. 770-778. * [17] S. Ren, K. He, R. Girshick, and J. Sun, \"Faster R-CNN: Towards real-time object detection with region proposal networks,\" _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 39, no. 6, p. 1137-1149, Jun. 2017. * [18] K. He, G. Gkioxari, P. Dollar, and R. Girshick, \"Mask R-CNN,\" _arXiv preprint arXiv:1703.06870_, 2018. * [19] V. K. Repala and S. R. Dubey, \"Dual CNN models for unsupervised monocular depth estimation,\" _arXiv preprint arXiv:1804.06324_, 2019. * [20] K. Makantakis, K. Karantzlos, A. Doulamis, and N. Doulamis, \"Deep supervised learning for hyperspectral data classification through convolutional neural networks,\" in _2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_, vol. 1, no. 1, 2015, pp. 4959-4962. * [21] T. Alipour-Fard, M. E. Paoletti, J. M. Haut, H. Arefi, J. Plaza, and A. Plaza, \"Multibranch selective kernel networks for hyperspectral image classification,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 1, no. 1, pp. 1-5, 2020. * [22] W. Song, S. Li, L. Fang, and T. Lu, \"Hyperspectral image classification with deep feature fusion network,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 56, no. 6, pp. 3173-3184, 2018. * [23] A. Ben Hamida, A. Benoit, P. Lambert, and C. Ben Amar, \"3-D deep learning approach for remote sensing image classification,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 56, no. 8, pp. 4420-4434, 2018. * [24] H. Lee and H. Kwon, \"Going deeper with contextual cnn for hyperspectral image classification,\" _IEEE Transactions on Image Processing_, vol. 26, no. 10, pp. 4843-4855, 2017. * [25] S. K. Roy, G. Krishna, S. R. Dubey, and B. B. Chaudhuri, \"HybridSN: Exploring 3-D-2-D CNN feature hierarchy for hyperspectral image classification,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 17, no. 2, pp. 277-281, 2020. * [26] Y. Luo, J. Zou, C. Yao, X. Zhao, T. Li, and G. Bai, \"HSI-CNN: A novel convolution neural network for hyperspectral image,\" in _2018 International Conference on Audio, Language and Image Processing (ICALIP)_, vol. 1, no. 1, 2018, pp. 464-469. * [27] D. Li, X. Chen, M. Becchi, and Z. Zong, \"Evaluating the energy efficiency of deep convolutional neural networks on CPUs and GPUs,\" in _2016 IEEE International Conferences on Big Data and Cloud Computing (BDCloud), Social Computing and Networking (SocialCom), Sustainable Computing and Communications (SustainCom) (BDCloud-SocialCom-SustainCom)_, vol. 1, no. 1, 2016, pp. 477-484. * Workshops_, vol. 1, no. 1, 2010, pp. 44-51. * [29] M. Pfeiffer and T. Pfeil, \"Deep learning with spiking neurons: Opportunities and challenges,\" _Frontiers in Neuroscience_, vol. 12, p. 774, 2018. * [30] P. U. Diehl, G. Zarrella, A. Cassidy, B. U. Peloroni, and E. Neftci, \"Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware,\" in _2016 IEEE International Conference on Rebouncing Computing (ICRC)_. IEEE, 2016, pp. 1-8. * [31] A. Sengupta, Y. Ye, R. Wang, C. Liu, and K. Roy, \"Going deeper in spiking neural networks: VGG and residual architectures,\" _Frontiers in Neuroscience_, vol. 13, p. 95, 2019. * [32] N. Rathi and K. Roy, \"DIFT-SNN: Direct input encoding with leakage and threshold optimization in deep spiking neural networks,\" _arXiv preprint arXiv:2008.03658_, 2020. 
* [33] _2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, 2020, pp. 8529-8533. * [34] S. R. Kheradpisheh and T. Masquelier, "Temporal backpropagation for spiking neural networks with one spike per neuron," _International Journal of Neural Systems_, vol. 30, no. 06, May 2020. * [35] J. Kim, H. Kim, S. Huh, J. Lee, and K. Choi, "Deep neural networks with weighted spikes," _Neurocomputing_, vol. 311, pp. 373-386, 2018. * [36] D. A. Almomani, M. Alauthman, M. Alweshah, O. Dorgham, and F. Albalas, "A comparative study on spiking neural network encoding schema: implemented with cloud computing," _Cluster Computing_, vol. 22, 2019. * [37] G. Datta, S. Kundu, and P. A. Beerel, "Training energy-efficient deep spiking neural networks with single-spike hybrid input encoding," _arXiv preprint arXiv:2107.12374_, 2021. * [38] J. H. Lee, T. Delbruck, and M. Pfeiffer, "Training deep spiking neural networks using backpropagation," _Frontiers in Neuroscience_, vol. 10, p. 508, 2016. * [39] Y. Wu, L. Deng, G. Li, J. Zhu, Y. Xie, and L. Shi, "Direct training for spiking neural networks: Faster, larger, better," in _Proceedings of the AAAI Conference on Artificial Intelligence_, vol. 33, 2019, pp. 1311-1318. * [40] S. Lu and A. Sengupta, "Exploring the connection between binary and spiking neural networks," _arXiv preprint arXiv:2002.10064_, 2020.

Fig. 7: Comparison of FLOPs and compute energy of CNN-3D and CNN-32H between ANN and SNN models while classifying the (a) Indian Pines, (b) Salinas Scene and (c) Pavia University datasets, respectively.

* [41] C. Lee, S. S. Sarwar, P. Panda, G. Srinivasan, and K. Roy, "Enabling spike-based backpropagation for training deep neural network architectures," _Frontiers in Neuroscience_, vol. 14, p. 119, 2020. * [42] N. Rathi, G. Srinivasan, P. Panda, and K. Roy, "Enabling deep spiking neural networks with hybrid conversion and spike timing dependent backpropagation," _arXiv preprint arXiv:2005.01807_, 2020. * [43] Y. Cao, Y. Chen, and D. Khosla, "Spiking deep convolutional neural networks for energy-efficient object recognition," _International Journal of Computer Vision_, vol. 113, pp. 54-66, 2015. * [44] B. Rueckauer, I.-A. Lungu, Y. Hu, M. Pfeiffer, and S.-C. Liu, "Conversion of continuous-valued deep networks to efficient event-driven networks for image classification," _Frontiers in Neuroscience_, vol. 11, p. 682, 2017. * [45] P. U. Diehl, D. Neil, J. Binas, M. Cook, S. Liu, and M. Pfeiffer, "Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing," in _2015 International Joint Conference on Neural Networks (IJCNN)_, 2015, pp. 1-8. * [46] Y. Hu, H. Tang, and G. Pan, "Spiking deep residual network," _arXiv preprint arXiv:1805.01352_, 2018. * [47] P. Panda and K. Roy, "Unsupervised regenerative learning of hierarchical features in spiking deep networks for object recognition," _arXiv preprint arXiv:1602.01510_, 2016. * [48] G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass, "Long short-term memory and learning-to-learn in networks of spiking neurons," _arXiv preprint arXiv:1803.09574_, 2018. * [49] E. O. Neftci, H. Mostafa, and F. Zenke, "Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks," _IEEE Signal Processing Magazine_, vol. 36, no. 6, pp. 51-63, 2019. * [50] S. R. Jain, A. Gural, M. Wu, and C. H.
Dick, \"Trained quantization thresholds for accurate and efficient fixed-point inference of deep neural networks,\" _arXiv preprint arXiv:1903.08066_, 2020. * [51] N. Rathi, P. Panda, and K. Roy, \"STDP based pruning of connections and weight quantization in spiking neural networks for energy efficient recognition,\" _arXiv preprint arXiv:1710.04734_, 2017. * Taiwan_, vol. 1, no. 1, 2020, pp. 1-2. * [53] M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio, \"Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1,\" _arXiv preprint arXiv:1602.02830_, 2016. * [54] N. Bjorck, C. P. Gomes, B. Selman, and K. Q. Weinberger, \"Understanding batch normalization,\" in _Advances in Neural Information Processing Systems_, 2018, pp. 7694-7705. * [55] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, \"Dropout: A simple way to prevent neural networks from overfitting,\" _Journal of Machine Learning Research_, vol. 15, pp. 1929-1958, 06 2014. * [56] M. Gruna, M. A. Veganzons, and B. Ayerdi, \"Hyperspectral remote sensing scenes,\" [http://www.chu.ens/c/cewintco/index.php/Hyperspectral_Remote_Sensing_Scenes](http://www.chu.ens/c/cewintco/index.php/Hyperspectral_Remote_Sensing_Scenes). * [57] Z. Zhong, J. Li, Z. Luo, and M. Chapman, \"Spectral-spatial residual network for hyperspectral image classification: A 3-D deep learning framework,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 56, no. 2, pp. 847-858, 2018. * [58] S. Kundu, M. Nazemi, M. Pedram, K. M. Chugg, and P. Becel, \"Pre-defined sparsity for low-complexity convolutional neural networks,\" _IEEE Transactions on Computers_, 2020. * [59] S. Kundu, S. Prakash, H. Akrami, P. A. Beerel, and K. M. Chugg, \"p8Conv: A pre-defined sparse kernel based convolution for deep CNNs,\" in _2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton)_. IEEE, 2019, pp. 100-107. * [60] M. Horowitz, \"1.1. Computing's energy problem (and what we can do about it),\" in _2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC)_. IEEE, 2014, pp. 10-14. * [61] B. Moons, R. Uytterthoewen, W. Dehaene, and M. Verbelt, \"14.5 emision: A 0.26-to-10TOOPS/W subword-parallel dynamic-voltage-accuracy-frequency-scalable convolutional neural network processor in 28nm fdsoi,\" in _2017 IEEE International Solid-State Circuits Conference (ISSCC)_, vol. 1, no. 1, 2017, pp. 246-247. * [62] W. Simon, J. Galicia, A. Levisex, M. Zapater, and D. Atienza, \"A fast, reliable and wide-voltage-range in-memory computing architecture,\" in _2019 56th ACM/IEEE Design Automation Conference (DAC)_, vol. 1, no. 1, 2019, pp. 1-6. * [63] Y. Shen, M. Ferdman, and P. Milder, \"Escher: A CNN accelerator with flexible buffering to minimize off-chip transfer,\" in _2017 IEEE 25th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM)_, vol. 1, no. 1, 2017, pp. 93-100. * [64] B. Chen, F. Cai, J. Zhou, W. Ma, P. Sheridan, and W. D. Lu, \"Efficient in-memory computing architecture based on crossbar arrays,\" in _2015 IEEE International Electron Devices Meeting (IEDM)_, vol. 1, no. 1, 2015, pp. 1-4. * [65] Y.-H. Chen, J. Emer, and V. Sze, \"Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks,\" _ACM SIGARCH Computer Architecture News_, vol. 44, 06 2016. 
**Gourav Datta** received his bachelor's degree in Instrumentation Engineering with a minor in Electronics and Electrical Communication Engineering from the Indian Institute of Technology (IIT) Kharagpur, India, in 2018. He then joined the Ming Hsieh Department of Electrical and Computer Engineering at the University of Southern California, where he is currently pursuing a PhD degree. He interned at Apple Inc. and the INRIA Research Centre in the summers of 2019 and 2017, respectively. His research spans the entire computing stack, including devices, circuits, architectures and algorithms for accelerating machine learning workloads. During his tenure at IIT Kharagpur, he received the Institute Silver Medal and was adjudged the best outgoing student in academics in his batch.

**Souvik Kundu** received his M.Tech degree in Microelectronics and VLSI Design from the Indian Institute of Technology Kharagpur, India, in 2015. He worked as an R&D Engineer II at Synopsys India Pvt. Ltd. and as a Digital Design Engineer at Texas Instruments India Pvt. Ltd. from 2015 to 2016 and from 2016 to 2017, respectively. He is currently working towards the Ph.D. degree in Electrical and Computer Engineering at the University of Southern California, Los Angeles, CA, USA. His research focuses on energy-aware sparsity, model search, and algorithm-hardware co-design of robust and energy-efficient neural networks for CMOS and beyond-CMOS technologies.

**Akhilesh R. Jaiswal** is a Research Assistant Professor of Electrical and Computer Engineering and a Scientist at USC's Information Sciences Institute's (ISI) Application Specific Intelligent Computing (ASIC) Lab. Prior to USC/ISI, Dr. Jaiswal was a Senior Research Engineer with GLOBALFOUNDRIES (GF) at Malta. Dr. Jaiswal received his Ph.D. degree in Nanoelectronics from Purdue University in May 2019. As part of his doctoral program, his research focused on 1) CMOS based analog and digital in-memory and near-memory computing using standard memory bit-cells for beyond-von-Neumann computing, and 2) exploration of bio-mimetic devices and circuits using emerging non-volatile technologies for neuromorphic computing. His current research interests include the exploration of alternate computing paradigms using alternate state variables. Dr. Jaiswal has authored several publications and holds 15+ issued and several pending patents with the USPTO.

**Peter A. Beerel** received his B.S.E. degree in Electrical Engineering from Princeton University, Princeton, NJ, in 1989, and his M.S. and Ph.D. degrees in Electrical Engineering from Stanford University, Stanford, CA, in 1991 and 1994, respectively. He then joined the Ming Hsieh Department of Electrical and Computer Engineering at the University of Southern California, where he is currently a professor and the Associate Chair of the Computer Engineering Division. He is also a Research Director at the Information Sciences Institute at USC. Previously, he co-founded TimeLess Design Automation in 2008 to commercialize an asynchronous ASIC flow, and sold the company in 2010 to Fulcrum Microsystems, which was bought by Intel in 2011. His interests include a variety of topics in computer-aided design, machine learning, hardware security, asynchronous VLSI, and the commercialization of these technologies. He is a Senior Member of the IEEE.
Abstract: Hyperspectral images (HSIs) provide rich spectral-spatial information across a series of contiguous spectral bands. However, accurately processing the spectral and spatial correlation between the bands requires the use of energy-expensive 3-D Convolutional Neural Networks (CNNs). To address this challenge, we propose the use of Spiking Neural Networks (SNNs) that are generated from iso-architecture CNNs and trained with quantization-aware gradient descent to optimize their weights, membrane leak, and firing thresholds. During both training and inference, the analog pixel values of an HSI are directly applied to the input layer of the SNN without the need to convert them to a spike train. The reduced latency of our training technique, combined with high activation sparsity, yields significant improvements in computational efficiency. We evaluate our proposal using three HSI datasets on a 3-D and a 3-D/2-D hybrid convolutional architecture. We achieve an overall accuracy, average accuracy, and kappa coefficient of \(98.68\%\), \(98.34\%\), and \(98.20\%\) respectively, with \(5\) time steps (inference latency) and \(6\)-bit weight quantization on the Indian Pines dataset. In particular, our models achieve accuracies similar to state-of-the-art (SOTA) with \(\sim\)\(560.6\times\) and \(\sim\)\(44.8\times\) less compute energy, on average over the three HSI datasets, than an iso-architecture full-precision and a 6-bit quantized CNN, respectively.

Index Terms: hyperspectral images, spiking neural networks, quantization-aware, gradient descent, Indian Pines
# A high-resolution view of the recent drought trends over the Iberian Peninsula

Patricia Pascoa (corresponding author; Instituto Portugues do Mar e da Atmosfera, Lisboa, Portugal), Ana Russo, Celia M. Gouveia, Pedro M.M. Soares, Rita M. Cardoso, Joao A.M. Careto, Andreia F.S. Ribeiro

Multi-scalar drought indices make it possible to assess droughts with respect to different usable water resources, as shown by several authors (e.g. [11, 12]). This multi-scalar character is not accounted for by another widely used index, the Palmer Drought Severity Index (PDSI, [13]), which also has the disadvantage of needing to be calibrated for different regions. Additionally, SPEI has been reported to be more suitable under increasing temperatures ([12, 13, 14]), and to better reproduce hydrological drought in the Iberian Peninsula (IP), when compared to SPI or PDSI ([12, 13]). During the last decades, the IP has experienced multiple severe droughts ([12, 13, 14]), which had significant impacts on the environment and on diverse economic sectors ([12, 13, 14, 15, 16, 17, 18, 19, 20, 21]). In the same period, the IP's climatic conditions followed a clear trend toward drier and warmer conditions over a great part of its territory ([11, 13, 14, 15, 16, 17, 18, 19, 20, 21]). The spring and winter precipitation in both countries, Portugal and Spain, has been decreasing ([13, 14, 15, 16, 17, 18, 19, 20, 21]), accompanied by a significant and widespread warming from 1976 onward ([19, 20, 21]). The coupled decrease of precipitation and increase of temperature, which in the last decades enhanced evapotranspiration, has been associated with the increase in drought severity in the IP ([12, 13, 14, 15, 16, 17, 18, 19, 20, 21]). Studies of the occurrence of drought in the IP have been based on the analysis of results from gridded datasets covering long periods (e.g.
[13, 14, 15, 16, 17, 18, 19, 20, 21]) and on outputs of Regional Climate Models (RCM) (e.g. [13, 14, 15, 16, 17, 18, 19, 20, 21]). Gridded coarse-resolution datasets have the problem of relying on data from a small number of stations, later interpolated to a regular grid (e.g. the CRU and E-OBS datasets), and data from sparse monitoring stations have to be homogenized and missing values have to be estimated. Therefore, the use of high-resolution datasets (e.g. [13, 14, 15, 16, 17, 18, 19, 20, 21]), based on several times more data than most gridded datasets, has the advantage of providing enough information to look into finer details of precipitation and temperature features, down to the daily scale ([13, 14, 15, 16, 17, 18, 19, 20, 21]). Moreover, it can contribute greatly to drought management at the local scale, as precipitation is much better represented than in coarse-resolution datasets such as CRU (as in [13, 14]). Prior studies using high-resolution precipitation data for Portugal ([13]) emphasized that such data are essential to further improve climate simulations and, for example, hydrological applications, given the strong dependency of precipitation on orography and the variety of precipitation regimes in the IP ([13, 14, 15, 16, 17, 18, 19, 20, 21]). Nevertheless, those studies used high-resolution data to estimate hybrid (statistical-dynamical) long-range forecasts of the regional drought index SPI (3-months) over homogeneous regions of mainland Portugal determined by principal component analysis, which has the caveat of not considering the importance of temperature and of producing an output based on the leading principal component (losing the strength of using the complete high-resolution data). The unavoidable limitations of gridded data, such as those regarding complex terrain characterization or subscale heterogeneities, are still present in high-resolution datasets; nevertheless, they are considerably smaller than in coarse-resolution datasets ([13, 14, 15, 16, 17, 18, 19, 20, 21]). Recent works covering the entire IP have used gridded datasets with different spatial resolutions: [13, 14] used the CRU dataset, with a 0.5° resolution; [13] used precipitation data from GPCC, with a 0.25° resolution; and [14, 15, 16, 17, 18, 19, 20, 21] used data with a 0.2° spatial resolution. [13, 14, 15, 16, 17, 18, 19, 20, 21] developed a gridded dataset with a 1.1 km (approximately 0.01°) resolution covering Spain, which was used to study drought conditions in Spain ([13, 14, 15, 16, 17, 18, 19, 20, 21]). Recently, a new dataset of precipitation and temperature, Iberia01, was developed using station data from both countries, with a spatial resolution of 0.1° (Herrera et al., 2019). This new dataset encompasses all the available station data for the period 1971 to 2015 (with 3 486 and 275 stations with precipitation and temperature records, respectively). In complex-topography domains such as the IP, the station density underlying any gridded product is the main source of observational uncertainty ([13, 14, 15, 16, 17, 18, 19, 20, 21]); indeed, the density and quality of the underlying station data were found to have more impact on the quality of gridded datasets than the interpolation technique used. In this dataset, all the stations were interpolated together, and the border problems previously present in PT02 and Spain02 were eliminated. Thus, this new dataset represents the best available gridded product to date.
It is crucial to assess the spatial and temporal variability of drought using the most up-to-date methodologies and detailed datasets. Therefore, this work aims i) to characterize the recent evolution of drought in the IP by means of two multi-scalar drought indices, SPI and SPEI, as obtained from a very high-resolution dataset; ii) to identify the long-term trends of drought duration and mean drought intensity at the grid-point scale for a very high-resolution dataset; and iii) to assess the added value of the Iberia01 dataset for drought monitoring in the IP, when compared to lower-resolution datasets. The following section describes the data and methods, followed by the results and discussion, with the main conclusions presented in the last section.

## 2 Data and methods

Drought is quantified for the 1971-2015 period using two recognized multi-scalar drought indices, SPI and SPEI (e.g. [13, 14, 15, 16, 17, 18, 19, 20, 21]). Both SPI and SPEI allow for the quantification of several drought properties (e.g. intensity, magnitude, duration and affected area) based on the objective identification of the beginning and end of drought episodes ([13, 14, 15, 16, 17, 18, 19, 20, 21]). The main difference between these two indices is that SPEI encompasses both precipitation and evaporative demand ([13, 14, 15, 16, 17, 18, 19, 20, 21]), whereas SPI is computed using only precipitation ([13, 14, 15, 16, 17, 18, 19, 20, 21]). SPI and SPEI were computed based on the new high-resolution regular gridded dataset proposed by [13]. This daily dataset, Iberia01, includes maximum and minimum temperatures and precipitation values at 0.1° spatial resolution. It covers the IP uniformly and includes the largest amount of station observations compared to other datasets: 275 stations for temperature and 3 481 for precipitation. The daily time series were converted to monthly values of precipitation and temperature. In addition, the same meteorological variables were retrieved from the CRU TS4.03 database ([13]) and used to compute SPI and SPEI for the same period. This dataset has a spatial resolution of 0.5° and covers the period 1901-2018. The use of this lower-resolution dataset allowed the added value of the Iberia01 dataset to be assessed. The first step in the computation of SPEI is the estimation of the reference evapotranspiration (ETo). There is a panoply of available methods to estimate ETo, namely the Hargreaves (Hargreaves and Samani, 1985), the Thornthwaite (Thornthwaite, 1948), and the FAO-56 Penman-Monteith (Allen et al., 1998) methods. The advantage of using the Hargreaves or the Thornthwaite method, compared to the FAO-56 Penman-Monteith method, is that they require fewer meteorological variables, which are not always available (Droogers and Allen, 2002).
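To make this step concrete, the following minimal sketch (Python with NumPy; the function and variable names are ours, not from the paper) computes daily ETo with the Hargreaves-Samani formulation, using the standard FAO-56 expression for extraterrestrial radiation:

```python
import numpy as np

def extraterrestrial_radiation(lat_rad, doy):
    """FAO-56 daily extraterrestrial radiation Ra (MJ m-2 day-1)."""
    gsc = 0.0820  # solar constant (MJ m-2 min-1)
    dr = 1 + 0.033 * np.cos(2 * np.pi * doy / 365)        # inverse relative Earth-Sun distance
    delta = 0.409 * np.sin(2 * np.pi * doy / 365 - 1.39)  # solar declination (rad)
    ws = np.arccos(-np.tan(lat_rad) * np.tan(delta))      # sunset hour angle (rad)
    return (24 * 60 / np.pi) * gsc * dr * (
        ws * np.sin(lat_rad) * np.sin(delta)
        + np.cos(lat_rad) * np.cos(delta) * np.sin(ws)
    )

def hargreaves_eto(tmin, tmax, lat_deg, doy):
    """Hargreaves-Samani (1985) reference evapotranspiration (mm day-1)."""
    tmean = 0.5 * (tmin + tmax)
    ra_mm = 0.408 * extraterrestrial_radiation(np.radians(lat_deg), doy)  # MJ -> mm of evaporation
    return 0.0023 * ra_mm * (tmean + 17.8) * np.sqrt(np.maximum(tmax - tmin, 0.0))

# Example: a grid point at 40°N on 15 July (day of year 196)
print(hargreaves_eto(tmin=16.0, tmax=31.0, lat_deg=40.0, doy=196))
```

The climatic water balance P − ETo, aggregated to monthly values, is then the quantity that SPEI standardizes, just as SPI standardizes precipitation.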
In this work, the ETo used in the calculation of SPEI was computed with the Hargreaves method. The distribution functions to be used in the computation of SPI and SPEI were previously tested by several authors (Vicente-Serrano et al., 2010; Angelidis et al., 2012; Vicente-Serrano and Begueria, 2015). Following their suggestions, SPI was computed using a Gamma distribution to model precipitation, whereas SPEI was computed using a Log-logistic distribution to model the water deficit (Vicente-Serrano et al., 2010; Begueria et al., 2014; Pascoa et al., 2017; Russo et al., 2019). SPI and SPEI were calculated for several timescales (1-, 3-, 6-, and 12-months). The existence of significant trends in both SPI and SPEI was assessed by applying the modified Mann-Kendall test (Hamed and Ramachandra Rao, 1998), which takes into account the presence of outliers and/or autocorrelation in the analysed time series. The level of significance considered was 5%, and the slope of the trend was computed using the Theil-Sen method (Theil, 1950; Sen, 1968). Time series of drought duration and mean intensity of the drought events were computed for each grid point of the IP based on SPI and SPEI, excluding drought events that lasted only one month (Spinoni et al., 2015; Pascoa et al., 2017). These parameters are defined as proposed by McKee et al. (1993). The trends of these time series were also assessed, using the Spearman-Rho test, provided that the time series had a minimum of 20 points (Lanzante, 1996; Pascoa et al., 2017). Following Agnew (2000), drought conditions were identified when SPI or SPEI was lower than \(-0.84\), which corresponds to a probability of occurrence of 20%. Finally, the area affected by drought (%) was also determined for each month, regardless of the intensity, and the results obtained with each index were compared.

Figure 1: Spatial patterns of trends in the period 1971–2015 for SPI\({}_{\text{IB01}}\) (left) and SPEI\({}_{\text{IB01}}\) (right) for the 1-, 3-, 6-, and 12-month timescales. The slope of the trend shown was computed for the entire period. Statistically significant trends (at the 5% level) are indicated with a dot.
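As a rough illustration of how these index and event calculations fit together, the sketch below (Python with NumPy/SciPy; names and simplifications are ours) fits a Gamma distribution per calendar month to accumulated precipitation to obtain SPI, then extracts drought durations and mean intensities below the −0.84 threshold. Operational SPI implementations also handle the probability mass at zero precipitation with a mixed distribution, which is omitted here:

```python
import numpy as np
from scipy import stats

def spi(precip_monthly, scale=3):
    """Simplified SPI: Gamma fit per calendar month on `scale`-month totals."""
    p = np.asarray(precip_monthly, dtype=float)
    acc = np.convolve(p, np.ones(scale), mode="full")[: len(p)]  # running totals
    acc[: scale - 1] = np.nan                                    # incomplete windows
    out = np.full(len(p), np.nan)
    for month in range(12):
        idx = np.arange(month, len(p), 12)
        sample = acc[idx]
        ok = np.isfinite(sample)
        if ok.sum() < 20:
            continue
        vals = np.maximum(sample[ok], 0.01)       # crude guard against zero totals
        a, loc, b = stats.gamma.fit(vals, floc=0)  # fit Gamma, location fixed at 0
        cdf = stats.gamma.cdf(vals, a, loc=loc, scale=b)
        out[idx[ok]] = stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))
    return out

def drought_events(index, threshold=-0.84):
    """(duration, mean intensity) per event; one-month events are excluded."""
    below, events, start = index < threshold, [], None
    for i, flag in enumerate(below):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start > 1:
                events.append((i - start, float(np.nanmean(index[start:i]))))
            start = None
    if start is not None and len(below) - start > 1:
        events.append((len(below) - start, float(np.nanmean(index[start:]))))
    return events

# Example: 45 years of synthetic monthly precipitation
rng = np.random.default_rng(0)
spi3 = spi(rng.gamma(2.0, 40.0, 540), scale=3)
print(drought_events(spi3)[:3])
```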
## 3 Results and discussion

The trends of the drought indices computed with the Iberia01 dataset (SPI\({}_{\text{IB01}}\) and SPEI\({}_{\text{IB01}}\)) in the period 1971-2015 are shown in Fig. 1, and the results for the indices computed with the CRU TS4.03 dataset (SPI\({}_{\text{CRU}}\) and SPEI\({}_{\text{CRU}}\)) are shown in Fig. 2, for timescales of 1-12 months. Both SPI\({}_{\text{IB01}}\) and SPEI\({}_{\text{IB01}}\) (Fig. 1) present significant trends on all timescales, and the trends are mostly negative, indicative of increased drying conditions in the period 1971-2015. The area of significant SPEI trends is generally larger, due to the regional warming effect. Nonetheless, large areas of positive trends are present in the South of Spain, and some smaller areas are scattered across the territory. The spatial patterns show that significant trends are more frequent in the case of SPEI\({}_{\text{IB01}}\) (Fig. 1, right panel), but the spatial occurrence of positive/negative trends obtained with both indices is similar, especially for longer timescales. Two areas where the indices show opposite signs are present: the South of Spain, where SPI\({}_{\text{IB01}}\) shows negative trends, and the Southeast of Portugal, where SPI\({}_{\text{IB01}}\) presents a positive trend (Fig. 1, left panel). The area of significant trends also increases with the timescale, probably due to the lower variability at longer timescales, which reflects the time needed for water deficits to accumulate, but the patterns of negative and positive trends are not affected. The trends obtained for SPI\({}_{\text{CRU}}\) and SPEI\({}_{\text{CRU}}\) (Fig. 2) also point to increased drying conditions, particularly SPEI\({}_{\text{CRU}}\), which does not present positive trends. On the other hand, SPI\({}_{\text{CRU}}\) points to wetting conditions in the West, with trends significant at the longest timescale. Although negative, the trends obtained with SPEI\({}_{\text{CRU}}\) in this region are actually not significant, reflecting the increase in precipitation observed with SPI\({}_{\text{CRU}}\). The drying conditions identified here are in agreement with previous works for the IP (Ruiz-Sinoga et al., 2011; Vicente-Serrano et al., 2014; Pascoa et al., 2017). In this territory, both precipitation and temperature are known to be drivers of drought (Vicente-Serrano et al., 2014, 2015; Spinoni et al., 2019), and for this reason SPEI\({}_{\text{IB01}}\) presents a larger area of significant negative trends. The drought conditions assessed with SPI in the IP generally present both negative and positive trends (Vicente-Serrano et al., 2014; Pascoa et al., 2017), reflecting the complex patterns of precipitation trends (Becker et al., 2013; Coll et al., 2016; Pascoa et al., 2017; Lausier and Jain, 2018). This is shown by SPI\({}_{\text{IB01}}\), owing to the high number of stations used to obtain the Iberia01 dataset (Herrera et al., 2019).

Figure 2: Spatial patterns of trends in the period 1971–2015 for SPI\({}_{\text{CRU}}\) (left) and SPEI\({}_{\text{CRU}}\) (right) for the 1-, 3-, 6-, and 12-month timescales. The slope of the trend shown was computed for the entire period. Statistically significant trends (at the 5% level) are indicated with a dot.
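For reference, the per-grid-point trend estimation described in Section 2 can be sketched as follows (Python with SciPy; names are ours). The paper applies the Hamed and Ramachandra Rao (1998) modified Mann-Kendall test; as a simplified stand-in, this sketch uses the ordinary Mann-Kendall test via Kendall's tau, without the autocorrelation correction, together with a Theil-Sen slope:

```python
import numpy as np
from scipy import stats

def gridpoint_trend(series, alpha=0.05):
    """Theil-Sen slope and (unmodified) Mann-Kendall significance for one series."""
    t = np.arange(len(series))
    ok = np.isfinite(series)
    tau, p_value = stats.kendalltau(t[ok], series[ok])           # monotonic trend test
    slope, intercept, lo, hi = stats.theilslopes(series[ok], t[ok])
    return slope, p_value < alpha

# Example on a synthetic drying series (45 years of monthly values)
rng = np.random.default_rng(1)
spei = -0.002 * np.arange(540) + rng.normal(0, 1, 540)
print(gridpoint_trend(spei))
```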
Nonetheless, the results obtained here with SPI\({}_{\text{IB01}}\) and SPEI\({}_{\text{IB01}}\) point to a larger area of negative trends than in previous works, likely due to the dry conditions observed in the years 2011, 2012, and 2015 (Trigo et al., 2013; Blunden and Arndt, 2016), which were not all included in the shorter periods analysed by these authors. Both SPI\({}_{\text{IB01}}\) and SPEI\({}_{\text{IB01}}\) present a large area of significant positive trends in the South of the IP. Positive annual and seasonal precipitation trends have already been identified in this area (Coll et al., 2016; Pascoa et al., 2017), but when the temperature effect is included, the results have so far suggested a drying trend (Vicente-Serrano et al., 2014; Coll et al., 2016; Pascoa et al., 2017). Considering that positive trends are identified with SPEI\({}_{\text{IB01}}\) even in areas where SPI\({}_{\text{IB01}}\) points to negative trends, the drivers of these wetting trends are undoubtedly negative temperature trends. In fact, the trends in annual precipitation and ET\({}_{\text{o}}\), as obtained from the Iberia01 dataset (Fig. S1), show very similar patterns to those obtained with SPI\({}_{\text{IB01}}\) and SPEI\({}_{\text{IB01}}\), respectively. The wetting trends are in disagreement not only with the trends obtained with SPEI\({}_{\text{CRU}}\), but also with previous works showing that monthly temperature is either increasing or does not show a significant trend (Del Rio et al., 2011; Gonzalez-Hidalgo et al., 2015). Atmospheric evaporative demand has also been shown to be increasing (Vicente-Serrano et al., 2014; Pascoa et al., 2017), driven by the increase in temperature (Vicente-Serrano et al., 2014). The wetting trends obtained here are driven by a decrease in maximum temperature observed in some months (Fig. S4). The stations used to compute the temperature dataset used in the present work had a much lower density than those of the precipitation dataset, and in this region (the south of Spain) the number of stations with temperature data is very low. It is likely that some stations do show decreasing temperature trends, but the area actually showing this behaviour is probably smaller than the area obtained here. On the other hand, the positive trends identified in North Spain and Northeast Portugal may be driven both by an increase in precipitation, as shown by the increase in SPI, and by a decrease in temperature (Fig. S4). Unlike the South of Spain, these areas have a high number of temperature stations, which are located at different elevations. Moreover, Moratiel et al. (2017) found negative trends in annual and seasonal temperature obtained from weather stations near these areas for the period 1981-2010. Although most of these negative trends are not significant, they are statistically significant in autumn (Moratiel et al., 2017). The results obtained here may be reflecting the influence of elevation on the climate variables, which Iberia01 can capture due to its high spatial resolution. Figs. 3 and 4 present the trends obtained for the time series of drought duration and the mean intensity of the drought events, as obtained with SPI\({}_{\text{IB01}}\) and SPEI\({}_{\text{IB01}}\) (Fig. 3), and with SPI\({}_{\text{CRU}}\) and SPEI\({}_{\text{CRU}}\) (Fig. 4). The area of positive and negative significant trends obtained for each time series and drought index is presented in Table 1.
The trends of drought duration are clearly positive in most of the territory, as obtained with both indices and the two datasets. The trends in drought duration are not significant in the areas where positive drought-index trends were identified with the Iberia01 dataset (Fig. 1). Nonetheless, when computed with the CRU dataset, the area is larger for SPEI and smaller for SPI, compared to the Iberia01 dataset, as shown in Table 1. As with the trends in the drought indices (Figs. 1 and 2), the higher number of stations used in the Iberia01 dataset may explain these differences. Although previous works already showed similar results, the positive trends they reported occurred over much smaller areas (Spinoni et al., 2014; Pascoa et al., 2017), although different periods and thresholds were used to identify drought. On the other hand, the sign of the trends of the mean intensity of the drought events is almost always negative. The area is smaller when SPEI is used, reflecting the effect of the increased ETo.

Figure 3: Significant trends in duration and mean intensity of droughts based on SPI\({}_{\text{IB01}}\) (left) and SPEI\({}_{\text{IB01}}\) (right). Positive (negative) trends are shown in red (blue) and non-significant trends are shown in white. Grid points where the time series has less than 20 points are shown in grey. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)

Taking into account that Iberia01 better represents the spatial pattern of drought in the IP than low-resolution datasets, the area under drought conditions was calculated with SPI\({}_{\text{IB01}}\) and SPEI\({}_{\text{IB01}}\) (Fig. 5). According to these results, drought events affecting more than half of the territory are frequent. Considering the mean area over the entire period, SPEI identifies a slightly larger area under drought conditions (Fig. 5). Nonetheless, the mean areas given by SPEI and SPI (22% and 19%, respectively) are very close to the theoretical probability of occurrence (20%; Agnew, 2000). The extreme drought event of 2004/05 stands out for its extent and duration, but other large events are also apparent, such as the drought of 2012. This hydrological year presented the second lowest accumulated precipitation in the period 1950-2012, only outdone by the extreme drought of 2004/05 (Trigo et al., 2013b). There are also long periods of low values of area under drought conditions, obtained for the late 1970s and the mid 1990s, pointing to the absence of drought conditions. The results obtained for the area affected by drought are in agreement with previous works for the IP (Vicente-Serrano et al., 2014; Coll et al., 2016; Pascoa et al., 2017), although timescales shorter than 12 months were not used in those studies. The correct identification of the extent of the drought events is a strong indicator of the quality of the dataset, suggesting that the issue already identified was constrained to some regions only. Previous works identified the fragility of developing high-resolution grids with few or no stations per grid point (Herrera et al., 2019).
This was accounted for when developing the Iberia01 dataset through the limitation of the horizontal resolution, and the authors did not provide higher-resolution intermediate products (Herrera et al., 2019). Nevertheless, for an illustrative example of a convective high-resolution extreme precipitation event that occurred on 4-5 November 1997, characterized by heavy precipitation over most of the IP, Iberia01 was able to correctly represent the spatial patterns, with the main differences appearing in the Guadalquivir and Guadiana basins and the Pyrenean range (Herrera et al., 2019). Iberia01 also renders precipitation much better than other datasets (Herrera et al., 2019), which propagates to the calculation of SPEI and SPI. Nevertheless, it should be noted that, in the case of temperature, the number of stations used is not substantially different from other datasets. Therefore, the major differences between SPEI and SPI identified in the southern region might be associated with the low number of stations monitoring temperature. In the northern region, as in the Pyrenean range, a deficient representation of orographic precipitation might be the cause.

\begin{table}
\begin{tabular}{l c c c c c c c}
\hline
 & & \multicolumn{3}{c}{Duration} & \multicolumn{3}{c}{Intensity} \\
\hline
 & Time Scale & Positive & Negative & NA & Positive & Negative & NA \\
\hline
SPI\({}_{\text{IB01}}\) & 1 & 36.4 & – & 5.2 & 0.4 & 7.4 & 5.2 \\
 & 3 & 43.9 & – & 0.3 & 0.3 & 7.8 & 0.3 \\
 & 6 & 37.9 & – & 0.2 & 0.4 & 6.1 & 0.2 \\
 & 12 & 16.9 & – & 54.2 & 0.7 & 0.6 & 54.2 \\
SPEI\({}_{\text{IB01}}\) & 1 & 45.0 & 0.0 & 0.3 & 0.8 & 2.7 & 0.3 \\
 & 3 & 50.3 & 0.0 & 0.2 & 1.2 & 2.0 & 0.2 \\
 & 6 & 36.4 & 0.1 & 0.7 & 1.2 & 3.0 & 0.8 \\
 & 12 & 23.9 & 0.1 & 42.6 & 1.4 & 0.8 & 42.6 \\
SPI\({}_{\text{CRU}}\) & 1 & 29.5 & 0.3 & 0.3 & – & 16.1 & 0.3 \\
 & 3 & 25.3 & – & 0.3 & – & 20.2 & 0.3 \\
 & 6 & 28.4 & – & – & – & 19.5 & – \\
 & 12 & 21.9 & – & 53.4 & – & 2.1 & 53.4 \\
SPEI\({}_{\text{CRU}}\) & 1 & 52.7 & – & – & – & 1.4 & – \\
 & 3 & 57.9 & – & – & – & 1.7 & – \\
 & 6 & 47.6 & – & – & – & 2.4 & – \\
 & 12 & 43.2 & – & 21.6 & – & 1.0 & 21.6 \\
\hline
\end{tabular}
\end{table}
Table 1: Area (%) of positive and negative significant trends, and not-accounted (NA) points, obtained for drought duration and drought intensity.

Figure 4: As Fig. 3 but for SPI\({}_{\text{CRU}}\) and SPEI\({}_{\text{CRU}}\).

## 4 Conclusions

In this work, recent drought trends in the IP were analysed using two well-known drought indices, SPI and SPEI. A recently developed high-resolution dataset, Iberia01, and a lower-resolution dataset, CRU, were used to compute these indices, and the results obtained were compared in order to assess the added value of the new dataset, Iberia01. A drying trend was observed when using both datasets. This trend was driven by both precipitation and temperature, as evidenced by the negative trends of both indices, as well as by the positive trends of drought duration. The Iberia01 dataset revealed complex spatial patterns of drought trends, which were mostly related to precipitation trends. In some areas, positive trends were identified due to a decrease in temperature, although the number of stations used in these areas is rather small.
Although in agreement with previous studies performed for the IP, the results presented here also constitute the first drought evaluation based on a high-resolution gridded dataset with a regional focus over the IP, which is a climate change hot spot in the Mediterranean region. Iberia01 has the advantage of having been built from several times more data than most gridded datasets, and also of having a much better orographic representation, which allows for a more detailed analysis of precipitation and temperature and of resultant products (e.g. SPEI and SPI). This is of high importance for the analysis of regional phenomena such as droughts. The introduction of this type of high-resolution dataset contributes greatly to a more detailed understanding of drought and its impacts. The identification of regions over the IP showing a clear pattern of increased dryness will be useful to managers, providing robust tools for the development of mitigation plans, which will be crucial if more frequent drought events occur over the IP in a context of climate change.

Figure 5: Percentage of area under drought conditions (SPEI or SPI \(<\) –0.84), as obtained with the Iberia01 dataset.

## Authors' contribution

PP, AR, CMG, PMMS, and AFSR participated in the conceptual design of the study, the interpretation, and the redaction of the manuscript. RMC and PMMS defined the meteorological datasets to be used for the study. PP, AR, CMG, and PMMS defined the approach used for the meteorological analysis. AR calculated the meteorological indices, and PP, AR, and JAMC calculated the remaining drought parameters and trends. Each of the co-authors performed a thorough revision of the manuscript, provided useful advice on the intellectual content, and improved the English language.

## Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

## Acknowledgements

This work was partially supported by national funds through FCT (Fundacao para a Ciencia e a Tecnologia, Portugal) through project IMPECAF (PTDC/CTA-CLI/28902/2017) and project SOLAR (PTDC/GEOMET/7078/2014). All authors are grateful for the FCT funding UIDB/50019/2020 - Instituto Dom Luiz. Supporting information that may be useful in reproducing the authors' work is available from the authors upon request ([email protected]).

## Appendix A Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.wace.2021.100320.

## References

* Agnew, C.T., 2000. Using the SPI to identify drought. Drought Network News 12, 6-12.
* Allen, R., Pereira, L., Raes, D., Smith, M., 1998. Crop Evapotranspiration: Guidelines for Computing Crop Water Requirements. FAO Irrigation and Drainage Paper 56.
* Andrade, C., Belo-Pereira, M., 2015. Assessment of droughts in the Iberian Peninsula using the WASP-index. Atmos. Sci. Lett. 16 (3), 208-218.
* Angelidis, P., Maris, F., Kotsovinos, N., Hrissanthou, V., 2012. Computation of drought index SPI with alternative distribution functions. Water Resour. Manag. 26, 2453-2473. https://doi.org/10.1007/s11269-012-0026-0.
* Becker, A., Finger, P., Meyer-Christoffer, A., Rudolf, B., Schamm, K., et al., 2013. A description of the global land-surface precipitation data products of the Global Precipitation Climatology Centre with sample applications including centennial (trend) analysis from 1901 to present. Earth Syst. Sci. Data 5 (1), 71-99. https://doi.org/10.5194/essd-5-71-2013.
* Begueria, S., Vicente-Serrano, S.M., Reig, F., Latorre, B., 2014. Standardized precipitation evapotranspiration index (SPEI) revisited: parameter fitting, evapotranspiration models, datasets and drought monitoring. Int. J. Climatol. 34 (10), 3001-3023. https://doi.org/10.1002/joc.3887.
* Belo-Pereira, M., Dutra, E., Viterbo, P., 2011. Evaluation of global precipitation data sets over the Iberian Peninsula. J. Geophys. Res. Atmos. https://doi.org/10.1029/2010JD015481.
* Berg, P., Norin, L., Olsson, J., 2016. Creation of a high resolution precipitation data set by merging gridded gauge data and radar observations for Sweden. J. Hydrol. 541, 6-13. https://doi.org/10.1016/j.jhydrol.2015.11.031.
* Bifulco, C., Rego, F., Dias, S., Stagge, J.H., 2014. Assessing the association of drought indicators to impacts: the results for areas burned by wildfires in Portugal. Advances in Forest Fire Research, 1054-1066.
* Blauhut, V., Gudmundsson, L., Stahl, K., 2015. Towards pan-European drought risk maps: quantifying the link between drought indices and reported drought impacts. Environ. Res. Lett. 10, 104008. https://doi.org/10.1088/1748-9326/10/10/104008.
* Blunden, J., Arndt, D., 2016. State of the climate in 2015. Bull. Am. Meteorol. Soc. 97. https://doi.org/10.1175/2016BAMSStateoftheClimate.1.
* Cardoso, R.M., Soares, P.M.M., Miranda, P.M.A., Belo-Pereira, M., 2013. WRF high resolution simulation of Iberian mean and extreme precipitation climate. Int. J. Climatol. 33 (1), 2591-2608. https://doi.org/10.1002/joc.3616.
* Coll, J.R., Aguilar, E., Ashcroft, L., 2016. Drought variability and change across the Iberian Peninsula. Theor. Appl. Climatol. 1-16. https://doi.org/10.1007/s00704-016-1920-0.
* Dai, A., 2011. Drought under global warming: a review. WIREs Clim. Change.
* de Luis, M., Brunetti, M., Gonzalez-Hidalgo, J.C., Longares, L.A., Martin-Vide, J., 2010. Changes in seasonal precipitation in the Iberian Peninsula during 1946-2005. Global Planet. Change 74, 27-33. https://doi.org/10.1016/j.gloplacha.2010.06.006.
* Del Rio, S., Herrero, L., Pinto-Gomes, C., Penas, A., 2011. Spatial analysis of mean temperature trends in Spain over the period 1961-2006. Global Planet. Change 78 (1), 65-75. https://doi.org/10.1016/j.gloplacha.2011.05.012.
* Droogers, P., Allen, R., 2002. Estimating reference evapotranspiration under inaccurate data conditions. Irrig. Drain. Syst. 16, 33-45.
https://doi.org/10.1023/a:1015508322413.
* Garcia-Herrera, R., Hernandez, E., Barriopedro, D., Paredes, D., Trigo, R.M., et al., 2007. The outstanding 2004/05 drought in the Iberian Peninsula: associated atmospheric circulation. J. Hydrometeorol. 8 (3), 483-498. https://doi.org/10.1175/JHM578.1.
* Gonzalez-Hidalgo, J.C., Pena-Angulo, D., Brunetti, M., Cortesi, N., 2015. A new monthly temperature database for mainland Spain and the trend in temperature (1951-2010). Int. J. Climatol. https://doi.org/10.1002/joc.4298.
* Gonzalez-Hidalgo, J.C., Vicente-Serrano, S.M., Pena-Angulo, D., Salinas, C., Tomas-Burguera, M., Begueria, S., 2018. High-resolution spatio-temporal analyses of drought episodes in the western Mediterranean basin (Spanish mainland, Iberian Peninsula). Acta Geophys. 66 (3), 381-392.
* Gouveia, C.M., Bastos, A., Trigo, R.M., DaCamara, C.C., 2012. Drought impacts on vegetation in the pre- and post-fire events over Iberian Peninsula. Nat. Hazards Earth Syst. Sci. 12, 3123-3137.
* Hamed, K.H., Ramachandra Rao, A., 1998. A modified Mann-Kendall trend test for autocorrelated data. J. Hydrol. 204, 182-196. https://doi.org/10.1016/S0022-1694(97)00125-X.
* Hargreaves, G.H., Samani, Z.A., 1985. Reference crop evapotranspiration from temperature. Appl. Eng. Agric. 1 (2), 96-99.
* Harris, I., Osborn, T.J., Jones, P., Lister, D., 2020. Version 4 of the CRU TS monthly high-resolution gridded multivariate climate dataset. Sci. Data 7, 109. https://doi.org/10.1038/s41597-020-0453-3.
* Herrera, S., Gutierrez, J.M., Ancell, R., Pons, M.R., Frias, M.D., Fernandez, J., 2012. Development and analysis of a 50-year high-resolution daily gridded precipitation dataset over Spain (Spain02). Int. J. Climatol. 32 (1), 74-85. https://doi.org/10.1002/joc.2256.
* Herrera, S., Cardoso, R.M., Soares, P.M.M., Espirito-Santo, F., Viterbo, P., Gutierrez, J.M., 2019a. Iberia01: a new gridded dataset of daily precipitation and temperatures over Iberia. Earth Syst. Sci. Data 11, 1947-1956. https://doi.org/10.5194/essd-11-1947-2019.
* Herrera, S., Kotlarski, S., Soares, P.M.M., Cardoso, R.M., et al., 2019b. Uncertainty in gridded precipitation products: influence of station density, interpolation method and grid resolution. Int. J. Climatol. 39, 3717-3729. https://doi.org/10.1002/joc.5878.
* Herrera, S., Cardoso, R.M., Soares, P.M.M., Gutierrez, J.M., 2020. Evaluation of the EURO-CORDEX regional climate models over the Iberian Peninsula: observational uncertainty analysis. J. Geophys. Res. Atmos. 125 (12).
* Hofstra, N., Haylock, M., New, M., Jones, P.D., 2009.
Testing E-OBS European high-resolution gridded data set of daily precipitation and surface temperature. J. Geophys. Res. 114, D21101. https://doi.org/10.1029/2009JD011799.
* IPCC, 2014. Climate Change 2014: Mitigation of Climate Change. Contribution of Working Group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change.
* Moreira, E.E., Mexia, J.T., Pereira, L.S., 2012. Are drought occurrence and severity aggravating? A study on SPI drought class transitions using log-linear models and ANOVA-like inference. Hydrol. Earth Syst. Sci. 16 (8), 3011-3028. https://doi.org/10.5194/hess-16-3011-2012.
* Palmer, W.C., 1965. Meteorological Drought. Research Paper No. 45, U.S. Weather Bureau.
* Pascoa, P., Gouveia, C.M., Russo, A., Trigo, R.M., 2017. Drought trends in the Iberian Peninsula over the last 112 years. Adv. Meteorol. 2017. https://doi.org/10.1155/2017/6635126.
* Ribeiro, A.F.S., Pires, C.A.L., 2015. Seasonal drought predictability in Portugal using statistical-dynamical techniques. Phys. Chem. Earth 94, 155-166.
* Ribeiro, A.F.S., Russo, A., Gouveia, C.M., Pascoa, P., 2019. Copula-based agricultural drought risk of rainfed cropping systems. Agric. Water Manag. 223. https://doi.org/10.1016/j.agwat.2019.105689.
* Ruiz-Sinoga, J.D., Garcia-Marin, R., Martinez-Murillo, J.F., 2011. Precipitation dynamics in southern Spain: trends and cycles. Int. J. Climatol. 31 (15), 2281-2289. https://doi.org/10.1002/joc.2235.
* Russo, A., Gouveia, C.M., Pascoa, P., DaCamara, C.C., Sousa, P.M., Trigo, R.M., 2017. Assessing the role of drought events on wildfires in the Iberian Peninsula. Agric. For. Meteorol. 237-238, 50-59. https://doi.org/10.1016/j.agrformet.2017.01.021.
* Russo, A., Gouveia, C.M., Dutra, E., Soares, P.M.M., Trigo, R.M., 2019. The synergy between drought and extremely hot summers in the Mediterranean. Environ. Res. Lett. 14 (1).
* Santo, F.E., Ramos, A.M., de Lima, M.I.P., Trigo, R.M., 2014. Seasonal changes in daily precipitation extremes in mainland Portugal from 1941 to 2007. Reg. Environ. Change 14, 1765-1788.
* Santo, F.E., de Lima, M.I.P., Ramos, A.M., Trigo, R.M., 2014. Trends in seasonal surface air temperature in mainland Portugal, since 1941. Int. J. Climatol. 34, 1814-1837. https://doi.org/10.1002/joc.3803.
* Schubert, S.D., Stewart, R.E., Wang, H., Barlow, M., Berbery, E.H., et al., 2016. Global meteorological drought: a synthesis of current understanding with a focus on SST drivers of precipitation deficits. J. Clim. https://doi.org/10.1175/JCLI-D-15-0452.1.
* Sen, P.K., 1968. Estimates of the regression coefficient based on Kendall's tau. J. Am. Stat. Assoc. 63, 1379-1389.
* Sousa, P.M., Trigo, R.M., Aizpurua, P., Nieto, R., Gimeno, L., Garcia-Herrera, R., 2011. Trends and extremes of drought indices throughout the 20th century in the Mediterranean. Nat. Hazards Earth Syst. Sci. 11, 33-51. https://doi.org/10.5194/nhess-11-33-2011.
* Spinoni, J., Naumann, G., Carrao, H., Barbosa, P., Vogt, J., 2014. World drought frequency, duration, and severity for 1951-2010. Int. J. Climatol. https://doi.org/10.1002/joc.3875.
* Spinoni, J., Naumann, G., Vogt, J.V., Barbosa, P., 2015. The biggest drought events in Europe from 1950 to 2012. J. Hydrol. Reg. Stud. 3, 509-524. https://doi.org/10.1016/j.ejrh.2015.01.001.
* Spinoni, J., Naumann, G., Vogt, J.V., 2017. Pan-European seasonal trends and recent changes of drought frequency and severity. Global Planet. Change 148, 113-130. https://doi.org/10.1016/j.gloplacha.2016.11.013.
* Spinoni, J., Barbosa, P., Jager, A.D., McCormick, N., Naumann, G., et al., 2019. A new global database of meteorological drought events from 1951 to 2016. J. Hydrol. Reg. Stud. 22.
* Theil, H., 1950. A rank-invariant method of linear and polynomial regression analysis. Proc. K. Ned. Akad. Wet. 53, Part I: 386-392; Part II: 521-525; Part III: 1397-1412.
* Thornthwaite, C.W., 1948. An approach toward a rational classification of climate. Geogr. Rev. 38, 55-94.
* Trigo, R., Barriopedro, D., Ramos, A.M., Parker, D., Muhr, B., et al., 2013a. Iberia [in "State of the Climate in 2012"]. Bull. Am. Meteorol. Soc. 94, S176-S178.
* Trigo, R.M., Anel, J.A., Barriopedro, D., Garcia-Herrera, R., Gimeno, L., et al., 2013b. The record winter drought of 2011-12 in the Iberian Peninsula [in "Explaining Extreme Events of 2012 from a Climate Perspective"]. Bull. Am. Meteorol. Soc. 94, S41-S45.
* Vicente-Serrano, S.M., Begueria, S., Lopez-Moreno, J.I., 2010. A multiscalar drought index sensitive to global warming: the standardized precipitation evapotranspiration index. J. Clim. 23 (7), 1696-1718. https://doi.org/10.1175/2009JCLI2909.1.
* Vicente-Serrano, S.M., Lopez-Moreno, J.I., Drumond, A., Gimeno, L., Nieto, R., et al., 2011. Effects of warming processes on droughts and water resources in the NW Iberian Peninsula (1930-2006). Clim. Res. 48 (2-3), 203-212.
* Vicente-Serrano, S.M., Lopez-Moreno, J.I., Begueria, S., Lorenzo-Lacruz, J., Sanchez-Lorenzo, A., et al., 2014. Evidence of increasing drought severity caused by temperature rise in southern Europe. Environ. Res. Lett. 9 (4), 044001. https://doi.org/10.1088/1748-9326/9/4/044001.
* Vicente-Serrano, S.M., van der Schrier, G., Begueria, S., Azorin-Molina, C., Lopez-Moreno, J.I., 2015. Contribution of precipitation and reference evapotranspiration to drought indices under different climates. J. Hydrol.
526, 42-54. https://doi.org/10.1016/j.jhydrol.2014.11.025.
* Vicente-Serrano, S.M., Tomas-Burguera, M., Begueria, S., Reig, F., Latorre, B., et al., 2017. A high resolution dataset of drought indices for Spain. Data 2 (3). https://doi.org/10.3390/data2030022.
* Vicente-Serrano, S.M., Quiring, S.M., Pena-Gallardo, M., Yuan, S., Dominguez-Castro, F., 2019. A review of environmental droughts: increased risk under global warming? Earth Sci. Rev., 102953. https://doi.org/10.1016/j.earscirev.2019.102953.
* Wilhite, D.A., Svoboda, M.D., Hayes, M.J., 2007. Understanding the complex impacts of drought: a key to enhancing drought mitigation and preparedness. Water Resour. Manag. 21 (5), 768-774.
* Zolina, O., Simmer, C., Kapala, A., Shabanov, P., Becker, P., et al., 2014. Precipitation variability and extremes in central Europe: new view from STAMMEX results. Bull. Am. Meteorol. Soc. 95 (7), 995-1002.
## Summary

Droughts are long-term weather-driven extreme events which occur worldwide with great socio-economic impacts, namely in the Mediterranean and Iberian regions. In a changing climate with rising temperatures, extreme events such as droughts are expected to increase in frequency and intensity, particularly in Mediterranean climates. In this context, the assessment of the evolution of drought in terms of its duration and intensity in the Iberian Peninsula (IP) is paramount, as it affects several socio-economic activities. The use of new high-resolution gridded datasets allows for the identification of patterns at finer temporal and spatial scales. In the current study, drought assessment in the IP was accomplished with both the Standardized Precipitation Evapotranspiration Index (SPEI) and the Standardized Precipitation Index (SPI) for short, medium and long timescales. A recently developed high-resolution dataset, Iberia01, was used for the period 1971-2015, with 0.1° horizontal resolution. The lower-resolution CRU dataset was also used. A clear drying trend in most of the IP is identified with both indices and both datasets. The trends of drought duration are also positive in most of the territory, whereas the mean drought intensity decreases slightly. The drivers of this drying trend are both the decreased precipitation and the increased reference evapotranspiration. The Iberia01 dataset allowed the identification of more complex patterns of drought trends, mainly due to its improved representation of precipitation.
# Comparison of pixel-based and artificial neural networks classification methods for detecting forest cover changes in Malaysia

B R Deilmai, K D Kanniah, A W Rasib and A Ariffin (Department of Geoinformation, Faculty of Geoinformation and Real Estate, Universiti Teknologi Malaysia, 81310 Johor, Malaysia; [email protected])

To cite this article: B R Deilmai et al 2014 IOP Conf. Ser.: Earth Environ. Sci. 18 012069. doi:10.1088/1755-1315/18/1/012069

## 1 Introduction

Land use and land cover (LULC) change detection is important for many decision-making and management activities related to the Earth's surface, such as hydrological modeling and environmental management [1]. LULC provides key environmental information for many scientific purposes, as well as for a range of human activities such as urban planning [2]. Population growth, together with climate change, is found to be the main reason for the loss of forest cover over time [3]. Deforestation in particular has a large impact on catchment processes and on biogeochemical cycles such as those of carbon and nitrogen, as well as on soil erosion and floods [1]. Remote sensing is a valuable tool to obtain quick information about various LULC types and to monitor their changes over time [4]. In this paper, we have employed two Landsat Thematic Mapper images covering the years 2000 and 2009 to (i) classify different LULC types in the state of Johor using traditional pixel-based and Artificial Neural Network image classification techniques and (ii) detect the LULC changes using a post-classification method. The pixel-based classification is implemented with a statistical probability method [5]. The neural network is a supporting tool for image processing and remotely sensed change detection, based on the back-propagation training algorithm [6]. Many studies [7-11] showed that classification accuracy is improved by neural networks in comparison to the pixel-based method, mainly because ANN makes no assumption about the data distribution, whereas MLC assumes Gaussian-distributed data, an assumption often violated by remotely sensed data [12].

## 2 Data and methodology

The study area considered in this study is the entire state of Johor, one of the most developed states in Peninsular Malaysia (Figure 1). In terms of area and population, Johor is the fifth largest and second most populous state in Malaysia, with a total area of 19,210 km\({}^{2}\) and a population of 3,233,434 in 2010 [13].
The largest land uses in Johor are oil palm and forest; however, most of the forested areas have been converted to oil palm plantations in recent years [14].

### Data used

Enhanced Thematic Mapper (ETM+) (2000) and Landsat Thematic Mapper (2009) data were used to perform the LULC classification. These data (level 1T) were downloaded from the Earth Explorer website [16], and they are corrected for geometric and topographic errors. We used 6 spectral bands (visible, near infra-red and shortwave infra-red), excluding the thermal band, to perform the classification. These data with 30 m spatial resolution enable the generation of moderate-resolution LULC classes covering the entire state of Johor.

### Methods

In order to obtain more accurate results, both images were atmospherically corrected before performing the change detection (because of the difference in acquisition months and sun angles) using the Atcor2 program available in the Erdas Imagine software. Subsequently, mosaicking and finally co-registration of the ETM+ and TM data were performed. Before classification, the land cover types in the study area were defined with the help of a land use map produced by the Department of Agriculture Malaysia (year 2008). The main land cover types are forest, oil palm, urban area, rubber and water bodies. After that, each mosaicked image was classified using maximum likelihood (ML) and neural network classification techniques. The ML algorithm was performed as a supervised classification, and it is based on user-defined spectral signatures (training areas). Training areas were selected according to the land use maps of years 2000 and 2008. Finally, the ML supervised classification was performed. This classification is a standard pixel-based technique that is based on a multivariate probability density function of classes [6]. In contrast, an Artificial Neural Network (ANN) is a technique that can approximate functions in a manner analogous to the human brain [9]. Three types of networks are commonly used in remote sensing, namely Hopfield networks, Kohonen networks, and multi-layered feed-forward networks [5]. Unsupervised and semi-supervised classifications commonly use Kohonen networks, whilst the Hopfield network is used in stereo matching [4]. In land cover classification, feed-forward networks are most commonly used, and they are usually trained by the back-propagation algorithm. There are three layers in the network, namely (i) the input layer (i.e., the spectral bands used for classification), (ii) the output layer, which is the number of land-cover categories to be generated, and (iii) the hidden layer that connects components of the input layer and the output layer by weighted channels [12]. In this study, the input layer has 12 input nodes representing the spectral bands (two multispectral images) and the output layer has 6 nodes, which are the 6 land cover classes (including clouds). The training rate was kept at 0.2 and a training momentum rate of 0.9 was used. The training root mean square error (RMSE) threshold was set to 0.1, and then the classification was performed. After classification, the accuracy of the classified images was assessed using reference data (land use maps of years 2000 and 2009). A total of 250 random points were selected from the images via a stratified random sampling method. The accuracy was assessed using error matrices (overall, user's and producer's accuracies and Kappa statistics). Finally, post-classification change detection was performed to quantify the LULC changes between the two dates.

Figure 1: Study area showing the state of Johor [15].
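The study performed the ANN classification inside Erdas Imagine, but the reported configuration (12 spectral inputs, one hidden layer, 6 output classes, learning rate 0.2, momentum 0.9) can be illustrated with standard tools. The following is a minimal sketch using scikit-learn; the hidden layer size and the synthetic training data are assumptions, since the paper does not report them, and scikit-learn minimizes log-loss rather than stopping at a fixed RMSE.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Illustrative stand-in for the real reference pixels: 250 samples with
# 12 features (6 TM/ETM+ bands from each of the two dates) and 6 land
# cover classes (forest, oil palm, urban, rubber, water, cloud).
rng = np.random.default_rng(0)
X_train = rng.random((250, 12))          # band reflectances scaled to [0, 1]
y_train = rng.integers(0, 6, size=250)   # class labels 0..5

# Feed-forward network trained by back-propagation (SGD), mirroring the
# reported settings: learning rate 0.2, momentum 0.9. The single hidden
# layer of 8 nodes is an assumption.
ann = MLPClassifier(
    hidden_layer_sizes=(8,),
    activation="logistic",
    solver="sgd",
    learning_rate_init=0.2,
    momentum=0.9,
    max_iter=1000,
    random_state=0,
)
ann.fit(X_train, y_train)

# Classify a new image: every pixel becomes a 12-band feature vector.
X_pixels = rng.random((1000, 12))
labels = ann.predict(X_pixels)
print(labels[:10])
```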
## 3 Results and discussions

### Landuse/landcover classification

The final LULC (land use and land cover) maps presented in figure 2 and figure 3 show that the major classes are forest, oil palm, rubber, city and water. An evaluation of the accuracy of the classified images (table 1) shows that the overall accuracy of the Artificial Neural Network (ANN) classification is 75% (year 2000) and 80% (year 2009), which is higher than the pixel-based classification results of 68% (2000 image) and 75% (2009 image).

Table 2: Area changes in Johor for 2000 and 2009

| Land cover | ML classification 2000 (area km\({}^{2}\)) | NN classification 2000 (area km\({}^{2}\)) | ML classification 2009 (area km\({}^{2}\)) | NN classification 2009 (area km\({}^{2}\)) |
|---|---|---|---|---|
| Water | 1037.058 | water with shadow: 2681.6 | 1100.38 | water with shadow: 2535.36 |
| Forest | 5970.48 | 6191.5 | 4311.86 | 4893.37 |
| Oil palm | 6494.59 | 5811.53 | 8067.85 | 8202.67 |
| City | 3302.1 (because of haze) | 1517.6 | 1600.46 | 1619.12 |
| Rubber | 675.56 | 683.59 | 3100.65 + cloud | 694.12 |
| Cloud | 1974.5 | 2855.82 + haze | 1567.68 | 1520.35 |

## 4 Conclusion

This study used two classification methods, namely Artificial Neural Networks (ANN) and Maximum Likelihood (ML), to classify different land use and land cover types in the state of Johor. The highest classification accuracy was obtained by ANN, and the Landsat images classified using this method were used to detect the change, notably in forest cover, between 2000 and 2009. It was found that over this 9-year period Johor lost approximately 28% of its forested area. It is suggested that the forested areas must be monitored in a continuous manner to detect any illegal deforestation. Also, the state government should designate all forested areas as protected forest in order to prevent further loss of this valuable natural resource in the state.

#### Acknowledgements

This study is funded by the Ministry of Higher Education, Malaysia and a Universiti Teknologi Malaysia research grant (Q.J130000.2527.03H23), and we acknowledge Earth Explorer ([http://earthexplorer.usgs.gov](http://earthexplorer.usgs.gov)) for providing the Landsat images.

## References

* [1] Ahmad F 2012 A review of remote sensing data change detection: comparison of Faisalabad and Multan Districts, Punjab Province, Pakistan _J. Geography Reg. Plan._ **5** 236-251
* [2] Janetos A and Justice C 2000 Land cover and global productivity: a measurement strategy for the NASA programme _Int. J. Remote Sens._ **21** 1491-1512
* [3] Lang S 2008 Object-based image analysis for remote sensing applications: modeling reality - dealing with complexity _Object-Based Image Analysis_ (Berlin: Springer) chapter 1.1 pp 3-27
* [4] Lee J J, Shim J C and Ha Y H 1994 Stereo correspondence using the Hopfield neural network of a new energy function _Pattern Recogn._ **27** 1513-1522
* [5] Lewis D J, Corr D G, Gent C R and Sheppard C P 1992 Semi-supervised artificial neural networks for classification of remotely sensed images _Proc. Remote Sensing Society, Nottingham_ 489-497
* [6] Lillesand T M, Kiefer R W and Chipman J 2008 _Remote Sensing and Image Interpretation_ 6th edition (New York: John Wiley & Sons Ltd)
* [7] Macleod R D and Congalton R G 1998 A quantitative comparison of change-detection algorithms for monitoring eelgrass from remotely sensed data _Photogramm. Eng. Rem. S._ **64** 207-216
* [8] Mas J F and Flores J J 2008 The application of artificial neural networks to the analysis of remotely sensed data _Int. J. Remote Sens._ **29** 617-663
* [9] Neagoe V E, Neghina M and Datcu M 2012 Neural network techniques for automated land-cover change detection in multispectral satellite time series imagery _Int. J. Math. Models Methods Appl. Sci._ 131-139
* [10] Pajares G 2006 A Hopfield neural network for image change detection _IEEE Trans. Neural Netw._ **17** 1250-1264
* [11] Pijanowski B C, Brown D G, Shellito B A and Manik G A 2002 Using neural networks and GIS to forecast land use changes: a land transformation model _Comput. Environ. Urban Syst._ **26** 553-575
* [12] Yuan H, Van Der Wiele C F and Khorram S 2009 An automated artificial neural network system for land use/land cover classification from Landsat TM imagery _Remote Sens._ **1** 243-265
* [13] [http://en.wikipedia.org/wiki/Johor_Bahru](http://en.wikipedia.org/wiki/Johor_Bahru)
* [14] [http://homeguides.sfgate.com/major-reasons-deforestation-malaysia-78510.html](http://homeguides.sfgate.com/major-reasons-deforestation-malaysia-78510.html)
* [15] [https://maps.google.com/](https://maps.google.com/)
* [16] [http://earthexplorer.usgs.gov/](http://earthexplorer.usgs.gov/)
According to the FAO (Food and Agriculture Organization), Malaysia lost 8.6% of its forest cover between 1990 and 2005. In forest cover change detection, remote sensing plays an important role. Many change detection methods have been developed, most of them semi-automated; these methods are time consuming and difficult to apply. One of the newer and more robust methods for change detection is the artificial neural network (ANN). In this study, an ANN classification scheme is used to detect forest cover changes in the state of Johor, Malaysia. Landsat Thematic Mapper images covering a period of 9 years (2000 and 2009) are used. Results obtained with the ANN technique were compared with maximum likelihood classification (MLC) to investigate whether ANN can perform better in the tropical environment. The overall accuracies of the ANN and MLC techniques are 75% and 68% (2000) and 80% and 75% (2009), respectively. Using the ANN method, it was found that the forest area in Johor decreased by as much as 1298 km\({}^{2}\) between 2000 and 2009. The results also show the potential and advantages of neural networks in classification and change detection analysis.
# False Positive Detection and Prediction Quality Estimation for LiDAR Point Cloud Segmentation

1st Pascal Colling, _Department of Mathematics, University of Wuppertal, Germany_, [email protected]
2nd Matthias Rottmann, _Department of Mathematics, University of Wuppertal, Germany_, [email protected]
3rd Lutz Roese-Koerner, _Aptiv, Wuppertal, Germany_, [email protected]
4th Hanno Gottschalk, _Department of Mathematics, University of Wuppertal, Germany_, [email protected]

## I Introduction

In the field of automated driving, scene understanding is essential. One possible solution for the semantic interpretation of scenes captured by multiple sensor modalities is LiDAR point cloud segmentation [1, 2, 3, 4] (in the following, LiDAR segmentation for brevity), where each point of the point cloud is assigned to a class of a given set. A segment is an area of points of the same class. Compared to camera images, a LiDAR point cloud is relatively sparse, but it provides accurate depth information. Furthermore, since the LiDAR sensor in general is rotating, 360 degrees of the environment are considered. A summary of sensor modalities is given in [5]. In recent years, the performance of LiDAR segmentation networks has increased enormously [1, 2, 3, 4, 6], but there are only few works on uncertainty quantification [2]. In applications of street scene understanding, safety and reliability of perception systems are just as important as their accuracy. To tackle this problem, we introduce a post-processing tool, called _LidarMetaSeg_, which estimates the segmentwise (i.e., per connected component of the predicted segmentation) prediction quality in terms of the segmentwise intersection over union [7] (\(IoU\)) of the LiDAR segmentation model, see also fig. 1. This provides not only uncertainty quantification per predicted segment but also an online assessment of prediction quality.

Fig. 1: A visualization of LidarMetaSeg containing the ground truth (bottom left), the LiDAR segmentation (bottom right), the LiDAR segmentation quality (top left) as the \(IoU\) of prediction and ground truth, and its estimation obtained by LidarMetaSeg (top right). The higher the \(IoU\), the better the prediction quality.

State-of-the-art LiDAR segmentation models are based on deep neural networks and can be grouped into two main approaches: projection-based (2D) and non-projection-based (3D) networks, cf. [8]. Projection-based networks like [1, 2, 9] use a spherical (2D) image representation of the point cloud. The predicted semantic categories on the image are thereafter reinserted along the spherical rays into the 3D point cloud. This may involve some post-processing steps, like the k-nearest neighbor (kNN) approach, see [1]. Due to the representation of point clouds as projected images, the networks employed for LiDAR segmentation have architectures that often resemble image segmentation architectures. The non-projection-based networks, e.g. [3, 10, 11], process the point cloud directly in 3D space, with or without different 3D representation approaches. For example, in [11] the network operates on the 3D point cloud without introducing an additional representation, while in [3] the authors perform a 3D cylinder partition. A combination of a 2D and a 3D representation of the point cloud is used in [4]. All current architectures, using a 2D or 3D representation or a combination of both, provide the segmentation of the point cloud.
Therefore, it is also possible to output the probabilities, which is the only prerequisite required for LidarMetaSeg. Concerning uncertainty quantification in deep learning, Bayesian approaches like Monte Carlo (MC) dropout [12] are commonly used, e.g. in image-based object detection [13], image segmentation [14] and also in LiDAR object detection [15]. In object detection and instance segmentation, so-called scores containing (un)certainty information are used, while this is not the case for semantic segmentation. The LiDAR segmentation network SalsaNext [2] makes use of MC dropout to output the model (epistemic) and observation (aleatoric) uncertainty. In our method LidarMetaSeg, we first project the point cloud and the corresponding softmax probabilities of the network to a spherical 2D image representation, which is then used to compute different types of dispersion measures, resulting in different dispersion heatmaps. To estimate uncertainty on segment level, we aggregate the dispersion measures with respect to each predicted segment. The \(\mathit{IoU}\) is commonly used to evaluate the performance of a segmentation model. For each predicted segment, we compute its \(\mathit{IoU}\) with the ground truth and call this the _segmentwise_ \(\mathit{IoU}\). In our experiments we observe a strong correlation of the segmentwise \(\mathit{IoU}\) with the aggregated dispersion measures. Hence, we use the aggregated dispersion measures together with additional information from the point cloud input to create a set of handcrafted features. The latter are used in a post-processing manner as input for training _i)_ a _meta classification model_ to detect false positive segments, i.e., to decide whether the \(\mathit{IoU}\) is equal to \(0\) or greater than \(0\), and _ii)_ a _meta regression model_ to estimate the segmentwise \(\mathit{IoU}\). Thus, we not only have a pointwise uncertainty quantification, given by the dispersion heatmaps, but also a false positive detection as well as a segmentation quality estimation on segment level. The idea of meta classification and regression to detect false positives and to estimate the segmentwise prediction quality was first introduced in the field of semantic segmentation of images [16], called MetaSeg. The work presented in [17] goes in a similar direction, but for brain tumor segmentation. MetaSeg was further extended in other directions, i.e., for controlled false negative reduction [18], for time-dynamic uncertainty estimates for video data [19], for taking resolution-dependent uncertainties into account [20] and as part of an active learning method [21]. Inspired by the possibility of representing the point cloud as a 2D image, our method LidarMetaSeg is an extension and further development of the original work. Therefore, MetaSeg [16] is the work most related to our approach LidarMetaSeg, which up to now, together with SalsaNext [2], comprises the only works in the direction of uncertainty quantification in LiDAR segmentation. With MC dropout, SalsaNext follows a Bayesian approach to quantifying the model and the observation uncertainty. Its uncertainty output is point-based and not segment-based, as in our approach. Also, for MC dropout, the model has to infer one sample multiple times. LidarMetaSeg requires only a single network inference and estimates uncertainties by means of the network's class probabilities. In a 2D representation, these pixelwise uncertainty estimates can be viewed as uncertainty heatmaps.
From those heatmaps, we compute aggregated uncertainties for each predicted segment, therefore clearly going beyond the stage of pixelwise uncertainty estimation. In contrast to MetaSeg for image segmentation, we not only use the network's output but also utilize information from the point cloud input, such as the intensity and range features provided for each point of the point cloud. LidarMetaSeg is therefore a universal post-processing tool that allows for the detection of false positive segments as well as the estimation of segmentwise LiDAR segmentation quality. Besides that, the present work is the first one to provide uncertainty estimates on the level of predicted segments. We evaluate our method on two different datasets, SemanticKITTI [22] and nuScenes [23], and with three different network architectures: two projection-based models, RangeNet++ [1] and SalsaNext [2], and one non-projection-based model, Cylinder3D [3]. For meta classification, we achieve area under receiver operating characteristic curve (AUROC) and area under precision recall curve (AUPRC) [24] values of up to \(91.16\%\) and \(74.35\%\), respectively. For the meta regression, we achieve coefficient of determination \(R^{2}\) values of up to \(66.69\%\). We show that our aggregated measures - in terms of meta classification and regression - lead to a significant performance gain in comparison to when only considering a single uncertainty metric like the segmentwise entropy.

## II Method

LidarMetaSeg is a post-processing method for LiDAR semantic segmentation to estimate the segmentwise prediction quality. It consists of a meta classification model, which for each predicted segment classifies whether its \(\mathit{IoU}\) with the ground truth is equal to \(0\) or greater than \(0\), and a meta regression model, which predicts the segmentwise \(\mathit{IoU}\) with the ground truth. The method works as follows: in a preprocessing step we project each sample, i.e., the point cloud, the corresponding network probabilities and the labels, into a spherical 2D image representation. In a next step, based on the projected data, we compute dispersion measures and other features, as is done for image data in [16, 18, 20]. Afterwards we identify the segments of a given semantic class and aggregate the pixelwise values from the previous step on segment level. In addition, we compute the \(\mathit{IoU}\) of each predicted segment with the ground truth of the same class. This results in a structured dataset, which consists of the coefficients of the aggregated dispersion measures as well as additional features and of the target variable - the \(\mathit{IoU}\in[0,1]\) for the task of meta regression or the binary variable \(\mathit{IoU}=0,>0\) (\(\mathit{IoU}=0\) as indicator for a false positive) for the task of meta classification - for each segment. We fit a classification and a regression model to this dataset; a minimal sketch of this final fitting step is given below. In the end, we re-project the meta classification and regression results from the image representation to the point cloud.
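Jumping ahead to the final step, the fitting of the two meta models can be summarized in a few lines. The following is a minimal sketch using XGBoost's scikit-learn wrappers (the library named in section III); the feature matrix and target arrays are placeholders standing in for the structured dataset described above, and the hyperparameters are illustrative assumptions.

```python
import numpy as np
from xgboost import XGBClassifier, XGBRegressor

# Placeholder structured dataset: one row per predicted segment,
# columns = aggregated dispersion/feature metrics (86 + 2n values).
rng = np.random.default_rng(0)
X = rng.random((5000, 100))            # segmentwise metrics
iou_adj = rng.random(5000)             # segmentwise IoU_adj in [0, 1]
iou_adj[rng.random(5000) < 0.2] = 0.0  # ~20% false positives

# Meta classification: false positive detection (IoU_adj == 0 vs > 0).
clf = XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
clf.fit(X, (iou_adj > 0).astype(int))
p_false_positive = 1.0 - clf.predict_proba(X)[:, 1]

# Meta regression: estimate the segmentwise IoU_adj directly.
reg = XGBRegressor(n_estimators=100, max_depth=4)
reg.fit(X, iou_adj)
iou_estimate = reg.predict(X).clip(0.0, 1.0)
```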
### _Preprocessing_

A sample of input data for LidarMetaSeg is assumed to be given on point cloud level and contains the following:

* _point cloud_ \(\tilde{p}\in\mathbb{R}^{m\times 4},\ \tilde{p}_{j}=(x_{j},y_{j},z_{j},i_{j})\) with \(x_{j},y_{j},z_{j}\in\mathbb{R}\) Cartesian coordinates and intensity \(i_{j}\in\mathbb{R}_{+}\) for \(j=1,\ldots,m\), with \(m\) the number of points in the LiDAR point cloud,
* _ground truth / labels_ \(\tilde{y}^{*}\in\mathcal{C}^{m}\) with \(\mathcal{C}=\{1,\ldots,n\}\) the set of \(n\) given classes,
* _probabilities_ \(\tilde{y}^{prob}=f(\tilde{p})\in\mathbb{R}^{m\times n}_{[0,1]}\) of a LiDAR segmentation network \(f\), given as softmax probabilities,
* _prediction_ \(\tilde{y}=\operatorname*{arg\,max}_{c\in\mathcal{C}}\tilde{y}^{prob}\).

Typically, one is also interested in the range of a given point in the point cloud, which is part of most LiDAR segmentation networks' input. Since the ego car is located in the origin of the coordinate system, this quantity is given by \(r_{j}=\sqrt{x_{j}^{2}+y_{j}^{2}+z_{j}^{2}}\) for each \(\tilde{p}_{j}\). The projection of a point cloud to a spherical 2D image representation follows two steps: a transformation from Cartesian to spherical coordinates and then a transformation from spherical to image coordinates. The spherical coordinates are given as \((r_{j},\theta_{j},\varphi_{j})\) with range \(r_{j}\), polar angle \(\theta_{j}\) and azimuth angle \(\varphi_{j}\). The transformation from Cartesian to spherical coordinates is given by

\[\theta_{j}=\arcsin\left(\frac{z_{j}}{r_{j}}\right)\quad\text{and}\quad\varphi_{j}=\arctan\left(\frac{y_{j}}{x_{j}}\right) \tag{1}\]

and \(r_{j}\) for \(j=1,\ldots,m\). Based on the spherical coordinates, we get the image coordinates \((u,v)\) with the equation

\[\begin{pmatrix}u\\ v\end{pmatrix}=\begin{pmatrix}\frac{1}{2}\big{[}1-(\varphi_{j}\pi^{-1})\big{]}w\\ \big{[}1-(\theta_{j}+f_{ver^{+}})f_{ver}^{-1}\big{]}h\end{pmatrix} \tag{2}\]

with \((w,h)\) the width and height of the image and \(f_{ver}=f_{ver^{+}}+f_{ver^{-}}\) the vertical field of view (FOV) of the LiDAR sensor. In order to get an image representation where each point corresponds to one pixel and vice versa, we need the number of channels, the angular resolution and the horizontal FOV of the LiDAR sensor. To this end, we define the height \(h\) as the number of channels and the width \(w\) as the quotient of the horizontal FOV \(f_{hor}\) (which is in general \(360\), since the LiDAR sensor is rotating) and the angular resolution \(\alpha\), i.e.,

\[w=\frac{f_{hor}}{\alpha}. \tag{3}\]

Thus, the image representation - using the explicit sensor information - has as many entries or pixels as the point cloud can have as maximum number of points. Unfortunately, for technical reasons such as ego-motion compensation or overlapping channel angles, it can still happen that multiple points are projected to the same pixel. More details concerning such projection errors can be found in [25]. With the projection proposed above this happens rarely enough that the effect is negligible. A minimal code sketch of this projection is given below.
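The following sketch implements equations (1)-(3), assuming NumPy; the sensor constants are illustrative (Velodyne HDL-64E-like values), `arctan2` is used as the full-range variant of the arctangent in eq. (1), \(f_{ver^{+}}\) is taken as the magnitude of the downward FOV, and pixel collisions are simply overwritten (keep-last), which is a simplification.

```python
import numpy as np

def spherical_projection(points, h=64, fov_up_deg=3.0, fov_down_deg=-25.0,
                         f_hor_deg=360.0, alpha_deg=0.08):
    """Project an (m, 4) point cloud [x, y, z, intensity] to an (h, w, 5)
    image [x, y, z, intensity, range] following eqs. (1)-(3)."""
    x, y, z, i = points.T
    r = np.sqrt(x**2 + y**2 + z**2)                 # range per point

    theta = np.arcsin(z / r)                        # polar angle, eq. (1)
    phi = np.arctan2(y, x)                          # azimuth angle, eq. (1)

    f_up, f_down = np.radians(fov_up_deg), np.radians(abs(fov_down_deg))
    f_ver = f_up + f_down                           # vertical FOV
    w = int(f_hor_deg / alpha_deg)                  # image width, eq. (3)

    u = 0.5 * (1.0 - phi / np.pi) * w               # eq. (2)
    v = (1.0 - (theta + f_down) / f_ver) * h        # eq. (2)

    u = np.clip(np.floor(u), 0, w - 1).astype(int)
    v = np.clip(np.floor(v), 0, h - 1).astype(int)

    image = np.zeros((h, w, 5), dtype=np.float32)
    mask = np.zeros((h, w), dtype=bool)             # delta of eq. (4)
    image[v, u] = np.stack([x, y, z, i, r], axis=1)
    mask[v, u] = True                               # projected pixels
    return image, mask
```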
Following the projection above, we denote the projected 2D representations as before but without \(\tilde{\cdot}\), i.e.,

* image representation (of the point cloud \(\tilde{p}\)) \(F=(F^{x},F^{y},F^{z},F^{i},F^{r}),\ F\in\mathbb{R}^{w\times h\times 5}\),
* ground truth / labels \(y^{*}\in\mathcal{C}^{w\times h}\),
* probabilities \(y^{prob}\in[0,1]^{w\times h\times n}\),
* prediction \(y\in\mathcal{C}^{w\times h}\).

The proposed image projection yields a sparse image representation. However, our post-processing approach LidarMetaSeg is based on connected components of the segmentation. In order to identify connected components (segments) of pixels in the 2D image resulting from the projection, we fill the gaps by setting any empty entry \(l:=(u,v)\) (entries without a corresponding point in the point cloud) to the value of one of its nearest neighbors that received a value from the projection. An example of such a filled image representation is shown in fig. 2, left panel. In the following we only consider the filled image representations. We store the information which pixel received its value via projection and which one via fill-in in a binary mask of width \(w\) and height \(h\), denoted by \(\delta\), where \(1\) represents a projected point and \(0\) a filled entry, i.e.,

\[\delta_{l}=\begin{cases}1,&l\text{ corresponds to a projected point},\\ 0,&\text{else}.\end{cases} \tag{4}\]

For simplicity, we refer to the filled image representations \(F\) (that are input quantities for the segmentation network) as feature measures.

### _Dispersion Measures and Segmentwise Aggregation_

First we define the dispersion and feature measures and afterwards the segmentwise aggregation.

_Dispersion and Feature Measures._ Based on the probabilities \(y^{prob}\in[0,1]^{w\times h\times n}\), we define the dispersion measures _entropy_ \(E_{l}\), _probability difference_ \(D_{l}\) and _variation ratio_ \(V_{l}\) at pixel position \(l=(u,v)\) as follows:

\[E_{l}=-\frac{1}{\log(n)}\sum_{c=1}^{n}y_{l,c}^{prob}\ \log\big{(}y_{l,c}^{prob}\big{)}, \tag{5}\]
\[D_{l}=1-\max_{c_{1}\in\mathcal{C}}y_{l,c_{1}}^{prob}+\max_{c_{2}\in\mathcal{C}\setminus c_{1}}y_{l,c_{2}}^{prob}, \tag{6}\]
\[V_{l}=1-\max_{c\in\mathcal{C}}y_{l,c}^{prob}. \tag{7}\]

In addition, the feature measures _coordinates_, _intensity_ and _range_ at position \(l\) are given by the image representation

\[F_{l}^{\sharp},\;\sharp\in\{x,y,z,i,r\}. \tag{8}\]

For the sake of brevity, we define the set of dispersion and feature measures

\[\mathcal{M}=\{E,D,V,F^{x},F^{y},F^{z},F^{i},F^{r}\}, \tag{9}\]

omitting the index for the position \(l\) as this will follow from the context. Note that, due to the position dependence, each element of \(\mathcal{M}\) can be considered as a heatmap.

_Segmentwise Aggregation._ For a given prediction \(y\in\mathcal{C}^{w\times h}\) and the corresponding ground truth \(y^{*}\in\mathcal{C}^{w\times h}\), we denote by \(\mathcal{K}_{y}\) and \(\mathcal{K}_{y^{*}}\) the sets of connected components (segments) in the prediction and the ground truth, respectively. A connected component \(k\) is a set of pixels that are adjacent to each other and belong to the same class, see also fig. 2, left panel. For each segment \(k\in\mathcal{K}_{y}\), we define the following quantities.
Additionally, in order to count only the pixels with a corresponding point in the point cloud, we denote by \(|\cdot|_{\delta}\) the restriction to the corresponding binary mask \(\delta\).

* The interior \(k_{in}=\{(u,v)\in k:[u\pm 1]\times[v\pm 1]\in k\}\subset k\), i.e., a pixel \(l=(u,v)\) is an element of \(k_{in}\) if all eight neighboring pixels are elements of \(k\),
* the boundary \(k_{bd}=k\setminus k_{in}\),
* the pixel sizes \(S=|k|\), \(S_{in}=|k_{in}|\), \(S_{bd}=|k_{bd}|\),
* the segment size in the point cloud \(SP=|k|_{\delta}\).

Furthermore, we define the target variables \(\mathit{IoU}\) and the so-called adjusted \(\mathit{IoU}_{\mathrm{adj}}\) as follows:

* \(\mathit{IoU}\): let \(\mathcal{K}_{y^{*}}|_{k}\) be the set of all \(k^{\prime}\in\mathcal{K}_{y^{*}}\) that have non-trivial intersection with \(k\) and whose class label equals the predicted class for \(k\); then
\[\mathit{IoU}(k)=\frac{|k\cap K^{\prime}|_{\delta}}{|k\cup K^{\prime}|_{\delta}}\,,\qquad K^{\prime}=\bigcup_{k^{\prime}\in\mathcal{K}_{y^{*}}|_{k}}k^{\prime},\] (10)
* the adjusted \(\mathit{IoU}_{\mathrm{adj}}\) does not count pixels in the ground truth segment that are not contained in the predicted segment but in other predicted segments of the same class: let \(Q=\{q\in\mathcal{K}_{y}:q\cap K^{\prime}\neq\emptyset\}\); then
\[\mathit{IoU}_{\mathrm{adj}}(k)=\frac{|k\cap K^{\prime}|_{\delta}}{|k\cup(K^{\prime}\setminus Q)|_{\delta}}.\] (11)

In cases where a ground truth segment is covered by more than one predicted segment of the same class, each predicted segment would have a low \(\mathit{IoU}\), although the predicted segments represent the ground truth quite well. As a remedy, the adjusted \(\mathit{IoU}_{\mathrm{adj}}\) was introduced in [16] to not punish this situation. The adjusted \(\mathit{IoU}_{\mathrm{adj}}\) is more suitable for the task of meta regression. For the meta classification it holds that \(\mathit{IoU}=0\Leftrightarrow\mathit{IoU}_{\mathrm{adj}}=0\). Based on the previous definitions, we define the aggregated dispersion and feature metrics:

* the mean \(\mu\) and variance \(\upsilon\) metrics
\[\mu M_{\sharp}:=\mu(M_{\sharp})=\frac{1}{S_{\sharp}}\sum_{l\in k_{\sharp}}M_{l}\] (12)
\[\upsilon M_{\sharp}:=\upsilon(M_{\sharp})=\mu(M_{\sharp}^{2})-\mu(M_{\sharp})^{2}\] (13)
for \(\sharp\in\{\_,in,bd\}\) and \(M\in\mathcal{M}\),
* the relative sizes \(\bar{S}=S/S_{bd}\), \(\bar{S}_{in}=S_{in}/S_{bd}\),
* the relative mean and variance metrics
\[\tau\bar{M}=\tau M\bar{S}\] (14)
\[\tau\bar{M}_{in}=\tau M\bar{S}_{in}\] (15)
for \(\tau\in\{\mu,\upsilon\}\) and \(M\in\mathcal{M}\),
* the ratio of each class's predictions in the neighborhood
\[N_{c}=\frac{1}{|k_{nb}|}\sum_{l^{\prime}\in k_{nb}}\mathbb{1}_{\{c=y_{l^{\prime}}\}}\qquad\forall c\in\mathcal{C}\] (16)
with \(k_{nb}\) the set of \(k\)'s neighbors, i.e., \(k_{nb}=\{l^{\prime}\in[u\pm 1]\times[v\pm 1]\subset w\times h:(u,v)\in k,l^{\prime}\notin k\}\),
* the mean class probabilities
\[P_{c}=\frac{1}{S}\sum_{l\in k}y_{l,c}^{prob}\qquad\forall c\in\mathcal{C}.\] (17)

Typically, the dispersion measures \(E_{l},D_{l},V_{l}\) are large for \(l\in k_{bd}\). This motivates the separate treatment of interior and boundary measures. A minimal sketch of the dispersion measures and their segmentwise aggregation is given below.
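The following sketch computes the pixelwise dispersion measures of eqs. (5)-(7) and the plain segmentwise mean of eq. (12), assuming NumPy and SciPy; segment extraction via 8-connected components is shown for a single class, and all array names are placeholders for the projected quantities defined above.

```python
import numpy as np
from scipy import ndimage

def dispersion_measures(y_prob):
    """Pixelwise entropy E, probability difference D and variation ratio V
    from softmax probabilities y_prob of shape (h, w, n), eqs. (5)-(7)."""
    n = y_prob.shape[-1]
    p = np.clip(y_prob, 1e-12, 1.0)
    E = -np.sum(p * np.log(p), axis=-1) / np.log(n)      # eq. (5)
    p_sorted = np.sort(p, axis=-1)
    D = 1.0 - p_sorted[..., -1] + p_sorted[..., -2]      # eq. (6)
    V = 1.0 - p_sorted[..., -1]                          # eq. (7)
    return E, D, V

def segmentwise_means(y_pred, heatmap, class_id):
    """Mean of a dispersion heatmap over each connected component
    (8-connectivity) of the given class in the prediction y_pred."""
    binary = (y_pred == class_id)
    labels, num = ndimage.label(binary, structure=np.ones((3, 3)))
    # ndimage.mean aggregates the heatmap per labeled segment.
    return ndimage.mean(heatmap, labels=labels, index=np.arange(1, num + 1))
```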
Furthermore, we observe a correlation between fractal segment shapes and bad or wrong predictions, which motivates the relative sizes \(\bar{S},\;\bar{S}_{in}\). In summary, we have \(86+2n\) metrics: the (relative) mean and variance metrics \(\tau M,\tau M_{in},\tau M_{bd},\tau\bar{M},\tau\bar{M}_{in}\;\;\forall\tau\in\{\mu,\upsilon\},\;\forall M\in\mathcal{M}\), the (relative) size metrics \(S,S_{in},S_{bd},\bar{S},\bar{S}_{in},SP\), as well as \(N_{c},P_{c}\;\forall c\in\mathcal{C}\). An example of the pixelwise dispersion measures as well as the segmentwise \(\mathit{IoU}_{\mathrm{adj}}\) values and their estimates is shown in fig. 2, right panel. With the exception of the segmentwise \(\mathit{IoU}\) and \(\mathit{IoU}_{\mathrm{adj}}\) values, all quantities defined above can be computed without knowledge of the ground truth.

Fig. 2: Visual examples of our method LidarMetaSeg. The left panel shows the preprocessing part: the ground truth (top left) and the prediction (top right) of the point cloud as well as the corresponding sparse (middle) and filled (bottom) image representations. The right panel visualizes a dispersion heatmap, the segmentwise prediction quality and its estimation: the probability difference heatmap of the prediction-based probabilities (top right), where higher values correspond to higher uncertainty; in the middle the true (left) and estimated (right) \(\mathit{IoU}_{\mathrm{adj}}\) values for the image representation; and in the bottom part the corresponding visualizations after the re-projection to the point cloud. The prediction of the point cloud and the corresponding prediction quality estimation is highlighted.

## III Numerical Experiments

For the numerical experiments we used two datasets: SemanticKITTI [22] and nuScenes [23]. For meta classification and regression we deploy XGBoost [26]. Other classification and regression methods like linear / logistic regression, neural networks or tree-based ensemble methods [27] are also possible. However, as shown in [19], XGBoost leads to the best results. For the reason mentioned in the previous section, the target variable for the meta regression (and classification) is the adjusted \(\mathit{IoU}_{\mathrm{adj}}\). First, we describe the settings of the experiments for both datasets and evaluate the results for the false positive detection and the segmentwise prediction quality estimation when using all metrics presented in the previous section. Afterwards, we conduct an analysis of the metrics and the meta classification model.

### _SemanticKITTI_

The SemanticKITTI dataset [22] contains street scenes from and around Karlsruhe, Germany. It provides \(11\) sequences with about \(23\)K samples for training and validation, consisting of \(19\) classes. The data is recorded with a Velodyne HDL-64E LiDAR sensor, which has \(64\) channels and a (horizontal) angular resolution of \(0.08^{\circ}\). Furthermore, the data is recorded and annotated with \(10\) frames per second (fps), and each point cloud contains about \(120\)K points. The authors of the dataset recommend using all sequences to train the LiDAR segmentation model except sequence \(08\), which should be used for validation. For the experiments we used three pretrained LiDAR segmentation models that followed the recommended data split: two projection-based models, i.e., RangeNet++ [1] and SalsaNext [2], and one non-projection-based model, i.e., Cylinder3D [3].
For RangeNet++ and SalsaNext, the softmax probabilities are given for the 2D image representation prediction. As we assume that softmax probabilities are given for the point cloud, we consider this representation as the starting point and re-project the softmax probabilities to the point cloud. After the re-projection from the 2D image representation prediction to the point cloud, both models have an additional kNN post-processing step to clean the point cloud from undesired discretization and inference artifacts [1], which may result in changing the semantic class of a few points. To take this post-processing step into account, we set the values of the cleaned points in the corresponding softmax probabilities of the point cloud to \(1\) and all other values to \(0\). Therefore the softmax condition (the sum of all probability values of a point is equal to \(1\) and all values are between \(0\) and \(1\)) is met, and the adjusted prediction is equal to the argmax of the probabilities. We do not expect other approaches to significantly change the results, since we aggregate our dispersion measures and the number of modified points is small. Following our method, the image representation of the point cloud data is of size \((w,h)=(4500,64)\), cf. (3). Most deep learning models tend to overfit. Therefore we only use samples for LidarMetaSeg that are not part of the training data of the segmentation network, as overfitting affects the dispersion measures. Thus, we only use sequence \(08\) for our experiments. Computing the connected components and metrics yields approx. \(3.4\)M segments for each network. Most of the segments are very small. Therefore we follow a similar segment exclusion rule as in MetaSeg [16], where segments with empty interior, \(S_{in}=0\), are excluded. Here, we exclude segments consisting of fewer than \(10\) LiDAR points, i.e., \(\mathit{SP}<10\), also shown in gray color in fig. 2. Hence, we reduce the number of segments to approx. \(0.45\)M, but we retain \(99\%\) of the data measured in terms of the number of points. We tested the dependence of our results under variation of the exclusion size \(\mathit{SP}\); the results were very similar to the results we present in the following. For training and validation of LidarMetaSeg we split sequence \(08\) and the corresponding connected components and metrics into \(10\) disjoint sub-sequences. These sub-sequences are used for a \(10\)-fold cross validation. A cross validation over all samples would yield highly correlated training and validation splits, as all sequences are recorded with 10 fps. The results for the meta classification and regression are given in table I. For all three models we achieve a validation accuracy between \(85.50\%\) and \(88.37\%\), see row 'ACC LMS' (short for LidarMetaSeg). The accuracy of random guessing ('ACC naive baseline') is between \(78.42\%\) and \(84.53\%\), which directly corresponds to the percentage of segments with an \(\mathit{IoU}_{\mathrm{adj}}>0\). For each method, the accuracy values correspond to a single decision threshold. In contrast to that, the AUROC and AUPRC are obtained by varying the decision threshold of the classification output. The AUROC essentially measures the overlap of the distributions corresponding to negative and positive samples; this score does not place more emphasis on one class over the other in case of class imbalance.
The ACC of random guessing indicates the class imbalance: about \(80\%\) of the segments have an \(\mathit{IoU}_{\mathrm{adj}}>0\) and \(20\%\) of the segments have an \(\mathit{IoU}_{\mathrm{adj}}=0\), i.e., they are false positives. The precision recall curve underlying the AUPRC ignores true negatives and emphasizes the detection of the positive class (false positives). Using the metrics of the previous section (LMS) for the meta classification yields AUROC values above \(90\%\) and AUPRC values up to \(74.35\%\). For the meta regression we achieve \(R^{2}\) values between \(61.57-66.69\%\). Fig. 3 depicts the quality of predicting the \(\mathit{IoU}_{\mathrm{adj}}\). A visualization of estimating the \(\mathit{IoU}_{\mathrm{adj}}\) is shown in fig. 2 and in the supplementary video1.

Footnote 1: [https://youtu.be/907jJSRgHDk](https://youtu.be/907jJSRgHDk)

### _NuScenes_

The nuScenes dataset [23] contains street scenes from two cities, Boston (US) and Singapore. It provides \(700\) sequences for training and \(150\) sequences for validation. Each sequence contains about \(40\) samples, which amounts to a total of \(34\)K key frames. The dataset has \(16\) classes and is recorded and annotated with \(2\) fps. The LiDAR sensor has \(32\) channels and an angular resolution of \(0.33^{\circ}\). Every point cloud contains roughly \(35\)K points. For our experiments we used the pre-trained Cylinder3D with the recommended data split. We did not test RangeNet++ and SalsaNext since the corresponding pretrained models are not available. The image projection is of size \((w,h)=(1090,32)\). Computing the connected components for all samples of the \(150\) validation sequences yields approx. \(1.5\)M segments. Excluding all small segments containing fewer than \(10\) points, i.e., \(SP<10\), reduces that number to \(0.34\)M. Still, we retain \(99\%\) of the data in terms of points. We performed \(10\)-fold cross validation where we always took \(90\%\) of the \(150\) sequences, i.e., \(135\) sequences, for training and the remaining \(10\%\), i.e., \(15\) sequences, for validation of the meta models. The results are presented in table I. \(89.85\%\) of all segments have an \(\mathit{IoU}_{\mathrm{adj}}>0\). With the meta classification we achieve an accuracy of \(91.00\%\), an AUROC of \(90.00\%\) and an AUPRC of \(50.25\%\), see the 'LMS' rows. For the meta regression we achieve \(R^{2}=49.19\%\) for the validation data. The quality of predicting the \(\mathit{IoU}_{\mathrm{adj}}\) is shown in fig. 3.

### _Metric Selection_

So far, we have presented results based on all metrics from section II, indicated by LMS in table I. In order to analyze the impact of the metrics on the performance, we repeated the experiments for multiple sets of metrics.

_Feature Measures._ First, we tested the performance of the meta classification and regression models without the feature measures, i.e., without the metrics based on the point cloud input features, see row 'LMS w/o features'. The performance in terms of ACC, AUROC, AUPRC and \(R^{2}\) for all experiments is up to \(2\) percentage points (pp.) lower compared to when incorporating the feature measures.

_Entropy._ Since the entropy is commonly used in uncertainty quantification, we repeated all experiments using only the mean entropy \(\mu E\), see the 'Entropy' rows. The performance for the meta classification is up to \(12\) pp.
lower compared to LMS; for the meta regression, \(R^{2}\) decreases by up to \(18\) pp.

_Bayesian Uncertainties._ The projection-based SalsaNext model follows a Bayesian approach, as already mentioned in section I: the LiDAR model provides a model (epistemic) and observation (aleatoric) uncertainty output for the point cloud's 2D image representation prediction, estimated by MC dropout (MCDO). To get these uncertainties we followed the procedure in [2]. This results in epistemic \(epi_{l}\) and aleatoric \(ale_{l}\) uncertainty values for each pixel position \(l\). We compute the same aggregated measures as for the measures \(M\in\mathcal{M}\). Adding these new metrics to the previous metrics LMS is referred to as LMS \(\cup\) MCDO. The additional Bayesian uncertainties do not improve the meta classification and regression performance significantly, see table I. We have not tested SalsaNext on nuScenes since the pretrained model is not available. For comparability of results, we only used publicly available pretrained models.

_Greedy Heuristic._ Inspired by forward-stepwise selection for linear regression, we analyze different subsets of metrics by performing a greedy heuristic: we start with an empty set of metrics and iteratively add the single metric that maximally improves the performance - ACC for the false positive detection and \(R^{2}\) for the prediction quality estimation. We performed this greedy heuristic for both meta classification and meta regression. The results in terms of ACC and \(R^{2}\) are shown in fig. 4 (only for SemanticKITTI) and in table II. For the meta classification, we observe a comparatively big accuracy gain while adding the first \(5\) metrics; then the accuracy increases rather moderately. For the meta regression, this performance gain in terms of \(R^{2}\) spreads wider across the first \(10\) iterations before the improvement per iteration becomes moderate. Furthermore, the results show that a small subset of metrics is sufficient for good models. We achieve nearly the same performance for both tasks with \(15\) metrics selected by the greedy heuristic compared to when using all metrics (LMS). Considering table II, the mean variation ratio \(\mu V\) and the mean probability difference \(\mu D\) in most cases constitute the initial choices. Furthermore, the mean probabilities \(P_{c},\ c\in\mathcal{C}\), are also frequently subject to early incorporation.

Fig. 3: True \(\mathit{IoU}_{\mathrm{adj}}\) vs predicted \(\mathit{IoU}_{\mathrm{adj}}\) for RangeNet++, SalsaNext and Cylinder3D on SemanticKITTI as well as Cylinder3D on nuScenes, from left to right.

### _Confidence Calibration_

The false positive detection is based on a meta classification model, which classifies whether a predicted segment's \(\mathit{IoU}_{\mathrm{adj}}\) is equal to \(0\) or greater than \(0\). In order to demonstrate the reliability of the classification model, we show that the confidences are well calibrated. Confidence scores are called _calibrated_ if the confidence is representative of the probability of correct classification, cf. [28]. The meta classification model estimates for each predicted segment the probability of it being a false positive, i.e., \(\mathit{IoU}_{\mathrm{adj}}=0\). We group the probabilities for all meta classified segments of the validation data into \(10\) intervals using \((0.0,0.1],\ (0.1,0.2],\ \dots,\ (0.9,1.0]\).
The accuracy of a bin is the relative amount of true predictions; the confidence of a bin is the mean of its probabilities. The closer the accuracy and the confidence are to each other, the more reliable is the corresponding classification model. This is visualized in a so-called reliability diagram. For the evaluation of calibration, we define the maximum calibration error (MCE) as the maximum absolute difference between the accuracy and the confidence over all bins, and the expected calibration error (ECE) as a weighted average of the bins' differences between accuracy and confidence, where the weights are proportional to the number of elements per bin. Further details are given in [28]. The reliability diagrams and the MCE as well as the ECE for all previously discussed meta classification models are shown in fig. 5. The smaller the gaps, i.e., the closer the outputs are to the diagonal, the more reliable and well calibrated is the model. The MCE and ECE are between \(5.26\) - \(10.68\) and \(0.62\) - \(1.63\), respectively. The results indicate well calibrated and reliable meta classification models.

## IV Conclusion

In this work we presented our method LidarMetaSeg for segmentwise false positive detection and prediction quality estimation of LiDAR point cloud segmentation. We have shown that the more of our hand-crafted aggregated metrics we incorporate, the better the results get. This holds for all considered evaluation metrics - ACC, AUROC, AUPRC and \(R^{2}\). Furthermore, the results show that adding Bayesian uncertainties (epistemic and aleatoric ones approximated by MC dropout) on top of our dispersion measures based on the softmax probabilities improves neither meta classification nor meta regression performance. We have demonstrated the effectiveness of the method on street scene scenarios and are positive that this method can be adapted to other LiDAR segmentation tasks and applications, e.g. indoor segmentation or panoptic segmentation.

Fig. 4: Performance of the meta classification (left) and the meta regression (right) models on SemanticKITTI depending on the number of metrics selected by the greedy approach.
We present a novel post-processing tool for semantic segmentation of LiDAR point cloud data, called LidarMetaSeg, which estimates the prediction quality segmentwise. For this purpose we compute dispersion measures based on network probability outputs as well as feature measures based on point cloud input features, and we aggregate them on segment level. These aggregated measures are used to train a meta classification model to predict whether a predicted segment is a false positive or not and a meta regression model to predict the segmentwise intersection over union. Both models can then be applied to semantic segmentation inferences without knowing the ground truth. In our experiments we use different LiDAR segmentation models and datasets and analyze the power of our method. We show that our results outperform other standard approaches.

Index Terms: deep learning, LiDAR point cloud, semantic segmentation, uncertainty quantification, automated driving
# Quantifying Quality of Class-Conditional Generative Models in Time-Series Domain

Alireza Koochali, Maria Walch, Sankrutyayan Thota, Peter Schichtel, IAV GmbH, Trippstadterstr. 122, Kaiserslautern, 67663, Germany.
Andreas Dengel, Sheraz Ahmed, DFKI GmbH, Trippstadterstr. 122, Kaiserslautern, 67663, Germany.

## 1 Introduction

In recent years, implicit generative models have gained immense popularity due to the emergence of Generative Adversarial Networks (GANs) [1]. With the astounding success of generative models in various domains such as image, video, music, and speech, it becomes imperative to quantify their performance. So far, various qualitative and quantitative assessment methods [2, 3] have been proposed to evaluate these models' performance and make comparisons between generative models possible. For intuitive data, there are qualitative methods like human judgment to measure the performance of a generative model. Among the quantitative methods, the Inception Score (\(IS\)) and the Frechet Inception Distance (\(FID\)) have become the standard assessment methods in the image domain. Unfortunately, there is no consensual and reliable standard for evaluating generative models in the time-series domain. This deficiency impedes developing and applying deep generative models in the time-series domain and makes comparing the few existing models impossible. Inspired by \(IS\) and \(FID\) from the image domain, this study introduces the InceptionTime Score (\(ITS\)) and the Frechet InceptionTime Distance (\(FITD\)) to assess generative models in the time-series domain. In doing so, we investigate whether we can transfer the above-mentioned image domain standards to the time-series domain. In the literature, attempts to assess generative models have been proposed; most notable are \(TSTR\) and \(TRTS\), introduced by [4]. These constitute a diametrically different approach compared to our \(FITD\) and \(ITS\) scores and thus are used within our experiments to examine and control the capabilities of our newly introduced assessment metrics. Namely, let \(P_{data}\) denote the data distribution and \(P_{model}\) the distribution that our generative model learned. Ideally, we expect samples from \(P_{model}\) to look as if they were drawn from \(P_{data}\), and we expect \(P_{model}\) to cover the mode space of \(P_{data}\). These properties should be detectable by an assessment metric \(\delta\) for it to be reliable. We designed an extensive experimental setting that includes 80 datasets from the UCR archive1 to investigate the quality of the sampling induced by \(P_{model}\) as well as its capability to reproduce the mode space. We also include \(TSTR\) and \(TRTS\) in the whole experimental pipeline so that researchers with an interest in conditional GANs for time series can understand our metrics more intuitively and gain confidence in their efficacy.

Footnote 1: [https://www.cs.ucr.edu/](https://www.cs.ucr.edu/)\(\sim\)eamonn/time_series_data_2018/

## 2 Related Work

The effectiveness of generative models is normally assessed by gauging the gain in performance on a downstream task. This methodology of evaluation holds independently of the modality, i.e., image, audio, time-series, etc. Haradal et al. [5] used the improvement in a classification task to measure the quality of their generative model. Wiese et al.
[6] described the performance of their generative model in the finance domain using statistical properties of the data that are most relevant for their target domain. Another popular method is evaluating a generative model based on its performance on a surrogate task, such as supervised classification. Esteban et al. [4] specified their generative model's performance based on \(TSTR\) (Train on Synthetic, Test on Real) and \(TRTS\) (Train on Real, Test on Synthetic). The \(TRTS\) is defined by training a classifier on real data and testing it on synthetic data. Similarly, \(TSTR\) is calculated by training a classifier on synthetic data and testing it on real data. Smith et al. [7] employed a similar method for quantifying the performance of TSGAN. Furthermore, the authors defined a 1D \(FID\) by training a simple classifier separately on each dataset and using this network for the \(FID\) calculation. However, the 1D \(FID\) was not aligned with the visual observations from generated samples in some cases. The authors of T-CGAN [8] outlined the performance of their model based on the \(TSTR\) only and reported AUROC instead of accuracy. While \(TSTR\) and \(TRTS\) can provide an indirect assessment of a generative model, they rely heavily on the choice of the classifier. Furthermore, they cannot reflect diversity in generated samples [2].

## 3 Quantitative Assessment for Deep Generative Models on the Time-Series Domain

This section introduces different methods available for the assessment/evaluation of generative models with a focus on the time-series domain. All these methods employ a classifier in their pipeline, either for calculating the score or for extracting features from the input. To have comparable results across various studies, it is crucial to use the same testbed. For instance, in the image domain, a pre-trained Inception network [9] trained on the ImageNet dataset [10] is employed for computing assessment metrics. Therefore, in this study, we propose to adopt InceptionTime [11] for determining our evaluation metrics. InceptionTime is a CNN-based time-series classifier that has acquired impressive accuracy on the time-series classification task. In this study, we employed a similar network structure across all datasets; however, due to the high variance between the dynamics of various time-series datasets, it is not viable to utilize a single pre-trained network across different datasets. Hence, the InceptionTime network is trained separately for each dataset. We adopt the same network structure and training pipeline as the authors of InceptionTime provided in the project git repository2. An overview of our evaluation pipeline is represented graphically in Fig. 1.

Figure 1: The proposed evaluation pipeline for \(FITD\) and \(ITS\).

Footnote 2: [https://github.com/hfawaz/InceptionTime](https://github.com/hfawaz/InceptionTime)

### InceptionTime Score (\(ITS\))

Inspired by \(IS\) for assessing generative models in the image domain, we propose the InceptionTime Score (\(ITS\)) as an evaluation metric for the quality of synthetic data in the time-series domain. Given \(x\) as the set of synthetic time-series samples and \(y\) as their corresponding labels, we expect high-quality generated data to have a low-entropy conditional label distribution \(p(y\mid x)\). This is to be compared with the marginal label distribution \(p(y)\), whose entropy is expected to be high for diverse samples.
Thus, in the ideal case, the shapes of \(p(y\mid x)\) and \(p(y)\) are opposite: namely narrow vs uniform. The score should reflect this property and be higher the more the conditional label and the marginal distributions differ. This is achieved by exponentiating their expected KL divergence:

\[\begin{split}\text{ITS}&=\exp\left(H(y)-\mathbb{E}_{\mathbf{x}}[H(y\mid\mathbf{x})]\right)\\ &=\exp\left(\mathbb{E}_{\mathbf{x}}[\mathbb{KL}(p(\mathbf{y}\mid\mathbf{x})\|p(\mathbf{y}))]\right).\end{split} \tag{1}\]

By definition, \(ITS\) is a positively oriented metric. Its lowest value is 1.0, and its upper bound is the number of classes in the dataset. To acquire the labels of synthetic time-series data, we employ a pre-trained InceptionTime network.

### Frechet InceptionTime Distance (\(FITD\))

\(ITS\) relies solely on the statistics of the generated samples and ignores the real samples. Hence, it assigns a high score to a model that produces sharply classified and diverse samples, regardless of whether the generated samples follow the target distribution. To address this problem in the image domain, Heusel et al. [12] proposed the Frechet Inception Distance (\(FID\)). To exploit \(FID\) on time-series data, we define the Frechet InceptionTime Distance (\(FITD\)). We extract the feature vectors for the real and the generated samples from the penultimate layer of a pre-trained InceptionTime classifier. We assume each of these sets of feature vectors follows a continuous multivariate Gaussian. Subsequently, we calculate the Frechet distance (also known as the Wasserstein-2 distance) between these two Gaussians, i.e.

\[\text{FITD(r, g)}=\left\|\mu_{r}-\mu_{g}\right\|_{2}^{2}+\text{Tr}\left(\Sigma_{r}+\Sigma_{g}-2\left(\Sigma_{r}\Sigma_{g}\right)^{\frac{1}{2}}\right), \tag{2}\]

where (\(\mu_{r}\), \(\Sigma_{r}\)) and (\(\mu_{g}\), \(\Sigma_{g}\)) are the means and covariance matrices of the real data and generated data, respectively. A lower \(FITD\) indicates a smaller distance between the generated and the real data distribution, and the minimum value is zero. \(FITD\) is a robust and efficient metric; however, its assumption of a multivariate Gaussian distribution in feature space is not always true.

### Assessment Based on Classification Accuracy

We can use a classifier to explicitly benefit from labeled data to assess class-conditional generative models. The core idea is that if a generative model can generate realistic data samples, it should perform well in downstream tasks. In this case, a classifier can be trained on real data and tested on synthetic data in terms of classification accuracy. This paper refers to this method as \(TRTS\) (Train on Real, Test on Synthetic). \(TRTS\) implies that if the distribution learned by the generative model \(P_{model}\) matches the data distribution \(P_{data}\), then a discriminative model trained on samples from \(P_{data}\) can accurately classify generated samples from \(P_{model}\). \(TRTS\) outputs low accuracy if generated samples fall outside of \(P_{data}\). However, if \(P_{model}\subset P_{data}\), then \(TRTS\) might assign a high accuracy, neglecting the fact that the mode space is only partially covered by \(P_{model}\). Another classifier-based method is to train a model on synthetic data and test it on real data. We refer to this method as \(TSTR\) (Train on Synthetic, Test on Real).
### Assessment Based on Classification Accuracy

We can use a classifier to explicitly benefit from labeled data when assessing class-conditional generative models. The core idea is that if a generative model can generate realistic data samples, they should perform well in downstream tasks. In this case, a classifier can be trained on real data and tested on synthetic data in terms of classification accuracy. This paper refers to this method as \\(TRTS\\) (Train on Real, Test on Synthetic). \\(TRTS\\) implies that if the distribution learned by the generative model \\(P_{model}\\) matches the data distribution \\(P_{data}\\), then a discriminative model trained on samples from \\(P_{data}\\) can accurately classify generated samples from \\(P_{model}\\). \\(TRTS\\) outputs low accuracy if generated samples fall outside \\(P_{data}\\). However, if \\(P_{model}\\subset P_{data}\\), then \\(TRTS\\) might assign a high accuracy, neglecting the fact that the mode space is only partially covered by \\(P_{model}\\). Another classifier-based method is to train a model on synthetic data and test it on real data. We refer to this method as \\(TSTR\\) (Train on Synthetic, Test on Real). Like \\(TRTS\\), \\(TSTR\\) relies on the argument that if \\(P_{model}\\approx P_{data}\\), then a classifier trained on generated samples can achieve high accuracy when classifying real samples. Unlike \\(TRTS\\), \\(TSTR\\) can detect the situation where \\(P_{model}\\) covers \\(P_{data}\\) only partially; however, it cannot reflect the existence of synthetic samples that do not follow \\(P_{data}\\). In other words, \\(TSTR\\) provides high accuracy even if \\(P_{data}\\subset P_{model}\\). This latter case is more intuitively known as an over-parametrized model. In this study, we employ the InceptionTime model as the classifier for calculating \\(TRTS\\) and \\(TSTR\\).

## 4 Evaluation Data - UCR Time-series Classification Archive

The UCR archive [13] is a collection of 128 univariate time-series datasets designed for the classification task. It thus enables us to perform our experiments on a broad spread of datasets with various properties across different domains. Furthermore, the InceptionTime model has demonstrated impressive classification performance on the UCR archive. As discussed above, we need highly classifiable and diverse features to calculate \\(FITD\\) and \\(ITS\\) precisely. Therefore, for our experimental setting, we select the subset of the UCR archive on which the InceptionTime model achieves at least 80% accuracy, resulting in 80 datasets. Appendix A lists the names of these datasets, their properties, and the accuracy achieved by the InceptionTime model.

## 5 Experiments and Results

To investigate the discriminative ability of \\(ITS\\), \\(FITD\\), \\(TRTS\\), and \\(TSTR\\) in the time-series domain, we first design scenarios that replicate common problems of generative models, namely:

* Decline in Quality,
* Mode Drop, and
* Mode Collapse.

Subsequently, we apply our assessment methods and study how they indicate these problems.

### Experimental Evaluation Score

In our experiments, we train InceptionTime on the train set of each dataset and calculate our scores on the respective test set to obtain the base score (\\(\\mathrm{score}_{\\mathrm{base}}\\)) for that dataset. Since the test set is drawn from the data distribution, we consider \\(\\mathrm{score}_{\\mathrm{base}}\\) the best score we can acquire on each dataset empirically. We denote the score of generated samples as \\(\\mathrm{score}_{\\mathrm{gen}}\\). Finally, we define \\[\\mathrm{rel}(\\mathrm{score})=\\mathrm{score}_{\\mathrm{base}}-\\mathrm{score}_{\\mathrm{gen}} \\tag{3}\\] as the score of generated samples relative to the base score. We expect \\(\\mathrm{rel}(\\mathrm{ITS})\\geq 0\\), \\(\\mathrm{rel}(\\mathrm{TRTS})\\geq 0\\), \\(\\mathrm{rel}(\\mathrm{TSTR})\\geq 0\\), and \\(\\mathrm{rel}(\\mathrm{FITD})\\leq 0\\) in all cases. In other words, we do not expect a better score than the base score.

### Experiment 1 - Decline in Quality

An assessment method should express the quality of the generated samples quantitatively. For this experiment, we add a noise signal to the samples in the test set to simulate a decrease in quality. The noise is sampled from a Gaussian distribution with \\(\\mu=0\\), and \\(\\sigma\\) is selected from an equally spaced grid of values in \\([0,5]\\). The standard deviation indicates the noise strength and the amount of corruption of the original data. We expect the assessment scores to worsen as the standard deviation increases.
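The experiment reduces to a small loop over noise strengths; the sketch below illustrates it with a random stand-in for a UCR test set. The helper names are ours, and the score computations of the previous section are assumed.

```python
import numpy as np

def rel(score_base, score_gen):
    # Eq. (3): score of the corrupted data relative to the base score.
    return score_base - score_gen

rng = np.random.default_rng(0)
x_test = rng.standard_normal((100, 150))   # stand-in for a (n, T) UCR test set

for sigma in np.linspace(0.0, 5.0, 11):    # equally spaced grid on [0, 5]
    x_noisy = x_test + rng.normal(0.0, sigma, size=x_test.shape)
    # ...recompute ITS, FITD, TRTS, TSTR on x_noisy and report rel(...)...
```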
Figure 2 presents our experiment's results on four datasets (the remaining visualizations are presented in Appendix B).

Figure 2: Changes in the scores when data quality declines as noise is introduced progressively.

**FITD**: The \\(FITD\\) response behaves differently from the others. Since \\(FITD\\) has no upper bound, it keeps increasing as more corruption is introduced into the data. The other scores converge to their lower bound at some noise strength (\\(\\sigma=\\Delta\\)) and cannot indicate the increasing noise strength once \\(\\sigma>\\Delta\\). **TRTS and ITS**: The behavior of \\(TRTS\\) and \\(ITS\\) is very similar. Both \\(ITS\\) and \\(TRTS\\) use the InceptionTime model trained on the train set as the backbone of their computation. Once \\(\\sigma>\\Delta\\), the classifier fails to classify the samples, and its prediction is no better than a random guess. \\(TRTS\\) converges to random-guess accuracy, which depends on the number of classes in the dataset, and \\(ITS\\) converges to 1.0. **TSTR**: The \\(TSTR\\) response has more variance than \\(TRTS\\). The reason is that \\(TRTS\\) is trained on the train set of real data, which does not change during the experiments, while \\(TSTR\\) is trained on synthetic data; as a result, we trained a new model for each value of \\(\\sigma\\). The value of \\(\\Delta\\) depends on the scale of the data: the larger the data scale, the stronger the noise needed to corrupt it. For instance, our scores seem unable to detect the presence of noise on the Chinatown dataset in Figure 2. However, Figure 3 reveals that this dataset has a large scale, ranging approximately over [0, 2000]. Therefore, a much larger \\(\\sigma\\) is needed to corrupt the data meaningfully.

Figure 3: Comparison between original and noisy data from the Chinatown dataset. Due to the large scale of the data, the introduction of noise with \\(\\sigma=5\\) does not change the data enough to cause a response in our scores.

### Experiment 2 - Mode Drop

Mode drop happens when the generative model ignores some modes of the real data while generating artificial samples. This can be due to a lack of model capacity or inadequate optimization [14]. We design three experimental scenarios to evaluate the capabilities of \\(ITS\\) and \\(FITD\\) in recognizing mode drop in the time-series domain.

#### 5.3.1 Single Mode Drop

In the first experiment, we remove all the samples belonging to one class from the test set to simulate the mode drop scenario. We calculate all scores for the mode drop caused by removing each class in turn. Hence, for a dataset with \\(N\\) classes, we obtain \\(N\\) values for each score; the sketch below illustrates this manipulation together with the two variants used in Sections 5.3.2 and 5.3.3.
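A minimal sketch of the three mode-drop manipulations, assuming `x` holds the test samples and `y` their integer labels; the helper names are our own.

```python
import numpy as np

def drop_mode(x, y, cls):
    # Single mode drop (Sec. 5.3.1): remove every sample of class `cls`.
    keep = y != cls
    return x[keep], y[keep]

def keep_single_mode(x, y, cls):
    # Extreme mode drop (Sec. 5.3.2): retain only class `cls`.
    keep = y == cls
    return x[keep], y[keep]

def successive_drop(x, y, order):
    # Successive mode drop (Sec. 5.3.3): yield test sets with 1, 2, ...
    # of the classes in `order` removed.
    for k in range(1, len(order)):
        keep = ~np.isin(y, order[:k])
        yield x[keep], y[keep]
```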
Figures 4 and 5 illustrate the \\(rel(score)\\) responses of our scores on all datasets.

Figure 4: Relative \\(ITS\\) and \\(FITD\\) scores when one mode is dropped from a dataset.

Figure 5: Relative \\(TRTS\\) and \\(TSTR\\) scores when one mode is dropped from a dataset.

**FITD**: The change in \\(FITD\\) depends on the degree to which the removed class affects the properties of the assumed Gaussian distribution in latent space. On most datasets, the drop of a single class did not change these properties significantly; thus, \\(FITD\\) reflects the single mode drop poorly. On a few datasets, on the other hand, a high-variance \\(FITD\\) response indicates that at least one class significantly impacts the mean and covariance matrix of the points in latent space. Since the feature vectors are generated by a non-linear transformation to a high-dimensional space, the \\(FITD\\) response cannot be interpreted from the samples in the data space alone. **ITS**: The \\(ITS\\) response is mostly positive but has a great variance. When we remove a class, we change the diversity of the labels. Therefore, we expect \\(H(P(y\\mid x))\\) to remain unaffected while \\(H(P(y))\\) decreases due to the reduction in diversity. The drop of each class affects \\(H(P(y))\\) differently, which results in a high variance between responses. If the label distribution is close to uniform, the drop of each class decreases \\(H(P(y))\\) by a similar amount. In contrast, if the label distribution is heavily unbalanced, the drop of a major class can increase \\(H(P(y))\\). That is why we can observe an improvement in \\(ITS\\) after mode drop in some rare cases. **TRTS**: With the drop of a class, we have \\(P_{model}\\subset P_{data}\\). As mentioned previously, we expect \\(TSTR\\) to identify this situation, while \\(TRTS\\) is not capable of detecting it. Our results, presented in Figure 5, are aligned with the expected behavior of these metrics: \\(TRTS\\) did not change on most datasets. **TSTR**: The positive \\(rel(TSTR)\\) indicates that \\(TSTR\\) decreases on most datasets. The impact of a single mode drop is more prominent when the dataset has few classes. On a few datasets, the drop of a single mode improved \\(TSTR\\): removing a class made the classification task easier for the classifier, so although the missing class increases the classification error, the error on the other classes improves, resulting in a marginal improvement of the overall accuracy.

#### 5.3.2 Extreme Mode Drop

In the second case, we simulate the extreme case of mode drop, where we keep only one of the classes in the test set. We follow the same approach as in the previous experiment but retain only one class. Therefore, for a dataset with \\(N\\) classes, we obtain \\(N\\) values for each score. Figures 6 and 7 portray the results. To ease the comparison across datasets, the cube root of \\(rel(Score)\\) is presented for \\(FITD\\) and \\(ITS\\).

Figure 6: Relative \\(ITS\\) and \\(FITD\\) scores for the extreme mode drop scenario.

Figure 7: Relative \\(TRTS\\) and \\(TSTR\\) scores for the extreme mode drop scenario.

All scores respond correctly to the extreme mode drop scenario except \\(TRTS\\). **FITD**: In the case of \\(FITD\\), the extreme mode drop drastically changes the properties of the assumed Gaussian in latent space, and this shift is clearly visible in the \\(FITD\\) response. The change is more prominent for datasets with a large number of classes. **ITS**: Assuming error-free classification, the drop of all modes except one gives \\(ITS=1\\), since \\(H(P(y))=0\\) and \\(H(P(y\\mid x))=0\\). Hence, \\(rel(ITS)=ITS_{base}-1\\), which approaches \\(N-1\\) for a dataset with \\(N\\) classes. In practice, accounting for classification error, the observed \\(ITS\\) response remains close to this theoretical expectation. **TRTS**: As in the previous experiment, the \\(TRTS\\) response cannot highlight the extreme mode drop, since \\(P_{model}\\subset P_{data}\\). **TSTR**: \\(TSTR\\) detects the extreme mode drop on all datasets. Furthermore, the divergence from \\(TSTR_{base}\\) grows with the number of classes.
Please note that \\(TSTR_{base}\\) is low for datasets with \\(N>20\\), since we trained the base model in the same way for all datasets regardless of the number of classes.

#### 5.3.3 Successive Mode Dropping

In our final experiment, we bridge the gap between the first and second experiments: we drop the modes one by one and inspect the response of our assessment methods. Figure 8 demonstrates the scores on four datasets (the remaining visualizations are presented in Appendix C). The results are consistent with the previous experiments.

Figure 8: Changes in the scores when modes are removed one by one.

**FITD**: \\(FITD\\) is less sensitive when only a few classes are dropped. However, when the number of dropped classes crosses a certain threshold, \\(FITD\\) increases sharply. Seemingly, the properties of the assumed Gaussian distribution are quite robust against removing a few samples from the test set; once samples belonging to most classes are removed, however, the distribution changes dramatically with every additional dropped class. **ITS and TSTR**: \\(ITS\\) and \\(TSTR\\) decrease linearly with the number of dropped classes. **TRTS**: \\(TRTS\\) does not change under successive mode drops.

### Experiment 3 - Mode Collapse

The mode collapse problem happens when multiple modes of the real data are averaged in the generated data and presented as a single mode [2]. To simulate mode collapse, we replace the samples of each class with their averaged sample, computed by averaging the samples at each time step: given a set of samples \\(\\{X^{0},X^{1},\\ldots,X^{N}\\}\\) from a class, where each sample consists of \\(T\\) time steps (\\(X^{i}=\\{X^{i}_{0},X^{i}_{1},\\ldots,X^{i}_{T}\\}\\)), we define the averaged sample \\(\\overline{X}\\) at time step \\(t\\in T\\) as \\[\\overline{X_{t}}=\\frac{1}{N}\\sum_{i=0}^{N}X^{i}_{t}. \\tag{4}\\] Figures 9 and 10 summarize the performance of our scores, relative to their base scores, in detecting this simulated mode collapse.

Figure 9: Cube root of the relative \\(ITS\\) and \\(FITD\\) scores when mode collapse happens in a dataset.

Figure 10: Relative \\(TRTS\\) and \\(TSTR\\) scores when mode collapse happens in a dataset.
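The simulation of Eq. (4) amounts to one averaging step per class; a minimal sketch (helper name ours):

```python
import numpy as np

def collapse_modes(x, y):
    # Replace every sample of each class by the per-time-step average, Eq. (4).
    x_avg = x.astype(float).copy()
    for cls in np.unique(y):
        idx = y == cls
        x_avg[idx] = x[idx].mean(axis=0)  # broadcast the averaged sample
    return x_avg
```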
**ITS**: In the presence of a perfect classifier, \\(ITS\\) should reach its maximum, since then \\(H(P(y\\mid x))=0\\) in (1), and we have maximum diversity among the labels, hence \\(H(P(y))=\\log N\\) and \\(ITS=N\\), where \\(N\\) indicates the number of classes. However, the averaged sample might not accurately represent the samples of its class, so there is a high chance of misclassification. Since the generated samples are few, limited to a single averaged sample per class, any misclassification significantly moves \\(ITS\\) away from its expected value. This is why, in Figure 9, \\(ITS\\) even improves on some datasets. **FITD**: \\(FITD\\) responds correctly to mode collapse on most datasets, but the strength of its response is inconsistent across datasets. Again, interpreting the \\(FITD\\) response depends on how the samples are mapped into latent space. If the averaged samples replicate the Gaussian distribution properties of the test set, \\(FITD\\) stays close to \\(FITD_{base}\\); otherwise, \\(FITD\\) diverges from its base score. **TRTS**: \\(TRTS\\) displays hit-and-miss behavior. If the averaged samples represent the original samples of the dataset well, they are classified correctly, and \\(TRTS\\) cannot detect the mode collapse. Otherwise, the misclassification of the averaged samples reflects the mode collapse problem. **TSTR**: \\(TSTR\\) can detect mode collapse on most datasets. When mode collapse happens, the diversity of the generated samples decreases. It is then difficult for a classifier to learn the probability distribution of a class accurately, given only samples from the mode of the distribution. Thus, we expect a high classification error once the classifier is evaluated on the real data, due to the limited generalization capacity of the model. The \\(TSTR\\) behavior illustrated in Figure 10 is aligned with these expectations.

## 6 Conclusion and Final Remarks

With the latest advancements in deep neural networks, generative models are on the rise; however, their application in the time-series domain has been hindered by the lack of a standard assessment method. In this work, we alleviate this problem by introducing a framework that transfers two widely used evaluation metrics from the image domain, namely \\(IS\\) and \\(FID\\), to time series. We employed the InceptionTime classifier as the backbone of our framework and introduced \\(ITS\\) and \\(FITD\\) for quantifying the performance of generative models in the time-series domain. We conducted various experiments on 80 datasets to investigate the capabilities of \\(ITS\\) and \\(FITD\\) in detecting common problems of generative models, and compared their discriminative abilities with \\(TRTS\\) and \\(TSTR\\), two commonly used assessment methods for class-conditional generative models. Table 1 summarizes the capabilities of these metrics in detecting three problems that generative models commonly face. Furthermore, our main findings on each metric are summarized as follows:

* **ITS** responds correctly to all the studied problems on most datasets; however, its behavior is most consistent in detecting the mode drop problem. Furthermore, \\(H(P(y))\\) appears to be the most decisive component of the \\(ITS\\) response in detecting the studied problems.
* **FITD** behavior depends heavily on how the samples are mapped into latent space. Since the transformation to latent space is complex and non-linear, the interpretation of the \\(FITD\\) response is not straightforward. Additionally, since \\(FITD\\) has no upper bound, it can quantify the quality of generated samples better than the other metrics.
* **TRTS** performance is disappointing compared to the others. In the presence of the other metrics, computing \\(TRTS\\) is unnecessary for investigating the studied problems.
* **TSTR** shines when the generative model has learned only a subset of the real distribution. It is therefore the most reliable metric for detecting mode drop and mode collapse.

This work can be extended by adopting the recent advancements in generative model assessment on the image domain [3] to the time-series domain. Another potential direction is to extend the list of studied problems or to investigate other aspects of the evaluation metrics, such as computation time or sample efficiency.

| | Decline in Quality | Mode Drop | Mode Collapse |
| --- | --- | --- | --- |
| ITS | + | ++ | + |
| FITD | ++ | + | + |
| TRTS | + | - | - |
| TSTR | - | ++ | ++ |

Table 1: Summary of the scores' capabilities in detecting common problems of generative models.
## Appendix B Extra Visualization for Decline in Quality Experiment

Figures B2 to B8 provide visualizations of the studied metrics' responses for the decline-in-quality experiment on all datasets in the UCR archive.

Figs. B2 to B8: Changes in the studied metrics when data quality declines as noise is introduced into the data progressively.

## Appendix C Extra Visualization for Successive Mode Drop Experiment

The figures in this appendix (Fig. C9 onward) visualize the studied metrics' responses when data modes are dropped progressively for the datasets in the UCR archive. Only datasets with more than five classes are presented, to improve visualization.

Fig. C9: Changes in the studied metrics when the modes are removed one by one from a dataset.

## References

* [1] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. Advances in Neural Information Processing Systems **27** (2014)
* [2] Borji, A.: Pros and cons of GAN evaluation measures. Computer Vision and Image Understanding **179**, 41-65 (2019)
* [3] Borji, A.: Pros and cons of GAN evaluation measures: New developments. CoRR **abs/2103.09396** (2021)
* [4] Esteban, C., Hyland, S.L., Rätsch, G.: Real-valued (medical) time series generation with recurrent conditional GANs. arXiv preprint arXiv:1706.02633 (2017)
* [5] Haradal, S., Hayashi, H., Uchida, S.: Biosignal data augmentation based on generative adversarial networks. In: 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 368-371 (2018). IEEE
* [6] Wiese, M., Bai, L., Wood, B., Buehler, H.: Deep hedging: learning to simulate equity option markets. Available at SSRN 3470756 (2019)
* [7] Smith, K.E., Smith, A.O.: Conditional GAN for time-series generation. arXiv preprint arXiv:2006.16477 (2020)
* [8] Ramponi, G., Protopapas, P., Brambilla, M., Janssen, R.: T-CGAN: Conditional generative adversarial network for data augmentation in noisy time series with irregular sampling. arXiv preprint arXiv:1811.08295 (2018)
* [9] Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the Inception architecture for computer vision. CoRR **abs/1512.00567** (2015)
* [10] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255 (2009). IEEE
* [11] Ismail Fawaz, H., Lucas, B., Forestier, G., Pelletier, C., Schmidt, D.F., Weber, J., Webb, G.I., Idoumghar, L., Muller, P.-A., Petitjean, F.: InceptionTime: Finding AlexNet for time series classification. Data Mining and Knowledge Discovery **34**(6), 1936-1962 (2020)
* [12] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium.
Advances in Neural Information Processing Systems **30** (2017)
* [13] Chen, Y., Keogh, E., Hu, B., Begum, N., Bagnall, A., Mueen, A., Batista, G.: The UCR Time Series Classification Archive (2015)
* [14] Arora, S., Ge, R., Liang, Y., Ma, T., Zhang, Y.: Generalization and equilibrium in generative adversarial nets (GANs). In: International Conference on Machine Learning, pp. 224-232 (2017). PMLR
Generative models are designed to address the data scarcity problem. Even with the exploding amount of data made available by computational advancements, some applications (e.g., health care, weather forecasting, fault detection) still suffer from data insufficiency, especially in the time-series domain. Generative models are thus essential and powerful tools, but they still lack a consensual approach for quality assessment. This deficiency hinders the confident application of modern implicit generative models to time-series data. Inspired by assessment methods in the image domain, we introduce the InceptionTime Score (\\(\\mathbf{ITS}\\)) and the Frechet InceptionTime Distance (\\(\\mathbf{FITD}\\)) to gauge the qualitative performance of class-conditional generative models in the time-series domain. We conduct extensive experiments on 80 different datasets to study the discriminative capabilities of the proposed metrics alongside two existing evaluation metrics: Train on Synthetic, Test on Real (\\(\\mathbf{TSTR}\\)) and Train on Real, Test on Synthetic (\\(\\mathbf{TRTS}\\)). The extensive evaluation reveals that the proposed assessment method, i.e., \\(\\mathbf{ITS}\\) and \\(\\mathbf{FITD}\\) in combination with \\(\\mathbf{TSTR}\\), can accurately assess class-conditional generative model performance.

Keywords: generative models; assessment; time-series

Footnote †: [https://www.cs.ucr.edu/~eamonn/time_series_data_2018/](https://www.cs.ucr.edu/~eamonn/time_series_data_2018/)
# Guided patch-wise nonlocal SAR despeckling

Sergio Vitale, Davide Cozzolino, Giuseppe Scarpa, Luisa Verdoliva, and Giovanni Poggi

S. Vitale is with the Engineering Department of University Parthenope, Naples, Italy. e-mail: [email protected]. The other authors are with the Department of Electrical Engineering and Information Technology, University Federico II, Naples, Italy. e-mail: {firstname.lastname}@unina.it.

## I Introduction

Remote sensing imagery represents nowadays an invaluable source of information for the analysis of the Earth's state. Among the many types of sensors, synthetic aperture radar (SAR) systems are especially precious, since they observe features that are not captured by optical sensors and acquire data on the target scene independently of the weather and lighting conditions. SAR images are routinely exploited in many key application fields, such as the analysis of the environment [1, 2] or urban planning [3, 4]. Unfortunately, they are severely impaired by the presence of speckle, caused by the coherent nature of the scattering phenomena. Speckle noise strongly degrades the quality of SAR images, thereby affecting the performance of subsequent automated tasks, like segmentation [5, 6] or classification [7, 8], and causing problems even to human interpreters. To tackle this problem, the scientific community has produced a major research effort on SAR despeckling in the last decades [9]. A large number of methods have been proposed, working in the spatial domain [10, 11, 12], using wavelet transforms [13, 14, 15], sparse representations [16, 17, 18], variational approaches [19, 20, 21] and, very recently, deep learning [22, 23]. As of today, however, the most successful approach to SAR despeckling appears to be the nonlocal paradigm [24, 25], which has produced powerful and widespread methods such as PPB [26] and SAR-BM3D [27]. Key to this success is the ability to recognize "similar" pixels, that is, pixels characterized by the same underlying signal. This allows, for each target pixel, to single out its best predictors in the whole image and use them to perform reliable estimation. Therefore, in nonlocal filtering the main issue is how to find such good predictors. This problem is usually addressed by using suitable patch-based similarity measures [28], leveraging the contextual information conveyed by patch-wise analysis. However, speckle also impacts this process, reducing the ability to find good predictors and, eventually, impairing the filtering performance. We highlight the limits of the nonlocal approach for SAR despeckling with the help of Fig. 1, which shows a single-look SAR image (top-left) together with an optical image of the same scene co-registered with it (top-right). The SAR image carries precious information on the scene which is not available in the optical bands. Nevertheless, the signal of interest is overwhelmed by speckle noise: the scene structure is hardly visible, and the boundaries between different land covers can be barely detected. This impacts heavily on nonlocal filtering, preventing the correct localization of the best predictors. As an example, for the target patch marked by a green box in the figure, the selected predictors (red boxes) are dominated by speckle and hence spread all around the target. With such a poor selection of predictors, plain nonlocal filtering can only provide unsatisfactory results (bottom-left). On the optical image, however, which is virtually noiseless, finding good predictors is very easy.
Now the selected patches (red boxes) clearly exhibit a signal content similar to the target, and are very likely the best possible predictors.

Fig. 1: Optical-guided nonlocal filtering. Top: co-registered SAR and optical images with a target patch (green) and its best matches (red). The similarity measure singles out the best predictors in the optical image, not so in the SAR image. Bottom: output of a conventional SAR-domain nonlocal filter (PPB), and of the proposed optical-guided nonlocal filter. The optical guide allows to better preserve all image structures, without introducing filtering artifacts.

Based on these observations, we decided to leverage optical data to improve nonlocal SAR despeckling, obtaining promising results, as shown in the figure (bottom-right). Of course, data fusion is nothing new in remote sensing. The large abundance of imagery from sensors of different types offers a wealth of opportunities [29, 30, 31, 32] that can be exploited for many remote sensing applications [33, 34, 35, 36]. However, using optical data to support SAR despeckling requires great care. The example of Fig. 2 provides some insight into this point. In fact, despite the careful co-registration, and the obvious correspondence of the observed scene, important differences exist between optical and SAR images. These regard not only the signal amplitudes, which show no obvious relationship due to the completely different imaging mechanisms, but also the signal structure, especially in the presence of man-made objects and regions characterized by a significant orography (not present in this example). Therefore, while optical data can certainly help guide the despeckling process, there is a risk of injecting alien information into the filtered SAR image, generating annoying artifacts. Based on these concepts, in [37] we proposed a nonlocal despeckling technique for SAR images, driven by co-registered optical images. Within the frame of a generalized bilateral filtering, optical data were used to properly weight predictor pixels for the current target. To prevent the injection of alien optical structures, the SAR image was preliminarily classified, and the optical guide was used only in low-activity areas, switching to a full SAR-domain technique in high-activity areas. In this work (a preliminary version of which was presented in [38]) we keep pursuing the same general approach but propose a much more effective and simpler optical-guided SAR despeckling method. We replace the pixel-wise bilateral filter of [37] with patch-wise nonlocal means. Moreover, to avoid optical-related artifacts, we use a simple statistical test which discards unreliable predictors on the fly, during the filtering process. Extensive experiments on real-world imagery prove the potential of the proposed method, also in comparison with state-of-the-art reference methods and with our own previous proposal. In addition, by avoiding the preliminary classification phase and the external complementary filter, the method is much faster and easier to use than [37]. Note that, with respect to our conference paper [38], we introduce here a reliability test which allows us to despeckle effectively also high-activity areas, keeping all available information and removing only bad predictors. Moreover, we perform a theoretical analysis of the proposed test, and carry out a much deeper experimental analysis of performance.
In the rest of the paper, after recalling previous work (Section 2), we describe the proposed solution (Section 3), study the effect of key parameters on performance (Section 4), discuss experimental results (Section 5), and eventually draw conclusions (Section 6). The SAR-domain distance used in the reliability test is analyzed in Appendix A.

## II Previous work

The optical-guided despeckling paradigm was first proposed in [37]. It was observed, there, that a virtually noiseless optical image, co-registered with the target SAR image, can provide precious information to support the speckle removal process. Although SAR and optical images refer to completely different imaging mechanisms, and hence there is no relationship between their signal amplitudes, they share important structural information. A boundary between two fields, for example, keeps the same geometry in both the optical and the SAR image. This structural information can be exploited by means of guided filtering. However, one must also be aware that such a structural similarity does not always hold. This is the case of man-made areas, for example, characterized by intense double reflection lines in SAR images that have no correspondence in optical images. Therefore, care must be taken not to generate filtering artifacts due to the optical guide.

Fig. 2: Differences between co-registered SAR and optical images. In the presence of man-made structures the images are profoundly different, and using optical data to guide despeckling may cause serious filtering artifacts.

In [37] the problem was solved by introducing a preliminary soft classification phase. A high-level scheme of the method is shown in Fig. 3. The single-look input SAR image is filtered twice: by means of a guided despeckling tool, leveraging the co-registered optical image, and by means of a conventional SAR-domain despeckling filter. A soft classifier [39] distinguishes between low-activity and high-activity regions, the latter possibly related to man-made areas where optical and SAR geometries differ. The output image is then obtained as a linear combination of the two filtered images, with weights given by the continuous-valued classifier. In low-activity areas, the guided filter prevails, while the opposite happens in high-activity areas.

Fig. 3: Block diagram of the optical-guided despeckling method of [37].

For the despeckling of high-activity areas we used SAR-BM3D [27, 40], which is known [25] to guarantee a good preservation of fine details and man-made structures. The guided filter for low-activity areas, instead, is a generalization of the bilateral filter [41]. Assuming the usual multiplicative model, \\(z(t)=x(t)u(t)\\), with \\(x(t)\\) and \\(z(t)\\) the true and observed intensity values at pixel \\(t\\), and \\(u(t)\\) a Gamma random variable (RV) modeling the speckle, the estimated intensity \\(\\widehat{x}(t)\\) is given by the weighted average of predictors \\(z(s)\\) drawn from a small neighborhood \\(\\Omega(t)\\) of \\(t\\): \\[\\widehat{x}(t)=\\sum_{s\\in\\Omega(t)}w(s,t)\\:z(s) \\tag{1}\\] The weight associated with the predictor at location \\(s\\) is computed as \\[w(s,t)=C\\exp\\left\\{-\\alpha\\|s-t\\|^{2}-\\lambda_{O}\\,d_{O}[o(s),o(t)]-\\lambda_{S}\\,d_{S}[z(s),z(t)]\\right\\} \\tag{2}\\] where \\(o(\\cdot)\\) indicates the vector-valued optical data, and \\(C\\) is a normalizing constant. The weights depend both on the spatial distance, \\(\\|s-t\\|^{2}\\), as in an ordinary Gaussian filter, and on amplitude distances in the SAR domain, \\(d_{S}[z(s),z(t)]\\), and in the optical domain, \\(d_{O}[o(s),o(t)]\\). A simple Euclidean norm is used for the optical-domain distance, while in the SAR domain we use the dissimilarity measure (loosely referred to as distance in the following) proposed in [26] for multiplicative noise. The weights \\(\\alpha\\), \\(\\lambda_{O}\\) and \\(\\lambda_{S}\\) are set by preliminary experiments on training data. When \\(\\lambda_{O}=0\\) a simple bilateral filter in the SAR domain is obtained. On the contrary, if \\(\\alpha=\\lambda_{S}=0\\), the weights depend only on the optical-domain distances. It is worth underlining, however, that in the filtering procedure there is no leakage of optical data into the filtered output. The filtered value in Eq. (1) is a linear combination of exclusively SAR-domain original values, and optical data impact only their weighting. Likewise, the soft classifier uses only SAR data as input. Hence, the optical guide only helps locating the predictors that are most similar to the target or, under a different perspective, de-emphasizing the contributions of predictors that are not really similar to the target despite their low spatial and SAR-domain distances.
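As an illustration of Eq. (2), the following sketch computes the unnormalized weight of a single predictor pixel. The array layout and function name are our own choices, and the SAR-domain dissimilarity is written in the zero-at-equality form discussed later in Eq. (10).

```python
import numpy as np

def bilateral_weight(z, o, s, t, alpha, lam_o, lam_s):
    # Unnormalized weight of predictor pixel s for target pixel t, Eq. (2).
    # z: (H, W) SAR intensity image; o: (H, W, M) co-registered optical bands;
    # s, t: (row, col) tuples.
    spatial = np.sum((np.array(s, float) - np.array(t, float)) ** 2)
    d_o = np.sum((o[s] - o[t]) ** 2)                            # optical distance
    d_s = np.log((z[s] + z[t]) / (2.0 * np.sqrt(z[s] * z[t])))  # SAR distance
    return np.exp(-alpha * spatial - lam_o * d_o - lam_s * d_s)
```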
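## III Proposed method

This work introduces two major improvements with respect to [37], consisting in

* replacing the pixel-wise generalized bilateral filter with patch-wise nonlocal means;
* using a reliability test to reject poor predictors on the fly and prevent structural leakages from optical data.

The resulting filter, besides providing a much better performance, is much simpler and easier to use, since we remove altogether the activity-based classifier and do not need external filters to manage high-activity areas.

### _Going patch-wise_

In recent years, there has been a steady trend towards patch-based processing for SAR imagery [24]. Patch-wise nonlocal means, in particular, is well known to significantly outperform the pixel-wise version. The key idea is to compute a large number of estimates of the same pixel, which are eventually aggregated to improve accuracy. This is obtained by applying the nonlocal weighted average to all pixels of a patch, not just its center. Let us consider a target patch, \\(\\mathbf{z}(t)=\\{z(t+k),k\\in\\mathbb{P}\\}\\), where \\(t\\) is an anchor pixel (for example, the patch center), and \\(\\mathbb{P}\\) indicates the set of spatial offsets with respect to \\(t\\). Then, an estimate of the clean patch \\(\\mathbf{x}(t)\\) is obtained through a patch-wise nonlocal average \\[\\widehat{\\mathbf{x}}(t)=\\sum_{s\\in\\Omega(t)}w(s,t)\\mathbf{z}(s) \\tag{3}\\] where the weights \\(w(s,t)\\) depend on a suitable patch-wise similarity measure. This is the same expression as in Eq. (1), except that it now involves all pixels in the target patch, namely \\[\\widehat{x}(t+k)=\\sum_{s\\in\\Omega(t)}w(s,t)\\:z(s+k)\\:\\:\\:\\forall k\\in\\mathbb{P} \\tag{4}\\] Since a pixel belongs to multiple target patches, it will be estimated several times, allowing for the eventual aggregation of all estimates.

A brute-force sketch of this patch-wise scheme is given below, with the weight computation abstracted into a `weight_fn(z_s, z_t)` callback (the concrete weight of Eq. (5) follows in the next subsection). It is meant to illustrate the structure of Eqs. (3)-(4) and the final aggregation, not an efficient implementation.

```python
import numpy as np

def patchwise_nlm(z, weight_fn, p=8, half=19):
    # Patch-wise nonlocal means, Eqs. (3)-(4). Every pixel is estimated once
    # per patch it belongs to; overlapping estimates are averaged at the end.
    h, w = z.shape
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for i in range(h - p + 1):
        for j in range(w - p + 1):
            z_t = z[i:i + p, j:j + p]              # target patch
            est, wsum = np.zeros((p, p)), 0.0
            for si in range(max(0, i - half), min(h - p, i + half) + 1):
                for sj in range(max(0, j - half), min(w - p, j + half) + 1):
                    z_s = z[si:si + p, sj:sj + p]  # predictor patch
                    wk = weight_fn(z_s, z_t)
                    est += wk * z_s
                    wsum += wk
            acc[i:i + p, j:j + p] += est / wsum
            cnt[i:i + p, j:j + p] += 1.0
    return acc / cnt                               # aggregate all estimates
```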
The weights are now computed as \\[w(s,t)=C\\exp\\left\\{-\\lambda[\\gamma d_{S}(s,t)+(1-\\gamma)d_{O}(s,t)]\\right\\} \\tag{5}\\] where \\(d_{S}(s,t)\\) and \\(d_{O}(s,t)\\) are suitable SAR-domain and optical-domain distances, \\(\\gamma\\in[0,1]\\) is a parameter that balances their contributions, and \\(\\lambda\\) is a parameter which determines how fast the weights decay as a function of the distance, and hence impacts the sharpness/smoothness of the filtered image. Note that, unlike in [37], the weights depend only on signal amplitudes, both SAR and optical, not on spatial distances. For the SAR domain, we use a normalized and slightly modified version of the distance proposed in [26] \\[d_{S}(s,t)=\\frac{1}{\\mu_{D}}\\frac{1}{N}\\sum_{k\\in\\mathbb{P}}\\log\\left[\\frac{z(s+k)+z(t+k)}{2\\sqrt{z(s+k)\\,z(t+k)}}\\right] \\tag{6}\\] where \\(N=|\\mathbb{P}|\\) is the patch size, and \\(\\mu_{D}\\) (described in more detail in Subsection III.C and in the Appendix) is the mean of the single-pixel distance under the same-signal hypothesis. The normalization ensures that, in strictly homogeneous regions, predictor patches have unitary average distance from the target, say \\(\\mu_{P}=1\\), with standard deviation \\(\\sigma_{P}=\\sigma_{D}/(\\mu_{D}\\sqrt{N})\\). For the optical domain, instead, we consider the normalized Euclidean distance \\[d_{O}(s,t)=\\frac{1}{MN}\\sum_{i=1}^{M}\\sum_{k\\in\\mathbb{P}}[o_{i}(s+k)-o_{i}(t+k)]^{2} \\tag{7}\\] with \\(M\\) the number of bands.
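A minimal sketch of Eqs. (5)-(7) follows; the default values of `lam` and `gamma` are those of the "sharp" configuration reported in Section IV, and the helper names are our own. In the skeleton above, `weight_fn` can be a closure binding the optical patches and \\(\\mu_{D}\\) to `nl_weight`.

```python
import numpy as np

def d_sar(z_s, z_t, mu_d):
    # Eq. (6): normalized SAR-domain patch dissimilarity (intensities > 0).
    return np.mean(np.log((z_s + z_t) / (2.0 * np.sqrt(z_s * z_t)))) / mu_d

def d_opt(o_s, o_t):
    # Eq. (7): Euclidean distance averaged over the N pixels and M bands.
    return np.mean((o_s - o_t) ** 2)

def nl_weight(z_s, z_t, o_s, o_t, mu_d, lam=0.002, gamma=0.15):
    # Eq. (5), up to the normalizing constant C.
    return np.exp(-lam * (gamma * d_sar(z_s, z_t, mu_d)
                          + (1.0 - gamma) * d_opt(o_s, o_t)))
```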
### _Discarding unreliable predictors_

Patch-level processing allows for a much stronger noise suppression than pixel-level processing [24]. Nonetheless, for our application, patch-level operations entail also some risks. Consider, for example, a low-activity target patch with some high-activity patches in its neighborhood, \\(\\Omega(t)\\), maybe patches with double reflection lines. Since _all_ patches in \\(\\Omega(t)\\) are averaged to estimate the clean target, the structures observed in the predictors will be reproduced in the estimate, with an attenuation that depends on the optical-domain and SAR-domain distances. Hence, insufficient attenuation of high-activity patches will generate visible artifacts. This can easily happen when the SAR-domain distance is not too large (_e.g._, just a few double-reflection pixels in the predictor) and the optical-domain distance is not particularly discriminative. Fig. 4 shows a clear example of this behavior: a few high-intensity pixels, included in the estimation of the surrounding patches, produce disturbing artifacts in a large area of the filtered image.

Fig. 4: Top: co-registered SAR and optical images. Center: strong reflectors cause filtering artifacts in their whole neighborhood (left); these are removed thanks to the test on the SAR distance (right). Bottom: limiting the number of predictors improves resolution in low-activity areas, with a final result (left) which compares quite favourably with conventional SAR-domain filters, such as SAR-BM3D (right).

Of course, to reduce such artifacts, one could modify the filter's parameters, increasing the relative weight of the SAR-domain vs. the optical-domain distance. However, this would reduce the benefit of the optical guide in other areas where such artifacts do not occur. To cope with this problem, in [37] we decided to avoid patch-level processing altogether and to treat low-activity and high-activity areas differently. Here, we use a more effective solution. We keep using patch-wise processing but, for each target patch, carry out a preliminary test to single out unreliable predictors and exclude them altogether from the nonlocal average. That is, we perform the nonlocal average of Eq. (3) replacing \\(\\Omega(t)\\) with a new set \\(\\Omega^{\\prime}(t)\\) such that \\[\\Omega^{\\prime}(t)=\\{s\\in\\Omega(t):d_{S}(s,t)<T\\} \\tag{8}\\] With this solution, we are free to select the filter parameters that optimize performance in low-activity areas. Moreover, by removing problematic patches beforehand, we can keep using patch-wise averages also in high-activity areas, with clear benefits in terms of speckle suppression. Only in the extreme case in which no predictor is reliable, maybe due to the presence of corner reflectors, is the target patch not filtered at all, which makes sense in this condition. However, since each pixel belongs to many patches, it is still likely that many individual pixels will be filtered anyway. Of course, the success of this strategy depends on the discriminative power of the SAR-domain distance and on a suitable selection of the threshold. These aspects are analyzed in the following subsection. In the example of Fig. 4, however, it is already clear that this simple test impacts heavily on the quality of the filtered image. A further problem, besides filtering artifacts, is image oversmoothing, and the consequent loss of resolution. This effect is especially visible at the boundary between fields, where signal differences are small both in the optical and in the SAR domain. In this situation, the weights depend only mildly on the signal, and more strongly on the intense SAR-domain speckle, causing an incoherent averaging of patches and, ultimately, the loss of details. In Fig. 4, for example, a small road between two fields goes completely lost. These losses can be reduced by limiting the maximum number of predictors to \\(S_{0}<S\\) patches, with \\(S\\) the search area size, choosing those with the smallest optical-domain distance. Therefore, the new set \\(\\Omega^{\\prime\\prime}(t)\\subseteq\\Omega^{\\prime}(t)\\) has cardinality \\[|\\Omega^{\\prime\\prime}(t)|=\\min\\{|\\Omega^{\\prime}(t)|,S_{0}\\} \\tag{9}\\] Thanks to this limitation, in homogeneous areas of the image many irrelevant patches are excluded from the average, emphasizing fine details that would be lost otherwise. In high-activity areas, instead, this limitation has no effect, since most predictors are already discarded by the SAR-domain test. Back to Fig. 4, we see that this further limitation allows recovering the road as well as many other details, with no big loss in terms of speckle rejection. The final result of filtering shows both strong speckle suppression and good detail preservation, comparing favourably with state-of-the-art filters.
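The two selection rules of Eqs. (8)-(9) can be sketched in a few lines, assuming the SAR-domain and optical-domain distances of all candidate predictors in \\(\\Omega(t)\\) have already been computed (function name ours):

```python
import numpy as np

def select_predictors(d_s, d_o, T, s0):
    # Eq. (8): keep only predictors passing the SAR-domain reliability test;
    # Eq. (9): of the survivors, keep at most s0 with the smallest
    # optical-domain distance. d_s, d_o: 1D arrays over Omega(t).
    idx = np.flatnonzero(d_s < T)
    if idx.size > s0:
        idx = idx[np.argsort(d_o[idx])[:s0]]
    return idx
```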
### _SAR-domain distance and reliability test_

Our filtering strategy relies heavily on the reliability test's capacity to reject bad predictors. Therefore, it is worth investigating the SAR-domain distance in more depth, also to gain sensitivity on how to set the decision threshold \\(T\\). Let us focus, for the time being, on the _pixel-wise_ SAR-domain distance \\[D[z(s),z(t)]=\\log\\left[\\frac{z(s)+z(t)}{2\\sqrt{z(s)\\,z(t)}}\\right] \\tag{10}\\] This is not exactly the distance proposed in [26], because of the extra 2 at the denominator, which we include to ensure zero distance for \\(z(s)=z(t)\\). Based on measured distances, we would like to identify pixels that have a similar signal component as the target, that is, \\(x(s)\\simeq x(t)\\). Of course, since the observed intensity, \\(z(s)=x(s)u(s)\\), depends also on the speckle, the distance depends on the speckle too. Let us consider two limiting cases: _(i)_ speckle-free data, and _(ii)_ homogeneous signal. In the first case, \\(u(s)=u(t)=1\\), the distance depends only on the ratio of the signal intensities \\(\\rho_{x}^{2}=x(s)/x(t)\\), that is \\[D_{\\rm SF}[z(s),z(t)]=\\log\\left[\\frac{x(s)+x(t)}{2\\sqrt{x(s)\\,x(t)}}\\right]=\\log\\left[\\frac{\\rho_{x}}{2}+\\frac{1}{2\\rho_{x}}\\right] \\tag{11}\\] In Fig. 5 (left) we plot \\(D_{\\rm SF}\\) as a function of the signal intensity ratio. For \\(\\rho_{x}\\) close to 1, the distance is rather flat around the minimum, zero, and begins growing linearly (in semilog axes) only for much larger/smaller values. Therefore, it is not very discriminative for samples with relatively close intensities. On the other hand, with homogeneous signal, \\(x(s)=x(t)\\), the distance is the random variable \\[D[u(s),u(t)]=\\log\\left[\\frac{u(s)+u(t)}{2\\sqrt{u(s)u(t)}}\\right] \\tag{12}\\] with \\(u(s)\\) and \\(u(t)\\) independent Gamma-distributed RV's, with unit mean and shape parameter equal to the number of looks of the image, \\(L\\). In Appendix A we compute the probability density function (pdf) of \\(D\\) as a function of \\(L\\). Fig. 5 (right) shows two such pdf's, for \\(L\\)=1 and \\(L\\)=16. In the relatively uninteresting case of \\(L\\)=16 (low noise) the pdf is highly peaked around 0, that is, homogeneous pixels do have small SAR-domain distances. However, in the more interesting and relevant case of \\(L=1\\) (single-look images), the pdf is much flatter and has a long, non-negligible tail.

Fig. 5: Left: plot of the speckle-free distance \\(D(\\rho)=\\log(\\rho/2+1/2\\rho)\\) in semilog axes. Right: theoretical pdf of \\(D\\) for equal signal intensity (\\(\\rho_{x}=1\\)) and unit-mean Gamma-distributed speckle, \\(L\\)=1 (red) and \\(L\\)=16 (green).

The plots of Fig. 5 can be used to gain insight into the discriminative power of the SAR-domain distance. As an example, for \\(x(s)/x(t)=\\rho_{x}^{2}=2\\), the speckle-free distance is about 0.2. This should allow one to recognize that these pixels are non-homogeneous. However, for homogeneous pixels, \\(x(s)=x(t)\\), the single-look distance exceeds 0.2 with probability 0.4. This means that a 0.2 distance provides little or no information on the quality of the predictor. For a more precise analysis, the pdf of the distance for an arbitrary signal intensity ratio is necessary. Lacking a closed-form pdf, we resort to Monte Carlo simulation. Fig. 6 (left) shows the empirical pdf's for \\(\\rho_{x}^{2}=1\\) (solid red) and \\(\\rho_{x}^{2}=2\\) (dashed blue). As expected, they largely overlap, indicating that no reliable discrimination is possible and justifying, in hindsight, the classification-based solution proposed in [37]. In this work, however, we use patch-wise distances obtained by summing pixel-wise distances over many samples. Assuming, for the sake of simplicity, two patches with a constant signal intensity ratio, that is \\[x(s+k)/x(t+k)=\\rho_{x}^{2},\\ \\ \\forall k\\in\\mathbb{P} \\tag{13}\\] the patch-wise distance becomes the sum of \\(N\\) independent identically distributed RV's, well approximated by a Gaussian law. Fig. 6 (right) shows again the estimated pdf's for \\(\\rho_{x}^{2}=1\\) and \\(\\rho_{x}^{2}=2\\) when 10\\(\\times\\)10-pixel patches are considered. As expected, same-signal and different-signal patches now have well separated pdf's, suggesting that a test based on patch-wise distances can provide reliable indications.

Fig. 6: Empirical pdf of \\(D\\) with unit-mean Gamma-distributed speckle, \\(L\\)=1, for \\(\\rho_{x}^{2}=1\\) (solid red) and \\(\\rho_{x}^{2}=2\\) (dashed blue). Left: pixel-wise distance. Right: patch-wise distance, with 10\\(\\times\\)10-pixel patches.

Note that, for good predictors, that is, patches similar to the target, the constant-ratio hypothesis with \\(\\rho_{x}=1\\) is quite reasonable. Under this hypothesis, the mean and variance of the pixel-wise distance, \\(\\mu_{D}\\) and \\(\\sigma_{D}^{2}\\), are computed in Appendix A, and hence the approximating Gaussian curve is perfectly known. Therefore, we can set the threshold with a Neyman-Pearson criterion, deciding in advance which fraction of the good predictors can be lost to ensure rejection of virtually all the bad ones.
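The Monte Carlo experiment and the resulting threshold rule can be sketched as follows; function names and the trial count are our own choices, and the Gamma speckle is parameterized with shape \\(L\\) and scale \\(1/L\\) so that its mean is one.

```python
import numpy as np

def patch_distance_samples(rho2, L, n_pix, n_trials=200_000, seed=0):
    # Monte Carlo draws of the (unnormalized) patch-wise distance for a
    # constant signal-intensity ratio rho2 = x(s)/x(t) and Gamma(L, 1/L)
    # speckle, as used to estimate the pdfs of Fig. 6. Divide by mu_D to
    # obtain the normalized distance of Eq. (6).
    rng = np.random.default_rng(seed)
    z_s = rho2 * rng.gamma(L, 1.0 / L, (n_trials, n_pix))
    z_t = rng.gamma(L, 1.0 / L, (n_trials, n_pix))
    return np.log((z_s + z_t) / (2.0 * np.sqrt(z_s * z_t))).mean(axis=1)

def threshold(mu_d, sigma_d, n_pix, k=2.0):
    # Neyman-Pearson-style threshold on the normalized distance: under the
    # same-signal hypothesis it is approximately N(1, sigma_P^2), with
    # sigma_P = sigma_D / (mu_D * sqrt(N)), so T = 1 + k * sigma_P fixes in
    # advance the fraction of good predictors that may be lost.
    return 1.0 + k * sigma_d / (mu_d * np.sqrt(n_pix))
```

## IV Exploring key parameters

Like all numerical algorithms, the proposed method depends on several key parameters which impact significantly on performance. Some of them are related to nonlocal means and are set based on literature results, like the search area, 39\\(\\times\\)39, and the patch size, 8\\(\\times\\)8. Others are set based on preliminary experiments to meet all the contrasting quality requirements. Of these latter, the decay and balance parameters, \\(\\lambda\\) and \\(\\gamma\\), have a rather obvious meaning and need no special analysis. Instead, it is worth gaining more insight into how the test threshold, \\(T\\), and the maximum number of predictors, \\(S_{0}\\), impact on performance. To this end we carry out a visual analysis on the T1 clip, shown in Fig. 10, which displays both agricultural and urban areas, and hence allows for a study of all the features of interest.

### _Threshold \\(T\\)_

Fig. 7 refers to a 512\\(\\times\\)192 strip of the T1 image, thin enough to allow for a simple visual comparison of results. From left to right we show the original SAR strip, and 5 filtered versions obtained with threshold \\(T\\) in \\(\\{\\infty,1+4\\sigma,1+2\\sigma,1+\\sigma,1\\}\\), where \\(\\sigma=\\sigma_{P}\\) is the standard deviation of the normalized SAR distance for homogeneous signal. With \\(T=\\infty\\) the test does not operate, and bad predictors contribute to the estimate, causing a severe impairment of the filtered image. Major artifacts are visible in urban areas, due to strong reflectors, but problems arise also in other areas; for example, the dark road at the top almost vanishes with filtering. The test on SAR distances solves most of these problems. Even a large threshold, \\(T=1+4\\sigma\\), which excludes only a tiny fraction of the good predictors, removes most bad ones. Lowering the threshold to \\(T=1+2\\sigma\\), a more selective test is obtained, and a further quality improvement is observed. With further smaller values, however, a large part of the good predictors is rejected together with the bad ones, reducing the speckle rejection ability of the filter in homogeneous areas and causing the appearance of residual noise.

Fig. 7: Visual quality of filtered images as a function of the SAR-distance threshold. When all patches contribute to the estimate (\\(T=\\infty\\)) large filtering artifacts appear. A very small threshold (\\(T=1\\)), instead, causes the rejection of too many predictors, reducing the speckle rejection ability of the filter.

Fig. 8 provides further insight into how the test impacts on the number of predictors. Besides the original T1 clip, on the left, we show a false-color map of the number of patches that pass the test with the selected threshold \\(T=1+2\\sigma\\), going from 1 (dark blue) to the maximum, \\(S\\)=39\\(\\times\\)39 (intense red).

Fig. 8: False-color representation of the number of predictor patches passing the test at level \\(T=1+2\\sigma\\), from 0% (dark blue) to 100% (intense red).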
It clearly appears that in urban areas only a few patches survive the test, those that are structurally similar to the target, thus avoiding filtering artifacts. Instead, a very large number of patches pass the test in homogeneous regions, ensuring good speckle rejection.

### _Maximum number of predictors \\(S_{0}\\)_

To visualize the impact of the maximum number of predictors, \\(S_{0}\\), on filtering quality, we use a thin horizontal strip of the T1 image which contains mostly a mosaic of homogeneous regions and some roads. In fact, in urban areas the number of predictors is already limited by the SAR-domain test and no further constraint is needed. Fig. 9 shows, from top to bottom, the optical guide, the original SAR data, and the output of the filter with \\(S_{0}\\)=1521 (all the patches), 256, and 64. In the first case, the filtered image appears oversmoothed. For example, the thin white road on the right is lost, and the boundary between the fields on the left is much smeared. These structures and others are recovered in the second case, \\(S_{0}\\)=256. On the downside, some textures appear in the fields which cannot be spotted in the original SAR image and may raise the suspicion of incorrect behavior. However, one must remember that only original SAR data are averaged. The optical guide can only give priority to some patches over others and, so to speak, "comb" the data in a certain direction, but the data are all SAR. This is confirmed by the thin white road on the right: a careful inspection reveals clear traces of the road in the original data, which are reinforced by the guided filtering. With these considerations, also the third case, with \\(S_{0}\\)=64, is probably acceptable; however, we prefer to ensure a stronger speckle rejection and a smoother output, and use \\(S_{0}\\)=256 as the default parameter.

Fig. 9: Visual quality of filtered images as a function of the maximum number of predictors in the search area. In homogeneous areas, using all predictors, \\(S_{0}=S\\), causes oversmoothing. With too few predictors, \\(S_{0}=64\\), speckle rejection is less effective.

Based on these experiments, we eventually select a configuration with parameters \\(T=1+2\\sigma=1.34\\), \\(S_{0}=256\\), \\(\\lambda=0.002\\) and \\(\\gamma=0.15\\), ensuring sharp details and good speckle rejection. However, we also consider a more conservative configuration, with \\(S_{0}=S\\) and \\(\\lambda=0.004\\), which outputs somewhat smoother images. Of course, for other datasets these parameters may require some fine tuning, also due to different dynamics. With our COSMO dataset, for example, we only needed to multiply all \\(\\lambda\\)'s by a factor of 4.

## V Experimental analysis

In this Section, we discuss the results of several experiments on two real-world datasets, where the proposed method is compared with several state-of-the-art references. In the following, we describe the datasets, the quality assessment criteria, the reference methods and, finally, the numerical and visual results.

### _Datasets_

We designed two SAR/optical datasets, called for short T-SAR and COSMO in the following.
The T-SAR dataset includes four 512\\(\\times\\)512-pixel clips extracted from a large single-look TerraSAR-X image (courtesy of (c)Infoterra GmbH) acquired over Rosenheim (D) in spotlight mode with single HH polarization on January 27th, 2008. For this image, we do not have an optical reference of comparably high quality; therefore, we resort to the freely available RGB optical images provided by Google Earth Pro. This "barebone" setting is of particular interest for us, since we want the proposed method to be adopted with minimal effort also by users with a limited budget. In the Google Earth repository we found, as of October 2018, several images of the Rosenheim region, spanning from 2002 to 2017, with the closest one acquired on December 31st, 2009, about two years after the target. In Fig. 10 we show the four SAR-optical pairs, called T1, T2, T3, and T4 from now on. Each optical image was co-registered with the corresponding SAR image, used as master. The available geocoding information was used for a first raw alignment, refined manually based on a few prominent keypoints. On average, the co-registration process took about 5 minutes per image. Despite the large temporal gap, all pairs match quite well. On the other hand, some mismatches in the test set are welcome, because good optical references may be unavailable for several reasons, like the presence of clouds, and the proposed method must provide sensible results also in the presence of mismatches or missing data.

Fig. 10: SAR-optical pairs of the T-SAR dataset: T1 to T4, from left to right.

The four SAR-optical pairs of the T-SAR dataset are available online, at www.grip.unina.it, to allow other researchers to experiment with the very same data. Moreover, to ensure the reproducibility of our research, the executable code of the proposed method is also available online at the same address. The COSMO dataset includes four 512\\(\\times\\)512-pixel clips extracted from a large COSMO-SkyMed image acquired over the province of Caserta (I) on July 18th, 2011. In this case, as optical guide we can rely on a GeoEye-1 multispectral image, acquired over the same region on July 26th, 2010 (both images courtesy of the Italian Aerospace Research Center). The same co-registration process as for T-SAR was used. In Fig. 11 we show the four SAR-optical pairs, called C1, C2, C3, and C4 from now on, with a suitable RGB rendering of the 4-band GeoEye-1 images.

Fig. 11: SAR-optical pairs of the COSMO dataset: C1 to C4, from left to right.

### _Quality assessment criteria_

Despite intense research on SAR despeckling, quality assessment is still an open problem [25]. A good filter should guarantee both effective suppression of speckle and faithful preservation of informative image details, such as region boundaries or man-made structures. These are contrasting requirements, as intense image smoothing, which ensures speckle removal, tends to smear all relevant high-frequency details. Therefore, these two aspects should be analyzed individually. To this end, we consider here two objective indicators, _i)_ the equivalent number of looks (ENL), and _ii)_ the ratio image structuredness (RIS). In any case, we leave the last word to the visual inspection of filtered and ratio images.
The ENL is the squared ratio between the mean and the standard deviation of the signal, computed over a homogeneous region of the image. In Fig. 10 and Fig. 11 the regions used to compute the ENL are shown enclosed in a white box. Before filtering, the ENL approximates the number of looks of the SAR image. After despeckling, instead, it should be as large as possible, as the filtered image is supposed to approach a constant. The RIS is computed on the ratio image, that is, the ratio between original and filtered images. With ideal filtering and fully developed speckle, the ratio image becomes a field of i.i.d. speckle samples. Imperfect filtering, instead, causes the leakage of image structures into the ratio, which loses its i.i.d. nature. Hence, neighboring pixels tend to be more similar to one another. We measure this tendency through a suitable function of their joint pdf

\\[p(i,j)=\\Pr(r(t)=i,r(s)=j) \\tag{14}\\]

with \\(r\\) the quantized ratio image, and \\(t,s\\) two 4-connected sites. Inspired by Gomez _et al._ [42] we use the homogeneity textural descriptor proposed by Haralick [43]

\\[H=\\sum_{i}\\sum_{j}p(i,j)\\frac{1}{1+(i-j)^{2}} \\tag{15}\\]

and compare it with the reference value, \\(H_{0}\\), computed on the product of the marginals, \\(p_{0}(i,j)=p(i)p(j)\\), obtaining eventually the RIS index defined as

\\[\\mathrm{RIS}=100\\times\\frac{H-H_{0}}{H_{0}} \\tag{16}\\]
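Both indicators are straightforward to compute. The sketch below is our own illustration, with a quantization scheme (64 levels, horizontal pixel pairs only) chosen for brevity rather than taken from the exact protocol used in the experiments.

```python
import numpy as np

def enl(region):
    """Equivalent number of looks over a (presumably homogeneous) region."""
    return region.mean() ** 2 / region.var()

def ris(original, filtered, levels=64):
    """Ratio image structuredness, cf. Eqs. (14)-(16)."""
    ratio = original / np.maximum(filtered, 1e-12)
    # Quantize the ratio image on a fixed number of gray levels.
    edges = np.quantile(ratio, np.linspace(0.0, 1.0, levels + 1)[1:-1])
    q = np.digitize(ratio, edges)
    # Joint pdf of horizontally adjacent (4-connected) pixel pairs, Eq. (14).
    joint = np.zeros((levels, levels))
    np.add.at(joint, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1.0)
    joint /= joint.sum()
    # Haralick homogeneity on the joint pdf and on the product of marginals.
    i, j = np.indices(joint.shape)
    w = 1.0 / (1.0 + (i - j) ** 2)                       # Eq. (15)
    h = (joint * w).sum()
    marginals = joint.sum(1, keepdims=True) @ joint.sum(0, keepdims=True)
    h0 = (marginals * w).sum()
    return 100.0 * (h - h0) / h0                         # Eq. (16)
```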
### _Reference methods_

We compare the proposed method with a few selected reference methods, chosen because of their diffusion in the community and good performance. More precisely, we include:

* Enhanced-Lee [11]: an enhanced version of Lee's adaptive local filter [44], widespread in the community;
* PPB [26]: a patch-based iterative nonlocal filter, where the output is given by a weighted maximum likelihood estimator with data-driven weights;
* SAR-BM3D [27]: the SAR-domain adaptation of the nonlocal BM3D filter [45], with wavelet shrinkage and Wiener filtering;
* FANS [40]: a faster and spatially-adaptive version of SAR-BM3D, which ensures a better speckle rejection in homogeneous areas;
* G-BF [37]: our previous optical-guided pixel-wise despeckling method, based on the generalized bilateral filter.

PPB and SAR-BM3D, in particular, represent two limiting cases, with PPB ensuring very strong speckle rejection in homogeneous areas at the cost of some smearing of high-frequency details, and SAR-BM3D much better at preserving details but less effective otherwise. For all methods, we selected parameters as suggested in the original papers or, lacking clear indications, so as to optimize filtering quality. For the proposed method, we consider the two configurations described at the end of Section IV, resulting in two versions: a first one (sharp), which makes a more aggressive use of the optical guide, and a second one (smooth), which is more conservative.

### _Results_

Tab. I and Tab. II report the ENL results for all T-SAR and COSMO clips, respectively, with the average values in the last row. Despite the obvious variations from clip to clip, clear indications emerge from these data, well summarized by the average values, which are almost identical for the two datasets. The proposed method ensures a very strong rejection of speckle, with average ENL beyond 100 for the first version and over 600 for the second one. Among reference methods, only PPB provides comparable results, with ENL around 250, while Enhanced-Lee and SAR-BM3D remain below 10. Our previous optical-guided filter also provides a good ENL, about 100 on average. As already said, however, these results may be misleading if not accompanied by visual analysis. Therefore, we now show and comment on the output of all filters for some relevant details selected from our datasets. To allow the reader to conduct a more thorough inspection, we publish online, at www.grip.unina.it, the results of all methods under comparison on all clips of our datasets.

Fig. 12 shows a 256\\(\\times\\)256 section of the C2 clip. Except for some buildings in the left part, the scene includes only fields and some thin roads; the region used to compute the ENL is in the upper-right corner. The visual inspection reveals a number of phenomena hardly captured by the ENL or other numerical measures. It confirms the limited speckle suppression of Enhanced-Lee and SAR-BM3D, but also the well-known detail preservation ability of the latter. FANS produces a smoother output, but many wavelet-related artifacts appear, which significantly impair the perceived quality. Likewise, PPB is very effective in removing speckle, but introduces "brushstroke" patterns and, worse, smears edges and buildings. Also our previous optical-guided filter generates some artifacts in smooth areas, and produces annoying halos of residual speckle in high-activity areas. The proposed optical-guided nonlocal means, in both versions, produces images of much better quality. While man-made structures are faithfully preserved (compare with the original SAR image), speckle is largely rejected in all homogeneous areas. In the "smooth" version, these areas become basically flat, while in the "sharp" version some subtle patterns emerge as a result of the SAR data "combing". It remains to understand whether these are real structures, hidden in the SAR data and recovered through filtering, or whether they come from the alien optical guide. Lacking a clean reference, no ultimate answer can be given. Nonetheless, some of these structures can be clearly spotted in the original SAR image, though overwhelmed by speckle: for example, the diagonal dark strip in the center, or the parallel thin strips above the central bright fork, which in fact are both captured also by Enhanced-Lee and SAR-BM3D. Therefore, a reasonable guideline could be to use the sharp version if the optical guide is temporally close to the SAR image, and the smooth version otherwise. In any case, it is clear that both versions of G-NLM work much better than G-BF, itself based on the use of an optical guide, since they ensure a much better suppression of speckle and do not introduce filtering artifacts.

All the above considerations are reinforced by further visual analyses. Here we only show another detail, in Fig. 13, a 192\\(\\times\\)384 section from the bottom of the T2 clip. Again, the proposed method ensures better speckle rejection (especially the smooth version) and detail preservation (especially the sharp version) than all references, with an excellent overall quality, considering also the very noisy single-look original data.

Fig. 12: Filtering results for the C2 clip. The single-look original is shown in the center for easy comparison. Reference methods all present some shortcomings: limited speckle rejection, loss of resolution, or filtering artifacts. Thanks to the optical guide, G-NLM ensures a much better image quality.

Fig. 13: Filtering results for the T2 clip.

Obviously, the comparison with conventional methods
is not fair, since G-NLM relies on precious auxiliary data to improve performance. On the other hand, our first aim is exactly to show that optical-guided despeckling is a simple and safe way to obtain high-quality despeckled SAR data. However, to fully support this claim, we must consider more challenging scenes, with man-made structures, roads, and sharp boundaries between regions of a different nature. To this end, in Fig. 14 we show a 256\\(\\times\\)256 detail of the C4 clip. The fields show the phenomena already described before; just note that both Enhanced-Lee and PPB suggest the presence of diagonal strips, further enhanced in G-NLM sharp. As for the buildings, Enhanced-Lee and PPB smear and sometimes lose details; SAR-BM3D and the related FANS and G-BF preserve all structures very well, with an accuracy comparable to that of G-NLM. The latter, however, succeeds in removing speckle even inside the man-made area and in its near proximity, providing a sharper result and contributing to a better perceived quality.

All this said, if we consider our numerical measure of structuredness, the RIS, we obtain quite different indications. In fact, the results reported in Tab. III and Tab. IV for the two datasets show SAR-BM3D to have by far the lowest RIS, around 2%, followed by FANS and G-BF (which inherit some good features of SAR-BM3D), and by Enhanced-Lee, while both versions of G-NLM come last, together with PPB, with almost 7%. To explain such conflicting results, in Fig. 15 we show the ratio images themselves. The visual inspection provides clear answers. Basically, none of the ratio images shows significant leakage from the original image: some image structures are visible only in the PPB ratio, and just barely in the Enhanced-Lee and G-NLM ratios. In these conditions, the RIS measures mostly the grain of the ratio image, which overwhelms truly structural dependencies. From this point of view, the SAR-BM3D ratio image is clearly preferable, as it very closely resembles a white noise field. All other methods introduce some weak correlation which, however, does not seem to impact image quality. In summary, RIS provides some valuable information (the ratio image grain) but correlates poorly with image quality. Unfortunately, this holds for all the other measures we tested, which leaves visual inspection as the most reliable form of quality assessment.

Fig. 14: Filtering results for the C4 clip. The single-look original is shown in the center for easy comparison. G-NLM accurately preserves man-made structures, removing speckle also in their proximity.

Fig. 15: Ratio images for the C4 clip. The single-look original is shown in the center for easy comparison. Quite limited traces of signal structures are observed (mostly in PPB and G-NLM), while there is an increase in correlation (except for SAR-BM3D).

Therefore, we conclude our analysis by studying, in Fig. 16, a last detail, the central part of clip T1, comprising mostly urban areas. Again, G-NLM faithfully preserves all features related to man-made objects and guarantees a strong rejection of speckle in homogeneous areas, even amidst buildings. All reference methods, instead, present some shortcomings, like limited speckle rejection, loss of resolution, or the introduction of filtering artifacts. So, the analysis of this image does not seem to add new information.
However, by looking at the whole set of Google Earth images of this area, we discovered some significant temporal changes, which allow us to study the robustness of the proposed method to mismatches between SAR and optical data. In particular, as shown in Fig. 17, a group of buildings was leveled between 30-10-2002, the date of the previous available optical image, and 31-12-2009, the date of our guide. In the test SAR image, dated 27-01-2008, the buildings were still standing, as testified by several double-reflection lines. Therefore, there is a strong mismatch between SAR data and optical guide. Nonetheless, this does not seem to affect the filtered image, where the building-related structures are clearly visible and look very similar to those provided by other filters, e.g., SAR-BM3D. As a further proof, we applied the proposed filter using the 2002 image as optical guide, obtaining similar results.

Fig. 16: Filtering results for the T1 clip. The single-look original is shown in the center for easy comparison.

Fig. 17: Robustness to SAR-optical mismatches. SAR-optical inconsistencies (2009 guide) do not disrupt the filtering results, which remain very similar to those obtained with a better reference (2002 guide) or with conventional filters.

## VI Conclusions

In this paper, we proposed a nonlocal SAR despeckling filter which makes use of available optical imagery to improve performance. Experiments on two real-world datasets show that the proposed method provides filtered images of excellent quality, arguably out of the reach of purely SAR-domain methods. The performance is also much better than that of our own previous optical-guided filter. It is not surprising that information provided by optical imagery may help improve SAR despeckling. Patch-wise nonlocal filtering allows us to exploit this information in a seamless way, avoiding any optical-induced artifacts. However, better solutions are certainly possible, and we hope to witness increasing activity on this line of research. A crucial point along this path is user-friendliness. End users willing to obtain high-quality despeckled images look for simple and efficient plug-and-play tools, which do not require much direct involvement. The proposed method moves a step in this direction. It provides high-quality and stable results based on the freely available Google Earth images, even in the presence of temporal changes. However, the user is still required to manually co-register the optical images with the SAR data. Our future work aims at improving the automation of this latter phase.

## Appendix A

In this appendix we compute the distribution, mean, and variance of the random variable

\\[D=\\log\\left[\\frac{X+Y}{2\\sqrt{XY}}\\right] \\tag{17}\\]

where \\(X\\) and \\(Y\\) are i.i.d. RV's with unit-mean Gamma distribution,

\\[p_{X}(x)=\\frac{L^{L}}{\\Gamma\\left(L\\right)}x^{L-1}e^{-Lx}1(x) \\tag{18}\\]

which models speckle samples in an \\(L\\)-look SAR image. Since \\(X\\) and \\(Y\\) are non-negative, their geometric and arithmetic means, \\(G=\\sqrt{XY}\\) and \\(A=(X+Y)/2\\), are well defined and non-negative, and their ratio does not exceed 1, \\(R=G/A\\in[0,1]\\). Then, we consider the RV transformation

\\[\\left\\{\\begin{array}{l}A=(X+Y)/2\\\\ R=2\\sqrt{XY}/(X+Y)\\end{array}\\right. \\tag{19}\\]

with inverse transformation

\\[\\left\\{\\begin{array}{l}X=A(1\\pm\\sqrt{1-R^{2}})\\\\ Y=A(1\\mp\\sqrt{1-R^{2}})\\end{array}\\right. \\tag{20}\\]
and Jacobian \\(\\partial(x,y)/\\partial(a,r)\\) with determinant

\\[\\left|\\begin{array}{cc}\\partial x/\\partial a&\\partial x/\\partial r\\\\ \\partial y/\\partial a&\\partial y/\\partial r\\end{array}\\right|=\\frac{2ar}{\\sqrt{1-r^{2}}} \\tag{21}\\]

by which we obtain

\\[\\begin{split} p_{AR}(a,r)&=p_{X}(a(1\\pm\\sqrt{1\\!-\\!r^{2}}))\\times\\\\ &\\quad p_{Y}(a(1\\mp\\sqrt{1\\!-\\!r^{2}}))\\left|\\frac{\\partial(x,y)}{\\partial(a,r)}\\right|\\\\ &=2\\frac{L^{2L}}{\\Gamma\\left(L\\right)^{2}}(a^{2}r^{2})^{L-1}e^{-2La}\\frac{2ar}{\\sqrt{1\\!-\\!r^{2}}}1(a)\\Pi(r)\\end{split} \\tag{22}\\]

having defined \\(\\Pi(r)=1(r)-1(r-1)\\). By rearranging the terms of the above expression, we see that \\(A\\) and \\(R\\) are independent random variables

\\[p_{AR}(a,r)=p_{A}(a)p_{R}(r) \\tag{23}\\]

where \\(A\\) is a unit-mean Gamma RV with parameter \\(2L\\)

\\[p_{A}(a)=\\frac{(2L)^{2L}}{\\Gamma\\left(2L\\right)}a^{2L-1}e^{-2La}1(a) \\tag{24}\\]

while the ratio \\(R\\) has pdf

\\[p_{R}(r)=\\frac{\\Gamma\\left(2L\\right)}{[2^{L-1}\\Gamma\\left(L\\right)]^{2}}\\frac{r^{2L-1}}{\\sqrt{1-r^{2}}}\\Pi(r) \\tag{25}\\]

A further RV transformation, \\(D=-\\log(R)\\), provides the desired pdf

\\[p_{D}(d)=C(L)\\frac{e^{-2Ld}}{\\sqrt{1-e^{-2d}}}1(d) \\tag{26}\\]

with \\(C(L)=\\Gamma\\left(2L\\right)/[2^{L-1}\\Gamma\\left(L\\right)]^{2}\\). To obtain the mean and variance of \\(D\\) we compute its moment generating function

\\[M_{D}(s)=E[e^{sD}]=\\int_{0}^{\\infty}C(L)\\frac{e^{-2\\left(L\\!-\\!s/2\\right)d}}{\\sqrt{1-e^{-2d}}}\\,dd=\\frac{C(L)}{C(L\\!-\\!s/2)}=\\frac{\\Gamma\\left(2L\\right)}{\\Gamma\\left(2L\\!-\\!s\\right)}\\left[\\frac{\\Gamma\\left(L\\!-\\!s/2\\right)}{\\Gamma\\left(L\\right)}\\right]^{2}2^{-s} \\tag{27}\\]

with derivatives

\\[M_{D}^{\\prime}(s)=M_{D}(s)\\left[\\psi^{(0)}\\left(2L\\!-\\!s\\right)-\\psi^{(0)}\\left(L\\!-\\!\\frac{s}{2}\\right)-\\log 2\\right] \\tag{28}\\]

and

\\[M_{D}^{\\prime\\prime}(s)=[M_{D}^{\\prime}(s)]^{2}/M_{D}(s)+M_{D}(s)\\left[\\frac{1}{2}\\psi^{(1)}\\left(L\\!-\\!s/2\\right)-\\psi^{(1)}\\left(2L\\!-\\!s\\right)\\right] \\tag{29}\\]

expressed in terms of the \\(m\\)-order polygamma functions

\\[\\psi^{(m)}\\left(x\\right)=\\frac{d^{m+1}\\log[\\Gamma\\left(x\\right)]}{dx^{m+1}} \\tag{30}\\]

Therefore

\\[E[D]=M_{D}^{\\prime}(0)=\\psi^{(0)}\\left(2L\\right)-\\psi^{(0)}\\left(L\\right)-\\log 2 \\tag{31}\\]

and

\\[\\mathrm{VAR}[D]=M_{D}^{\\prime\\prime}(0)-[M_{D}^{\\prime}(0)]^{2}=\\frac{1}{2}\\psi^{(1)}\\left(L\\right)-\\psi^{(1)}\\left(2L\\right) \\tag{32}\\]
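As a quick sanity check on Eqs. (31) and (32), the closed-form moments can also be verified numerically. The snippet below is our own illustration, not part of the released code: it draws unit-mean Gamma speckle samples and compares empirical and theoretical values.

```python
import numpy as np
from scipy.special import digamma, polygamma

L, n = 1.0, 10**7
rng = np.random.default_rng(0)
# Unit-mean Gamma speckle samples (shape L, scale 1/L), cf. Eq. (18).
x = rng.gamma(L, 1.0 / L, n)
y = rng.gamma(L, 1.0 / L, n)
d = np.log((x + y) / (2.0 * np.sqrt(x * y)))             # Eq. (17)

mean_th = digamma(2 * L) - digamma(L) - np.log(2.0)      # Eq. (31)
var_th = 0.5 * polygamma(1, L) - polygamma(1, 2 * L)     # Eq. (32)
print(d.mean(), mean_th)   # both ~0.307 for L = 1
print(d.var(), var_th)     # both ~0.178 for L = 1
```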
## References

* [1] M. Latifur Rahman Sarker, J. Nichol, H. Baki Iz, B. Bin Ahmad, and A. Abdul Rahman, "Forest Biomass Estimation Using Texture Measurements of High-Resolution Dual-Polarization C-Band SAR Data," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 51, no. 6, pp. 3371-3384, June 2013.
* [2] K. Irwin, A. Braun, G. Fotopoulos, A. Roth, and B. Wessel, "Assessing Single-Polarization and Dual-Polarization TerraSAR-X Data for Surface Water Monitoring," _Remote Sensing_, vol. 10, no. 6, 2018.
* [3] G. He, G.-S. Xia, and H. Sun, "An Adaptive and Iterative Method of Urban Area Extraction From SAR Images," _IEEE Geoscience and Remote Sensing Letters_, vol. 3, no. 4, pp. 504-507, Oct 2006.
* [4] P. Gamba, M. Aldrighi, and M. Stasolla, "Robust Extraction of Urban Area Extents in HR and VHR SAR Images," _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 4, no. 1, pp. 27-34, March 2011.
* [5] H. Deng and D. Clausi, "Unsupervised segmentation of synthetic aperture radar sea ice imagery using a novel Markov random field model," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 43, no. 3, pp. 528-538, Mar 2005.
* [6] F. Wang, Y. Wu, Q. Zhang, W. Zhao, M. Li, and G. Liao, "Unsupervised SAR Image Segmentation Using Higher Order Neighborhood-Based Triplet Markov Fields Model," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 52, no. 8, pp. 5193-5205, Nov 2014.
* [7] R. Dekker, "Texture analysis and classification of ERS SAR images for map updating of urban areas in The Netherlands," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 41, no. 9, pp. 1950-1958, Sep 2003.
* [8] A. Popescu, D. Faur, C. Vaduva, and M. Datcu, "Enhanced classification of land cover through joint analysis of Sentinel-1 and Sentinel-2 data," in _ESA Living Planet Symposium_, May 2016, pp. 9-13.
* [9] F. Argenti, A. Lapini, T. Bianchi, and L. Alparone, "A tutorial on speckle reduction in synthetic aperture radar images," _IEEE Geoscience and Remote Sensing Magazine_, vol. 1, no. 3, pp. 6-35, Sept 2013.
* [10] J.-S. Lee, "Digital image smoothing and the sigma filter," _Computer Vision, Graphics, and Image Processing_, vol. 24, pp. 255-269, 1983.
* [11] A. Lopes, R. Touzi, and E. Nezry, "Adaptive speckle filters and scene heterogeneity," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 28, no. 6, pp. 992-1000, Nov 1990.
* [12] J. S. Lee, J. H. Wen, T. L. Ainsworth, K. S. Chen, and A. J. Chen, "Improved Sigma Filter for Speckle Filtering of SAR Imagery," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 47, no. 1, pp. 202-213, Jan 2009.
* [13] H. Guo, J. Odegard, M. Lang, R. Gopinath, I. Selesnick, and C. Burrus, "Wavelet based speckle reduction with application to SAR based ATD/R," in _IEEE International Conference on Image Processing_, 1994, pp. 75-79.
* [14] A. Achim, P. Tsakalides, and A. Bezerianos, "SAR image denoising via Bayesian wavelet shrinkage based on heavy-tailed modeling," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 41, no. 8, pp. 1773-1784, Aug 2003.
* [15] F. Argenti, T. Bianchi, and L. Alparone, "Segmentation-based MAP despeckling of SAR images in the undecimated wavelet domain," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 46, no. 9, pp. 2728-2742, Sep 2008.
* [16] S. Foucher, "SAR image filtering via learned dictionaries and sparse representations," in _IEEE International Geoscience and Remote Sensing Symposium_, July 2008, pp. 229-232.
* [17] C. Ozcan, B. Sen, and F. Nar, "Sparsity-Driven Despeckling for SAR Images," _IEEE Geoscience and Remote Sensing Letters_, vol. 13, no. 1, pp. 115-119, 2016.
* [18] S. Tabti, L. Verdoliva, and G. Poggi, "Sparse-coding adapted to SAR images with an application to despeckling," in _IEEE International Geoscience and Remote Sensing Symposium_, July 2018.
* [19] L. Denis, F. Tupin, J. Darbon, and M. Sigelle, "SAR image regularization with fast approximate discrete minimization," _IEEE Transactions on Image Processing_, vol. 18, no. 7, pp. 1588-1600, 2009.
* [20] W. Feng, H. Lei, and Y. Gao, "Speckle Reduction via Higher Order Total Variation Approach," _IEEE Transactions on Image Processing_, vol. 23, no. 4, pp. 1831-1843, Apr 2014.
* [21] Y. Zhao, J. Liu, B. Zhang, W. Hong, and Y.-R. Wu, "Adaptive Total Variation Regularization Based SAR Image Despeckling and Despeckling Evaluation Index," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 53, no. 5, pp. 2756-2774, May 2015.
* [22] G. Chierchia, D. Cozzolino, G. Poggi, and L. Verdoliva, "SAR image despeckling through Convolutional Neural Networks," in _IEEE International Geoscience and Remote Sensing Symposium_, July 2017.
* [23] P. Wang, H. Zhang, and V. Patel, "SAR Image Despeckling Using a Convolutional Neural Network," _IEEE Signal Processing Letters_, vol. 24, no. 12, pp. 1763-1767, 2017.
* [24] C.-A. Deledalle, L. Denis, G. Poggi, F. Tupin, and L. Verdoliva, "Exploiting Patch Similarity for SAR Image Processing: The nonlocal paradigm," _IEEE Signal Processing Magazine_, vol. 31, no. 4, pp. 69-78, July 2014.
* [25] G. Di Martino, M. Poderico, G. Poggi, D. Riccio, and L. Verdoliva, "Benchmarking Framework for SAR Despeckling," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 52, no. 3, pp. 1596-1615, 2014.
* [26] C.-A. Deledalle, L. Denis, and F. Tupin, "Iterative weighted maximum likelihood denoising with probabilistic patch-based weights," _IEEE Transactions on Image Processing_, vol. 18, no. 12, pp. 2661-2672, Dec 2009.
* [27] S. Parrilli, M. Poderico, C. V. Angelino, and L. Verdoliva, "A Nonlocal SAR Image Denoising Algorithm Based on LLMMSE Wavelet Shrinkage," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 50, no. 2, pp. 606-616, Feb 2012.
* [28] C.-A. Deledalle, L. Denis, and F. Tupin, "How to compare noisy patches? Patch similarity beyond Gaussian noise," _International Journal of Computer Vision_, vol. 99, pp. 86-102, June 2012.
* [29] M. Schmitt and X. X. Zhu, "Data fusion and remote sensing: An ever-growing relationship," _IEEE Geoscience and Remote Sensing Magazine_, vol. 4, no. 4, pp. 6-23, Dec 2016.
* [30] N. Joshi, M. Baumann, A. Ehammer, R. Fensholt, K. Grogan, P. Hostert, M. Jepsen, T. Kuemmerle, P. Meyfroidt, E. Mitchard, J. Reiche, C. Ryan, and B. Waske, "A review of the application of optical and radar remote sensing data fusion to land use mapping and monitoring," _Remote Sensing_, vol. 8, no. 1, p. 70, 2016.
* [31] "... challenges and recent trends," in _IEEE International Geoscience and Remote Sensing Symposium_, July 2017.
* [32] R. Bahmanyar, D. Espinoza-Molina, and M. Datcu, "Multisensor Earth Observation Image Classification Based on a Multimodal Latent Dirichlet Allocation Model," _IEEE Geoscience and Remote Sensing Letters_, vol. 15, no. 3, pp. 459-463, Mar 2018.
* [33] C. Corbane, J.-F. Faure, N. Baghdadi, N. Villeneuve, and M. Petit, "Rapid Urban Mapping Using SAR/Optical Imagery Synergy," _Sensors_, vol. 8, no. 11, pp. 7125-7143, 2008.
* [34] G. Laurin, V. Liesenberg, Q. Chen, L. Guerriero, F. Del Frate, A. Bartolini, D. Coomes, B. Wilebore, J. Lindsell, and R. Valentini, "Optical and SAR sensor synergies for forest and land cover mapping in a tropical site in West Africa," _International Journal of Applied Earth Observation and Geoinformation_, vol. 21, pp. 7-16, Apr 2013.
* [35] A. Errico, C. V. Angelino, L. Cicala, G. Persechino, C. Ferrara, M. Lega, A. Vallario, C. Parente, G. Masi, R. Gaetano, G. Scarpa, D. Amitrano, G. Ruello, L. Verdoliva, and G. Poggi, "Detection of environmental hazards through the feature-based fusion of optical and SAR data: A case study in southern Italy," _International Journal of Remote Sensing_, vol. 36, pp. 3345-3367, July 2015.
* [36] T. Idol, B. Haack, and R. Mahabir, "Comparison and integration of spaceborne optical and radar data for mapping in Sudan," _International Journal of Remote Sensing_, pp. 1551-1569, 2015.
Poggi, \"Optical-Driven Nonlocal SAR Despeckling,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 12, no. 2, pp. 314-318, Feb 2015. * [38] R. Gaetano, D. Cozzolino, L. D'Amanio, L. Verdoliva, and G. Poggi, \"Fusion of SAR-optical data for land cover monitoring,\" in _2017 IEEE Geoscience and Remote Sensing Symposium_, 2017. * [39] D. Graganiello, G. Poggi, G. Scarpa, and L. Verdoliva, \"SAR Image Despeckling by Soft Classification,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 9, no. 6, pp. 2118-2130, June 2016. * [40] D. Cozzolino, S. Parrilii, G. Scarpa, G. Poggi, and L. Verdoliva, \"Fast Adaptive Nonlocal SAR Despeckling,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 11, no. 2, pp. 524-528, Feb 2014. * [41] C. Tomasi and R. Manduchi, \"Bilateral filtering for gray and color images,\" in _International Conference on Computer Vision_, Jan 1998, pp. 839-846. * [42] L. Gomez, R. Ospina, and A. C. Frery, \"Unassisted quantitative evaluation of despeckling filters,\" _Remote Sensing_, vol. 9, no. 4, p. 389, 2017. * [43] R. Haralick, K. Shanmugam, and I. Dinstein, \"Textural features for image classification,\" _IEEE Transactions on Systems, Man and Cybernetics_, vol. 3, no. 6, pp. 610-621, 1973. * [44] J. Lee, \"Digital image enhancement and noise filtering by use of local statistics,\" _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. PAMI-2, no. 2, pp. 165-168, March 1980. * [45] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, \"Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering,\" _IEEE Transactions on Image Processing_, vol. 16, no. 8, pp. 2080-2095, Aug 2007.
We propose a new method for SAR image despeckling which leverages information drawn from co-registered optical imagery. Filtering is performed by plain patch-wise nonlocal means, operating exclusively on SAR data. However, the filtering weights are computed by taking into account also the optical guide, which is much cleaner than the SAR data, and hence more discriminative. To avoid injecting optical-domain information into the filtered image, a SAR-domain statistical test is performed beforehand to reject any risky predictor right away. Experiments on two SAR-optical datasets show that the proposed method suppresses speckle very effectively, preserving structural details and introducing no visible filtering artifacts. Overall, the proposed method compares favourably with all state-of-the-art despeckling filters, and also with our own previous optical-guided filter.
EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization

Lorenzo Federici, Department of Mechanical and Aerospace Engineering, Sapienza University of Rome, Rome, Italy - [email protected]

Boris Benedikter, Department of Mechanical and Aerospace Engineering, Sapienza University of Rome, Rome, Italy - [email protected]

Alessandro Zavoli, Department of Mechanical and Aerospace Engineering, Sapienza University of Rome, Rome, Italy - [email protected]

## I Introduction

_Evolutionary Optimization at Sapienza_, or _EOS_, is an evolutionary optimization algorithm for continuous-variable problems developed at the Department of Mechanical and Aerospace Engineering of Sapienza University of Rome. Its origin dates back to 2015, when a first version of the algorithm was coded with the aim of tackling the "nearly-impossible" interplanetary trajectory optimization problems proposed in the Global Trajectory Optimization Competitions [1]. Since then, _EOS_ has been continuously updated and improved, and successfully applied to a broad range of unconstrained and constrained space trajectory optimization problems, such as multiple gravity-assist trajectories [2], rocket ascent trajectories [3, 4], and multi-rendezvous missions [5, 6]. _EOS_ implements a multi-population, self-adaptive, \\(\\varepsilon\\)-constrained Differential Evolution (DE) algorithm, with a synchronous island-model for parallel computation. DE is a well-known population-based evolutionary algorithm, devised by R. Storn and K. Price in 1997 [7] to find the global optimum of nonlinear, non-differentiable functions of real-valued variables. Despite its simplicity, DE exhibits much better performance than several other metaheuristic algorithms on a wide range of benchmark and real-world optimization problems defined over a continuous parameter space [8]. Like other Evolutionary Algorithms (EAs) and Genetic Algorithms (GAs), DE exploits the crossover, mutation, and selection operators to generate new candidate solutions, or individuals, and to decide on their survival in successive generations. Unlike traditional EAs and GAs, each mutated solution is generated as a scaled difference of a number of distinct individuals of the current population. This self-referential mutation has the desirable property of automatically adapting the different variables of the problem to their natural scale in the solution landscape, boosting the search potential of the algorithm [9]. All this evidence contributed to the selection of DE as the optimization core of _EOS_. The standard DE algorithm is neither a globally nor a locally convergent algorithm; that is, there exist problem instances for which DE is not theoretically able to identify locally optimal solutions [10]. Nevertheless, DE has proven capable of attaining high-quality results in many practical optimization problems. However, its performance typically drops in complex optimization environments, characterized by nonlinear constraints, numerous isolated local minima, and a high-dimensional search space, all typical features of the space trajectory optimization problems _EOS_ has been designed to deal with. For this reason, many researchers have focused on modifying the standard DE algorithm to improve its effectiveness on hard, constrained, global optimization problems. Several DE variants have thus been proposed, most of which are collected in [9, 11].
Four weaknesses of the classical DE algorithm have been particularly targeted by researchers: (i) the handling of (nonlinear) constraints [12], (ii) the tuning of the few DE control parameters that drive the evolution process [13], (iii) the loss of diversity between individuals in the population over the generations [14], and (iv) the need to find a balance between a wide exploration of the solution space and a quick refinement (or exploitation) of the previously obtained solutions [15]. All these aspects have been addressed in _EOS_ by combining some of the most successful ideas found in the literature. Specifically, five major add-ons have been implemented in _EOS_ to improve the performance of the standard DE algorithm: (i) an \\(\\varepsilon\\)-constrained method, to deal with (possibly nonlinear) constraints, (ii) a self-adaptation of the control parameters, (iii) an epidemic mechanism, to maintain diversity within the population during the evolution, (iv) a pruning of the worst sections of the solution space, to speed up the convergence process, and (v) a synchronous, multi-mutation, island-model, to achieve a proper balance between exploration and exploitation of the search space. Moreover, the code has been parallelized through hybrid MPI-OpenMP programming, to obtain reasonable computation times even in the presence of high-dimensional problems and/or computationally expensive cost functions.

The paper is organized as follows. After the basic notation of optimization problems is introduced in Sec. II, the standard DE algorithm and its operators are briefly described in Sec. III. The following sections give a detailed description of the improvements made to DE in _EOS_, i.e., the self-adaptive scheme (Sec. IV), the epidemic mechanism (Sec. V), the pruning technique (Sec. VI), constraint handling (Sec. VII), and the parallel island-model (Sec. VIII). Finally, successful applications of _EOS_ to real-world problems borrowed from the space sector, that is, the optimization of a Multiple Gravity-Assist (MGA) trajectory, of the ascent trajectory of a multi-stage launcher, and of an Active Debris Removal (ADR) trajectory, are reported in Sec. IX.

## II Optimization Problem

Given a problem described by \\(D\\) real-valued variables:

\\[\\mathbf{x}=\\left[x^{(1)},\\ldots,x^{(D)}\\right] \\tag{1}\\]

and a cost function \\(f(\\mathbf{x})\\), the aim of an optimization process is to find the vector \\(\\mathbf{x}^{*}\\) that minimizes \\(f(\\mathbf{x})\\):

\\[\\mathbf{x}^{*}=\\operatorname*{argmin}_{\\mathbf{x}\\in\\Omega}f(\\mathbf{x}) \\tag{2}\\]

where \\(\\Omega\\) represents the solution space.
In _unconstrained optimization problems_, \\(\\Omega\\) is a \\(D\\)-dimensional hyperrectangle, defined as the Cartesian product of the bounding intervals of the design variables, \\(\\mathbf{x}_{L}=\\left[x_{L}^{(1)},\\ldots,x_{L}^{(D)}\\right]\\) and \\(\\mathbf{x}_{U}=\\left[x_{U}^{(1)},\\ldots,x_{U}^{(D)}\\right]\\):

\\[\\Omega=\\Omega_{b}=\\left\\{\\mathbf{x}\\in\\mathbb{R}^{D}:\\mathbf{x}_{L}\\leq\\mathbf{x}\\leq\\mathbf{x}_{U}\\right\\}\\subset\\mathbb{R}^{D} \\tag{3}\\]

In the case of _constrained optimization problems_, the solution space \\(\\Omega\\) is further reduced by the presence of \\(K\\) inequality constraints \\(\\mathbf{\\Psi}(\\mathbf{x})=\\left[\\Psi^{(1)}(\\mathbf{x}),\\ldots,\\Psi^{(K)}(\\mathbf{x})\\right]\\), which in general are nonlinear functions of the design variables:

\\[\\Omega=\\left\\{\\mathbf{x}\\in\\Omega_{b}:\\mathbf{\\Psi}(\\mathbf{x})\\leq\\mathbf{0}\\right\\}\\subset\\mathbb{R}^{D} \\tag{4}\\]

The possible presence of equality constraints is already accounted for in expression (4); indeed, any equality constraint of the form \\(\\Phi^{(k)}(\\mathbf{x})=C\\) can be rewritten as an inequality constraint by introducing an arbitrarily small tolerance \\(\\delta\\):

\\[\\Psi^{(k)}(\\mathbf{x})\\coloneqq|\\Phi^{(k)}(\\mathbf{x})-C|-\\delta\\leq 0 \\tag{5}\\]

## III Standard Differential Evolution

A brief description of the standard DE algorithm is provided here. Let us consider the unconstrained minimization problem defined by Eqs. (2) and (3). An initial collection (or _population_) \\(pop^{0}\\) of \\(N_{p}\\) candidate solution vectors (or _individuals_) is generated by randomly sampling the solutions, as evenly as possible, in the solution space:

\\[pop^{0}=\\{\\mathbf{x}_{i}\\in\\Omega\\}_{i=1,\\ldots,N_{p}} \\tag{6}\\]

where:

\\[x_{i}^{(j)}=x_{L}^{(j)}+p_{i}^{(j)}\\left(x_{U}^{(j)}-x_{L}^{(j)}\\right) \\tag{7}\\]

for \\(j=1,\\ldots,D\\), with \\(p_{i}^{(j)}\\) a random number with uniform distribution in \\([0,\\,1]\\). The value of the cost function (or _fitness_) \\(f(\\mathbf{x}_{i})\\) is evaluated for each individual (or _agent_) \\(\\mathbf{x}_{i}\\) composing the initial population. At any iteration \\(G\\) (or _generation_) of the algorithm, a new population \\(pop^{G+1}\\) is created by applying to each vector \\(\\mathbf{x}_{i}\\in pop^{G}\\) a sequence of three operations, named mutation, crossover, and selection, defined as follows.

### _Mutation_

During mutation, a mutated or _donor_ vector \\(\\mathbf{v}_{i}\\) is created as a linear combination of a few population members. Several mutation rules were proposed in the original paper by Storn and Price [7] in order to attain either a wider exploration of the search space or a faster convergence to the optimum (i.e., exploitation). Since then, many other rules have been devised with the same purpose by a number of researchers [16].
In the current version of _EOS_, four strategies, among those available in the literature, are adopted:

\\[\\begin{array}{ll}1)&\\mathbf{v}_{i}=\\mathbf{x}_{r_{1}}+F\\left(\\mathbf{x}_{r_{2}}-\\mathbf{x}_{r_{3}}\\right)\\\\ 2)&\\mathbf{v}_{i}=\\mathbf{x}_{best}+F\\left(\\mathbf{x}_{r_{1}}-\\mathbf{x}_{r_{2}}\\right)\\\\ 3)&\\mathbf{v}_{i}=\\mathbf{x}_{i}+F\\left(\\mathbf{x}_{r_{3}}-\\mathbf{x}_{i}\\right)+F\\left(\\mathbf{x}_{r_{1}}-\\mathbf{x}_{r_{2}}\\right)\\\\ 4)&\\mathbf{v}_{i}=\\mathbf{x}_{best}+F\\left(\\mathbf{x}_{r_{1}}-\\mathbf{x}_{r_{2}}\\right)+F\\left(\\mathbf{x}_{r_{3}}-\\mathbf{x}_{r_{4}}\\right)\\end{array} \\tag{8}\\]

where \\(F\\in\\mathbb{R}\\) is a parameter driving the mutation (_scale factor_), \\(\\mathbf{x}_{best}\\) is the best individual in the current population, and \\(r_{1},\\ldots,r_{4}\\) are randomly chosen, non-repeated indexes belonging to \\([1,N_{p}]\\setminus\\{i\\}\\). Each strategy has its own weaknesses and strengths: strategies based on the mutation of the best individual, such as strategies 2 and 4 (referred to as _DE/best/1_ and _DE/best/2_, respectively), typically show a faster rate of convergence toward an (often local) minimum, whereas strategies based on the perturbation of a randomly chosen or the current individual, such as strategies 1 and 3 (referred to as _DE/rand/1_ and _DE/current-to-rand/1_, respectively), explore the whole search space to a greater extent, yet more slowly.

### _Crossover_

During crossover, a _trial_ vector \\(\\mathbf{u}_{i}\\) is obtained by mixing the components of the agent vector \\(\\mathbf{x}_{i}\\) and the donor vector \\(\\mathbf{v}_{i}\\). Relying on empirical comparisons [17], the _binomial_ crossover was preferred to the _exponential_ one in _EOS_, and implemented as:

\\[u_{i}^{(j)}=\\begin{cases}v_{i}^{(j)}&\\text{if $p_{i}^{(j)}\\leq C_{r}$ or $j=j_{r}$}\\\\ x_{i}^{(j)}&\\text{otherwise}\\end{cases} \\tag{9}\\]

for \\(j=1,\\ldots,D\\), where \\(p_{i}^{(j)}\\) is a random number with uniform distribution in \\([0,1]\\), \\(C_{r}\\in[0,1]\\) is another control parameter of the algorithm (named _crossover probability_), and \\(j_{r}\\) is a random index, chosen once per individual in the range \\([1,D]\\), that ensures that at least one element in \\(\\mathbf{u}_{i}\\) is inherited from \\(\\mathbf{v}_{i}\\). At this point, the bounding intervals are enforced on the design variables:

\\[u_{i}^{(j)}=\\begin{cases}x_{L}^{(j)}&\\text{if $u_{i}^{(j)}<x_{L}^{(j)}$}\\\\ x_{U}^{(j)}&\\text{if $u_{i}^{(j)}>x_{U}^{(j)}$}\\\\ u_{i}^{(j)}&\\text{otherwise}\\end{cases} \\tag{10}\\]

for \\(j=1,\\ldots,D\\).

### _Selection_

Eventually, the agent and trial vectors are compared. In the case of an unconstrained optimization problem, the one with the best fitness value is retained and inserted in the new population \\(pop^{G+1}\\):

\\[\\mathbf{x}_{i}^{G+1}=\\begin{cases}\\mathbf{u}_{i}^{G}&\\text{if $f(\\mathbf{u}_{i}^{G})\\leq f(\\mathbf{x}_{i}^{G})$}\\\\ \\mathbf{x}_{i}^{G}&\\text{otherwise}\\end{cases} \\tag{11}\\]

The corresponding selection step in the case of constrained optimization is discussed in Sec. VII.
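As a compact illustration of the three operators, the following sketch implements one generation of _DE/rand/1/bin_ on a minimization problem. It is a didactic Python rendition, not the actual _EOS_ source (which is a compiled, parallel code); names and default values are ours.

```python
import numpy as np

def de_generation(pop, fit, f_obj, x_lo, x_hi, F=0.7, Cr=0.9, rng=None):
    """One generation of DE/rand/1/bin, cf. Eqs. (8)-(11)."""
    rng = rng or np.random.default_rng()
    n_p, dim = pop.shape
    new_pop, new_fit = pop.copy(), fit.copy()
    for i in range(n_p):
        # Mutation, strategy 1: three distinct random indexes, all != i.
        r1, r2, r3 = rng.choice([k for k in range(n_p) if k != i], 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # Binomial crossover, Eq. (9): at least one component comes from v.
        mask = rng.random(dim) <= Cr
        mask[rng.integers(dim)] = True
        u = np.where(mask, v, pop[i])
        # Enforce the bounding intervals, Eq. (10).
        u = np.clip(u, x_lo, x_hi)
        # Greedy one-to-one selection, Eq. (11).
        f_u = f_obj(u)
        if f_u <= fit[i]:
            new_pop[i], new_fit[i] = u, f_u
    return new_pop, new_fit
```

Iterating this function until one of the termination criteria listed next is met reproduces the standard (unconstrained) algorithm.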
### _Termination Criteria_

This three-step process is repeated iteratively, creating at each generation a new population that replaces the previous one. Several termination criteria can be adopted, either alone or in conjunction, to stop the optimization procedure. The most common are:

* the maximum number of fitness evaluations (\\(FES\\)), \\(N_{F}\\);
* the maximum number of generations, \\(N_{G}\\);
* the maximum number of consecutive generations without any improvement of the best solution found, \\(N_{G,best}\\).

In typical _EOS_ applications, the termination criterion is mainly based on the maximum number of generations \\(N_{G}\\). This parameter strongly depends on the complexity of the analyzed problem, and it is generally selected in such a way that independent runs of the algorithm lead, in almost all cases, to similar results. A few preliminary runs of the code are therefore necessary to identify a suitable value for \\(N_{G}\\).

## IV Self-Adaptation of Control Parameters

A common practical issue for many stochastic algorithms concerns the selection of suitable values for the control parameters, i.e., the numerical quantities that drive the different phases of the optimization process. A fine tuning of the parameters is often mandatory in order to maximize the algorithm performance on a given problem. The basic version of DE is characterized by only three control parameters. Apart from the population size \\(N_{p}\\), the performance of the DE algorithm depends on an appropriate selection of the scale factor \\(F\\), which drives the mutation phase, and of the crossover probability \\(C_{r}\\), which drives the crossover phase. However, as for other metaheuristics, the best tuning of the control parameters, in terms of effectiveness and robustness of the algorithm, is usually related to the structure of the problem at hand [18]. In order to avoid a manual trial-and-error tuning of the DE control parameters prior to the solution of any problem, the \\(j\\)DE self-adaptive scheme proposed by Brest et al. [19] is implemented in _EOS_ to automatically adjust the values of both \\(F\\) and \\(C_{r}\\) during the optimization, without introducing any significant computational burden. In \\(j\\)DE, each individual owns a private copy of \\(F\\) and \\(C_{r}\\), randomly initialized within the intervals \\([F_{min},F_{max}]\\) and \\([C_{r,min},C_{r,max}]\\), and thus different from individual to individual. Therefore, each individual evolves according to its own set of parameters. The hope is that good values of these control parameters will contribute to producing better individuals which, being more likely to survive and produce offspring, will propagate their parameters through the population in the following generations. At the end of the current generation \\(G\\), each individual \\(\\mathbf{x}_{i}\\) undergoes, with probability \\(p_{\\tau}\\), a random mutation of its control parameters:

\\[\\begin{split}F_{i}^{G+1}&=\\begin{cases}F_{min}+p_{i,1}\\Delta F&\\text{if $p_{i,2}\\leq p_{\\tau}$}\\\\ F_{i}^{G}&\\text{otherwise}\\end{cases}\\\\ C_{r,i}^{G+1}&=\\begin{cases}C_{r,min}+p_{i,3}\\Delta C_{r}&\\text{if $p_{i,4}\\leq p_{\\tau}$}\\\\ C_{r,i}^{G}&\\text{otherwise}\\end{cases}\\end{split} \\tag{12}\\]

where \\(p_{i,1},\\ldots,p_{i,4}\\) are random numbers sampled from a uniform distribution in \\([0,1]\\), \\(\\Delta F=F_{max}-F_{min}\\), and \\(\\Delta C_{r}=C_{r,max}-C_{r,min}\\). The hyperparameter values suggested in [19] are used in _EOS_: \\(F_{min}=0.1\\), \\(F_{max}=1\\), \\(C_{r,min}=0\\), \\(C_{r,max}=1\\), and \\(p_{\\tau}=0.1\\). The population size \\(N_{p}\\), instead, is kept fixed across generations. Its value is predetermined by looking at the problem dimension: a reasonable value for \\(N_{p}\\) generally lies between \\(5D\\) and \\(10D\\). For high-dimensional problems, e.g., \\(D>100\\), \\(N_{p}\\) is more often selected on the basis of the available computational budget.
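A minimal sketch of the \\(j\\)DE update of Eq. (12) is shown below; \\(F\\) and \\(C_{r}\\) are stored as per-individual arrays, and the defaults are the hyperparameter values suggested in [19]. The function name and vectorized form are ours.

```python
import numpy as np

def jde_update(F, Cr, rng, F_min=0.1, F_max=1.0,
               Cr_min=0.0, Cr_max=1.0, p_tau=0.1):
    """Per-individual random reset of F and Cr with probability p_tau, Eq. (12)."""
    n = F.shape[0]
    new_F = np.where(rng.random(n) <= p_tau,
                     F_min + rng.random(n) * (F_max - F_min), F)
    new_Cr = np.where(rng.random(n) <= p_tau,
                      Cr_min + rng.random(n) * (Cr_max - Cr_min), Cr)
    return new_F, new_Cr
```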
## V Epidemic Mechanism

A partial-restart mechanism, named "epidemic", is adopted in _EOS_ in order to promote diversity between the individuals of the population over the generations. In fact, the use of (guided) restart procedures in DE has proven effective in reducing the chances of stagnation of the algorithm in isolated local minima [20, 21]. For this purpose, at each generation the population diversity score \\(\\bar{d}\\) is evaluated as the average Euclidean distance between any pair of solutions in the current population:

\\[\\bar{d}=\\frac{2}{N_{p}(N_{p}-1)}\\sum_{i=1}^{N_{p}}\\sum_{j=1}^{i-1}d_{ij} \\tag{13}\\]

where:

\\[d_{ij}=\\sqrt{\\sum_{k=1}^{D}\\left(\\frac{x_{i}^{(k)}-x_{j}^{(k)}}{x_{U}^{(k)}-x_{L}^{(k)}}\\right)^{2}} \\tag{14}\\]

Thus, \\(d_{ij}\\) would represent the actual (Euclidean) distance between the individuals \\(\\mathbf{x}_{i}\\) and \\(\\mathbf{x}_{j}\\) if the solution space were a \\(D\\)-dimensional unitary hypercube; for this reason, \\(d_{ij}\\in[0,\\sqrt{D}]\\). When the diversity score \\(\\bar{d}\\) falls below a given small threshold \\(d_{tol}\\), an epidemic breaks out in the population. The best \\(\\rho_{elite}N_{p}\\) individuals are immune to the epidemic; instead, a large fraction \\(\\rho_{ill}\\) of the remaining individuals, randomly selected, contracts the "fatal disease", that is, it is randomly re-initialized over the whole search space. This mechanism is illustrated in Fig. 1. The epidemic cannot happen twice within a number \\(N_{G,epid}\\) of consecutive generations, so as not to compromise the search. Reasonable values for the newly introduced hyperparameters lie in the following ranges: \\(N_{G,epid}=500\\div 2000\\), \\(d_{tol}=10^{-4}\\div 10^{-2}\\), \\(\\rho_{elite}=0.05\\div 0.25\\), \\(\\rho_{ill}=0.75\\div 1\\). The epidemic mechanism has proven particularly effective when an exploitative mutation rule (e.g., strategy 2 or 4) is chosen for DE. Figure 2 shows the effectiveness of the mechanism on two standard benchmark functions, namely the Rosenbrock function, with \\(D=100\\) and bounding intervals \\([-50,50]^{D}\\), and the Rastrigin function, with \\(D=30\\) and bounding intervals \\([-5.12,5.12]^{D}\\). The plot reports the change in fitness trend (averaged over 20 independent runs) obtained when the epidemic mechanism is added to a single-population self-adaptive DE algorithm, where a self-adaptation rule analogous to those in Eq. (12) was devised also to select the mutation strategy among the four reported in Sec. III-A. The results were obtained with the following algorithm settings: \\(N_{p}=64\\), \\(N_{G}=20000\\), \\(N_{G,epid}=1000\\), \\(d_{tol}=10^{-3}\\), \\(\\rho_{elite}=0.1\\), \\(\\rho_{ill}=1\\).

Fig. 1: Effect of the epidemic mechanism on the population.

Fig. 2: Evolution plots, averaged on 20 independent runs, obtained with and without the epidemic mechanism on two benchmark functions.
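The mechanism just described can be sketched as follows; this is our illustrative rendition, in which the diversity score of Eqs. (13)-(14) is computed with a dense pairwise-distance matrix, acceptable for moderate \\(N_{p}\\).

```python
import numpy as np

def diversity(pop, x_lo, x_hi):
    """Average pairwise normalized Euclidean distance, Eqs. (13)-(14)."""
    z = (pop - x_lo) / (x_hi - x_lo)             # map to the unit hypercube
    diff = z[:, None, :] - z[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    return d[np.triu_indices(len(pop), 1)].mean()

def epidemic(pop, fit, x_lo, x_hi, rho_elite=0.1, rho_ill=1.0, rng=None):
    """Randomly re-initialize a fraction rho_ill of the non-elite individuals."""
    rng = rng or np.random.default_rng()
    n, dim = pop.shape
    order = np.argsort(fit)                       # best (lowest fitness) first
    non_elite = order[int(rho_elite * n):]
    ill = rng.choice(non_elite, int(rho_ill * len(non_elite)), replace=False)
    pop[ill] = x_lo + rng.random((len(ill), dim)) * (x_hi - x_lo)
    return pop, ill
```

In the main loop, `epidemic` would be triggered whenever `diversity(pop, x_lo, x_hi)` falls below \\(d_{tol}\\) and at least \\(N_{G,epid}\\) generations have elapsed since the previous outbreak.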
## VI Space Pruning by Clustering

The term "space pruning" refers to those techniques that aim at reducing the solution space during the search, focusing the optimization in a smaller area where, in accordance with some criteria, good solutions are expected to be found. Different solution methods have been proposed in the past [22, 23] that combine branching, that is, the subdivision of the feasible region into smaller subdomains (with some convergence properties), with population-based stochastic algorithms that search the subdomains in order to evaluate them; these methods showed higher average performance than a number of stochastic and deterministic methods on the same problems. The pruning method adopted in _EOS_ is based on the _cluster pruning algorithm_ developed by ESA's Advanced Concepts Team [24] to solve Multiple Gravity-Assist (MGA) problems. The key idea is that in problems like MGA and related space trajectory optimization problems (such as MGA-1DSM or multi-rendezvous ones), which are the kind of problems addressed by _EOS_, good solutions are often clustered in small regions instead of being densely distributed over the whole search space. So, during the optimization process, it is possible to progressively focus the search around the promising regions identified so far, increasing the quality of the solutions found, while accepting the risk that some possibly better solutions may be ruled out.

The _pruning-by-clustering_ method implemented in _EOS_ makes use of \\(N_{r}\\) independent, separate, partial runs. Each partial run corresponds to a run of the same DE algorithm with a (different) random initial population. At each pre-assigned generation \\(N_{G,pr}^{i}\\), the \\(i\\)-th pruning event occurs. The \\(N_{r}\\) runs are stopped, and the best solution found by each run is collected in the set \\(pop^{best}=\\{\\mathbf{x}_{1},\\,\\mathbf{x}_{2},\\,\\ldots,\\mathbf{x}_{N_{r}}\\}\\), which is sorted according to fitness, so that \\(f(\\mathbf{x}_{1})\\leq f(\\mathbf{x}_{2})\\leq\\ldots\\leq f(\\mathbf{x}_{N_{r}})\\). A new (smaller) search space \\(\\Omega_{pr}^{i}\\) is defined as the convex hull containing the best \\(N_{p,pr}^{i}=\\lfloor\\rho_{pr}^{i}N_{r}\\rfloor\\) solutions in \\(pop^{best}\\), up to some relaxation factor related to \\(\\rho_{pr}^{i}\\). More precisely, for each design variable \\(x^{(j)}\\), with \\(j\\in[1,D]\\), the new bounding interval \\(\\left[x_{L,new}^{(j)},x_{U,new}^{(j)}\\right]\\) is defined as:

\\[x_{L,new}^{(j)}=\\min_{k\\in[1,N_{p,pr}^{i}]}x_{k}^{(j)}-0.5(1-\\rho_{pr}^{i})\\left(x_{U}^{(j)}-x_{L}^{(j)}\\right) \\tag{15}\\]
\\[x_{U,new}^{(j)}=\\max_{k\\in[1,N_{p,pr}^{i}]}x_{k}^{(j)}+0.5(1-\\rho_{pr}^{i})\\left(x_{U}^{(j)}-x_{L}^{(j)}\\right) \\tag{16}\\]

where \\(\\rho_{pr}^{i}=\\rho_{pr}^{0}-i\\Delta\\rho_{pr}\\) is a tuning parameter that controls the extent of the search space region to be removed. After the pruning event, the \\(N_{r}\\) independent, partial optimization runs are restarted, with the populations randomly initialized over the pruned search space \\(\\Omega_{pr}^{i}\\). Nevertheless, the \\(N_{p,pr}^{i}\\) best individuals of \\(pop^{best}\\) are copied into each new population, replacing the (randomly generated) worst individuals. If \\(N_{pr}\\) pruning events are enforced, starting from generation \\(N_{G,pr}^{0}\\), the subsequent pruning events are spaced \\((N_{G}-N_{G,pr}^{0})/N_{pr}\\) generations apart. By choosing \\(N_{p,pr}^{i}=\\lfloor\\rho_{pr}^{i}N_{r}\\rfloor\\), one makes the pruned search space iteratively smaller, as the convex hull is built on fewer and fewer points. The hyperparameter values that showed the best performance in _EOS_ are the following: \\(\\rho_{pr}^{0}=0.3\\), \\(\\Delta\\rho_{pr}=0.1\\), \\(N_{pr}=3\\), and \\(N_{G,pr}^{0}=0.4N_{G}\\).
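In practice, the bound-update step of Eqs. (15)-(16) reduces to a few array operations. The sketch below is our own reading of it, with an extra clip to the original box added as a safeguard (the paper's equations do not state it).

```python
import numpy as np

def pruned_bounds(best_solutions, x_lo, x_hi, rho):
    """Shrink the search space around the elite points, cf. Eqs. (15)-(16).

    best_solutions: (n_best, D) array with the best partial-run solutions.
    rho: current pruning parameter rho_pr^i.
    """
    slack = 0.5 * (1.0 - rho) * (x_hi - x_lo)    # relaxation of the convex hull
    new_lo = best_solutions.min(axis=0) - slack
    new_hi = best_solutions.max(axis=0) + slack
    # Safeguard (our addition): never enlarge beyond the original box.
    return np.maximum(new_lo, x_lo), np.minimum(new_hi, x_hi)
```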
## VII Constraint Handling

In order to tackle the constrained optimization problem defined by Eqs. (2) and (4) with DE, the selection step (11) must be modified to take into account the presence of inequality constraints. The simple but promising min-max approach proposed by Jimenez and Verdegay [25] is implemented in _EOS_ to handle constraints. The key idea is to adopt a lexicographic order in the selection process, in which the constraint violation precedes the objective function:

1. between two feasible individuals, select on the basis of the minimum value of the cost function;
2. between a feasible and an infeasible individual, select the feasible one;
3. between two infeasible individuals, select on the basis of the lowest maximum constraint violation:

\\[\\Psi_{max}=\\max_{j\\in[1,K]}\\Psi^{(j)}(\\mathbf{x}) \\tag{17}\\]

However, as highlighted by Coello [12], this simple approach pushes the search to focus first only on constraint satisfaction; so, if the feasible solution space is disjoint, the search could get trapped in a part of the feasible region that, in general, could be quite far from the global minimum of the problem and from which it is impossible to escape. An efficient way to overcome this drawback was proposed by Takahama and Sakai [26], who applied an \\(\\varepsilon\\)-constrained method to Differential Evolution ("\\(\\varepsilon\\)DE"). They suggested introducing a tolerance \\(\\varepsilon\\) on the constraint violation \\(\\Psi_{max}\\), to be decreased along the generations. The following rule has been adopted in _EOS_ for evaluating \\(\\varepsilon\\) at generation \\(G\\):

\\[\\varepsilon^{G}=\\begin{cases}\\varepsilon^{0}&\\text{for}\\,\\,\\,G\\leq N^{0}\\\\ \\varepsilon^{0}\\left[\\frac{\\varepsilon^{\\infty}}{\\varepsilon^{0}}\\right]^{\\frac{G-N^{0}}{N^{\\infty}-N^{0}}}&\\text{for}\\,\\,\\,N^{0}<G<N^{\\infty}\\\\ \\varepsilon^{\\infty}&\\text{for}\\,\\,\\,G\\geq N^{\\infty}\\end{cases} \\tag{18}\\]

with \\(\\varepsilon^{0}\\) and \\(\\varepsilon^{\\infty}\\) the initial and final values of the tolerance. As for \\(N^{0}\\) and \\(N^{\\infty}\\), which define the interval over which \\(\\varepsilon\\) decreases, the following values showed the best overall results: \\(N^{0}=\\frac{N_{G}}{6}\\), \\(N^{\\infty}=N_{G}\\). By using Eq. (18), moves toward infeasible solutions are allowed at the beginning of the search, when the tolerance \\(\\varepsilon\\) is at its maximum and the entire solution space must be explored in order to identify promising regions. As the generation number increases, such moves become forbidden, and the search concentrates on the feasible part of the identified region, which, thanks to the initial exploration of the search space, is more likely to be close to the optimum of the problem.
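A sketch of the resulting selection logic is reported below (our rendition): the comparison implements the lexicographic rules 1-3 above, relaxed by the tolerance of Eq. (18).

```python
def eps_tolerance(g, eps0, eps_inf, n0, n_inf):
    """Constraint-violation tolerance at generation g, Eq. (18)."""
    if g <= n0:
        return eps0
    if g >= n_inf:
        return eps_inf
    return eps0 * (eps_inf / eps0) ** ((g - n0) / (n_inf - n0))

def eps_better(f_u, psi_u, f_x, psi_x, eps):
    """True if the trial (f_u, psi_u) wins the eps-lexicographic comparison.

    psi_u, psi_x: maximum constraint violations, Eq. (17).
    """
    u_ok, x_ok = psi_u <= eps, psi_x <= eps
    if u_ok and x_ok:
        return f_u <= f_x        # both eps-feasible: compare cost
    if u_ok != x_ok:
        return u_ok              # eps-feasible beats eps-infeasible
    return psi_u <= psi_x        # both infeasible: smaller violation wins
```

Replacing the unconstrained selection of Eq. (11) with `eps_better` turns the sketch of Sec. III into an \\(\\varepsilon\\)-constrained DE.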
## VIII Island-Model

During the search process, a proper balance between two opposite needs, namely "exploitation" and "exploration", is paramount to the performance of any EA [15]. Here, exploitation refers to the capability of the evolutionary algorithm to exploit the information already collected from the population to focus the search toward the goal; exploration, instead, refers to the ability to introduce new information about the solution space into the population. In DE, each mutation strategy (see Eq. (8)) may privilege, to a greater extent, one of the two tendencies over the other [27]. It goes without saying that certain mutation strategies perform better than others on some optimization problems, while the opposite may be true on different problems. This leads to the idea of combining different strategies within a single search process, in order to obtain a more robust and better-performing algorithm, capable of successfully tackling a wider range of problems [28]. In the same fashion as described in Sec. IV for the DE control parameters, a self-adaptation of the mutation strategy could be devised [29]; unfortunately, this approach suffers from the fact that greedy (i.e., exploitative) strategies tend to prevail over the others in just a few generations. _EOS_, instead, adopts a synchronous island-model paradigm as a way to concurrently handle different mutation strategies within one optimization run. An island-based EA relies on the definition of several sub-populations, or "tribes", each one evolving independently of the others, according to its own (preassigned) algorithm. Each tribe lives on a separate "island", and all the islands are arranged in a cluster, or "archipelago", of arbitrary topology. Information sharing among the islands occurs only in sporadic events called "migrations", during which the best individuals move from an island to the neighboring ones, in accordance with the pre-defined archipelago topology [30]. Numerical tests of multi-population DE-based algorithms on a broad range of low-to-high dimensional optimization problems [31, 32] showed an improvement in performance with respect to sequential versions of the algorithm. The island-model paradigm can be exploited to combine the convergence and search properties of different DE algorithms. The idea is that, by using heterogeneous mutation strategies on different islands, it is possible to achieve a correct balance between search space exploration and exploitation, and to perform better than the best of the strategies involved on that particular problem [33]. As an example, in the radially-arranged 16-island archipelago in Fig. 3, inner rings favor exploration, featuring mutation strategies 1 and 3, while outer rings favor exploitation, featuring strategies 2 and 4. The island-model optimization process, with its alternating phases of extended, isolated computation and occasional communications, can be easily parallelized in a message-passing multi-processor environment. The MPI message-passing standard [34] is exploited in _EOS_ for this purpose: each of the \\(N_{i}\\) islands corresponds to a process, and is assigned to a different node/CPU of a cluster. The evolution phase proceeds in parallel, until communications between processes are performed during migrations. More precisely, a synchronous migration of the best \\(N_{b}=\\rho_{mig}N_{p}\\) individuals occurs between the connected islands every \\(N_{mig}\\) generations, with a probability \\(\\phi_{mig}\\). The \\(N_{b}\\) best individuals are copied into the destination islands, where they replace the \\(N_{b}\\) worst individuals. Typical values of these parameters are: \\(\\rho_{mig}=0.05\\div 0.1\\), \\(N_{mig}=100\\), \\(\\phi_{mig}=0.5\\div 1\\).
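For illustration, a serial (single-process) sketch of one synchronous migration event on a ring of islands is given below; in _EOS_ the same exchange is carried out with MPI point-to-point communications, as discussed next. The function name and the plain ring topology are our simplifying choices.

```python
import numpy as np

def migrate_ring(islands, fitness, rho_mig=0.1, phi_mig=1.0, rng=None):
    """Best N_b individuals of each island replace the worst N_b of the next."""
    rng = rng or np.random.default_rng()
    n_b = max(1, int(rho_mig * len(islands[0])))
    # Collect emigrants first, so that the exchange is synchronous.
    emigrants = []
    for pop, fit in zip(islands, fitness):
        best = np.argsort(fit)[:n_b]              # lowest fitness = best
        emigrants.append((pop[best].copy(), fit[best].copy()))
    for k, (pop, fit) in enumerate(zip(islands, fitness)):
        if rng.random() > phi_mig:                # each event fires with prob. phi_mig
            continue
        src_pop, src_fit = emigrants[(k - 1) % len(islands)]
        worst = np.argsort(fit)[-n_b:]            # highest fitness = worst
        pop[worst], fit[worst] = src_pop, src_fit
    return islands, fitness
```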
Communications are easily handled through the point-to-point send/receive functions of the MPI library. In addition, MPI allows for the handy implementation of Cartesian or graph process topologies, which define the "neighbors" of any process to/from which it can send/receive information; by exploiting this capability, it is possible to arrange the archipelago in any topology in a straightforward manner. Besides classical topologies, such as the ring topology or the Von Neumann grid topology [35], _EOS_ implements as a default a peculiar radial topology, where migration tides alternate their direction at each event, as presented in Fig. 3.

Fig. 3: Migration tide: forward (a) and backward (b), for the 16-island case.

This coarse-grained parallelization, in which each tribe is mapped to a CPU, can be combined with a fine-grained parallelism, in which each individual (or group of individuals) of each tribe is assigned to a different core of the CPU [36]. The fine-grained parallelization of each sub-population is realized in _EOS_ through OpenMP, a set of compiler directives and library routines used to perform shared-memory parallelism, e.g., between the cores of a single, multi-core CPU; so, if \\(n_{c}\\) cores are available for each CPU of the cluster, \\(N_{p}/n_{c}\\) individuals are mapped to each core through an OpenMP directive.

## IX Application to real-world space trajectory optimization problems

This section presents the main results achieved by _EOS_ when applied to challenging, real-world space trajectory optimization problems. More detailed information on how the three optimization problems that follow have been formulated (that is, their objective functions, design variables, bounding intervals, and constraints) can be found in the corresponding references [2, 3, 6].

### _Europa Tomography Probe Mission Design_

In 2015, a scientific and engineering team at Sapienza University of Rome, in collaboration with Imperial College London, carried out a feasibility study for a probe that could be launched as a piggyback payload on the NASA Europa Clipper mission, to enhance its scientific return [37]. An innovative mission concept was proposed, in which a small Europa orbiter, named Europa Tomography Probe (ETP), hosting just one magnetometer and a transponder required for the Inter-Satellite Link (ISL) with the mother spacecraft, proves capable of providing crucial information on the moon's interior structure, such as the depth and conductivity of the subsurface ocean. Moreover, the ISL supports the reconstruction of the mother spacecraft's orbit, significantly improving the accuracy of the topographic reconstruction of Europa's surface [38]. The optimization of the ETP capture trajectory was crucial for the validity of the overall proposal, as a tight requirement on the available propellant mass was enforced by the need to stay within a total probe mass specified by NASA. A mission strategy based on the \\(v_{\\infty}\\) leveraging concept [39] and on the use of resonant orbits to exploit multiple gravity assists from Europa was thus proposed [2].
## IX Application to real-world space trajectory optimization problems

This section presents the main results achieved by _EOS_ when applied to challenging, real-world space trajectory optimization problems. More detailed information about how the three optimization problems that follow have been formulated (that is, their objective functions, design variables, bounding intervals and constraints) is reported in the corresponding references [2, 3, 6].

### _Europa Tomography Probe Mission Design_

In 2015, a scientific and engineering team at Sapienza University of Rome, in collaboration with Imperial College London, carried out a feasibility study for a probe that could be launched as a piggyback payload on the NASA Europa Clipper mission, to enhance its scientific return [37]. An innovative mission concept was proposed, in which a small Europa orbiter, named Europa Tomography Probe (ETP), hosting just one magnetometer and a transponder required for the Inter-Satellite Link (ISL) with the mother spacecraft, was shown to be capable of providing crucial information on the moon's interior structure, such as the depth and conductivity of the subsurface ocean. In addition, the ISL supports the reconstruction of the mother spacecraft's orbit, significantly improving the accuracy of the topographic reconstruction of Europa's surface [38]. The optimization of the ETP capture trajectory was crucial for the validity of the overall proposal, as a tight requirement on the available propellant mass was enforced by the need to stay within a total probe mass specified by NASA. A mission strategy based on the \(v_{\infty}\) leveraging concept [39] and on the use of resonant orbits to exploit multiple gravity-assists from Europa was thus proposed [2].

Under the assumptions of a patched-conic model, in which the radius of the sphere of influence of the secondary bodies and the travel time inside these regions are negligible, and of an impulsive-thrust model, the problem can be posed as an unconstrained optimization problem, also known as Multiple Gravity-Assist with One Deep Space Maneuver (MGA-1DSM) [40], where the objective is to minimize the overall \(\Delta V\), that is, the cumulative velocity variation performed by means of the on-board propulsive system. A velocity formulation [41] of the problem was exploited: the overall capture trajectory is made up of a series of body-to-body legs, each starting with a flyby and composed of two ballistic arcs, a _propagation arc_ and a _Lambert arc_, joined by an impulsive maneuver. The \(k\)-th leg of the trajectory is parameterized by four variables, \(\{r_{x,k},\beta_{k},\Delta T_{k},\eta_{k}\}\), which represent, respectively, the flyby radius, the flyby plane orientation, the leg flight time, and the fraction of the leg flight time at which the DSM occurs. By properly selecting the bounds for the variables \(\Delta T_{k}\) and \(\eta_{k}\), it is possible to enforce a given resonance of the probe with a Jovian moon on the \(k\)-th leg. An initial Lambert arc, which moves the probe from the assigned initial condition (the apoapsis of a Jovian orbit in \(4{:}1\) resonance with Europa) to the first encounter with Europa, completes the formulation. This initial leg is described by three variables: the probe release epoch \(t_{0}\), and the flight time \(\Delta T_{0}\) and flight angle \(\Delta\theta\) along the leg.

MGA-1DSM problems are typically characterized by a huge number of local optima [42]; in this respect, both the pruning-by-clustering method (Sec. VI) and the island model (Sec. VIII) are paramount to the success of the optimization procedure through _EOS_. In particular, Fig. 4 compares the results obtained on Mission C of Ref. [2] by performing 50 runs of a mono-population, self-adaptive DE algorithm with and without the pruning-by-clustering procedure. The evolution plots reported are obtained by considering, at any generation number, the best fitness among those found by the separate runs. The following algorithm settings were used: \(N_{i}=1\), \(N_{p}=64\), \(N_{G}=10000\), \(N_{r}=50\), \(\rho^{0}_{pr}=0.3\), \(\Delta\rho_{pr}=0.1\), \(N_{pr}=3\) and \(N^{0}_{G,pr}=0.4N_{G}\).

Fig. 4: Best fitness trend obtained with and without the use of the pruning procedure.

The overall-best capture trajectory obtained for ETP is reported in Fig. 5; it allows saving approximately \(1900\,\mathrm{m}\,\mathrm{s}^{-1}\) of \(\Delta V\) with respect to a direct insertion maneuver. The trajectory exploits eight flybys of Europa and one flyby of Ganymede, for a total of 39 design variables. The result was obtained with the following algorithm settings: \(N_{i}=8\), \(N_{p}=512\), \(N_{G}=10000\).

Fig. 5: ETP capture trajectory.
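To make the parameterization above concrete, the sketch below decodes a flat MGA-1DSM design vector into the initial-leg variables \((t_{0},\Delta T_{0},\Delta\theta)\) plus the four variables of each subsequent leg; the 3 + 4k layout follows the description in the text, while the container class and variable names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Leg:
    r_fb: float     # flyby radius
    beta: float     # flyby-plane orientation
    dT: float       # leg flight time
    eta: float      # fraction of dT at which the DSM occurs

def decode_mga1dsm(x):
    """Split a flat design vector [t0, dT0, dtheta, (r_fb, beta, dT, eta)*k]
    into the initial Lambert leg and the k flyby legs."""
    assert (len(x) - 3) % 4 == 0, "expected 3 + 4k design variables"
    initial_leg = (x[0], x[1], x[2])            # t0, dT0, dtheta
    legs = [Leg(*x[i:i + 4]) for i in range(3, len(x), 4)]
    return initial_leg, legs

# The 39-variable ETP solution corresponds to 3 + 4*9 variables (nine flyby legs).
initial, legs = decode_mga1dsm([0.0, 30.0, 1.2] + [2.0e6, 0.5, 10.0, 0.4] * 9)
```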
### _Ascent Trajectory Optimization of VEGA Launch Vehicles_

Since 2017, _EOS_ has been adopted as a solver for the optimization of the ascent trajectory of the VEGA Launch Vehicle (LV) and of its evolution VEGA-C, in the framework of an independent support and cross-check activity for ESA-ESRIN [43]. By applying a control discretization approach, the problem of optimizing the ascent trajectory of an LV from the launch pad to a target orbit can be posed as a global constrained optimization problem. This problem is highly sensitive to the values of the optimization variables, and the trajectory often cannot be evaluated over the whole search space; therefore, the use of a derivative-free optimization algorithm is mandatory.

Compared with the previous application, here the constraint handling plays a major role in determining the success or failure of the optimization process. In particular, both the terminal constraints on the final LV orbital parameters and the path constraints, which limit the dynamic pressure, the bending and axial loads experienced by the rocket, and the heat flux on the payload after fairing jettisoning, must be considered. As a result, the admissible portion of the search space is quite small. Traditional constraint-handling approaches for EAs based on penalty functions [44] are not very effective in this case, because they tend to create, and get the search trapped into, a number of spurious sub-optimal solutions; moreover, they rely strongly on a proper choice of the constraint weighting factors. Barrier methods are of no help either, as the feasible set in the search domain is quite limited and disconnected. On the other hand, the \(\varepsilon\)-constrained method adopted in _EOS_ is very effective on this kind of problem: starting from a sufficiently high value of the tolerance \(\varepsilon\), \(\varepsilon\)-feasible solutions (i.e., solutions that meet the imposed constraints within a tolerance \(\varepsilon\)) can be obtained very early in the search. Then, by slowly and monotonically decreasing the parameter \(\varepsilon\), the EA is able to keep the solution feasible and, at the same time, attain better values of the fitness. As a result, at the end of the search, when \(\varepsilon\) is close to zero, a feasible, good-quality solution is returned by the algorithm. This process is shown in Fig. 6 for the VEGA LV, using the payload mass \(M_{u}\) as the objective function (to be maximized). A typical run for a low/medium-fidelity ascent trajectory optimization with 22 variables and 9 nonlinear inequality constraints (\(N_{i}=1\), \(N_{p}=128\), \(N_{G}=3000\)) requires about 5 minutes on a workstation with an Intel Core i9-9900K @ 3.60 GHz, using a single MPI process and up to 16 OpenMP threads.
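The \(\varepsilon\)-constrained selection rule just described can be sketched as follows; the comparison operator and the monotone schedule follow the \(\varepsilon\)-constrained method of Ref. [26], while the schedule constants are illustrative placeholders rather than the values used in _EOS_.

```python
def eps_better(f_a, viol_a, f_b, viol_b, eps):
    """Epsilon-level comparison for minimization: if both candidates violate
    the constraints by no more than eps, compare by fitness; otherwise
    prefer the smaller constraint violation."""
    if (viol_a <= eps and viol_b <= eps) or viol_a == viol_b:
        return f_a < f_b
    return viol_a < viol_b

def eps_schedule(eps0, gen, n_control, p=5.0):
    """Monotonically decrease eps from eps0 down to 0 over n_control generations."""
    return 0.0 if gen >= n_control else eps0 * (1.0 - gen / n_control) ** p

# Early in the search (large eps) the fitness dominates the comparison;
# as eps shrinks, feasibility progressively takes over.
eps = eps_schedule(10.0, gen=100, n_control=3000)
print(eps_better(1.0, 0.3, 2.0, 0.0, eps))
```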
An integrated optimization of the ascent trajectory and of the first-stage Solid Rocket Motor (SRM) design of a VEGA-like multi-stage launch vehicle was also carried out through _EOS_, where the objective was to determine the optimal internal pressure law during the first-stage SRM operation, together with the optimal thrust direction and other relevant flight parameters of the entire ascent trajectory, so as to maximize the payload injected into a target orbit. Multiple design constraints, involving the solid rocket motor or dependent on the actual flight trajectory, were also enforced. Here the problem was even more complex, because of the large number of constraints and of its multi-disciplinary nature (the design variables represent very distant physical quantities); nevertheless, _EOS_ was still able to attain competitive results [3]. In this case, a typical high-fidelity optimization (\(N_{i}=8\), \(N_{p}=136\), \(N_{G}=10000\)), with 22 variables and 12 nonlinear inequality constraints, has a run time of about 30 minutes on the KNL partition of CINECA's supercomputer Marconi (3600 compute nodes, each with a 68-core Intel Knights Landing CPU @ 1.40 GHz and 16 GB MCDRAM + 96 GB RAM, connected through an Intel OmniPath network @ 100 Gb/s), using 8 nodes and 68 CPU cores per node.

### _Active Debris Removal Missions_

An Active Debris Removal (ADR) mission can be seen as a peculiar instance of a multi-rendezvous (MRR) trajectory, where an active (chaser) spacecraft is asked to visit (that is, to perform a rendezvous with) a certain number of targets (space debris), making the best use of the on-board propellant. The optimization of a multi-target rendezvous trajectory can be posed as a mixed-integer NLP problem, involving the simultaneous optimization of both integer variables (defining the debris encounter sequence) and real-valued variables (describing the spacecraft trajectory from one debris to the next and the encounter epochs). A bi-level approach can be pursued for solving this NP-hard problem, by isolating i) an outer level, which concerns the definition of the encounter sequence and of the approximate encounter epochs, based on a rough but fast evaluation of each debris-to-debris cost (a _heuristic_), and ii) an inner level, which deals with the optimization of each body-to-body transfer in full detail, assuming that the departure and arrival bodies are assigned; the encounter epochs are also adjusted. Given a proper heuristic to estimate the cost of each transfer leg (without solving the full optimization problem), the two levels can be solved sequentially, as sketched below: the outer-level combinatorial problem is isolated and solved first, for example using either a Genetic Algorithm [5] or Simulated Annealing [6]; its solution is then used as an initial guess for the inner-level NLP problem, that is, the actual trajectory optimization problem. If a Keplerian dynamical model is assumed for both the targets and the chaser, the inner-level problem results in an unconstrained problem of real-valued design variables with the total \(\Delta V\) as the cost function, which can be tackled by _EOS_. When the \(N\) targets to remove are assumed to be on coplanar orbits, each debris-to-debris trajectory can be parameterized by \(7\) variables, one being the final encounter time and the other \(6\) identifying the trajectory arcs making up the transfer; hence, to optimize the whole chaser trajectory, _EOS_ must deal with a \(7N\)-variable problem. Because of the high problem dimension (for typical values of \(N\)), the island-model parallelization of _EOS_ is decisive in attaining good-quality results in a limited amount of time. Figure 7 shows a few results for a \(15\)-debris ADR mission (\(105\) variables) as a function of the number of islands \(N_{i}\), always considering approximately the same maximum number of FES. The corresponding population size and number of generations for each run are reported in Table I. The parameters \(\phi_{mig}=0.5\), \(N_{mig}=100\) and \(\rho_{mig}=0.05\) were used in these tests. For the sake of comparison, the results obtained through a single-island version of _EOS_, with a self-adaptive mutation strategy, are also reported. All computations were performed on the KNL partition of CINECA's supercomputer Marconi, whose characteristics are reported in Sec. IX-B.
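A minimal skeleton of this bi-level decomposition is given below: a simulated-annealing-style outer loop permutes the visiting sequence using a cheap leg-cost heuristic, after which the winning sequence would be handed to the inner continuous optimizer. The heuristic, the neighborhood move and the cooling schedule are hypothetical stand-ins for the actual formulations of Refs. [5, 6].

```python
import math
import random

def outer_level(debris_ids, leg_cost, n_iter=5000, t0=1.0, seed=0):
    """Simulated-annealing search over the encounter sequence, scored by a
    rough heuristic leg cost (no full trajectory optimization)."""
    rng = random.Random(seed)
    cost = lambda s: sum(leg_cost(a, b) for a, b in zip(s, s[1:]))
    cur = list(debris_ids)
    cur_c = cost(cur)
    best, best_c = cur[:], cur_c
    for k in range(n_iter):
        temp = max(t0 * (1.0 - k / n_iter), 1e-9)           # linear cooling
        i, j = sorted(rng.sample(range(len(cur)), 2))
        cand = cur[:i] + cur[i:j + 1][::-1] + cur[j + 1:]   # 2-opt style move
        c = cost(cand)
        if c < cur_c or rng.random() < math.exp((cur_c - c) / temp):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand[:], c
    return best, best_c

# Toy heuristic: the leg cost grows with the "distance" between debris indices.
sequence, dv_estimate = outer_level(range(15), leg_cost=lambda a, b: abs(a - b))
# The 7*N-variable inner problem (105 variables for N = 15) would then be
# solved by the continuous optimizer, with `sequence` as the visiting order.
```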
The reduction of the overall run time with the number of islands is apparent in Fig. 7(b), being a direct consequence of the parallel implementation of the algorithm. Moreover, as shown in Fig. 7(a), the quality of the obtained solutions improves with the number of islands, up to \(N_{i}=24\); from that point on, the population size of each island probably becomes too small for the problem at hand.

\begin{table} \begin{tabular}{c c c|c c c} \hline \multicolumn{3}{l|}{Algorithm parameters} & \multicolumn{3}{c}{Fitness stats, [km/s]} \\ \hline \(N_{i}\) & \(N_{p}\) & \(N_{G}\) & \(mean\) & \(std\) & \(best\) \\ \hline 1 & 512 & 20000 & 0.8400 & 0.0755 & 0.7310 \\ 4 & 256 & 10000 & 0.8165 & 0.0842 & 0.7173 \\ 8 & 128 & 10000 & 0.7577 & 0.0444 & 0.6779 \\ 12 & 85 & 10000 & 0.7575 & 0.0408 & 0.6900 \\ 16 & 64 & 10000 & 0.7475 & 0.0343 & 0.6957 \\ 20 & 51 & 10000 & 0.7369 & 0.0458 & 0.6668 \\ 24 & 43 & 10000 & 0.7231 & 0.2282 & 0.6935 \\ 28 & 37 & 10000 & 0.7389 & 0.0437 & 0.6816 \\ 32 & 32 & 10000 & 0.7381 & 0.0318 & 0.6768 \\ \hline \end{tabular} \end{table} TABLE I: Results of the application of _EOS_ to the \(15\)-target ADR mission, varying the number of islands.

Fig. 6: Typical trend of the fitness \(f\), the constraint violation \(\Psi_{max}\) and the \(\varepsilon\) level during a run of _EOS_ for an ascent trajectory optimization problem.

## X Conclusion

This paper presents the evolutionary optimization code _EOS_ developed at Sapienza University of Rome, which represents a state-of-the-art DE-based algorithm. After recalling the traditional DE algorithm, the main features added to _EOS_ to enhance the performance of DE were described in detail. These features include: the self-adaptation scheme for the DE control parameters, which avoids a manual tuning of such quantities and lets them evolve automatically along the search; a partial restart ("epidemic") mechanism, which enhances population diversity and avoids the danger of the search getting trapped in local optima; a pruning technique based on clustering, which focuses the search on the most promising regions of the solution space; an \(\varepsilon\)-constrained method, which makes the algorithm capable of efficiently tackling constrained problems; and a parallel synchronous island-model paradigm, which balances the "explorative" and "exploitative" tendencies of the algorithm and, at the same time, distributes the computational load over a highly parallel environment, with a reduction of computational times and an increase in performance as immediate results. _EOS_ proved successful when applied to hard, real-world (unconstrained and constrained) problems of space trajectory design, characterized by many variables and the presence of several local minima, such as multiple gravity-assist trajectories, the ascent trajectory of a rocket, and multi-target rendezvous missions.

## References

* [1] L. Casalino, G. Colasurdo, A. Zavoli, and M. Berga, "GTOC8: Results and methods of team 22," _Advances in the Astronautical Sciences_, vol. 158, pp. 4291-4300, 2016.
* [2] L. Federici, A. Zavoli, and G. Colasurdo, "Preliminary capture trajectory design for Europa Tomography Probe," _International Journal of Aerospace Engineering_, vol. 2018, 2018.
* [3] L. Federici, A. Zavoli, G. Colasurdo, L. Mancini, and A. Neri, "Integrated optimization of ascent trajectory and SRM design of multistage launch vehicles," _Advances in the Astronautical Sciences_, vol. 168, pp. 733-752, 2019.
* [4] B.
Benedikter, A. Zavoli, and G. Colasurdo, "A convex approach to rocket ascent trajectory optimization," in _8th European Conference for Aeronautics and Space Sciences (EUCASS)_, Madrid, Spain, 1-4 July 2019.
* [5] L. Federici, A. Zavoli, and G. Colasurdo, "Impulsive multi-rendezvous trajectory design and optimization," in _8th European Conference for Aeronautics and Space Sciences (EUCASS)_, Madrid, Spain, 1-4 July 2019.
* [6] L. Federici, A. Zavoli, and G. Colasurdo, "A time-dependent TSP formulation for the design of an active debris removal mission using simulated annealing," in _Astrodynamics Specialist Conference_, vol. AAS 19-701, Portland, Maine, 11-15 Aug. 2019.
* [7] R. Storn and K. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," _Journal of Global Optimization_, vol. 11, no. 4, pp. 341-359, 1997.
* [8] J. Vesterstrøm and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in _Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753)_, vol. 2. IEEE, 2004, pp. 1980-1987.
* [9] S. Das, S. S. Mullick, and P. N. Suganthan, "Recent advances in differential evolution – an updated survey," _Swarm and Evolutionary Computation_, vol. 27, pp. 1-30, 2016.
* [10] M. Locatelli and M. Vasile, "(Non) convergence results for the differential evolution method," _Optimization Letters_, vol. 9, no. 3, pp. 413-425, 2015.
* [11] S. Das and P. N. Suganthan, "Differential evolution: a survey of the state-of-the-art," _IEEE Transactions on Evolutionary Computation_, vol. 15, no. 1, pp. 4-31, 2011.
* [12] C. A. Coello Coello, "A survey of constraint handling techniques used with evolutionary algorithms," _Lania-RI-99-04, Laboratorio Nacional de Informatica Avanzada_, 1999.
* [13] A. E. Eiben, R. Hinterding, and Z. Michalewicz, "Parameter control in evolutionary algorithms," _IEEE Transactions on Evolutionary Computation_, vol. 3, no. 2, pp. 124-141, 1999.
* [14] J. Lampinen, I. Zelinka _et al._, "On stagnation of the differential evolution algorithm," in _Proceedings of MENDEL_, 2000, pp. 76-83.
* [15] A. E. Eiben and C. A. Schippers, "On evolutionary exploration and exploitation," _Fundamenta Informaticae_, vol. 35, no. 1-4, pp. 35-50, 1998.
* [16] M. Ali, "Differential evolution with generalized differentials," _Journal of Computational and Applied Mathematics_, vol. 235, no. 8, pp. 2205-2216, 2011.
* [17] E. Mezura-Montes, J. Velazquez-Reyes, and C. A. Coello Coello, "A comparative study of differential evolution variants for global optimization," in _Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation_. ACM, 2006, pp. 485-492.

Fig. 7: Boxplots of run time and solution quality as a function of the number of islands \(N_{i}\) for a \(15\)-target ADR mission. Statistics are evaluated over \(20\) independent runs, with a maximum FES \(N_{F}=10240000\).

* [18] J. Liu, "On setting the control parameter of the differential evolution method," in _Proceedings of the 8th International Conference on Soft Computing (MENDEL 2002)_, 2002, pp. 11-18.
* [19] J. Brest, B. Boskovic, S. Greiner, V. Zumer, and M. S. Maučec, "Performance comparison of self-adaptive and adaptive differential evolution algorithms," _Soft Computing_, vol. 11, no. 7, pp. 617-629, 2007.
* [20] F. Peng, K. Tang, G. Chen, and X.
Yao, \"Multi-start iade with knowledge transfer for numerical optimization,\" in _2009 IEEE Congress on Evolutionary Computation_. IEEE, 2009, pp. 1889-1895. * [21] M. Vasile, E. Minisci, and M. Locatelli, \"An inflationary differential evolution algorithm for space trajectory optimization,\" _IEEE Transactions on Evolutionary Computation_, vol. 15, no. 2, pp. 267-281, 2011. * [22] M. Vasile and M. Locatelli, \"A hybrid multiagent approach for global trajectory optimization,\" _Journal of Global Optimization_, vol. 44, no. 4, pp. 461-479, 2009. * [23] B. Ghojogh, S. Sharifian, and H. Mohammadzade, \"Tree-based optimization: A meta-algorithm for metaheuristic optimization,\" _arXiv preprint arXiv:1809.09284_, 2018. * [24] D. Izzo, \"Global optimization and space pruning for spacecraft trajectory design,\" _Spacecraft Trajectory Optimization_, vol. 1, pp. 178-200, 2010. * [25] F. Jimenez and J. L. Verdejay, \"Evolutionary techniques for constrained optimization problems,\" 1999. * [26] T. Takahama and S. Sakai, \"Constrained optimization by the \\(\\varepsilon\\) constrained differential evolution with an archive and gradient-based mutation,\" in _Evolutionary Computation (CEC), 2010 IEEE Congress on_. IEEE, 2010, pp. 1-9. * [27] V. Feokitstov, _Differential evolution_. Springer, 2006. * [28] M. G. Epitropakis, V. P. Plagianakos, and M. N. Vrahatis, \"Balancing the exploration and exploitation capabilities of the differential evolution algorithm,\" in _2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence)_. IEEE, 2008, pp. 2686-2693. * [29] A. K. Qin, V. L. Huang, and P. N. Suganthan, \"Differential evolution algorithm with strategy adaptation for global numerical optimization,\" _IEEE transactions on Evolutionary Computation_, vol. 13, no. 2, pp. 398-417, 2008. * [30] W. Martin, J. Lienig, and J. P. Ochoon, \"C6. 3 island (migration) models: evolutionary algorithms based on punctuated equilibria,\" _B ack et al. BFM97], Seiten C_, vol. 6, pp. 101-124, 1997. * [31] D. K. Tasoulis, N. G. Pavlidis, V. P. Plagianakos, and M. N. Vrahatis, \"Parallel differential evolution,\" in _Proceedings of the 2004 congress on evolutionary computation (IEEE Cat. No. 04TH8753)_, vol. 2. IEEE, 2004, pp. 2023-2029. * [32] M. Di Carlo, M. Vasile, and E. Minisci, \"Adaptive multi-population inflationary differential evolution,\" _Soft Computing_, pp. 1-31, 2019. * [33] D. Izzo, M. Rucinski, and C. Ampatzis, \"Parallel global optimisation meta-heuristics using an asynchronous island-model,\" in _2009 IEEE Congress on Evolutionary Computation_. IEEE, 2009, pp. 2301-2308. * [34] L. Clarke, I. Glendinning, and R. Hempel, \"The mpi message passing interface standard,\" in _Programming environments for massively parallel distributed systems_. Springer, 1994, pp. 213-218. * [35] N. Lynn, M. Z. Ali, and P. N. Suganthan, \"Population topologies for particle swarm optimization and differential evolution,\" _Swarm and evolutionary computation_, vol. 39, pp. 24-35, 2018. * [36] E. Alba and M. Tomassini, \"Parallelism and evolutionary algorithms,\" _IEEE Transactions on Evolutionary Computation_, vol. 6, no. 5, pp. 443-462, Oct 2002. * spacecraft design,\" 2016. * [38] M. Di Benedetto, L. Imperri, D. Durante, M. Dougherty, L. Iess, V. Notaro, and P. Racioppa, \"Augmenting nasa europa clipper by a small probe: Europa tomography probe (etp) mission concept,\" _Acta Astronautica_, vol. 165, pp. 211-218, 2019. * [39] J. A. Sims and J. M. 
Longuski, \"Analysis of \\(v_{\\infty}\\) leveraging for interplanetary missions,\" in _AIAA/AAS Astrodynamics Conference, Scottsdale, AZ_, 1994, pp. 505-513. * [40] M. Vasile and P. D. Pascale, \"Preliminary design of multiple gravity-assist trajectories,\" _Journal of Spacecraft and Rockets_, vol. 43, no. 4, pp. 794-805, 2006. * [41] M. Ceriotti, \"Global optimisation of multiple gravity assist trajectories,\" Ph.D. dissertation, 2010. * [42] D. Izzo, V. M. Becerra, D. R. Myatt, S. J. Nasuto, and J. M. Bishop, \"Search space pruning and global optimisation of multiple gravity assist spacecraft trajectories,\" _Journal of Global Optimization_, vol. 38, no. 2, pp. 283-296, 2007. * Q2 2017) Addendum, VG-SOW-0-30005-ESA_. * [44] J. T. Richardson, M. R. Palmer, G. E. Liepins, and M. R. Hilliard, \"Some guidelines for genetic algorithms with penalty functions,\" in _Proceedings of the 3rd international conference on genetic algorithms_. Morgan Kaufmann Publishers Inc., 1989, pp. 191-197.
This paper presents the main characteristics of the evolutionary optimization code _EOS_ (Evolutionary Optimization at Sapienza) and its successful application to challenging, real-world space trajectory optimization problems. _EOS_ is a global optimization algorithm for constrained and unconstrained problems of real-valued variables. It implements a number of improvements to the well-known Differential Evolution (DE) algorithm, namely, a self-adaptation of the control parameters, an epidemic mechanism, a clustering technique, an \(\varepsilon\)-constrained method to deal with nonlinear constraints, and a synchronous island-model to handle multiple populations in parallel. The results reported prove that _EOS_ is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms when applied to high-dimensional or highly-constrained space trajectory optimization problems.

Keywords: global optimization, evolutionary optimization, constrained optimization, differential evolution, self-adaptation, parallel computing, island-model, space trajectory optimization
# Learning a Dilated Residual Network for SAR Image Despeckling

Qiang Zhang 1, Qiangqiang Yuan 1, Jie Li 1, Zhen Yang 2, Xiaoshuang Ma 3

1 School of Geodesy and Geomatics, Wuhan University, Wuhan 430079, China (Q.Z., Q.Y., J.L.)
2 School of Resource and Environmental Science, Wuhan University, Wuhan 430079, China (Z.Y.)
3 School of Resources and Environmental Engineering, Anhui University, Hefei 230000, China (X.M.)
Correspondence: Qiangqiang Yuan (Q.Y.) and Jie Li (J.L.); Tel.: +86-159-7217-1792 (Q.Y.)

## 1 Introduction

Synthetic aperture radar (SAR) is a coherent imaging sensor that can acquire massive amounts of high-quality surface data over wide areas. Moreover, with the ability to operate at night and in adverse weather conditions such as thin clouds and haze, SAR has gradually become a significant source of remote sensing data in the fields of geographic mapping, resource surveying, and military reconnaissance. However, SAR images are inherently affected by multiplicative noise, i.e., speckle noise, which is caused by the coherent nature of the scattering phenomena [1]. The presence of speckle severely degrades the quality of SAR images, and greatly reduces their usability in SAR image interpretation, retrieval, and other applications [2-4]. Consequently, SAR image speckle reduction is an essential preprocessing step and has become a hot research topic.

For the purpose of removing the speckle noise of SAR images, researchers first proposed spatial linear filters such as the Lee filter [5], the Kuan filter [6] and the Frost filter [7]. These methods usually assume that the filtered values have a linear relationship with the original image, searching for a relevant combination of the central pixel intensity in a moving window with the mean intensity of the filter window. Thus, the spatial linear filters achieve a trade-off between averaging in homogeneous areas and a constant all-pass identity filter in areas containing edges. The results confirmed that spatial-domain filters are adept at suppressing speckle noise around some critical features. However, due to the nature of local processing, the spatial linear filter methods often fail to integrally preserve edges and details, and exhibit the following deficiencies: 1) they are unable to preserve the mean value, especially when the equivalent number of looks (ENL) of the original SAR image is small; 2) strongly reflective targets such as points and small surface features are easily blurred or erased; and 3) speckle noise in dark scenes is not removed [8].

Besides the spatial-domain filters above, wavelet theory has also been applied to speckle reduction. Starck et al. [9] employed the ridgelet transform as a component step, and implemented curvelet sub-bands using a filter bank of discrete wavelet transform (DWT) filters for image denoising. For the case of speckle noise, Solbø et al. [10] utilized the DWT of the log-transformed speckled image in homomorphic filtering, which converges empirically in a self-adaptive strategy and is calculated in the Fourier space.
In summary, the major weaknesses of this type of approach concern the preservation of the backscatter mean in homogeneous areas, the preservation of details, and the introduction of artifacts, such as ring effects, into the results [11].

Aimed at overcoming these deficiencies, the nonlocal means (NLM) algorithm [12-14] provided a breakthrough in detail preservation for SAR image despeckling. The basic idea of the NLM-based methods [12] is that natural images have self-similarity, with similar patches repeating over and over throughout the whole image. For SAR images, Deledalle et al. [13] modified the choice of weights, which can be iteratively determined based on both the similarity between noisy patches and the similarity of patches extracted from the previous estimate. Besides, Parrilli et al. [14] used the local linear minimum mean square error (LLMMSE) criterion and the undecimated wavelet transform, considering the peculiarities of SAR images, allowing for a sparse Wiener filtering representation and an effective separation between the original signal and speckle noise through predefined thresholding; this has become one of the most effective SAR despeckling methods. However, the low computational efficiency of the similar-patch searching restricts its application.

In addition, variational methods [15-18] have gradually been utilized for SAR image despeckling because of their stability and flexibility; they break through the traditional idea of filters by solving an energy optimization problem. The despeckling task is cast as the inverse problem of recovering the original noise-free image based upon reasonable assumptions or prior knowledge of the noise observation model with a log-transform, such as the total variation (TV) model [15], sparse representation [16], and so on. Although these variational methods achieve a good reduction of speckle noise, the result usually depends on the choice of the model parameters and prior information, and is often time-consuming. Moreover, the variational methods cannot accurately describe the distribution of speckle noise, which also constrains the performance of speckle noise reduction.

In general, although many SAR despeckling methods have been proposed, they sometimes fail to preserve sharp features in regions of complicated texture, or even create block artifacts in the filtered image. In this paper, considering that image speckle noise can be expressed more accurately through non-linear models than linear models, and to overcome the above-mentioned limitations of the linear models, we propose a novel deep neural network based approach for SAR image despeckling, which learns a non-linear end-to-end mapping between the speckled and clean SAR images with a dilated residual network (SAR-DRN). Our despeckling model employs dilated convolutions, which can both enlarge the receptive field and maintain the filter size and layer depth with a lightweight structure. Furthermore, skip connections are added to the despeckling model to maintain the image details and avoid the vanishing gradient problem. Compared with the traditional despeckling methods, in both simulated and real SAR experiments, the proposed approach shows a state-of-the-art performance in both quantitative and visual assessments, especially for strong speckle noise.

The rest of this paper is organized as follows. The SAR image speckle noise degradation model and the related deep convolutional neural network methods are introduced in Section 2.
The network architecture of the proposed SAR-DRN and the details of its structure are described in Section 3. The despeckling results of both simulated and real SAR image experiments are then presented in Section 4. Finally, conclusions and future research directions are summarized in Section 5.

## 2 Related Work

### SAR Image Speckle Noise Degradation Model

For SAR images, the main reason for the degradation of image quality is multiplicative speckle noise. Differing from the additive white Gaussian noise (AWGN) found in natural or hyperspectral images [19, 20], speckle noise is described by the multiplicative noise model:

\[\mathcal{Y}=\mathcal{X}\cdot n \tag{1}\]

where \(\mathcal{Y}\) is the speckled image, \(\mathcal{X}\) is the clean image, and \(n\) represents the speckle noise. It is well known that, for a SAR amplitude image, the speckle follows a _Gamma_ distribution [21]:

\[p(n)=\frac{L^{L}n^{L-1}\exp(-nL)}{\Gamma(L)} \tag{2}\]

where \(L\geq 1\), \(n\geq 0\), \(\Gamma(\cdot)\) is the _Gamma_ function, and \(L\) is the equivalent number of looks (_ENL_), as defined in (3); the _ENL_ is usually regarded as the quantitative evaluation index for real SAR image despeckling experiments in homogeneous areas:

\[ENL=\frac{\overline{x}^{2}}{\mathrm{var}} \tag{3}\]

where \(\overline{x}\) and \(\mathrm{var}\) respectively represent the image mean and variance. Therefore, for this non-linear multiplicative noise, choosing a non-linear expression for speckle reduction is an important strategy. In the following, we briefly introduce the use of convolutional neural networks (CNNs) for SAR image despeckling, considering both the low-level features from the bottom of the network and the output feature representation from the top of the network.
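Before turning to CNNs, the speckle model of Eqs. (1)-(3) can be illustrated with a short NumPy sketch that adds \(L\)-look multiplicative speckle to a synthetic homogeneous image and estimates the ENL; the unit-mean Gamma sampling (shape \(L\), scale \(1/L\)) is the standard choice for intensity speckle, and the toy image is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_speckle(clean, looks):
    """Multiplicative model Y = X * n (Eq. (1)), with Gamma-distributed n
    such that E[n] = 1 and var(n) = 1/L (Eq. (2))."""
    n = rng.gamma(shape=looks, scale=1.0 / looks, size=clean.shape)
    return clean * n

def enl(region):
    """Equivalent number of looks over a homogeneous area (Eq. (3))."""
    return region.mean() ** 2 / region.var()

clean = np.full((256, 256), 100.0)      # perfectly homogeneous toy image
for L in (1, 2, 4, 8):
    print(f"L = {L}: estimated ENL = {enl(add_speckle(clean, L)):.2f}")
```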
### CNNs for SAR Image Despeckling

With the recent advances made by deep learning in computer vision and image processing, it has gradually become an efficient tool that has been successfully applied to many tasks, such as image classification, segmentation, object recognition and scene classification [22, 23, 24]. CNNs can extract the internal and underlying features of images while avoiding complex a priori constraints. The units of the \(l\)-th layer are organized in \(M^{(l)}\) feature maps \(O_{j}^{(l)}\) (\(j=1,2,\ldots,M^{(l)}\)), within which each unit is connected to local patches of the feature maps of the previous layer, \(O_{i}^{(l-1)}\) (\(i=1,2,\ldots,M^{(l-1)}\)), through a set of weight parameters \(W_{ij}^{(l)}\) and bias parameters \(b_{j}^{(l)}\). The output feature map is:

\[\hat{L}_{j}^{(l)}(m,n)=F(O_{j}^{(l)}(m,n)) \tag{4}\]

with

\[O_{j}^{(l)}(m,n)=\sum_{i=1}^{M^{(l-1)}}\sum_{u,v=0}^{S-1}W_{ij}^{(l)}(u,v)\cdot\hat{L}_{i}^{(l-1)}(m-u,n-v)+b_{j}^{(l)} \tag{5}\]

where \(F(\cdot)\) is the non-linear activation function, and \(O_{j}^{(l)}(m,n)\) represents the convolutional weighted sum of the previous layer's results for the \(j\)-th output feature map at pixel \((m,n)\). The specific parameters of a convolution layer are the number of output feature maps \(M^{(l)}\) and the filter kernel size \(S\times S\). The network parameters \(W\) and \(b\) are updated through the back-propagation (BP) algorithm and the chain rule of derivation [25].

To ensure that the output of the CNN is a non-linear combination of the input (since the relationship between the input data and the output label is usually a highly non-linear mapping), a non-linear function is introduced as the activation function; for example, the rectified linear unit (ReLU) is defined as:

\[F(O_{j}^{(l)})=\max(0,O_{j}^{(l)}) \tag{6}\]

After each forward propagation pass, the BP algorithm updates the trainable parameters of the network, to better learn the relationship between the label data and the reconstructed data. From the top layer of the network to the bottom, BP updates the trainable parameters of the \(l\)-th layer through the outputs of the \((l+1)\)-th layer. The partial derivatives of the loss function with respect to the convolution kernels \(W_{ij}^{(l)}\) and biases \(b_{j}^{(l)}\) of the \(l\)-th convolution layer are respectively calculated as follows:

\[\frac{\partial L}{\partial W_{ij}^{(l)}}=\sum_{m,n}\delta_{j}^{(l)}(m,n)\cdot\hat{L}_{i}^{(l-1)}(m-u,n-v) \tag{7}\]

\[\frac{\partial L}{\partial b_{j}^{(l)}}=\sum_{m,n}\delta_{j}^{(l)}(m,n) \tag{8}\]

where the error map \(\delta_{j}^{(l)}\) is defined as

\[\delta_{j}^{(l)}(m,n)=\sum_{k=1}^{M^{(l+1)}}\sum_{u,v=0}^{S-1}W_{jk}^{(l+1)}(u,v)\cdot\delta_{k}^{(l+1)}(m+u,n+v) \tag{9}\]

The iterative rule for updating the network parameters \(W_{ij}^{(l)}\) and \(b_{j}^{(l)}\) follows the gradient descent strategy:

\[W_{ij}^{(l)}=W_{ij}^{(l)}-\alpha\cdot\frac{\partial L}{\partial W_{ij}^{(l)}} \tag{10}\]

\[b_{j}^{(l)}=b_{j}^{(l)}-\alpha\cdot\frac{\partial L}{\partial b_{j}^{(l)}} \tag{11}\]

where \(\alpha\) is a preset hyperparameter for the whole network, named the learning rate in deep learning frameworks, which controls the update step size of the trainable parameters.
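A direct NumPy transcription of the forward pass of Eqs. (4)-(6) for a single convolution layer is given below, written for clarity rather than speed (a simplified valid cross-correlation index convention is used); the layer sizes are arbitrary.

```python
import numpy as np

def conv_layer_forward(L_prev, W, b):
    """Eqs. (4)-(6): weighted sum over the input feature maps plus a bias,
    followed by the ReLU activation F(x) = max(0, x).
    L_prev: (M_in, H, W) maps; W: (M_in, M_out, S, S) kernels; b: (M_out,)."""
    M_in, H, Wd = L_prev.shape
    _, M_out, S, _ = W.shape
    O = np.zeros((M_out, H - S + 1, Wd - S + 1))
    for j in range(M_out):                       # loop over output feature maps
        for u in range(S):
            for v in range(S):                   # accumulate the weighted sum
                patch = L_prev[:, u:u + H - S + 1, v:v + Wd - S + 1]
                O[j] += np.tensordot(W[:, j, u, v], patch, axes=1)
        O[j] += b[j]
    return np.maximum(0.0, O)                    # ReLU non-linearity, Eq. (6)

rng = np.random.default_rng(0)
out = conv_layer_forward(rng.normal(size=(3, 16, 16)),
                         0.1 * rng.normal(size=(3, 8, 3, 3)),
                         np.zeros(8))
```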
For natural-image Gaussian noise reduction, a method named the feed-forward denoising convolutional neural network (DnCNN) [26] has recently shown excellent performance compared with the traditional methods, employing a deep convolutional neural network. DnCNN uses a 20-convolutional-layer structure, a residual learning strategy to remove the latent original image in the hidden layers, and batch normalization [27] to regularize the output data; it can deal with several universal image restoration tasks, such as blind or non-blind Gaussian denoising, single-image super-resolution, and JPEG image deblocking. Recently, borrowing the ideas of the DnCNN model, Chierchia _et al._ [28] employed a set of convolutional layers, named SAR-CNN, along with batch normalization (BN), the ReLU activation function, and a component-wise division residual layer to estimate the despeckled image. As an alternative way of dealing with the multiplicative noise of SAR images, SAR-CNN uses the homomorphic approach, with coupled logarithm and exponent transforms, in combination with a similarity measure for the speckle noise distribution. In addition, Wang _et al._ [29] used a structure similar to DnCNN, with eight _Conv-BN-ReLU_ blocks, and replaced the residual mean square error (MSE) with a combination of the Euclidean loss and the total variation loss, incorporated into the total loss function to facilitate smoother results.

## 3 Proposed Method

In this paper, rather than using a log-transform [28] or modifying the training loss function as in [29], we propose a novel network for SAR image despeckling with a dilated residual network (SAR-DRN), which is trained in an end-to-end fashion using a combination of dilated convolutions and skip connections within a residual learning structure. Instead of relying on pre-determined image a priori knowledge or a noise description model, the main advantage of the deep neural network strategy for SAR image despeckling is that the model directly acquires and updates the network parameters from the training data and the corresponding labels; critical parameters need not be adjusted manually, and the complex internal non-linear relations are learned automatically, through the trainable network parameters, from massive simulated training data.

The proposed holistic neural network model (SAR-DRN) for SAR image despeckling contains seven dilated convolution layers and two skip connections, as illustrated in Figure 1. In addition, the proposed model uses a residual learning strategy to predict the speckled image, which adequately utilizes the non-linear expression ability of deep learning. The details of the algorithm are described in the following.

### Dilated Convolutions

In image restoration problems such as single-image super-resolution (SISR) [30], denoising [31] and deblurring [32], contextual information can effectively facilitate the recovery of degraded regions. In deep convolutional networks, contextual information is mainly augmented by enlarging the receptive field. Generically, there are two ways to achieve this: 1) increasing the network depth; and 2) enlarging the filter size. Nevertheless, as the network depth increases, the accuracy becomes "saturated" and then degrades rapidly, while enlarging the filter size introduces more convolution parameters, which greatly increases the computational burden and the training time.

To solve this problem effectively, dilated convolutions were proposed in [33], which can both enlarge the receptive field and maintain the filter size. Let \(C\) be a discrete 2-dimensional input such as an image, and let \(k\) be a discrete convolution filter of size \((2r+1)\times(2r+1)\). The original discrete convolution operator \(*\) can be written as

\[(C*k)(p)=\sum_{i+j=p}C(i)\cdot k(j) \tag{12}\]

Having defined this convolution operator \(*\), let \(d\) be a dilation factor and let \(*_{d}\) be defined as

\[(C*_{d}k)(p)=\sum_{i+dj=p}C(i)\cdot k(j) \tag{13}\]

where \(*_{d}\) is referred to as the dilated convolution, or \(d\)-dilated convolution. In particular, the common discrete convolution \(*\) is simply the \(1\)-dilated convolution. Taking a convolutional kernel of size 3\(\times\)3 as an example, let \(k_{l}\) be discrete 3\(\times\)3 convolution filters, and consider applying the filters with exponentially increasing dilation:

\[R_{l+1}=R_{l}*_{\phi}k_{l} \tag{14}\]

where \(l=0,1,\ldots,n-2\), \(\phi=2^{l}\), and \(R_{l}\) represents the receptive field. The common convolution receptive field grows linearly with the layer depth, with size \(R_{l}^{c}=(2l+1)\times(2l+1)\); by contrast, the dilated convolution receptive field grows exponentially with the layer depth, with size \(R_{l}^{d}=(2^{l+1}-1)\times(2^{l+1}-1)\).
For instance, when \(l=4\), \(R_{l}^{c}=9\times 9\) while \(R_{l}^{d}=31\times 31\) at the same layer depth. Figure 2 illustrates the dilated convolution receptive field size, where (a) corresponds to the 1-dilated convolution, which is equivalent to the common convolution operation; (b) corresponds to the 2-dilated convolution; and (c) corresponds to the 4-dilated convolution.

Figure 1: The architecture of the proposed SAR-DRN.

Figure 2: Receptive field size of the dilated convolution (\(d\) = 1, 2 and 4).

In the proposed SAR-DRN model, considering the trade-off between feature extraction ability and training time, the dilation factors of the 3\(\times\)3 dilated convolutions from layer 1 to layer 7 are empirically set to 1, 2, 3, 4, 3, 2, and 1, respectively. Compared with other deep neural networks, we propose a lightweight model with only seven dilated convolution layers, as shown in Figure 3.

Figure 3: Dilated convolution in the proposed model.

### Skip Connections

Although increasing the network layer depth can help to obtain more expressive data features, it often results in the vanishing gradient problem, which makes the training of the model much harder. To solve this problem, a structure called the skip connection [34] has been introduced into DCNNs to obtain better training results. A skip connection passes the feature information of a previous layer to a posterior layer, maintaining the image details and avoiding or reducing the vanishing gradient problem. For the \(l\)-th layer, let \(L^{(l)}\) be the input data, and let \(f(L^{(l)},\{W,b\})\) be its feed-forward propagation with trainable parameters. The output of the \((l+K)\)-th layer, with a \(K\)-interval skip connection, is recursively defined as follows:

\[L^{(l+K)}=f(L^{(l)},\{W,b\}_{l+1,\ldots,l+K})+L^{(l)} \tag{15}\]

For clarity, in the proposed SAR-DRN model, two skip connections are employed, connecting layer 1 to layer 3 (as shown in Figure 4(a)) and layer 4 to layer 7 (as shown in Figure 4(b)); their effects are compared with the no-skip-connection case in the discussion section.

Figure 4: Diagram of the skip connection structure in the proposed model. (a) Connecting dilated convolution layer 1 to dilated convolution layer 3. (b) Connecting dilated convolution layer 4 to dilated convolution layer 7.

### Residual Learning

Compared with traditional data mapping, He _et al._ [35] found that residual mapping can achieve a more effective learning effect and rapidly reduce the training loss after passing through a multi-layer network, achieving state-of-the-art performance in object detection [36], image super-resolution [37], and so on. Essentially, Szegedy _et al._ [38] demonstrated that residual networks take full advantage of identity shortcut connections, which can efficiently transfer various levels of feature information between layers that are not directly connected, without attenuation. In the proposed SAR-DRN model, the residual image \(\varphi_{i}\) is defined as follows:

\[\varphi_{i}=\mathcal{Y}_{i}-\mathcal{X}_{i} \tag{16}\]

As the layer depth increases, the degradation phenomenon shows that common deep networks may have difficulty in approximating identity mappings with stacked non-linear layers such as _Conv-BN-ReLU_ blocks.
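The architecture just described (seven 3\(\times\)3 dilated convolution layers with dilation factors 1, 2, 3, 4, 3, 2, 1, the 64-filter width listed in Table 1 below, and skip connections from layer 1 to layer 3 and from layer 4 to layer 7) can be sketched in PyTorch as follows. The original model was trained in Caffe, and the exact attachment points of the skips are one consistent reading of Figure 4, so this re-implementation is only an illustrative analogue.

```python
import torch
import torch.nn as nn

class SARDRN(nn.Module):
    """Sketch of SAR-DRN: 7 dilated conv layers, two skips, residual output."""
    def __init__(self, channels=64):
        super().__init__()
        dilations = [1, 2, 3, 4, 3, 2, 1]
        chans = [1] + [channels] * 6 + [1]       # grayscale in, residual map out
        self.convs = nn.ModuleList(
            nn.Conv2d(chans[i], chans[i + 1], kernel_size=3,
                      padding=d, dilation=d)     # padding = dilation keeps the size
            for i, d in enumerate(dilations))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, y):
        x1 = self.relu(self.convs[0](y))         # layer 1
        x2 = self.relu(self.convs[1](x1))        # layer 2
        x3 = self.relu(self.convs[2](x2)) + x1   # skip: layer 1 -> layer 3
        x4 = self.relu(self.convs[3](x3))        # layer 4
        x5 = self.relu(self.convs[4](x4))        # layer 5
        x6 = self.relu(self.convs[5](x5)) + x4   # skip: layer 4 -> layer 7 input
        return self.convs[6](x6)                 # predicted residual (speckle) map

net = SARDRN()
y = torch.rand(1, 1, 64, 64)
despeckled = y - net(y)                          # Eq. (16): x_hat = y - phi_hat
```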
By contrast, it is reasonable to consider that most pixel values in the residual image \(\varphi\) are very close to zero, and that the spatial distribution of the residual feature maps should be very sparse, which transfers the gradient descent process to a much smoother hyper-surface of the loss with respect to the filtering parameters. Thus, searching for a near-optimal allocation of the network's parameters becomes much quicker and easier, allowing us to add more trainable layers to the network and improve its performance. The learning procedure with the residual unit can more easily approximate the original multiplicative speckle noise, through deeper intrinsic non-linear feature extraction and expression, which helps to reduce the range difference between optical images and SAR images.

Specifically, for the proposed SAR-DRN, we choose a collection of \(N\) training image pairs \(\{\mathcal{X}_{i},\mathcal{Y}_{i}\}_{i=1}^{N}\) from the training datasets described in Section 4.1 below, where \(\mathcal{Y}_{i}\) is the speckled image and \(\Theta\) denotes the network parameters. Our model uses the mean squared error (MSE) as the loss function:

\[loss(\Theta)=\frac{1}{2N}\sum_{i=1}^{N}\left\|\Phi(\mathcal{Y}_{i},\Theta)-\varphi_{i}\right\|_{F}^{2} \tag{17}\]

In summary, with the dilated convolutions, skip connections and residual learning structure, the flowchart of learning a deep network for the SAR image despeckling process is described in Figure 5. To learn the complicated non-linear relation between the speckled image \(\mathcal{Y}\) and the original image \(\mathcal{X}\), the proposed SAR-DRN model is trained until the loss between the residual image \(\varphi\) and the output \(\Phi(\mathcal{Y},\Theta)\) converges, before being applied to real speckled SAR images, as illustrated in Figure 5.

Figure 5: The framework of SAR image despeckling based on deep learning.

## 4 Experimental Results and Analysis

### Implementation Details

**1) Training and Test Datasets**

Considering that it is quite hard to obtain completely speckle-free reference SAR images for training, we used the _UC Merced_ land-use dataset [39], with different numbers of looks, as our training dataset for simulated SAR image despeckling; it contains 21 scene classes with 100 images per class. Because optical images and SAR images are statistically different, the amplitude information of the optical images was processed before training for single-polarization SAR despeckling, to better accord with the data distribution of SAR images. To train the proposed SAR-DRN, we chose 400 images of size 256\(\times\)256 from this dataset, with a patch size of 40\(\times\)40 and a stride of 10. In total, 193,664 patches were cropped for training SAR-DRN, with a batch size of 128 for parallel computing. The number of looks \(L\) was set to 1, 2, 4, and 8 to generate multiplicative speckle noise at different levels.

To test the performance of the proposed model, three examples from the Airplane, Building and Highway classes were set up as simulated images. For the real SAR image despeckling experiments, we used the classic _Flevoland_ SAR image (cropped to 500\(\times\)600), the _Deathvalley_ SAR image (cropped to 600\(\times\)600), and the _San Francisco_ SAR image (cropped to 400\(\times\)400), which are commonly used in real SAR despeckling studies.
**2) Parameter Setting and Network Training**

Table 1 lists the network parameters of each layer of SAR-DRN. The proposed model was trained using the Adam [40] algorithm as the gradient descent optimization method, with momentum terms \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\), and \(\varepsilon=10^{-8}\), where the learning rate \(\alpha\) was initialized to 0.01 for the whole network. The optimization procedure is given below:

\[m_{t}=\beta_{1}\cdot m_{t-1}+(1-\beta_{1})\cdot\frac{\partial L}{\partial\theta_{t}} \tag{18}\]

\[n_{t}=\beta_{2}\cdot n_{t-1}+(1-\beta_{2})\cdot\left(\frac{\partial L}{\partial\theta_{t}}\right)^{2} \tag{19}\]

\[\Delta\theta_{t}=-\alpha\cdot\frac{m_{t}}{\sqrt{n_{t}}+\varepsilon} \tag{20}\]

where \(\theta_{t}\) denotes the trainable parameters of the network at the \(t\)-th iteration. The training of SAR-DRN took 50 epochs (about 1,500 iterations per epoch), and after every 10 epochs the learning rate was reduced by a descending factor \(gamma=0.5\). We used the _Caffe_ [41] framework to train the proposed SAR-DRN in a Windows 7 environment with 16 GB RAM and an Nvidia Titan-X (Pascal) GPU. The total training time was about 4 hours 30 minutes, less than that of SAR-CNN [28] (about 9 hours 45 minutes) in the same computational environment.

**3) Compared Algorithms and Quantitative Evaluation**

To verify the proposed method, we compared the SAR-DRN method with four mainstream despeckling methods: the probabilistic patch-based (PPB) filter [13] based on patch matching, SAR-BM3D [14] based on 3-D patch matching and wavelets, SAR-POTDF [16] based on sparse representation, and SAR-CNN [28] based on a deep neural network. In the simulated-image experiments, the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) were employed as the quantitative evaluation indexes. In the real-image experiments, the _ENL_ was used to assess the smoothness of homogeneous regions after despeckling, as defined in Equation (3); the _ENL_ is commonly regarded as the quantitative evaluation index for real SAR image despeckling experiments, and a larger value indicates a smoother homogeneous region.

\begin{table} \begin{tabular}{c|c} \hline \hline & **Configurations** \\ \hline **Layer 1** & Dilated Conv + ReLU: 64\(\times\)3\(\times\)3, dilate=1, stride=1, pad=1 \\ **Layer 2** & Dilated Conv + ReLU: 64\(\times\)3\(\times\)3, dilate=2, stride=1, pad=2 \\ **Layer 3** & Dilated Conv + ReLU: 64\(\times\)3\(\times\)3, dilate=3, stride=1, pad=3 \\ **Layer 4** & Dilated Conv + ReLU: 64\(\times\)3\(\times\)3, dilate=4, stride=1, pad=4 \\ **Layer 5** & Dilated Conv + ReLU: 64\(\times\)3\(\times\)3, dilate=3, stride=1, pad=3 \\ **Layer 6** & Dilated Conv + ReLU: 64\(\times\)3\(\times\)3, dilate=2, stride=1, pad=2 \\ **Layer 7** & Dilated Conv: 1\(\times\)3\(\times\)3, dilate=1, stride=1, pad=1 \\ \hline \hline \end{tabular} \end{table} Table 1: The network configuration of the SAR-DRN model.
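Combining the residual target of Eq. (16), the MSE loss of Eq. (17) and the Adam settings of Eqs. (18)-(20), one training step could look like the following PyTorch sketch; the original training was performed in Caffe, so the optimizer and data handling here are illustrative, and `SARDRN` refers to the sketch class defined earlier (PyTorch's `MSELoss` matches Eq. (17) up to a constant factor).

```python
import torch

net = SARDRN()                                   # hypothetical re-use of the sketch above
optimizer = torch.optim.Adam(net.parameters(), lr=0.01,
                             betas=(0.9, 0.999), eps=1e-8)
criterion = torch.nn.MSELoss()

def train_step(clean_patch, looks=1):
    """One step: speckle the clean patch, predict the residual phi = y - x,
    and minimize the MSE between prediction and residual target."""
    gamma = torch.distributions.Gamma(float(looks), float(looks))
    speckled = clean_patch * gamma.sample(clean_patch.shape)   # Eq. (1)
    target_residual = speckled - clean_patch                   # Eq. (16)
    optimizer.zero_grad()
    loss = criterion(net(speckled), target_residual)           # Eq. (17)
    loss.backward()
    optimizer.step()
    return loss.item()

loss = train_step(torch.rand(128, 1, 40, 40))    # 40x40 patches, batch size 128
```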
### Simulated-Data Experiments

To verify the effectiveness of the proposed SAR-DRN model in SAR image despeckling, four speckle noise levels (\(L\)=1, 2, 4 and 8) were set up for the three simulated images, and the results were compared for PPB, SAR-BM3D, SAR-POTDF, SAR-CNN and our method. The PSNR and SSIM evaluation indexes, and their standard deviations over the 10 runs of the simulated experiments with the three images, are listed in Tables 2, 3 and 4, respectively, where the best performance is marked in bold.

As shown in Tables 2, 3 and 4, the proposed SAR-DRN model obtains all of the best PSNR results and nine of the twelve best SSIM results over the four noise levels. When \(L\)=1, the proposed method outperforms SAR-BM3D by about 1.1 dB/0.6 dB/0.6 dB for the Airplane, Building and Highway images, respectively. When \(L\)=2 and 4, SAR-DRN outperforms PPB, SAR-POTDF, SAR-BM3D and SAR-CNN by at least 0.5 dB/0.7 dB/0.3 dB and 0.4 dB/0.3 dB/0.2 dB for the Airplane/Building/Highway images, respectively. Compared with the traditional despeckling methods above, the proposed method thus shows superior performance in both quantitative and visual assessments, especially for strong speckle noise.

Figures 6, 7 and 8 show the filtered images for the Airplane, Building and Highway images contaminated by 2-look, 4-look and 4-look speckle, respectively. It can be clearly seen that PPB has a good speckle-reduction ability, but it simultaneously creates many texture distortions, especially around the edges of the airplane, building and highway. SAR-BM3D and SAR-POTDF perform better than PPB on the Airplane, Building and Highway images, especially for strong speckle noise such as \(L\)=1, 2 or 4, revealing an excellent speckle-reduction ability and local detail preservation; furthermore, they generate fewer texture distortions, as shown in Figures 6, 7 and 8. However, SAR-BM3D and SAR-POTDF also cause a degree of over-smoothing, as they mainly concentrate on complex geometric features. SAR-CNN also shows a good speckle-reduction and local detail preservation ability, but it introduces some radiometric distortions in homogeneous regions. Compared with the other algorithms above, SAR-DRN achieves the best performance in speckle reduction, while avoiding radiometric and geometric distortion. In addition, from the red boxes in Figures 6, 7 and 8, it can be clearly seen that SAR-DRN also shows the best local detail preservation, while the other methods either miss partial texture details or produce somewhat blurry results.

Figure 7: Filtered images for the Building image contaminated by 4-look speckle. (a) Original image. (b) Speckled image. (c) PPB [13]. (d) SAR-BM3D [14]. (e) SAR-POTDF [16]. (f) SAR-CNN [28]. (g) SAR-DRN.

Figure 8: Filtered images for the Highway image contaminated by 4-look speckle. (a) Original image. (b) Speckled image. (c) PPB [13]. (d) SAR-BM3D [14]. (e) SAR-POTDF [16]. (f) SAR-CNN [28]. (g) SAR-DRN.

### Real-Data Experiments

As shown in Figures 9, 10 and 11, we also compared the proposed method with the four state-of-the-art methods described above on three real SAR images. These three SAR images were all acquired by the Airborne Synthetic Aperture Radar (AIRSAR), and are all 4-look data. In Figure 9, it can be clearly seen that the result of SAR-BM3D still contains a great deal of residual speckle noise, while the results of PPB, SAR-POTDF, SAR-CNN and the proposed SAR-DRN method reveal a good speckle-reduction ability. PPB performs very well in speckle reduction, but it generates
a few texture distortions at the edges of prominent objects. In homogeneous regions, SAR-POTDF does not perform as well in speckle reduction as the proposed SAR-DRN. As for SAR-CNN, its edge-preserving ability is weaker than that of SAR-DRN. Visually, SAR-DRN achieves the best performance in speckle reduction and local detail preservation, performing better than the other mainstream methods. In Figure 10, all five methods reduce the speckle noise well, but PPB clearly exhibits an over-smoothing phenomenon. In Figure 11, the result of SAR-CNN still contains some residual speckle noise, and PPB, SAR-BM3D and SAR-POTDF also cause a degree of over-smoothing, as shown in the marked regions with complex geometric features. It can be clearly seen that the proposed method has both a good speckle-reduction ability and a good ability to preserve edge and texture details.

In addition, we also evaluated the filtered results through the _ENL_ in Table 5 and the EPD-ROA [15] in Table 6, to measure the speckle-reduction and edge-preserving abilities, respectively. Because it is difficult to find homogeneous regions in Figure 11, the _ENL_ values were estimated from four chosen homogeneous regions of Figures 9 and 10 (the red boxes in Figures 9(a) and 10(a)). Clearly, SAR-DRN has a much better speckle-reduction ability than the other methods, which is consistent with the visual observation.

Figure 10: Filtered images for the _Deathvalley_ SAR image contaminated by 4-look speckle. (a) Original image. (b) PPB [13]. (c) SAR-BM3D [14]. (d) SAR-POTDF [16]. (e) SAR-CNN [28]. (f) SAR-DRN.

Figure 11: Filtered images for the _San Francisco_ SAR image contaminated by 4-look speckle. (a) Original image. (b) PPB [13]. (c) SAR-BM3D [14]. (d) SAR-POTDF [16]. (e) SAR-CNN [28]. (f) SAR-DRN.

### Discussion

**1) Dilated Convolutions and Skip Connections**

As mentioned in Section 3, dilated convolutions are employed in the proposed method, as they can both enlarge the receptive field and maintain the filter size and layer depth with a lightweight structure. In addition, skip connections are added to the despeckling model to maintain the image details and reduce the vanishing gradient problem. To verify the effectiveness of the dilated convolutions and skip connections, we implemented four sets of experiments in the same environment, as shown in Figure 12: 1) with dilated convolutions and skip connections (the red line); 2) with dilated convolutions but without skip connections (the green line); 3) without dilated convolutions but with skip connections (the blue line); and 4) without dilated convolutions and without skip connections (the black line).

As Figure 12 implies, the dilated convolutions effectively reduce the training loss and enhance the despeckling performance (the lowest training loss and the best PSNR), which also testifies that augmenting the contextual information by enlarging the receptive field is effective for recovering the degraded image, as demonstrated in Section 3 for the dilated convolution. Meanwhile, the skip connections accelerate the convergence of the network and enhance the model's stability, as can be seen by comparing the curves with and without skip connections in Figure 12. Moreover, the combination of dilated convolutions and skip connections reinforces each component's effect, with a gain of about 1.1 dB in PSNR compared with the model without dilated convolutions and without skip connections.
2) With or Without Batch Normalization (BN) in the Network

Unlike the methods proposed in [28] and [29], which utilize batch normalization to normalize the output features, SAR-DRN does not include this layer, considering that the skip connections can already maintain the data distribution of the outputs across the different dilated convolution layers. The quantitative comparison of the two structures for SAR image despeckling is provided in Section IV. Furthermore, removing the BN layers reduces the amount of computation, saving about 3 hours of training time in the same environment. Figure 13 shows that this modification improves the despeckling performance while reducing the complexity of the model. A probable reason for this phenomenon is that the input and output have a highly similar spatial distribution in this regression problem, whereas the BN layers normalize the outputs of the hidden layers and thereby destroy the representation of the original space [44].

Figure 13: The simulated SAR image despeckling results of the two specific models with/without batch normalization (BN). The two specific models were trained with 1-look images in the same environment, and the results were evaluated on the _Set14_ [43] dataset.

3) Runtime Comparisons

To evaluate the efficiency of the despeckling algorithms, we measured their runtimes in the same environment with MATLAB R2014b, as listed in Table 7. SAR-DRN clearly exhibits the lowest runtime, owing to its lightweight model of only 7 layers, in contrast to deeper learning-based methods such as SAR-CNN [28] with 17 layers.

\begin{table} \begin{tabular}{c|c c c c c} \hline \hline Method & PPB & SAR-BM3D & SAR-POTDF & SAR-CNN & Ours \\ \hline Runtime & 10.13 & 16.48 & 12.83 & 1.13 & **0.38** \\ \hline \hline \end{tabular} \end{table} Table 7: Runtime comparisons for five despeckling methods with an image of size 256 × 256 (Seconds)
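Putting the above design decisions together (dilated convolutions, residual/skip connections, no BN, only 7 layers), a lightweight despeckling network can be sketched as follows. The dilation schedule and the use of a single global residual connection are our assumptions for illustration rather than the exact SAR-DRN layout:

```python
import torch
import torch.nn as nn

class DespeckleNet(nn.Module):
    """A 7-layer dilated CNN in the spirit of SAR-DRN (no BN layers).

    The dilation schedule (1,2,3,4,3,2,1) and the global residual
    connection are assumptions for illustration; the network predicts
    the speckle residual, which is subtracted from the noisy input.
    """

    def __init__(self, channels=64, dilations=(1, 2, 3, 4, 3, 2, 1)):
        super().__init__()
        layers, in_ch = [], 1
        for i, d in enumerate(dilations):
            out_ch = 1 if i == len(dilations) - 1 else channels
            layers.append(nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d))
            if i < len(dilations) - 1:
                layers.append(nn.ReLU(inplace=True))
            in_ch = out_ch
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.body(noisy)  # residual learning

model = DespeckleNet()
print(sum(p.numel() for p in model.parameters()))  # lightweight: ~0.19M params
```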
## 5 Conclusion

In this paper, we have proposed a novel deep learning approach for the SAR image despeckling task, learning an end-to-end mapping between the noisy and clean SAR images. Unlike common convolution operations, the presented approach is based on dilated convolutions, which can both enlarge the receptive field and maintain the filter size with a lightweight structure. Furthermore, skip connections are added to the despeckling model to maintain the image details and avoid the vanishing gradient problem. Compared with the traditional despeckling methods, the proposed SAR-DRN approach shows state-of-the-art performance in both simulated and real SAR image despeckling experiments, especially for strong speckle noise.

In our future work, we will investigate more powerful learning models to deal with the complex real scenes in SAR images. Considering that the current method must be trained separately for each number of looks, we will explore an integrated model to overcome this limitation. Furthermore, the proposed approach will be extended to polarimetric SAR image despeckling, whose noise model is much more complicated than that of single-polarization SAR. Besides, to better reduce speckle noise in more complex real SAR data, prior constraints such as multi-channel patch matching, band selection, location priors and locality adaptive discriminant analysis [45; 46; 47; 48] can also be considered to improve the precision of the despeckling results. In addition, we will try to collect enough SAR images to train the model with multi-temporal data [49], which will be explored in future studies.

This work was supported by a grant from the National Natural Science Foundation of China under Grant 61671334. Qiang Zhang proposed the method and performed the experiments; Qiang Zhang, Qiangqiang Yuan, Jie Li and Zhen Yang conceived and designed the experiments; Qiang Zhang, Qiangqiang Yuan, Jie Li, Zhen Yang and Xiaoshuang Ma wrote the manuscript. All the authors read and approved the final manuscript. The authors declare no conflict of interest.

## References

* Goodman, J. Some fundamental properties of speckle. _J. Opt. Soc. Am._ **1976**, 66, 1145-1150.
* Li, H.; Hong, W.; Wu, Y.; Fan, P. Bayesian Wavelet Shrinkage with Heterogeneity-Adaptive Threshold for SAR Image Despeckling Based on Generalized Gamma Distribution. _IEEE Trans. Geosci. Remote Sens._ **2013**, 51, 2388-2402.
* Xu, B.; Cui, Y.; Li, Z.; Yang, J. An Iterative SAR Image Filtering Method Using Nonlocal Sparse Model. _IEEE Geosci. Remote Sens. Lett._ **2015**, 12, 1635-1639.
* Wu, J.; Liu, F.; Hao, H.; Li, L.; Jiao, L.; Zhang, X. A Nonlocal Means for Speckle Reduction of SAR Image with Multiscale-Fusion-Based Steerable Kernel Function. _IEEE Geosci. Remote Sens. Lett._ **2016**, 13, 1646-1650.
* Lee, J. Digital image enhancement and noise filtering by use of local statistics. _IEEE Trans. Pattern Anal. Mach. Intell._ **1980**, 2, 165-168.
* Kuan, D.; Sawchuk, A.; Strand, T.; Chavel, P. Adaptive noise smoothing filter for images with signal-dependent noise. _IEEE Trans. Pattern Anal. Mach. Intell._ **1985**, 2, 165-177.
* Frost, V.; Stiles, J.; Shanmugan, K.; Holtzman, J. A model for radar images and its application to adaptive digital filtering of multiplicative noise. _IEEE Trans. Pattern Anal. Mach. Intell._ **1982**, 2, 157-166.
* Yahya, N.; Kamel, N.; Aamir, S. Subspace-based technique for speckle noise reduction in SAR images. _IEEE Trans. Geosci. Remote Sens._ **2014**, 52, 6257-6271.
* Starck, J.; Candes, E.; Donoho, D. The curvelet transform for image denoising. _IEEE Trans. Image Process._ **2002**, 11, 670-684.
* Solbo, S.; Eltoft, T. Homomorphic wavelet-based statistical despeckling of SAR images. _IEEE Trans. Geosci. Remote Sens._ **2004**, 42, 711-721.
* Lopez, C.M.; Fabregas, X.M. Reduction of SAR Interferometric Phase Noise in the Wavelet Domain. _IEEE Trans. Geosci. Remote Sens._ **2002**, 40, 2553-2566.
* Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. _in Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition_, **2005**, pp. 60-65.
* Deledalle, C.A.; Denis, L.; Tupin, F. Iterative weighted maximum likelihood denoising with probabilistic patch-based weights. _IEEE Trans. Image Process._ **2009**, 18, 2661-2672.
* Parrilli, S.; Poderico, M.; Angelino, C.V.; Verdoliva, L. A nonlocal SAR image denoising algorithm based on LLMMSE wavelet shrinkage. _IEEE Trans. Geosci. Remote Sens._ **2012**, 50, 606-616.
* Ma, X.; Shen, H.; Zhao, X.; Zhang, L. SAR image despeckling by the use of variational methods with adaptive nonlocal functionals. _IEEE Trans. Geosci. Remote Sens._ **2016**, 54, 3421-3435.
* Xu, B.; Cui, Y.; Li, Z.; Zuo, B.; Yang, J.; Song, J. Patch ordering-based SAR image despeckling via transform-domain filtering. _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._ **2015**, 8, 1682-1695.
* Feng, W.; Lei, H.; Gao, Y. Speckle reduction via higher order total variation approach. _IEEE Trans. Image Process._ **2014**, 23, 1831-1843.
* Zhao, Y.; Liu, J.; Zhang, B.; Hong, W.; Wu, Y. Adaptive Total Variation Regularization Based SAR Image Despeckling and Despeckling Evaluation Index. _IEEE Trans. Geosci. Remote Sens._ **2015**, 53, 2765-2774.
* Yuan, Q.; Zhang, L.; Shen, H. Hyperspectral image denoising employing a spectral-spatial adaptive total variation model. _IEEE Trans. Geosci. Remote Sens._ **2012**, 10, 3660-3677.
* Li, J.; Yuan, Q.; Shen, H.; Zhang, L. Noise removal from hyperspectral image with joint spectral-spatial distributed sparse representation. _IEEE Trans. Geosci. Remote Sens._ **2016**, 54, 5425-5439.
* Ranjani, J.J.; Thiruvengadam, S.J. Dual-tree complex wavelet transform based SAR despeckling using interscale dependence. _IEEE Trans. Geosci. Remote Sens._ **2010**, 48, 2723-2731.
* LeCun, Y.A.; Bengio, Y.; Hinton, G. Deep learning. _Nature_ **2015**, 521, 436-444.
* Zhang, L.; Zhang, L.; Du, B. Deep Learning for Remote Sensing Data: A Technical Tutorial on the State of the Art. _IEEE Geosci. Remote Sens. Mag._ **2016**, 4, 22-40.
* Xia, G.; Hu, J.; Hu, F.; Shi, B.; Bai, X.; Zhong, Y. AID: A Benchmark Data Set for Performance Evaluation of Aerial Scene Classification. _IEEE Trans. Geosci. Remote Sens._ **2017**, 55, 3965-3981.
* LeCun, Y.A.; Boser, B.; Denker, J.S.; Howard, R.E.; Habbard, W.; Jackel, L.D. Handwritten digit recognition with a back-propagation network. _in Advances in Neural Information Processing Systems_, **1990**, pp. 396-404.
* Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a gaussian denoiser: Residual learning of deep CNN for image denoising. _IEEE Trans. Image Process._ **2017**, 26, 3142-3155.
* Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. _in Proc. Int. Conf. Machine Learning_, **2015**, pp. 448-456.
* Chierchia, G.; Cozzolino, D.; Poggi, G.; Verdoliva, L. SAR image despeckling through convolutional neural networks. _arXiv_ **2017**, arXiv:1704.00275.
* Wang, P.; Zhang, H.; Patel, V.M. SAR Image Despeckling Using a Convolutional Neural Network. _IEEE Signal Process. Lett._ **2017**, 24, 1763-1767.
* Dong, C.; Loy, C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. _IEEE Trans. Pattern Anal. Mach. Intell._ **2016**, 38, 295-307.
* Zhang, L.; Zuo, W. Image Restoration: From Sparse and Low-Rank Priors to Deep Priors [Lecture Notes]. _IEEE Signal Process. Mag._ **2017**, 34, 172-179.
* Chakrabarti, A. A neural approach to blind motion deblurring. _in Proc. European Conference on Computer Vision_, **2016**, pp. 221-235.
* Yu, F.; Koltun, V. Multi-scale context aggregation by dilated convolutions. _arXiv_ **2015**, arXiv:1511.07122.
* Mao, X.; Shen, C.; Yang, Y.-B. Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. _in Proc. Advances in Neural Information Processing Systems_, **2016**, pp. 2802-2810.
* He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. _in Proc. IEEE Conf. Computer Vision and Pattern Recognition_, **2016**, pp. 770-778.
* Zhang, X.; Zou, J.; He, K.; Sun, J. Accelerating Very Deep Convolutional Networks for Classification and Detection. _IEEE Trans. Pattern Anal. Mach. Intell._ **2016**, 38, 1943-1955.
* Kim, J.; Kwon, L.J.; Mu, L.K. Accurate image super-resolution using very deep convolutional networks. _in Proc. IEEE Conf. Computer Vision and Pattern Recognition_, **2016**, pp. 1646-1654.
* Szegedy, C.; Ioffe, S.; Vanhoucke, V. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. _in Proc. Association for the Advancement of Artificial Intelligence_, **2017**, pp. 4278-4284.
* Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. _in Proc. 18th SIGSPATIAL Int. Conf. Advances in Geographic Information Systems_, **2010**, pp. 270-279.
* Kingma, D.; Ba, J. Adam: A method for stochastic optimization. _arXiv_ **2014**, arXiv:1412.6980.
* Jia, Y. et al. Caffe: Convolutional architecture for fast feature embedding. _in Proc. 22nd ACM Int. Conf. Multimedia_, **2014**, pp. 675-678.
* Luis, G.; Maria, E.B.; Julio, C.; Marta, E. A New Image Quality Index for Objectively Evaluating Despeckling Filtering in SAR Images. _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._ **2016**, 9, 1297-1307.
* Zeyde, R.; Elad, M.; Protter, M. On Single Image Scale-Up Using Sparse-Representations. _in Int. Conf. on Curves and Surfaces_, **2010**, pp. 711-730.
* Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. _in Proc. IEEE Conf. Computer Vision and Pattern Recognition Workshops_, **2017**.
* Li, J.; Yuan, Q.; Shen, H.; Zhang, L. Hyperspectral Image Recovery Employing a Multidimensional Nonlocal Total Variation Model. _Signal Process._ **2015**, 111, 230-248.
* Wang, Q.; Lin, J.; Yuan, Y. Salient Band Selection for Hyperspectral Image Classification via Manifold Ranking. _IEEE Trans. Neural Netw. Learn. Syst._ **2016**, 27, 1279-1289.
* Wang, Q.; Meng, Z.; Li, X. Locality Adaptive Discriminant Analysis for Spectral-Spatial Classification of Hyperspectral Images. _IEEE Geosci. Remote Sens. Lett._ **2017**, 14, 2077-2081.
* Wang, Q.; Gao, J.; Yuan, Y. Embedding Structured Contour and Location Prior in Siamesed Fully Convolutional Networks for Road Detection. _IEEE Trans. Intell. Transp. Syst._ **2017**, 99, 1-12.
* Ma, X.; Wu, P.; Wu, Y.; Shen, H. A Review on Recent Developments in Fully Polarimetric SAR Image Despeckling. _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._ **2017**, 99, 1-16.
Abstract: In this paper, to break the limits of traditional linear models for synthetic aperture radar (SAR) image despeckling, we propose a novel deep learning approach that learns a non-linear end-to-end mapping between the noisy and clean SAR images with a dilated residual network (SAR-DRN). SAR-DRN is based on dilated convolutions, which can both enlarge the receptive field and maintain the filter size and layer depth with a lightweight structure. In addition, skip connections and a residual learning strategy are added to the despeckling model to maintain the image details and reduce the vanishing gradient problem. Compared with the traditional despeckling methods, the proposed method shows superior performance over the state-of-the-art methods on both quantitative and visual assessments, especially for strong speckle noise.

Index terms: SAR image, despeckling, dilated convolution, skip connection, residual learning.
Surface Deformation Analysis by Fusion of Multi-temporal and Multi-source Data under Disaster Emergency Conditions

Anpeng Shen, Shuangfeng Wei: Beijing University of Civil Engineering and Architecture, 102616 Beijing, China - [email protected], [email protected]
Yuanyuan Wang: Shaoxing Bureau of Natural Resources and Planning, 312000 Shaoxing, China - [email protected]
Zhaodong Yang, Shaoxin Zhu: Shaoxing Geotechnical Investigation & Surveying Institute, 312000 Shaoxing, China - [email protected]

Keywords: Emergency Surveying & Mapping, Multi-temporal Data, Multi-source Remote Sensing Data, Data Fusion, Surface Deformation Analysis.

## 1 Background

In the current context of globalization, the frequent occurrence of natural disasters poses a great challenge to human society. Disasters not only cause loss of life and property, but also have a profound impact on regional economic development. Moreover, there is an optimal time window for disaster relief, so in order to save as many lives as possible it is especially important to analyze the surface deformation caused by natural disasters quickly and accurately.

Post-disaster emergency mapping provides geographic information and modern surveying and mapping support for all kinds of public emergencies. It is an important part of the national emergency response system, underpins command decision-making and rescue operations, and runs through the whole process of prevention, response, disposal and recovery of public emergencies. Disaster analysis relies on multi-period topographic data, so it is necessary to obtain both historical data and data from the scene; comparing the multi-period data to extract information such as surface subsidence is currently the main analysis method (Zheng, et al., 2022).

Firstly, historical data often present numerous challenges. For instance, in remote mountainous regions, past terrain data may have been collected manually due to the absence of UAVs and other advanced equipment, resulting in poor data quality due to outdated cartographic techniques. Furthermore, in emergency surveying scenarios the high demands for speed and timeliness mean that, even if high-precision data are available from other sources, field personnel often do not have time to obtain them and must rely on the historical data they have at hand. Additionally, due to technological limitations and communication difficulties, it is challenging to receive externally transmitted data at disaster sites.
At the disaster site, in order to facilitate situational awareness, emergency responders need instant access to maps of the site (Opach, et al., 2023). Conventional methods typically involve manual investigation within the disaster zone, which is not only inefficient and limited in scope but also exposes responders to the risk of secondary hazards. Combining drones with various sensors offers rapid, automated and non-contact spatial data collection with high resolution and accuracy (Kovanic, et al., 2023). In recent years, UAV remote sensing has attracted growing attention from researchers and industry due to its broad application prospects. It has been widely used in agriculture, forestry, mining and other industries. UAVs can be flexibly equipped with various sensors, such as optical, infrared and LiDAR, and have become an essential remote sensing observation platform (Zhang, et al., 2023). Consequently, they are commonly used in emergency surveying scenarios. UAV remote sensing methods produce more detailed maps, which can expedite rescue and relief operations in disaster-affected areas (Singh, et al., 2023). Furthermore, the long intervals between multiple datasets mean that the sensors used for data collection at disaster sites may differ from those used for the historical data. This results in highly complex multi-temporal and multi-source data scenarios in emergency surveying, increasing the difficulty of terrain analysis and often delaying rescue operations.

The most common application of drones is the creation of spatial models based on photogrammetry and LiDAR data (Kovanic, et al., 2023). UAV Light Detection and Ranging (UAV-LiDAR) and UAV photogrammetry are currently the common ground monitoring techniques (Zhan, et al., 2024). Photogrammetry stands out for its mobility, flexibility and intuitiveness, particularly its ability to visually represent disaster scenes through colored models, aiding swift assessment and disaster relief coordination. However, in areas with vegetation cover such as forests, photogrammetry is limited by its passive measurement nature and cannot penetrate the vegetation layer to directly obtain surface data (Kim, et al., 2023). UAV-mounted LiDAR, with its continuous operation and high-precision data, is widely used in terrain surveying and 3D modeling (Jiang, et al., 2022). In complex terrain such as forests, LiDAR technology, with its high precision and active measurement capability, excels at vegetation penetration through multi-echo technology, allowing it to directly obtain accurate surface data. In extreme environments, such as snow-covered forests, UAV-mounted LiDAR can effectively provide information about both the canopy and the sub-canopy snow surface (Koutantou, et al., 2022). Despite these advantages, the grayscale point clouds produced by LiDAR lack true color and texture information, limiting their application in disaster site assessment. Previous studies have alternated between these two technologies, overlooking their correlation, or have compared them solely on an individual basis (Zhan, et al., 2024). Combining the advantages of these two technologies through multi-sensor data fusion has therefore become an urgent need to enhance disaster relief efficiency.

In summary, emergency surveying is a crucial component of the emergency response system for public incidents, providing geographic information support and a foundation for decision-making. Faced with the complexity of disaster sites, the key challenge is how to rapidly and comprehensively utilize multi-temporal, multi-source remote sensing data to resolve surface deformation analysis difficulties and provide both intuitive and high-precision visual data. Traditional methods are inefficient and pose secondary disaster risks. By integrating multi-temporal and multi-sensor data, it is possible to enhance disaster relief efficiency, achieve rapid and accurate surface deformation analysis, and optimize the overall effectiveness of emergency surveying.
## 2 Methodology

To address these challenges, this paper proposes an innovative method. The technical approach is illustrated in Figure 1. The entire roadmap is divided into three main sections: processing of pre-disaster data, processing of post-disaster data, and surface deformation analysis; the analysis process also includes the step of generating true-color point clouds. The method comprehensively utilizes multiple data sources, such as satellite imagery, photogrammetry and LiDAR. Through a series of technical processing steps, it not only optimizes old data but also effectively integrates data from different sources to enhance the speed of surface deformation analysis. The pre-disaster data collected from the database is optimized using methods such as threshold segmentation and median filtering, primarily aimed at improving data in anomalous areas and enhancing the accuracy of subsequent analysis. For post-disaster on-site data, nearest-neighbor coordinate matching and texture mapping are used to merge photogrammetric data with LiDAR point cloud data, producing true-color point clouds. The multi-source data is then converted into DSM models, enabling the integration of data from different periods. Considering the differences between data from different time periods, data interpolation is used to unify the resolution, followed by geographic coordinate matching for regional alignment. The processed multi-source, multi-temporal data undergoes overlay analysis using elevation difference matrices, and techniques such as bilateral filtering and threshold processing further optimize the results. Experimental validation in multiple typical regions demonstrates the method's effectiveness and practicality.

Figure 1: Technology approach.

### Raw Data Preprocessing

In emergency surveying, the quality of historical data often presents issues, making the rapid processing of raw data essential. One of the most common problems is the presence of anomalies in elevation data. Pre-disaster data are optimized using methods such as threshold segmentation and median filtering; the specific process is illustrated in Figure 2. Firstly, since surface data fall within a normal range, threshold segmentation is initially used to screen the data. This step identifies anomalies such as elevation outliers, effectively isolating data points that do not conform to natural terrain variation, thereby laying the foundation for further processing. The median filtering algorithm is then applied for data optimization. A median filter is a nonlinear filter that eliminates digital signal noise while preserving signal edges, and it has been widely used in two-dimensional digital filtering. It involves selecting an odd-sized template window, moving it along the rows or columns of the two-dimensional matrix, and replacing the value at the window's center with the median of the values within the window (An, et al., 2024). The formula for the median filter used in this paper is shown in Equation 1:

\[g(x,y)=\mathrm{median}\big(f(x-i,y-j)\big),\;(i,j)\in S, \tag{1}\]

where f(x, y) = the pixel matrix before processing, g(x, y) = the pixel matrix after processing, and S = the window area.

The larger the median filtering template, the more noise is filtered out; however, with a large window the deviation of the window's center value from the observed values can lead to wrong results. Therefore, a 3×3 template is used for filtering in this paper. The median filtering schematic is shown in Figure 3. Regardless of whether the area is flat or a complex mountainous region, terrain data should exhibit good continuity between ground points on a global scale. The median filtering method is highly effective at removing extremely high or low anomalous points in continuous terrain, while preserving terrain features and edge information. This achieves the optimization of data in anomalous regions.

Figure 2: Processing of pre-disaster data.

Figure 3: Schematic diagram of the median filter principle.
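A minimal sketch of this pre-disaster cleaning step, assuming the historical terrain is stored as a 2-D NumPy elevation grid and that plausible elevation bounds for the area are known (both assumptions of ours):

```python
import numpy as np
from scipy.ndimage import median_filter

def clean_dem(dem, z_min, z_max):
    """Threshold segmentation followed by 3x3 median filtering.

    Elevations outside [z_min, z_max] are flagged as anomalies and
    replaced by the median of their 3x3 neighbourhood, which removes
    isolated spikes while preserving terrain edges.
    """
    filtered = median_filter(dem, size=3)          # Equation (1), 3x3 window
    anomalous = (dem < z_min) | (dem > z_max)      # threshold segmentation
    return np.where(anomalous, filtered, dem)      # replace only flagged cells

dem = np.random.uniform(100.0, 120.0, size=(512, 512))
dem[100, 100] = 9999.0                             # synthetic elevation spike
cleaned = clean_dem(dem, z_min=50.0, z_max=200.0)
```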
### True Color Point Cloud Generation

In disaster scenes, high-precision data that allow an intuitive assessment of surface deformation are extremely valuable. The fusion of 3D LiDAR point clouds with 2D imagery is a current research hotspot in photogrammetry and remote sensing. By combining the high precision of 3D LiDAR point clouds with the high observability of 2D image textures, on-site personnel can quickly assess the disaster situation. The 3D model obtained from oblique photogrammetry is converted into a Digital Orthophoto Map (DOM). Since the data share the same geographic coordinate system, the same regions can be matched, forming the basis for data fusion. The converted DOM and the LiDAR-derived LAS format files are then merged using nearest-neighbor coordinate matching and texture mapping. This process integrates the photogrammetric data with the LiDAR point cloud data and outputs true-color point clouds. Specifically, pixel data and point cloud data are extracted from the converted DOM files and LAS files, respectively. After extraction, the two types of data are matched by their coordinates using the nearest-neighbor method, and the texture from the image data is attached to the grayscale point cloud. Finally, the true-color point cloud is output, achieving the goal of multi-sensor data fusion. Moreover, since the reference for texture attachment is the geographic coordinate system, no joint calibration of the sensors involved in the fusion is needed: as long as the coordinate systems of the original data are consistent, the fusion can be performed. This characteristic significantly enhances data production efficiency in post-disaster emergency surveying scenarios where timeliness is critical.
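A minimal sketch of the nearest-neighbor coloring step, assuming the DOM has been resampled to a regular grid with a known upper-left origin and ground sampling distance, and that the LiDAR points are already in the same coordinate system (array layouts and names are ours):

```python
import numpy as np

def colorize_points(points_xyz, dom_rgb, origin, pixel_size):
    """Attach DOM colors to LiDAR points by nearest-neighbor lookup.

    points_xyz : (N, 3) array of LiDAR coordinates in the DOM's CRS.
    dom_rgb    : (H, W, 3) orthophoto array.
    origin     : (x0, y0) map coordinates of the DOM's upper-left corner.
    pixel_size : ground sampling distance of the DOM in metres.

    Because both datasets share one geographic coordinate system, no
    sensor-to-sensor calibration is needed: each point maps directly
    to the pixel whose center is nearest to it.
    """
    cols = np.round((points_xyz[:, 0] - origin[0]) / pixel_size).astype(int)
    rows = np.round((origin[1] - points_xyz[:, 1]) / pixel_size).astype(int)
    rows = np.clip(rows, 0, dom_rgb.shape[0] - 1)   # guard the DOM boundary
    cols = np.clip(cols, 0, dom_rgb.shape[1] - 1)
    colors = dom_rgb[rows, cols]                    # (N, 3) RGB per point
    return np.hstack([points_xyz, colors])          # true-color point cloud
```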
### Surface Deformation Analysis

In disaster sites where rapid surface deformation analysis is required, the diversity of raw data types can slow down the workflow. It is therefore necessary to integrate multi-temporal, multi-sensor and multi-source data to lay the foundation for the analysis. To standardize satellite imagery, oblique photogrammetry data and LiDAR point clouds into a unified format, a series of processing and standardization steps is essential. By converting the multi-source data into DSM models, we achieve the integration of the various pre- and post-disaster data acquired over multiple periods. Subsequently, the integrated multi-temporal, multi-sensor and multi-source data undergo surface deformation analysis. The specific technical approach is illustrated in Figure 4.

Figure 4: Workflow of the surface deformation analysis.

The process is divided into three parts: data preprocessing, surface deformation analysis, and result output. Firstly, considering the different resolutions of the various data, we unify the resolution through interpolation. Anomalous points are then removed using threshold analysis and median filtering, as described in Equation 1. Geographic coordinates are used to match the regions, preventing discrepancies due to the varying extents of the original data from affecting the analysis results. Next, elevation difference matrices are used to compare the same locations across multiple datasets, and the results are optimized using bilateral filtering. In practical surface deformation analysis, minute deformations may be disregarded, so thresholding is applied to extract the areas that have undergone significant changes; these extracted areas represent the disaster-affected regions. Finally, the surface deformation analysis results are generated. The elevation difference at the same location in two-period DSM data over the same region is computed as shown in Equation 2:

\[\Delta H_{ij}=H_{2}(i,j)-H_{1}(i,j),\quad i=1,\cdots,M;\;j=1,\cdots,N, \tag{2}\]

where H_1(i, j) and H_2(i, j) = the elevation values of the earlier and later DSM, ΔH_ij = the elevation difference, and M, N = the row and column extents of the analyzed region.

By reading the elevation information from the two-period DSM data and performing pixel-by-pixel elevation differencing, we obtain the difference DSM elevation matrix. The entire experimental area is traversed, and the elevation differences form a matrix of size M×N, the same as the experimental area of the DSM data, as shown in Equation 3:

\[dH=\begin{pmatrix}\Delta H_{11}&\cdots&\Delta H_{1N}\\ \vdots&\ddots&\vdots\\ \Delta H_{M1}&\cdots&\Delta H_{MN}\end{pmatrix}, \tag{3}\]

If ΔH_ij is less than 0, the location is an elevation decrease point; if ΔH_ij is greater than 0, the location is an elevation increase point. Because the difference DSM information is easily affected by the accuracy of the pre- and post-event DSM data, an elevation threshold t can be set when extracting change information: if |ΔH_ij| > t and the difference is positive, the grid point is considered an elevation increase point; if |ΔH_ij| > t and the difference is negative, the grid point is considered an elevation decrease point; and if |ΔH_ij| ≤ t, the point is considered unchanged.
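The differencing and thresholding described by Equations 2 and 3 reduce to a few array operations; the sketch below (with an illustrative threshold and synthetic data of our own) labels each grid cell as rise, subsidence, or unchanged:

```python
import numpy as np

def deformation_map(dsm_before, dsm_after, t):
    """Pixel-wise elevation differencing with threshold t (Equations 2-3).

    Returns the difference matrix dH and a label grid:
    +1 = significant rise, -1 = significant subsidence, 0 = unchanged.
    """
    dH = dsm_after - dsm_before                 # Equation (2) over the whole grid
    labels = np.zeros_like(dH, dtype=int)
    labels[dH > t] = 1                          # elevation increase points
    labels[dH < -t] = -1                        # elevation decrease points
    return dH, labels

dsm_t1 = np.random.uniform(100, 110, (256, 256))
dsm_t2 = dsm_t1.copy()
dsm_t2[50:80, 50:80] -= 2.0                     # synthetic landslide scar
dH, labels = deformation_map(dsm_t1, dsm_t2, t=0.5)
```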
Bilateral filtering is a nonlinear filter used in image processing to remove noise, such as Gaussian noise, while preserving edges. This is achieved through two functions that determine the filter coefficients: one based on geometric (spatial) distance and one based on the difference between adjacent grid values. In the bilateral filtering algorithm, the output grid value is a weighted combination of adjacent grid values; it combines spatial proximity and value similarity to smooth the data while preserving edge information, avoiding the edge blurring common to traditional filters such as the Gaussian filter. The ability of the bilateral filter to smooth noise while preserving edges comes from its kernel being the product of two components: a spatial-domain kernel and a range (value-domain) kernel. The spatial-domain kernel, with template weights d(i, j, k, l) determined by the Euclidean distance between grid positions, is shown in Equation 4:

\[d(i,j,k,l)=\exp\left(-\frac{(i-k)^{2}+(j-l)^{2}}{2\sigma_{d}^{2}}\right), \tag{4}\]

The range kernel, which assigns template weights r(i, j, k, l) according to the difference in grid values, is shown in Equation 5:

\[r(i,j,k,l)=\exp\left(-\frac{\|f(i,j)-f(k,l)\|^{2}}{2\sigma_{r}^{2}}\right), \tag{5}\]

where (i, j) = the grid position, (k, l) = a position within the (2N+1)×(2N+1) window centered on (i, j), f(k, l) = the grid value involved in the calculation, \(\sigma_{d}^{2}\) = the variance of the spatial (position distance) kernel, and \(\sigma_{r}^{2}\) = the variance of the range (grid value) kernel.

Multiplying the two templates gives the template weights of the bilateral filter, as shown in Equation 6:

\[w(i,j,k,l)=\exp\left(-\frac{(i-k)^{2}+(j-l)^{2}}{2\sigma_{d}^{2}}-\frac{\|f(i,j)-f(k,l)\|^{2}}{2\sigma_{r}^{2}}\right), \tag{6}\]

Therefore, the bilateral filter output can be expressed as Equation 7:

\[g(i,j)=\frac{\sum_{k,l}f(k,l)w(i,j,k,l)}{\sum_{k,l}w(i,j,k,l)}, \tag{7}\]

Because it considers both the value domain and the spatial domain, the bilateral filter removes noise while retaining features better than a traditional Gaussian or mean filter operating on a single spatial and/or value domain (An, et al., 2024). Using the above methods, data from different sensors can be fused, and the required DSM can be obtained from the data acquired by the various sensors. Processing the data from multiple periods and different sensors then allows a detailed analysis of surface elevation differences, which can reveal surface changes such as landslides, ground subsidence, and other deformations.
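For completeness, a direct, unoptimized transcription of Equations 4-7 is given below; a production pipeline would normally call an optimized library routine instead:

```python
import numpy as np

def bilateral_filter(f, n, sigma_d, sigma_r):
    """Direct transcription of Equations 4-7 on a 2-D elevation grid.

    n sets the (2n+1) x (2n+1) window; sigma_d weights spatial
    closeness and sigma_r weights elevation similarity.
    """
    height, width = f.shape
    g = np.empty_like(f, dtype=float)
    pad = np.pad(f, n, mode="reflect")
    # Spatial kernel d(i, j, k, l): identical for every window position.
    ax = np.arange(-n, n + 1)
    d = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma_d ** 2))
    for i in range(height):
        for j in range(width):
            win = pad[i:i + 2 * n + 1, j:j + 2 * n + 1]
            r = np.exp(-((win - f[i, j]) ** 2) / (2 * sigma_r ** 2))  # Eq. 5
            w = d * r                                                 # Eq. 6
            g[i, j] = np.sum(w * win) / np.sum(w)                     # Eq. 7
    return g

smoothed = bilateral_filter(np.random.rand(64, 64), n=2, sigma_d=1.5, sigma_r=0.1)
```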
## 3 Experiments

An experiment was first conducted to validate the optimization method for historical data proposed in this paper. The experimental results are shown in Figure 5: panel (a) shows the acquired historical data, where clear anomalies can be observed due to the earlier collection methods and time periods, and panel (b) presents the optimized result, in which the previously evident anomalies have been successfully mitigated. By comparing the data before and after optimization, the experimental results demonstrate that this method effectively eliminates anomalous regions in the data.

Figure 5: Optimization results of problem data.

Subsequently, to verify the feasibility and applicability of generating true-color point clouds through data fusion, typical regions were selected for the true-color point cloud generation experiment; the results are shown in Figure 6. The main process involves converting the oblique model obtained from photogrammetry into a Digital Orthophoto Map (DOM) and fusing it with the LiDAR point cloud, which contains only grayscale information, to output true-color point clouds with color texture. Oblique photography and LiDAR data collection were conducted in the selected area, ensuring that both datasets share a consistent geographic coordinate system. The collected data then underwent denoising, filtering and format conversion to ensure data quality, and the photographic data were processed to generate the DOM. Since the geographic coordinate systems are consistent, the nearest-neighbor coordinate matching method can attach the color texture information from the DOM to the grayscale LiDAR point cloud. This fusion technique produces true-color point clouds. The integration of multiple sensors, such as cameras and LiDAR, successfully provides intuitive and high-precision visual data for disaster relief efforts.

Figure 6: Generation of colored point cloud.

Finally, to validate the feasibility and applicability of the surface deformation analysis techniques proposed in this paper, we conducted analyses in mining and industrial areas. By processing and analyzing data from these typical regions, we confirmed the feasibility and applicability of the overall scheme and algorithm. The experimental process and results are shown in Figure 7: panel (a) shows the analysis in the mining area, where models obtained from two periods of oblique photogrammetry successfully yielded surface deformation results, and panel (b) shows the analysis in the industrial area, where surface deformation analysis combined the previously generated true-color point cloud data with historical DSM data. Data collection was carried out in both the mining and urban areas, including oblique photography and LiDAR scanning. The raw data were processed and converted into the appropriate formats, true-color point clouds were generated, and DSMs were subsequently created. Comparative analysis was then employed to assess the surface deformation in both areas. Mining areas typically have complex terrain and significant surface changes, providing an ideal environment for testing the effectiveness of the techniques in analyzing subsidence and other geological activity. Urban areas, with dense buildings and varied surface changes due to construction, traffic development and other factors, test the method's application in highly complex and dynamically changing environments.

Figure 7: Typical area experiment.

To evaluate the accuracy and precision of the proposed methods, multiple elevation points within the experimental areas were measured using GPS-RTK, and the analysis results were compared with these measurements. The comparison results are shown in Table 1. The absolute errors range between 0.031 m and 0.052 m, all below 0.06 m, with a standard deviation of 0.039 m. These results meet the practical accuracy requirements for surface deformation detection, validating the effectiveness and practicality of our research methods and confirming that the results fully satisfy the demands of real-world applications.

\begin{table} \begin{tabular}{l l l l} \hline \hline Measure points & Observed data & Analyzed data & Error \\ \hline 1 & 16.592 m & 16.644 m & 0.052 m \\ 2 & 16.642 m & 16.609 m & -0.033 m \\ 3 & 17.705 m & 17.753 m & 0.048 m \\ 4 & 17.658 m & 17.689 m & 0.031 m \\ 5 & 17.121 m & 17.084 m & -0.037 m \\ \hline \hline \end{tabular} \end{table} Table 1: Results validation data
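The reported accuracy statistics can be reproduced directly from the signed errors in Table 1:

```python
import numpy as np

# Signed errors from Table 1 (analyzed minus GPS-RTK observed, in metres).
errors = np.array([0.052, -0.033, 0.048, 0.031, -0.037])

print(np.abs(errors).min(), np.abs(errors).max())  # 0.031 .. 0.052
print(round(errors.std(), 3))                      # 0.039 (population std)
```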
## 4 Conclusion

This paper proposes a method for surface deformation analysis under disaster emergency conditions by integrating multi-temporal and multi-source data. The method aims to quickly and accurately assess surface changes caused by natural disasters, providing reliable support for rescue decision-making. The main contributions of this paper are as follows:

In the context of globalization, frequent natural disasters pose significant challenges to human society. Post-disaster emergency surveying is fundamental to rescue efforts, and quickly obtaining accurate information on terrain changes is crucial for saving lives and property. This paper proposes a comprehensive analytical method that integrates satellite imagery, oblique photogrammetry and LiDAR data. Through true-color point cloud generation, data preprocessing and surface deformation analysis, effective integration and rapid analysis of multi-source data are achieved. Historical data are optimized using threshold segmentation and median filtering to enhance data quality. Oblique imagery and LiDAR point clouds are fused to generate point cloud data with true colors and textures. Elevation difference matrices and bilateral filtering techniques are employed to perform multi-temporal data overlay analysis and extract surface deformation information.

Experiments conducted in multiple typical regions validate the method's potential for application in various surface environments. The results demonstrate that the method can quickly apply multi-temporal, multi-source remote sensing data to effectively identify surface deformations following natural disasters. It significantly improves the efficiency and accuracy of post-disaster emergency surveying, providing strong support for emergency response and disaster relief decision-making. By integrating multi-source remote sensing data, it optimizes data utilization and analytical flexibility, offering new perspectives and tools for future disaster response and other applications. This study thus offers an innovative solution for surface deformation analysis, with significant practical implications and application prospects in the field of emergency surveying.

## Acknowledgement

This work is supported by the Shaoxing Municipal Bureau of Natural Resources and Planning Production Project (CGSHZJ-2023-N000849).

## References

* Zheng, J., Yao, W., Lin, X., Ma, B., Bai, L., 2022. An accurate digital subsidence model for deformation detection of coal mining areas using a UAV-based LiDAR. _Remote Sensing_ 14, 421.
* Opach, T., Rod, J.K., Munkvold, B.E., 2023. Map functions to facilitate situational awareness during emergency events. _Cartography and Geographic Information Science_ 50, 546-561.
* Kovanic, L., Topitzer, B., Pet'ovsky, P., Blist'an, P., Gergel'ova, M.B., Blist'anova, M., 2023. Review of photogrammetric and Lidar applications of UAV. _Applied Sciences_ 13, 6732.
* Zhang, Z., Zhu, L., 2023. A review on unmanned aerial vehicle remote sensing: Platforms, sensors, data processing methods, and applications. _Drones_ 7, 398.
* Singh, C.H., Jain, K., Mishra, V., 2023. UAV-Based Terrain-Following Mapping Using LiDAR in High Undulating Catastrophic Areas. In: Jain, K., Mishra, V., Pradhan, B. (Eds.), _Proceedings of UASG 2021: Wings 4 Sustainability_, Lecture Notes in Civil Engineering. Springer International Publishing, Cham, pp. 21-37.
* Zhan, X., Zhang, X., Wang, X., Diao, X., Qi, L., 2024. Comparative analysis of surface deformation monitoring in a mining area based on UAV-lidar and UAV photogrammetry. _The Photogrammetric Record_, phor.12490.
* Kim, J., Kim, I., Ha, E., Choi, B., 2023. UAV Photogrammetry for Soil Surface Deformation Detection in a Timber Harvesting Area, South Korea. _Forests_ 14, 980.
* Jiang, N., Li, H.-B., Li, C.-J., Xiao, H.-X., Zhou, J.-W., 2022. A fusion method using terrestrial laser scanning and unmanned aerial vehicle photogrammetry for landslide deformation monitoring under complex terrain conditions. _IEEE Transactions on Geoscience and Remote Sensing_ 60, 1-14.
* Koutantou, K., Mazzotti, G., Brunner, P., Webster, C., Jonas, T., 2022. Exploring snow distribution dynamics in steep forested slopes with UAV-borne LiDAR. _Cold Regions Science and Technology_ 200, 103587.
* An, S., Yuan, L., Xu, Y., Wang, X., Zhou, D., 2024. Ground subsidence monitoring based on UAV-LiDAR technology: a case study of a mine in the Ordos, China. _Geomech. Geophys. Geo-energy. Geo-resour._ 10, 57.
Rapid analysis of surface deformation is crucial for rescue operations following natural disasters. However, the lack of recent terrain data and the discrepancy between data types and field-collected data often hinder timely surface deformation analysis. To enhance data usability, this paper proposes an analytical method that integrates multiple sources of remote sensing data, including satellite data, oblique photography data, and LiDAR data. By merging oblique images with grayscale point clouds, true-color point clouds are generated. The method optimizes old data through threshold segmentation and median filtering, then converts and unifies the resolution of multi-source data via data interpolation. Elevation interpolation matrices are employed for overlay analysis, and a combination of bilateral filtering and threshold processing is used. This groundbreaking approach enables the completion of surface deformation analysis in emergency geospatial surveys and has been validated in various typical regions, demonstrating its application potential across different surface environments. Experimental results indicate that this method can quickly utilize multi-temporal and multi-source remote sensing data to effectively identify surface deformations following natural disasters.
# Broad-UNet: Multi-scale feature learning for nowcasting tasks

Jesus Garcia Fernandez [email protected] Siamak Mehrkanoon [email protected] Department of Data Science and Knowledge Engineering, Maastricht University, The Netherlands

## 1 Introduction

Weather forecasting is an essential task that has a great influence on humans' daily life and activities. Industries such as agriculture [1], mining [2] and construction [3] rely on weather forecasts to make decisions, and unexpected climatological events may therefore result in large economic losses. Similarly, accurate weather forecasts improve safety on flights and roads and help us foresee potential natural disasters. Due to its importance, precipitation nowcasting is becoming an increasingly popular research topic. This term refers to the problem of forecasting precipitation in the near future at high spatial resolution. It is usually performed through satellite imagery, and many different approaches have been proposed for this problem. Classical nowcasting approaches mainly focus on two methods: Numerical Weather Prediction (NWP) [4] and extrapolation-based techniques, such as Optical Flow (OF) [5]. NWP methods simulate the underlying physics of the atmosphere and ocean to generate predictions, so they require a vast amount of computational resources. In contrast, optical flow based methods identify and predict how objects move through a sequence of images, but they are unable to represent the dynamics behind them. In recent years, the massive amount of available data has aroused research interest in data-driven machine learning techniques for nowcasting [6; 7; 8]. By taking advantage of available historical data, data-driven approaches have shown better performance than classical ones in many forecasting tasks [9]. Furthermore, while classical machine learning techniques rely on handcrafted features and domain knowledge, deep learning techniques automate the extraction of those features. Recent advances in deep learning have shown promising results in diverse research areas such as neuroscience, biomedical signal analysis, weather forecasting and dynamical systems, among others [10; 11; 12; 13; 14; 15; 16; 17; 18]. Convolutional Neural Networks (CNNs) are the most popular algorithms used in computer vision [19], achieving the state of the art in various tasks [20; 19; 21]. CNN architectures, such as AlexNet [22], ResNet [23] and InceptionNet [24], to name a few, mainly consist of combinations of convolutional and pooling layers. They are outstanding at classification, identification and recognition tasks. Among other architectures, autoencoders have emerged as one of the most powerful approaches in both supervised [25; 26; 27] and unsupervised learning [28; 29; 30], with the UNet [31] being one of the most versatile architectures. The UNet architecture was first proposed for medical image segmentation, but it has been employed in various domains [32; 33; 27]. It consists of a contracting path, to extract features, and an expanding path, to reconstruct a segmented image, with a set of residual connections between them to enable precise localization. In our previous work [27], we introduced various extended versions of the UNet for the weather forecasting problem. In this paper, we further extend the best performing model in that work [27], i.e. the AsymmIncepRes3DDR-UNet.
In particular, motivated by the results of [34], we augment the AsymmIncepRes3DDR-UNet's feature extraction capacity by incorporating an Atrous Spatial Pyramid Pooling (ASPP) module [34] in the bottleneck of the network. The ASPP module works in line with the existing building blocks of our network (the multi-scale feature convolutional block), extracting multi-scale features in parallel and combining them. Therefore, unlike the original UNet, the proposed model is designed to capture multi-scale information. In addition, it keeps the temporal dimension unchanged along the encoder path and then reduces it before concatenation with the output of every level in the decoder path. As a result, it can efficiently learn a mapping between 3-dimensional input data and 2-dimensional output data. Furthermore, we apply a kernel factorization to most of the convolutional operations of the model, resulting in a significant reduction in the total number of parameters compared to the original UNet, while improving performance. These techniques are explained in detail in the subsequent sections. We further present an analysis of this multi-scale feature extraction and the enhancement provided by the ASPP module. We show the model's versatility by applying it to two different nowcasting tasks, i.e. precipitation nowcasting and cloud cover nowcasting. In the precipitation nowcasting task, the model performs a regression for every pixel; in the cloud cover nowcasting task, it classifies each pixel as containing clouds or not. In addition, we directly compare the proposed model with the model introduced in [32], a variation of the UNet architecture that relies on depthwise-separable convolutions and includes a CBAM attention module [35] at each level. While the model in [32] approximates the performance of the original UNet with a significantly reduced number of parameters, our model outperforms the original UNet with a reduced number of parameters.

## 2 Related work

Traditionally, optical flow based models have been the most popular techniques among classical methods for precipitation nowcasting tasks [36; 37]. However, machine learning and deep learning based approaches have come to dominate this field of research in recent years. Due to the vast amount of available satellite imagery, powerful deep neural network based models are suitable candidates to address various problems existing in this field. In particular, CNN based architectures have shown a great ability to handle 2D and 3D images. Thanks to the versatility of CNNs, nowcasting problems can be tackled in different fashions. For instance, the authors in [38] and [39] treated the multiple time-steps as multiple channels in the network; in this way, they could apply a simple 2D-CNN to perform the predictions. Additionally, the authors in [40] treated the multiple time-steps as depth in the samples, so that a 3D-CNN can be applied to approximate more complex functions. As has been shown in [41; 39; 32], among the used CNN architectures, the UNet is particularly suitable for this task, due to its autoencoder-like architecture and its ability to tackle image-to-image translation problems. In addition to CNNs, Recurrent Neural Networks (RNNs) have proved to be a robust approach. These architectures struggle to work with images but can capture long-range dependencies, an ability that CNNs can only partially achieve with the addition of attention mechanisms, such as self-attention [42].
In [6], the authors introduce an architecture that combines the strengths of both CNNs and RNNs. They extend the fully connected LSTM (FC-LSTM) with convolutional structures, obtaining the Convolutional LSTM network (ConvLSTM). As a result, the proposed model captures spatiotemporal correlations better than the FC-LSTM model. The authors in [40] introduce the Trajectory GRU (TrajGRU) model as an extension of the ConvLSTM. This architecture keeps the advantages of the previous model and also learns the location-variant structure of the recurrent connections, showing superior performance compared to the other evaluated models. Nevertheless, these RNN models have not been directly compared with the UNet in nowcasting tasks. The authors in [26] make a comparison among different types of models for cloud cover nowcasting; the models under assessment are various versions of CNNs, RNNs, LSTMs and the UNet. The authors showed that the UNet model is the best performing model for the given cloud cover nowcasting task.

## 3 Proposed model

In this section, we introduce our Broad-UNet model. First, the different elements used for building the network are presented. The complete architecture is then explained.

### Multi-scale feature convolutional block

Motivated by the goal of extracting features at different scales, the model contains a block consisting of parallel arms, as shown in Fig. 1. This block serves as the core building block of our network. Within this block, the data forks into parallel branches of convolutions with different kernel sizes, after going through an initial convolution. A \(3\times 3\times 3\) convolution is followed by a set of parallel convolutions with \(1\times 1\times 1\), \(3\times 3\times 3\) and \(5\times 5\times 5\) kernel sizes. The outputs of the different branches are then concatenated and merged with a \(1\times 1\times 1\) convolution. Additionally, inspired by the results found in [43], we keep some information intact alongside the parallel branches with a residual connection. Lastly, the output of the block is rectified with a ReLU activation function.

Figure 1: Multi-scale feature convolutional block. Convolutions with different kernels are performed in parallel to extract features at different scales. A residual connection also keeps some unmodified information.

To reduce the large number of features resulting from these branches, we factorize the convolutions as suggested in [44]. That means a convolution \(N\times N\times N\) decomposes into the three consecutive \(1\times 1\times N\), \(1\times N\times 1\) and \(N\times 1\times 1\) convolutions. Hence, this sequence is an approximation of the original convolution with fewer parameters.
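As an illustration, a minimal TensorFlow/Keras sketch of this block is given below. The per-branch filter count and the placement of activations are illustrative assumptions, since the text does not fix them.

```python
# Sketch of the multi-scale feature convolutional block (Fig. 1),
# with factorized (asymmetric) 3D convolutions.
import tensorflow as tf
from tensorflow.keras import layers


def factorized_conv3d(x, filters, n):
    # Approximate an N x N x N convolution with three asymmetric ones.
    x = layers.Conv3D(filters, (1, 1, n), padding="same")(x)
    x = layers.Conv3D(filters, (1, n, 1), padding="same")(x)
    return layers.Conv3D(filters, (n, 1, 1), padding="same")(x)


def multiscale_block(x, filters):
    # Initial 3x3x3 convolution feeding the parallel branches.
    x = factorized_conv3d(x, filters, 3)
    # Parallel branches with 1x1x1, 3x3x3 and 5x5x5 receptive fields.
    b1 = layers.Conv3D(filters, (1, 1, 1), padding="same")(x)
    b3 = factorized_conv3d(x, filters, 3)
    b5 = factorized_conv3d(x, filters, 5)
    # Concatenate the branches and merge them with a 1x1x1 convolution.
    merged = layers.Conv3D(filters, (1, 1, 1), padding="same")(
        layers.Concatenate()([b1, b3, b5]))
    # The residual connection keeps some information unmodified,
    # and the block output is rectified with a ReLU.
    return layers.ReLU()(layers.Add()([merged, x]))
```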
### Atrous Spatial Pyramid Pooling (ASPP)

Atrous Spatial Pyramid Pooling (ASPP) is a mechanism used to capture multi-scale information. It consists of parallel branches of convolutions, similar to the convolutional block presented above. However, instead of using different kernel sizes, the same kernel is used with increasing dilation rates (6, 12 and 18). In this kind of convolution, the filter is upsampled by inserting zeros between successive values. As a result, these branches employ a larger field of view without experiencing an explosion in the number of parameters. Further, the module only extracts information in the spatial dimensions by applying a 2-dimensional filter (shape \(1\times N\times N\)). In addition, ASPP incorporates one branch to extract image-level features, allowing it to capture global context information. Here, we implement this branch by applying global average pooling, followed by reshaping and upsampling back. The extracted features are then concatenated and combined with a \(1\times 1\times 1\) convolution. The scheme of this mechanism is shown in Fig. 2, and a code sketch is given at the end of this section.

### Broad-UNet

Thanks to the effectiveness of the UNet architecture in solving image-to-image mapping tasks, it is chosen to serve as the basis for our model. The core UNet model, which was originally proposed for medical image segmentation tasks, adopts an autoencoder structure. While the encoder part extracts features from the input image, the decoder part performs classification on each pixel to reconstruct the segmented output. In addition, a set of residual connections between both parts allows precise localization in the output image. Differently, our proposed Broad-UNet manipulates 3-dimensional data in the encoder and 2-dimensional data in the decoder. Thus, we can input several time-steps in the first dimension, and the network outputs only one time-step in that dimension. Multi-scale feature convolutional blocks are alternated with pooling operations in the encoder, resulting in five levels. The pooling is only performed in the spatial dimensions (2nd and 3rd) and implemented with a Max Pooling layer. In this way, the temporal dimension of the data remains unchanged. The decoder then follows a similar structure, alternating multi-scale feature convolutional blocks and upsampling in the spatial dimensions. Additionally, we incorporate extra convolutions in the connections between different levels of the encoder and decoder. These intermediate convolutional operations aim to reduce the temporal dimension from \(T\) time-steps to 1.

To extend the multi-scale feature learning process, we combine the convolutional blocks with the ASPP module. It is placed in the bottleneck of the network, where the data has a highly abstract representation. In this way, we allow the network to capture more information from this representation without using larger kernels and more computational resources. Also, dropout is included in the bottleneck to force the network to learn a sparser representation of the data and avoid possible overfitting. As a result, the network input is of shape \(T\times H\times W\times F\) and the output is of shape \(1\times H\times W\times F\), where \(T\) is the number of time-steps (lags), \(H\) and \(W\) are the height and width of the images, and \(F\) is the number of features or elements, which we consider as channels in our network. Here, the convolutions that reduce the temporal dimension have a kernel size \(T\times 1\times 1\) with valid padding. In addition, the use of asymmetric convolutions drastically reduces the total number of parameters of the network. While the number of parameters is \(\sim\)28 million using regular kernels \(N\times N\times N\), the number of parameters after factorizing the convolutions into \(1\times 1\times N\), \(1\times N\times 1\) and \(N\times 1\times 1\) is \(\sim\)11 million. The complete architecture of the model can be found in Fig. 3. Furthermore, a comparison of the number of learnable parameters among the different UNet based models examined in this paper is shown in Table 1.
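To make the bottleneck concrete, the ASPP module described in Section 3.2 can be sketched as follows. This assumes statically known input shapes \((T, H, W, C)\); the \(3\times 3\) spatial kernel and the filter count are illustrative choices, not values taken from the text.

```python
# Sketch of the ASPP module placed in the bottleneck (Fig. 2).
from tensorflow.keras import layers


def aspp(x, filters, rates=(6, 12, 18)):
    t, h, w = x.shape[1], x.shape[2], x.shape[3]
    # Atrous branches: 1 x N x N kernels with growing dilation, so only
    # the spatial field of view is enlarged.
    branches = [layers.Conv3D(filters, (1, 3, 3), padding="same",
                              dilation_rate=(1, r, r))(x) for r in rates]
    # Image-level branch: global average pooling, reshaping, and
    # upsampling back to the bottleneck resolution.
    gap = layers.GlobalAveragePooling3D()(x)
    gap = layers.Reshape((1, 1, 1, x.shape[-1]))(gap)
    gap = layers.UpSampling3D(size=(t, h, w))(gap)
    branches.append(gap)
    # Concatenate all branches and combine with a 1x1x1 convolution.
    return layers.Conv3D(filters, (1, 1, 1), padding="same")(
        layers.Concatenate()(branches))
```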
## 4 Data description and preprocessing

To assess the performance of our model, we apply it to two different datasets. Both of them consist of satellite images and are intended to tackle weather nowcasting problems. The first one includes precipitation maps, in which the value of each pixel shows the amount of rainfall in that region. The second one consists of cloud cover maps, in which the pixel values are binary and indicate whether or not there is a cloud in that region. Here, we recreate the same samples as in [32] for the first dataset, and as in [26] for the second dataset. In this way, we can make a fair comparison with the results obtained in those research works. For reproducibility purposes, all our models and scripts are available on Github 1. Also, the datasets and pre-trained models are available upon request.

Footnote 1: [https://github.com/jesusgf96/Broad-UNet](https://github.com/jesusgf96/Broad-UNet)

Figure 2: Atrous Spatial Pyramid Pooling (ASPP) block. Different dilation rates allow the network to extract multi-scale information. Due to the kernel shapes (\(1\times N\times N\)) and the image-level pooling mechanism, only spatial information is extracted.

### Precipitation maps dataset

The first dataset, provided by the Royal Netherlands Meteorological Institute (Koninklijk Nederlands Meteorologisch Instituut, KNMI) [45], includes rainfall measurements from two Dutch radar stations (De Bilt and Den Helder). These measurements are in the shape of images. The images cover the region of the Netherlands and neighbouring countries, spanning four years at 5-minute intervals. To train and validate the models, we use data from the years 2016-2018 (80% train / 20% validation), and the data from 2019 is used as the test set. The values of each pixel represent the accumulated amount of rainfall in the last five minutes; that means a value \(n\) represents \(n\times 10^{-2}\) mm of rain per square kilometre. The resolution of the images is \(765\times 700\), and the measured region is circle-shaped with a large margin. Following the lines of [32], we cropped the central square area of size \(288\times 288\), as shown in Fig. 4. Moreover, there is a high imbalance between pixels with rain and no rain, with plenty of images lacking rainy pixels. Therefore, as in [32], we filter the dataset, choosing only the images with at least 50% of pixels containing any amount of rain. This dataset is then used to create the training/validation/test samples. Additionally, we create a second dataset by filtering the images with at least 20% of pixels containing any amount of rain. From this second dataset, we use only the test set; it therefore serves as a way of testing our trained models under different conditions. We also normalize both datasets by dividing them by the highest value in the training set.

\begin{table} \begin{tabular}{|c c|} \hline **Model** & **Number of parameters** \\ \hline \hline UNet & \(\sim\)17M \\ \hline SmaAt-UNet & \(\sim\)4M \\ \hline AsymmIncepRes3DDR-UNet & \(\sim\)9.5M \\ \hline Broad-UNet & \(\sim\)11M \\ \hline \end{tabular} \end{table} Table 1: Comparison between the number of learnable parameters in different UNet based models examined in this paper.

Figure 3: Complete architecture of Broad-UNet. The Multi-scale feature convolutional block is displayed as _Conv. Block_ for simplicity. The annotation over these blocks describes the output dimension, where \(T\) represents the time-steps (lags), \(H\) and \(W\) represent the height and width of each image, and \(F\) the number of features or elements being predicted.
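The preprocessing described above can be summarized in the following sketch. Here `raw_images` is a hypothetical in-memory array of rainfall maps, since the actual data loading depends on the KNMI file format.

```python
# Sketch of the precipitation-map preprocessing: central crop,
# rain-fraction filtering, and normalization by the training maximum.
import numpy as np


def preprocess_precipitation(raw_images, min_rain_fraction=0.5):
    # Crop the central 288 x 288 square of each 765 x 700 map.
    h0 = (765 - 288) // 2
    w0 = (700 - 288) // 2
    cropped = raw_images[:, h0:h0 + 288, w0:w0 + 288]
    # Keep only maps where at least `min_rain_fraction` of the pixels
    # contain any amount of rain.
    rainy_fraction = (cropped > 0).mean(axis=(1, 2))
    return cropped[rainy_fraction >= min_rain_fraction]


# Normalization uses the maximum of the *training* split only:
# train = preprocess_precipitation(raw_train)
# train_max = train.max()
# train = train / train_max
# test = preprocess_precipitation(raw_test, 0.2) / train_max
```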
### Cloud cover dataset

The second dataset is the "Geostationary Nowcasting Cloud Type" classification product [46] from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT). It is composed of satellite images of the whole globe at longitude 0 degrees, taken every 15 minutes. The resulting size of the images in the dataset is \(3712\times 3712\), spanning the years 2017-2018. For multi-comparison purposes, we generate two different datasets from this data. In the first dataset, we follow the lines of [26] and use data from 2017 and the first semester of 2018 as the training set; the data from the second semester of 2018 is then used for both validation and test. In the second dataset, following [32], we use data from 2017 and the first semester of 2018 to train and validate our models (80% train / 20% validation), and the data from the second semester of 2018 is thus used only for test.

In this data, every pixel can take 15 different values (1: Cloud-free land, 2: Cloud-free sea, 3: Snow over land, 4: Sea ice, 5: Very low clouds, 6: Low clouds, 7: Mid-level clouds, 8: High opaque clouds, 9: Very high opaque clouds, 10: Fractional clouds, 11: High semitransparent thin clouds, 12: High semitransparent meanly thick clouds, 13: High semitransparent thick clouds, 14: High semitransparent above low or medium clouds, 15: High semitransparent above snow/ice). However, following the lines of [26], we aim to perform a classification between cloud and no-cloud. Therefore, we group the labels from 1 to 4 into 0 (no-cloud) and the labels from 5 to 15 into 1 (cloud). Also, we crop the images according to the boundaries of France: [51.896, 41.104, -5.842, 9.842] (upper latitude, lower latitude, left longitude, right longitude). Then we apply a transformation to obtain a suitable projection and reshape the resulting image to \(256\times 256\) pixels. Fig. 5 displays an example of the described pre-processing steps.
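The label grouping step admits a very short sketch; `label_map` is a hypothetical integer array of cloud-type labels for a single image.

```python
# Sketch of the cloud/no-cloud label grouping: cloud-type classes 1-4
# become 0 (no cloud) and classes 5-15 become 1 (cloud).
import numpy as np


def binarize_cloud_labels(label_map):
    return (label_map >= 5).astype(np.uint8)
```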
## 5 Experimental setup and evaluation

In order to have a fair comparison with the results obtained in [32] and [26] for both datasets, we reproduce the same experimental setups. The data is arranged in such a way that the resulting input is a four-dimensional array \(\mathcal{I}\in\mathbb{R}^{T\times H\times W\times F}\), where \(T\) is the number of lags or previous time-steps, which corresponds to the time dimension. \(H\) and \(W\) refer to the size of the image and make up the spatial dimensions. The last element \(F\) corresponds to the predicted features, which in both cases is 1. We use TensorFlow to implement our models and to train and evaluate them on the given datasets. The hyperparameters of the models are tuned, and the optimal values are found empirically; a sketch of the resulting training setup for both tasks is given at the end of this section.

### Precipitation maps nowcasting

As for the precipitation maps dataset, we apply the preprocessing and split the dataset as described in section 4.1. We aim to predict a precipitation map 30 minutes ahead or, considering that the images are generated five minutes apart, six time-steps ahead. The number of lags (previous time-steps) is set to 12, which was empirically found to be the best among the tested lag values. The height and width of the images are 288 and 288, and the number of features in the input is 1, i.e. the precipitation map. Therefore, the input of the model has the shape (12, 288, 288, 1), and the output has the shape (1, 288, 288, 1). In this nowcasting task, we perform a regression on every pixel. Mean Squared Error (MSE) is used as the loss function, optimized with the Adam optimizer with an initial learning rate of 0.0001. The batch size and the dropout rate are set to 2 and 0.5, respectively. We also implemented a checkpoint callback to monitor the validation loss, so that the best performing model on the validation set is saved. We use MSE as the main metric to assess the performance of the model on this dataset. Furthermore, we also include additional metrics such as accuracy, precision and recall. Following the lines of [32], in order to calculate these metrics, we first create a binarized mask of the image according to a threshold. This threshold is the mean value of the training set from the 50% of rain pixels dataset. Hence, any value equal to or over the threshold is replaced by 1, and any value under it is replaced by 0.

Figure 4: Example of a precipitation map from the first dataset, before and after applying the preprocessing.

Figure 5: Example of an image from the cloud cover dataset, before and after applying the different steps of the preprocessing.

### Cloud cover nowcasting

Regarding the cloud cover dataset, we preprocess the data and split the dataset as described in section 4.2. In this case, we predict six different time-steps: from 15 to 90 minutes ahead, or from 1 to 6 time-steps ahead. Due to the architecture of our network, we train six different models, so that each model predicts a different time-step. Here, the number of lags is set to 4, the height and width of the images are 256 and 256, and the number of input features is again 1, the cloud cover map. That means that the model receives input data with the shape (4, 256, 256, 1) and outputs data with the shape (1, 256, 256, 1). In this task, we perform a binary classification of every pixel. Binary cross-entropy is used as the loss function in this case. We use the Adam optimizer with an initial learning rate of 0.001. The batch size and the dropout rate are set to 8 and 0.5, respectively. Similarly, we implemented a checkpoint callback to monitor the validation loss, so that the best performing model on the validation set is saved. Following the lines of [26], here we also use MSE as the metric to assess the performance of the model. First, we calculate the MSE between the ground truth and the raw prediction as the main metric. In this case, the values between 0 and 1 in the predictions indicate the probability of cloud occurrence in that region. In addition, we binarize the prediction of the network with a threshold of 0.5 to generate a second assessment with the MSE metric. We also include additional metrics, i.e. accuracy, precision and recall, to compare the performance of the Broad-UNet with the model introduced in [32], which also uses the UNet architecture as its basis. To calculate these metrics, we first create a binarized mask of the image, using the value 0.5 as the threshold.
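The training setup referenced at the beginning of this section can be sketched as follows. The `model` object, the data arrays, the epoch count and the checkpoint file name are placeholders; the loss functions, learning rates and batch sizes are those reported above.

```python
# Sketch of the training configuration for both nowcasting tasks.
import tensorflow as tf


def compile_and_train(model, x_train, y_train, x_val, y_val,
                      task="precipitation", epochs=200):
    if task == "precipitation":      # per-pixel regression
        loss, lr, batch = "mse", 1e-4, 2
    else:                            # cloud cover: per-pixel classification
        loss, lr, batch = "binary_crossentropy", 1e-3, 8
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss=loss)
    # Keep the weights that perform best on the validation set.
    ckpt = tf.keras.callbacks.ModelCheckpoint(
        "broad_unet_best.h5", monitor="val_loss", save_best_only=True)
    model.fit(x_train, y_train, batch_size=batch, epochs=epochs,
              validation_data=(x_val, y_val), callbacks=[ckpt])
```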
## 6 Results

### Precipitation maps nowcasting

In the precipitation maps prediction task, we compare the performance of the Broad-UNet with the persistence, a simple meteorological baseline used in forecasting, and different models over the test sets of the two datasets, i.e. 50% of rain pixels and 20% of rain pixels. These models are the UNet [31] and two variants [32; 27]. The MSE is the main metric used for this comparison, and it is calculated over the denormalized data. The additional metrics are computed over the binarized data, as described in section 5.1. The performance of the different models on the first precipitation maps dataset is shown in Table 2. In the same way, the performance of the models on the second precipitation maps dataset is listed in Table 3. From the obtained results, one can observe that the Broad-UNet achieved the lowest MSE score on both datasets. Two examples of 30 minutes ahead predictions with the Broad-UNet are displayed in Fig. 6.

\begin{table} \begin{tabular}{l c c c c} \hline \hline \multicolumn{5}{c}{**MSE 50\% of rain pixels dataset**} \\ \hline **Model** & **MSE \(\downarrow\)** & **Accuracy \(\uparrow\)** & **Precision \(\uparrow\)** & **Recall \(\uparrow\)** \\ \hline **Persistence** & 2.48e-02 & 0.756 & 0.678 & 0.643 \\ **UNet** & 1.22e-02 & 0.836 & 0.740 & 0.855 \\ **SmaAt-UNet** & 1.22e-02 & 0.829 & 0.730 & 0.850 \\ **AsymmIncepRes3DDR-UNet** & 1.11e-02 & 0.858 & 0.759 & 0.800 \\ **Broad-UNet** & 1.08e-02 & 0.850 & 0.715 & 0.817 \\ \hline \hline \end{tabular} \end{table} Table 2: Test MSE and additional metric values for the precipitation maps prediction task using the 50% of rain pixels dataset. \(\downarrow\) indicates that the optimal values are the smallest ones and \(\uparrow\) indicates that the optimal values are the highest ones.

\begin{table} \begin{tabular}{l c c c c} \hline \hline \multicolumn{5}{c}{**MSE 20\% of rain pixels dataset**} \\ \hline **Model** & **MSE \(\downarrow\)** & **Accuracy \(\uparrow\)** & **Precision \(\uparrow\)** & **Recall \(\uparrow\)** \\ \hline **Persistence** & 2.28e-02 & 0.827 & 0.559 & 0.543 \\ **UNet** & 1.11e-02 & 0.880 & 0.666 & 0.782 \\ **SmaAt-UNet** & 1.11e-02 & 0.867 & 0.626 & 0.801 \\ **AsymmIncepRes3DDR-UNet** & 1.02e-02 & 0.893 & 0.621 & 0.767 \\ **Broad-UNet** & 1.02e-02 & 0.895 & 0.611 & 0.772 \\ \hline \hline \end{tabular} \end{table} Table 3: Test MSE and additional metric values for the precipitation maps prediction task using the 20% of rain pixels dataset. \(\downarrow\) indicates that the optimal values are the smallest ones and \(\uparrow\) indicates that the optimal values are the highest ones.

Figure 6: Broad-UNet precipitation prediction examples. The images in the first row are generated with the test set from the 50% of pixels containing rain dataset. The images in the second row are generated with the test set from the 20% of pixels containing rain dataset.

### Cloud cover nowcasting

When applying the Broad-UNet to the second dataset, we compare its performance with the persistence and various models introduced and explained in [26]. We perform this comparison with the results obtained on the test set of the cloud cover dataset. The evaluation metrics used are explained in section 5.2. In Fig. 7, we show the MSE obtained using the ground truth and the raw prediction. Fig. 8 depicts the MSE calculated with the ground truth and the binarized prediction. From Fig. 7 and Fig. 8, one can notice that the Broad-UNet's performance is superior in short-term forecasting. As the number of steps ahead increases, the gap between the performance of the proposed Broad-UNet and the classical UNet model decreases. In addition, in Table 4, we show the comparison between the Broad-UNet and the model introduced in [32].
As in [32], the metrics tabulated in this table are averaged over the different time-steps (15-90 minutes ahead). From the obtained results, one can observe that the Broad-UNet performs better than the other compared models in three out of the four used metrics. In addition, two examples of the Broad-UNet's predictions are displayed in Fig. 9. Both predictions are generated with the test set of the cloud cover dataset. In Fig. 9, the images on the first and second row correspond to 30 minutes and 90 minutes ahead predictions, respectively.

## 7 Discussion

From the obtained results, one can observe that the multi-scale feature learning allows the Broad-UNet to perform more precise predictions. This is thanks to the use of different convolutional filters in parallel. By combining convolutions with larger and smaller kernels, the model considers different amounts of information around the same region to generate the feature maps. Likewise, the inclusion of the ASPP module in the architecture allows the network to apply convolutions with diverse receptive fields at the same time. In the precipitation nowcasting task, we can observe an 11% and an 8% improvement with respect to the simple UNet for the two datasets. In the cloud cover nowcasting task, the binarized predictions of the Broad-UNet are 7% more accurate than those of the simple UNet for 15 minutes ahead predictions, and 1% more accurate for 90 minutes ahead predictions. Since in the first nowcasting task, i.e. precipitation prediction, the model aims to perform a regression on each pixel over a wide range of values, achieving accurate forecasting, or equivalently lower MSE values, is more desirable. That is where the Broad-UNet shows superior performance with respect to the UNet. In the second nowcasting task, where the goal is to carry out a binary classification of each pixel, the Broad-UNet performs slightly more accurate predictions than the UNet. While the immediate predictions (i.e. 15 and 30 minutes ahead) are more precise, more distant predictions (more than 45 minutes ahead) are comparable to the UNet's predictions. Therefore, we can state that the wide building blocks of the Broad-UNet allow the network to extract the spatial and short-term temporal information more accurately than the regular UNet. The learnt feature maps in the different branches inside a convolutional block are shown in Fig. 10.

\begin{table} \begin{tabular}{l c c c c} \hline \hline \multicolumn{5}{c}{**Averaged test MSE cloud cover prediction**} \\ \hline **Model** & **MSE \(\downarrow\)** & **Accuracy \(\uparrow\)** & **Precision \(\uparrow\)** & **Recall \(\uparrow\)** \\ \hline **Persistence** & 0.1491 & 0.851 & 0.849 & 0.849 \\ **UNet** & 0.0785 & 0.890 & 0.895 & 0.919 \\ **SmaAt-UNet** & 0.0794 & 0.889 & 0.892 & 0.921 \\ **Broad-UNet** & 0.0783 & 0.891 & 0.898 & 0.914 \\ \hline \hline \end{tabular} \end{table} Table 4: Average of the MSE and additional metrics for the cloud cover prediction task. \(\downarrow\) indicates that the optimal values are the smallest ones and \(\uparrow\) indicates that the optimal values are the highest ones.

Figure 7: Test MSE values of the different models for the cloud cover prediction task.

Figure 8: Test MSE values of the different models for the cloud cover prediction task with binarized predictions.

Figure 9: Broad-UNet cloud cover prediction examples. The image above is predicted 30 minutes ahead, and the image below is predicted 90 minutes ahead.
The chosen convolutional block is the first one, so that the data does not yet have an overly abstract representation and is thus easier to interpret. The image fed to the network belongs to the precipitation maps dataset and is shown in Fig. 11. In Fig. 10, the first row of the feature maps is the output of the convolutional branch with kernel size \(1\times 1\times 1\). The second row corresponds to the output of the branch with kernel size \(3\times 3\times 3\). Lastly, the third row corresponds to the output of the branch with kernel size \(5\times 5\times 5\). From Fig. 10, one can observe the differences between the features extracted in each branch. The convolutions with kernel size \(1\times 1\times 1\) seem to strengthen detailed differences in the image, and convolutions with kernel size \(3\times 3\times 3\) seem to accentuate differences between areas containing a high and a low rain concentration. In addition, convolutions with kernel size \(5\times 5\times 5\) seem to highlight regions with high rain concentration.

## 8 Conclusion

In this paper the Broad-UNet, an extension of the UNet architecture, is introduced for precipitation as well as cloud cover nowcasting. Thanks to the combination of the multi-scale feature convolutional block and the incorporation of the ASPP module, the proposed network is able to capture multi-scale information. In addition, the use of factorized kernels drastically reduces the number of parameters in the network compared to the classical UNet model. The performance of the Broad-UNet is examined by addressing two nowcasting problems. The first problem consists of predicting precipitation maps 30 minutes ahead. The second one consists of forecasting cloud cover 15 to 90 minutes ahead. The obtained results suggest that the Broad-UNet extracts features more efficiently and therefore performs more accurate predictions in short-term nowcasting tasks compared to the other tested UNet based models.

## Acknowledgment

We would like to thank Lea Berthomier from Meteo France for providing the cloud cover dataset, as well as the required code to preprocess it.

Figure 10: Feature maps outputted by the different branches inside the first multi-scale feature convolutional block. Every row represents the output of a different branch. Next to each row, the kernel employed by the convolution in that branch is shown.

Figure 11: Image fed to the network to generate the feature maps shown above. It belongs to the precipitation maps dataset, specifically to the 50% of rain pixels dataset.

## References

* (1) A. Cogato, F. Meggio, M. De Antoni Migliorati, F. Marinello, Extreme weather events in agriculture: A systematic review, Sustainability 11 (9) (2019) 2547.
* (2) S. Ivanov, P. Ivanova, S. Kuvshinkin, Weather conditions as a factor affecting the performance of modern powerful mining excavators, in: Journal of Physics: Conference Series, Vol. 1399, IOP Publishing, 2019, p. 044070.
* (3) A. Senouci, M. Al-Abbasi, N. N. Eldin, Impact of weather conditions on construction labour productivity in Qatar, Middle East Journal of Management 5 (1) (2018) 34-49.
* (4) J. Sun, M. Xue, J. W. Wilson, I. Zawadzki, S. P. Ballard, J. Onvlee-Hooimeijer, P. Joe, D. M. Barker, P.-W. Li, B. Golding, et al., Use of NWP for nowcasting convective precipitation: Recent progress and challenges, Bulletin of the American Meteorological Society 95 (3) (2014) 409-426.
* (5) W.-c. Woo, W.-k. Wong, Operational application of optical flow techniques to radar-based rainfall nowcasting, Atmosphere 8 (3) (2017) 48.
* (6) X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, W.-c.
Woo, Convolutional LSTM network: A machine learning approach for precipitation nowcasting, Advances in Neural Information Processing Systems 28 (2015) 802-810.
* (7) M. Holmstrom, D. Liu, C. Vo, Machine learning applied to weather forecasting, Stanford University (2016) 2-4.
* (8) A. Grover, A. Kapoor, E. Horvitz, A deep hybrid model for weather forecasting, in: Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015, pp. 379-386.
* (9) C. Faloutsos, J. Gasthaus, T. Januschowski, Y. Wang, Classical and contemporary approaches to big time series forecasting, in: Proceedings of the 2019 International Conference on Management of Data, 2019, pp. 2042-2047.
* (10) S. Webb, Deep learning for biology, Nature 554 (7693) (2018).
* (11) S. Mehrkanoon, J. A. K. Suykens, Deep hybrid neural-kernel networks using random Fourier features, Neurocomputing 298 (2018) 46-54.
* (12) S. Mehrkanoon, Deep neural-kernel blocks, Neural Networks 116 (2019) 46-55.
* (13) S. Mehrkanoon, J. A. K. Suykens, Learning solutions to partial differential equations using LS-SVM, Neurocomputing 159 (2015) 105-116.
* (14) S. Mehrkanoon, Cross-domain neural-kernel networks, Pattern Recognition Letters 125 (2019) 474-480.
* (15) J. C. B. Gamboa, Deep learning for time-series analysis, arXiv preprint arXiv:1701.01887 (2017).
* (16) A. G. Salman, B. Kanigoro, Y. Heryadi, Weather forecasting using deep learning techniques, in: 2015 International Conference on Advanced Computer Science and Information Systems (ICACSIS), IEEE, 2015, pp. 281-285.
* (17) R. Coban, I. O. Aksu, Neuro-controller design by using the multifeedback layer neural network and the particle swarm optimization, Tehnički vjesnik 25 (2) (2018) 437-444.
* (18) R. Coban, A context layered locally recurrent neural network for dynamic system identification, Engineering Applications of Artificial Intelligence 26 (1) (2013) 241-250.
* (19) A. Voulodimos, N. Doulamis, A. Doulamis, E. Protopapadakis, Deep learning for computer vision: A brief review, Computational Intelligence and Neuroscience 2018 (2018).
* (20) D. Lu, Q. Weng, A survey of image classification methods and techniques for improving classification performance, International Journal of Remote Sensing 28 (5) (2007) 823-870.
* (21) R. Goel, A. Sharma, R. Kapoor, State-of-the-art object recognition techniques: A comparative study, in: Soft Computing: Theories and Applications, Springer, 2020, pp. 925-932.
* (22) A. Krizhevsky, I. Sutskever, G. E. Hinton, ImageNet classification with deep convolutional neural networks, Communications of the ACM 60 (6) (2017) 84-90.
* (23) K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.
* (24) C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, Going deeper with convolutions, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1-9.
* (25) J. Zhang, Y. Xie, P. Zhang, H. Chen, Y. Xia, C. Shen, Light-weight hybrid convolutional network for liver tumor segmentation, in: IJCAI, 2019, pp. 4271-4277.
* (26) L. Berthomier, B. Pradel, L. Perez, Cloud cover nowcasting with deep learning, arXiv preprint arXiv:2009.11577 (2020).
* (27) J. G. Fernandez, I. A. Abdellaoui, S. Mehrkanoon, Deep coastal sea elements forecasting using U-Net based models, arXiv preprint arXiv:2011.03303 (2020).
* (28) P.
Baldi, Autoencoders, unsupervised learning, and deep architectures, in: Proceedings of the ICML Workshop on Unsupervised and Transfer Learning, 2012, pp. 37-49.
* (29) G. Lample, A. Conneau, L. Denoyer, M. Ranzato, Unsupervised machine translation using monolingual corpora only, arXiv preprint arXiv:1711.00043 (2017).
* (30) Y.-A. Chung, C.-C. Wu, C.-H. Shen, H.-Y. Lee, L.-S. Lee, Audio word2vec: Unsupervised learning of audio segment representations using sequence-to-sequence autoencoder, arXiv preprint arXiv:1603.00982 (2016).
* (31) O. Ronneberger, P. Fischer, T. Brox, U-Net: Convolutional networks for biomedical image segmentation, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2015, pp. 234-241.
* (32) K. Trebing, T. Stanczyk, S. Mehrkanoon, SmaAt-UNet: Precipitation nowcasting using a small attention-UNet architecture, Pattern Recognition Letters 145 (2021) 178-186.
* (33) Y. Tao, P. Palasek, Z. Ling, I. Patras, Background modelling based on generative UNet, in: 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), IEEE, 2017, pp. 1-6.
* (34) L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A. L. Yuille, DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs, IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (4) (2017) 834-848.
* (35) S. Woo, J. Park, J.-Y. Lee, I. S. Kweon, CBAM: Convolutional block attention module, in: Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 3-19.
* (36) N. E. Bowler, C. E. Pierce, A. Seed, Development of a precipitation nowcasting algorithm based upon optical flow techniques, Journal of Hydrology 288 (1-2) (2004) 74-91.
* (37) L. Li, Z. He, S. Chen, X. Mai, A. Zhang, B. Hu, Z. Li, X. Tong, Subpixel-based precipitation nowcasting with the pyramid Lucas-Kanade optical flow technique, Atmosphere 9 (7) (2018) 260.
* (38) G. Ayzel, M. Heistermann, A. Sorokin, O. Nikitin, O. Lukyanova, All convolutional neural networks for radar-based precipitation nowcasting, Procedia Computer Science 150 (2019) 186-192.
* (39) S. Agrawal, L. Barrington, C. Bromberg, J. Burge, C. Gazen, J. Hickey, Machine learning for precipitation nowcasting from radar images, arXiv preprint arXiv:1912.12132 (2019).
* (40) X. Shi, Z. Gao, L. Lausen, H. Wang, D.-Y. Yeung, W.-k. Wong, W.-c. Woo, Deep learning for precipitation nowcasting: A benchmark and a new model, in: Advances in Neural Information Processing Systems, 2017, pp. 5617-5627.
* (41) V. Lebedev, V. Ivashkin, I. Rudenko, A. Ganshin, A. Molchanov, S. Ovcharenko, R. Grokhovetskiy, I. Bushmarinov, D. Solomentsev, Precipitation nowcasting with satellite imagery, in: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019, pp. 2680-2688.
* (42) A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, I. Polosukhin, Attention is all you need, in: Advances in Neural Information Processing Systems, 2017, pp. 5998-6008.
* (43) C. Szegedy, S. Ioffe, V. Vanhoucke, A. Alemi, Inception-v4, Inception-ResNet and the impact of residual connections on learning, arXiv preprint arXiv:1602.07261 (2016).
* (44) H. Yang, C. Yuan, B. Li, Y. Du, J. Xing, W. Hu, S. J. Maybank, Asymmetric 3D convolutional neural networks for action recognition, Pattern Recognition 85 (2019) 1-12.
* (45) Royal Netherlands Meteorological Institute (KNMI) (2020).
URL [https://www.knmi.nl/home](https://www.knmi.nl/home)
* (46) EUMETSAT, Geostationary Nowcasting Cloud Type - 0 degree (2020). URL [https://navigator.eumetsat.int/product/EO:EUM:DAT:MSG:GNWCC](https://navigator.eumetsat.int/product/EO:EUM:DAT:MSG:GNWCC)
Weather nowcasting consists of predicting meteorological components in the short term at high spatial resolutions. Due to its influence on many human activities, accurate nowcasting has recently gained plenty of attention. In this paper, we treat the nowcasting problem as an image-to-image translation problem using satellite imagery. We introduce Broad-UNet, a novel architecture based on the core UNet model, to efficiently address this problem. In particular, the proposed Broad-UNet is equipped with asymmetric parallel convolutions as well as an Atrous Spatial Pyramid Pooling (ASPP) module. In this way, the Broad-UNet model learns more complex patterns by combining multi-scale features while using fewer parameters than the core UNet model. The proposed model is applied to two different nowcasting tasks, i.e. precipitation map and cloud cover nowcasting. The obtained numerical results show that the introduced Broad-UNet model performs more accurate predictions compared to the other examined architectures. keywords: Satellite imagery, Precipitation forecasting, Cloud cover forecasting, Deep learning, Convolutional neural network, U-Net
# Two algorithms for vehicular obstacle detection in sparse pointcloud

Simone Mentasti\({}^{1}\), Matteo Matteucci\({}^{1}\), Stefano Arrigoni\({}^{2}\), Federico Cheli\({}^{2}\)

Partially supported by project TEINVEIN: TEcnologie INnovative per i VEicoli INtelligenti, CUP (Codice Unico Progetto - Unique Project Code): E96D1700010009 - Call "Accordi per la Ricerca e l'Innovazione", cofunded by POR FESR 2014-2020 (Programma Operativo Regionale, Fondo Europeo di Sviluppo Regionale - Regional Operational Programme, European Regional Development Fund). \({}^{1}\) S. Mentasti, M. Matteucci are with the Department of Electronics, Information and Bioengineering of Politecnico di Milano, pzza Leonardo da Vinci 32, Milan, Italy, [email protected] \({}^{2}\) S. Arrigoni, F. Cheli are with the Department of Mechanical Engineering of Politecnico di Milano, via La Masa 1, Milan, Italy, [email protected]

## I Introduction

To properly drive in an unknown environment, autonomous vehicles need to sense the surroundings and find possible obstacles. Among the most used sensors for this task are lidars [1, 2, 3]. Those sensors return a 3D pointcloud representing the area around the autonomous car up to 100 meters with millimeter precision. The density of this pointcloud is a function of the number of lasers mounted on the sensor, which defines how many planes, sometimes called channels, are in the 3D pointcloud. To retrieve a detailed representation, most autonomous vehicle prototypes employ high-end lidars, generally with 32 planes or more [4, 5]. Those sensors provide dense pointclouds that deep learning architectures can process to retrieve 3D bounding boxes of obstacles [6]. However, those sensors are still expensive, and smaller projects cannot sustain their costs. Moreover, to process a rich pointcloud in real-time via deep-learning techniques at a high enough frequency to safely control the vehicle, powerful GPUs are required. A design-to-cost project might decide to employ sensors with a reduced number of planes, where the returned pointcloud is considerably less defined. For a 16 plane lidar, a human can still identify and classify obstacles, but with a lower resolution this task becomes challenging. Due to the reduced number of points, deep learning approaches struggle to extract enough features to classify obstacles correctly. For this reason, different methods, not deep-learning-based, need to be employed.

Lidars and laser sensors have been used in the robotics field way before the rise of autonomous vehicles. Indoor and logistic robots are usually less expensive, and for this reason, they have always been using smaller lidars, usually even single-plane ones. Despite using similar sensor suites, autonomous vehicles and logistics robots have some significant differences. Indoor robots can navigate the environment with low-level representations such as grid-maps [7]; contrarily, planning algorithms for autonomous vehicles, which move at high speed in dynamic environments, require a more high-level representation, like a list of 3D bounding boxes [8]. Because of these differences, it is impossible to directly employ classical robotics solutions, which work well with limited plane lidars, on autonomous vehicles. In this paper, we propose two solutions, one for a 16 plane lidar and one for an 8 plane lidar.
Both approaches are based on an occupancy grid and successive geometric operations on the pointcloud to extract 3D orientated bounding boxes from this low-density pointcloud, as shown in Fig. 1. In particular, the second scenario performs most of the tasks on a 2D grid.

Fig. 1: Snapshot of the 8 plane obstacle detection algorithm. In blue the original pointcloud points, in black the occupancy grid, and in red and green respectively the 2D bounding boxes on the ground plane and the 3D bounding boxes fitted on the pointcloud data.

Therefore, the proposed solution can be easily extended to any number of planes (e.g., smaller sensors with four or fewer planes but also higher resolution ones). To validate both algorithms, we recorded a custom dataset (publicly available at 1), since no dataset is currently available for this task. We employed an RTK-GPS to acquire the exact position of a dynamic obstacle moving around the ego vehicle, which was equipped with a 16 plane Velodyne. In such a way, we were able to compute the exact position and heading of that specific obstacle to properly validate our results. For the second task, we also performed a decimation of the pointcloud, removing half of the planes to simulate a lower resolution sensor.

Footnote 1: Dataset will be released upon paper acceptance

This paper is structured as follows: in Section II, we provide an overview of the current state of the art regarding lidar-based obstacle detection, highlighting the similarities with the robotics world. In Section III and Section IV we describe the two proposed algorithms, for 16 plane lidars and for sensors with fewer planes. Finally, in Section V, we provide experimental validation of both solutions, using our custom recorded dataset, to compare the algorithms' output with trustable ground truth.

## II Related work

Obstacle detection from laser and lidar data has been a research topic for many years. The first solutions come from the indoor robotics world [9, 10]; in particular, the most common approach is still to employ a 2D occupancy grid to represent the surroundings of the robot [11]. To better generalize on different terrains, and thanks to the increasing computational power and availability of multi-channel sensors, robotics has also moved from 2D grids to 3D representations [12]. Autonomous vehicle research started from classical robotics ideas, but rapidly identified different solutions. Nowadays, autonomous cars' detection systems can be divided into three categories: projection methods, volumetric convolutional methods, and raw pointcloud methods. The first two are natural evolutions of the robotics approach; projection methods perform a projection of the 3D pointcloud on a 2D plane and then proceed to identify obstacles on the 2D grid. Two classes of solutions have been identified for this processing phase: the first one is based on geometry and computer vision [13, 8], while the second one leverages the increased available computational power, employing deep learning techniques to process the 2D grid with convolutional neural networks [14, 15]. Volumetric convolutional methods are based on a 3D occupancy grid; in this case, the processing phase is similar to the 2D scenario. The 3D voxel grid can again be processed with a convolutional neural network to identify obstacles [16, 17]. The main disadvantage of those approaches is that the 3D grid and its processing are both resource-demanding and require high computational power to be performed in real-time.
Nevertheless, they can outperform the 2D approaches thanks to a more accurate representation that retains most of the original pointcloud information. Raw pointcloud methods perform detection directly on the pointcloud, without projecting it. These solutions have rapidly grown in popularity in the last years, thanks to the increasing available computational power, and are among the most used approaches nowadays. Many solutions are based on the popular architecture PointNet [18] and its evolution PointNet++ [18], but recently, with the release of big benchmarking datasets, other approaches have emerged [19]. Deep learning solutions can be extremely powerful and accurate, but they require dense pointclouds and lidars with a high number of planes to extract enough features. As a result, they perform poorly with excessively sparse pointclouds. This paper addresses the scenario where data from the lidar are limited in resolution, and obstacles are described only by a few planes. Our approaches are based on the projection method, but we extend it to employ, in the final steps, the original pointcloud data. In such a way, we can achieve higher accuracy in the obstacle pose and heading estimation.

## III 16 plane obstacle detection

Pointclouds acquired by a 16 plane lidar are generally too sparse to be processed with deep-learning-based solutions to retrieve obstacles. For this reason, the majority of currently developed autonomous vehicles employ at least a 32 plane lidar, or a 64 plane one. Nevertheless, due to their competitive costs, smaller lidars might be employed in low-cost vehicles or in scenarios where computational power is limited and it is impossible to employ complex neural networks to extract obstacles from the pointcloud. Sixteen plane lidars still return a pointcloud with well-defined obstacles, and therefore it can be processed with geometric-based approaches. In this section, we analyze a geometric solution to obstacle detection using a 16 plane lidar.

### _Processing pipeline_

The first steps of the pipeline consist of pointcloud pre-processing. Deep learning approaches require the complete pointcloud as input; in this scenario, instead, it is important to remove all the points that are certainly not obstacles. The first step consists of ground plane removal. This is performed using a slightly modified version of the algorithm proposed in [20], adapted to work with our 16 plane lidar. The output of this phase is a pointcloud without the ground plane and the information about the normal of the removed ground. In the next step, we filter out all points which belong to objects we are not interested in. In particular, we remove all points above a certain height; this is particularly important to avoid considering bridges or light signs as obstacles. We finally remove points that are too far away on the left and right of the vehicle and are therefore not of interest. The output of this first part is still a 3D pointcloud, but with considerably fewer points and, most importantly, mainly elements that belong to possible obstacles. In the next step, we project the 3D pointcloud onto a 2D plane, using the normal previously computed when removing the ground plane, and apply a 2D grid to the pointcloud. Iterating through each cell of the grid, we set a cell as 'occupied' if the number of projected points falling into that cell is above a threshold. The output of this phase is a 2D occupancy grid, similar to the one employed in mobile robotics and presented in [21].
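A minimal sketch of this projection-and-counting step is given below. The cell size, ranges and point threshold are illustrative assumptions, and `points` is assumed to be an already ground-removed pointcloud projected onto the ground plane.

```python
# Sketch of the 2D occupancy-grid construction from projected points.
import numpy as np


def occupancy_grid(points, cell_size=0.2, x_range=(-50, 50),
                   y_range=(-50, 50), min_points=3):
    nx = int((x_range[1] - x_range[0]) / cell_size)
    ny = int((y_range[1] - y_range[0]) / cell_size)
    # Map each point's x, y coordinates to a cell index.
    ix = ((points[:, 0] - x_range[0]) / cell_size).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell_size).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    # Count the points falling into each cell.
    counts = np.zeros((nx, ny), dtype=int)
    np.add.at(counts, (ix[valid], iy[valid]), 1)
    # A cell is 'occupied' when enough projected points fall into it.
    return counts >= min_points
```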
This type of representation is well suited for small and indoor tasks, like logistic robots, but to perform planning in a dynamic environment with an autonomous vehicle, a more accurate representation of the surroundings is required. In particular, the control algorithm performs better with a list of fully characterized obstacles. To retrieve this information from a 2D grid, a further elaboration step is required; the core of our solution is indeed this second processing phase, depicted in Fig. 2. The first processing step performed on the occupancy grid is a set of morphological operations. The grid can easily be treated as a binary image; therefore, classical morphological operations, like closing and opening, can remove single noisy points and merge close areas. Next, we apply a clustering algorithm on the grid to retrieve a list of connected components. Those elements are all the candidate obstacles. Finally, we iterate through the list, analyzing each cluster; in particular, based on the area and shape of each cluster, it is possible to remove objects which are not of interest. In the analyzed scenario, we focus only on finding other vehicles; therefore, all obstacles that are considerably smaller or bigger are removed. Similarly, it is possible to filter only pedestrians based on their shape on the grid. A sketch of this grid-processing stage is given at the end of this section.

In the last block of the pipeline, we retrieve a better representation of the obstacles. From the occupancy grid processing phase, it is possible to infer the position and size of the obstacles, but only up to the discretization of the grid (i.e., the size of a cell). To compute a more accurate representation, we take the list of clusters on the 2D grid and the original pointcloud, and we proceed to crop the small area of the pointcloud relative to each obstacle. Since, in our scenario, we are looking for other vehicles, and the obstacle heading is useful information for the control algorithm, we compute it from the 3D points. In particular, given the pointcloud, we look for a plane perpendicular to the direction of the ego-vehicle, since it will be the side with the highest number of points, as shown in Fig. 3(a). From the orientation of the plane, it is possible to retrieve the heading of the vehicle. Next, using this plane, we compute the 3D bounding box that best fits all the points, keeping such plane as one of the faces, Fig. 3(b). The final output is a 3D orientated bounding box that matches the original pointcloud and, therefore, does not have a discretization error.

Fig. 2: Schema of the obstacle detection architecture for 16 plane lidar.

Fig. 3: Core steps of the 3D bounding box computation process: first, a plane is fitted on the back or the side of the obstacle; then, a 3D box is computed using the plane heading and the pointcloud data.
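The grid-processing stage referenced above can be sketched as follows, assuming OpenCV for the morphological operations and the connected-component clustering; the kernel and the area thresholds are illustrative, not the exact values of the pipeline.

```python
# Sketch of the grid-processing stage: morphological cleaning,
# connected-component clustering, and size-based filtering.
import cv2
import numpy as np


def cluster_obstacles(grid, min_area=10, max_area=400):
    img = grid.astype(np.uint8)
    kernel = np.ones((3, 3), np.uint8)
    # Closing merges nearby occupied cells; opening removes lone noise.
    img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
    img = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(img)
    boxes = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if min_area <= area <= max_area:  # keep vehicle-sized clusters
            boxes.append((x, y, w, h))
    return boxes
```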
## IV 8 plane obstacle detection

While 16 plane lidars still provide a dense enough pointcloud to identify obstacles and fit planes to compute 3D bounding boxes, lower density lidars are not suited for this approach. In particular, sensors with a lower number of lasers might provide information about an obstacle only on one or two levels, making it challenging to fit a plane as in the method shown previously. Nevertheless, those sensors are still largely employed, both in lower cost automotive environments and in smaller outdoor delivery robots. Therefore, the problem of retrieving a list of obstacles from those sensors is still relevant.

### _Processing Pipeline_

Since the previously described approach cannot be fully applied to this type of data, the pipeline has a different structure. The first steps are still similar, as shown in Fig. 4, until we compute the 2D occupancy grid. The only difference is the size of the cell, which is smaller (i.e., 0.05 m) to allow us to perform operations directly on the grid. The main differences are in the clustering. In the previous algorithm, we retrieved from the grid only 2D bounding boxes, which were used to extract a portion of the original pointcloud, on which all the information was then computed. In this case, since fitting a vertical plane on a single lidar layer is not feasible, we want to retrieve the heading from the 2D grid. To do so, after computing the connected components, we proceed to reconstruct a convex hull for each element. Then we differentiate two possible scenarios, displayed in Fig. 5. If both dimensions of the convex hull are above a fixed threshold, it means the obstacle is not perpendicular with respect to the lidar, and therefore two sides are visible, Fig. 5a. In this case, the convex hull generally has a triangular shape, since only two sides of an obstacle are visible. Accordingly, we retrieve the three vertices of the triangle and infer from those the possible fourth vertex. From those four points, it is possible to calculate the heading of the obstacle. In particular, to filter noise, we compute the relative angle of each side and average them, under the assumption of a rectangular shape for the obstacle. If one dimension is under the threshold, we assume that the obstacle is parallel or perpendicular to the lidar, Fig. 5b. In this case, it is impossible to employ the described approach, and computing the heading based only on two points, which might be close together, would be excessively noisy. Therefore, we proceed to fit a line using the RANSAC algorithm [22] on the occupancy grid points. The slope of this line is the heading of the obstacle. This solution is still noisier than computing the values on the two obstacle sides, but it allows us to retrieve the heading even when only one side of the obstacle is visible. Both heading-estimation paths are sketched below.

To increase the accuracy, we still take the cropped area of the original pointcloud around this 2D bounding box, using the pointcloud points to compute the size and center of the box and a minimum height. Since some obstacles might be described by only one lidar plane, we are not able to return the exact height of the obstacle, but only the maximum value retrieved by the lidar. In this pipeline, we assume that obstacles have a rectangular shape to compute the heading. Since this requirement is not satisfied by a pedestrian, we check the convex hull size before performing the heading computation. If both dimensions are under a fixed threshold and are more similar to a pedestrian than to a vehicle, this computation is not performed. We still compute the 3D box using the information from the grid and the original pointcloud, but we do not provide a heading.
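The two heading-estimation paths can be illustrated as follows. `cells` is assumed to be the array of occupied-cell indices of one cluster; the use of scikit-learn's RANSAC and the folding of hull-edge angles with a 90-degree period are simplifying assumptions with respect to the triangle-vertex reasoning described above.

```python
# Sketch of the heading estimation on the 2D occupancy grid.
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.linear_model import RANSACRegressor


def heading_two_sides(cells, cell_size=0.05):
    # Two sides visible: average the angles of the two longest convex
    # hull edges, folded with period pi/2 (rectangular assumption).
    pts = cells.astype(float) * cell_size      # grid indices -> metres
    hull = pts[ConvexHull(pts).vertices]
    edges = np.roll(hull, -1, axis=0) - hull
    longest = np.argsort(np.linalg.norm(edges, axis=1))[-2:]
    ang = np.arctan2(edges[longest, 1], edges[longest, 0])
    # Circular mean with period pi/2: map angles to the full circle,
    # average, and map back.
    return np.angle(np.mean(np.exp(4j * ang))) / 4.0


def heading_one_side(cells, cell_size=0.05):
    # One side visible: fit a RANSAC line to the occupied cells; its
    # slope gives the heading (a near-vertical side would need the
    # axes swapped before fitting).
    pts = cells.astype(float) * cell_size
    line = RANSACRegressor().fit(pts[:, :1], pts[:, 1])
    return np.arctan(line.estimator_.coef_[0])
```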
Computing the heading on a discretized grid introduces a small amount of noise compared to using the original points. To improve this, a solution might be to use the cropped projected pointcloud as input to RANSAC. After an experimental analysis in which we compared the values computed using the grid with those computed from the projected 2D points, we concluded that the improvements of this solution are minimal. In contrast, the increase in computational power required to fit a line with RANSAC directly on the pointcloud points is substantial. For this reason, we preferred the approach previously illustrated, which is computationally less expensive and still guarantees high enough accuracy.

## V Experimental results

Both algorithms have been validated on a dataset acquired at the Monza ENI circuit. The recorded data consist of a full lap, with data from a 16 plane lidar and the RTK-GPS positions of the ego vehicle and one obstacle. The obstacle is a medium-sized van, which performs multiple maneuvers around the ego vehicle, changing its distance from zero up to twenty meters. From this recording, it is possible to compare the distance and heading computed by the algorithms with accurate ground truth. The second approach is designed to work with lidars with a low number of planes; therefore, we employed a decimated version of the pointcloud. Moreover, the algorithm works mainly on a 2D grid, and the quality of the output is not heavily dependent on the number of planes. The only advantage provided by a higher number of planes is in the obstacles' height estimation.

Fig. 4: Schema of the obstacle detection architecture for 8 or fewer plane lidar.

Fig. 6 shows the results of the first approach, based on the 16 plane lidar, on the recorded data. In particular, it is possible to notice how the computed distance follows the ground truth for the whole lap, Fig. 6(a), even when this value grows up to \(15m\). The mean distance error over the recording is \(0.7m\), which is an acceptable value for this task, also considering some sections of the track where the ground truth has small errors due to GPS occlusion. Moreover, due to the particular geometry of the obstacle, the mounting point of the GPS sensor is not exactly on the back of the vehicle; therefore, this value might be slightly lower than the one computed. Nevertheless, the computed distance can be considered accurate enough for the planning algorithm to compute a trajectory and safely drive the ego vehicle. Fig. 6(b) shows instead the estimated relative heading. In this scenario, the computed values are close to the ground truth, with a mean error of \(0.1^{\circ}\). This extremely low value is partially due to the mean absolute value of the run, which is close to zero. The algorithm performs well also in some challenging sections of the track, through high curvature corners and large chicanes (i.e., \(Time\) \(150s\) and \(Time\) \(270s\)), while it still follows the ground truth closely in the two really fast chicanes (i.e., \(Time\) \(20s\) and \(Time\) \(130s\)), overshooting only when the angle between the ego vehicle and the obstacle is above \(40^{\circ}\).

The results of the second approach are shown in Fig. 7. Also in this case, it is possible to notice how the computed values accurately follow the ground truth. The distance error is slightly higher than in the 16 plane solution, with a mean error of \(0.8m\). Of particular interest is how the algorithm can still compute an accurate position when the obstacle is far from the ego vehicle (e.g., \(15m\)). The most significant error can instead be identified when the obstacle is close and on one side of the ego vehicle (e.g., \(Time\) \(280s\)), where the number of points describing the obstacle is low due to the mounting position of the lidar and the proximity of the obstacle.
The same issue can also be identified in the previous approach, proving that this problem is more due to physical constraints of the system than related to a specific implementation. This is also why most autonomous vehicles employ other sensors for lateral detection at close range, like sonar and single-point lasers. The heading in the second scenario is also noisier, with a higher mean error (while in the 16 plane case, we had \\(0.09^{\\circ}\\) in this scenario, the error is \\(0.12^{\\circ}\\)). This value is not fully representative due to the long section with low heading, but it is possible to notice how the computed value is noisier than the first versions of the algorithm. Moreover, in the high curvature corners, the overshoot is more pronounced and, while following the ground truth closely, it is possible to see a higher error. After some analysis, we concluded that this higher error is due to the discretization process of the grid on which values are computed. Indeed, computing these values on a discretized surface will always be less accurate than using the real 3D points of the cloud. Nevertheless, Fig. 6(b) shows how the algorithm is still able to compute values close to the ground truth. Therefore, while less accurate, this solution can also be employed as a source for the control algorithm. ## VI conclusions In this paper, we presented two algorithms for obstacle detection from sparse pointcloud. The first one is designed to work with 16-plane lidars, where deep-learning-based approaches are not suitable due to the low number of features, but have enough points to allow us to perform vertical plane fitting operations. The second one is instead developed to work with all types of sensors, since it performs most of the operations on a 2D occupancy grid. Therefore, the only missing information with a lower number of planes is the height of the obstacles. Both solutions have been validated using a custom acquired dataset, with accurate ground truth, to compare the real obstacle position and heading with the one from the algorithms. Both solutions have proved their Fig. 5: 3D bounding boxes of a bus, computed few seconds apart. In the first scenario, only the side of the vehicle is visible, and the heading is computed using RANSAC. In the second scenario, both sides are visible, and the box is computed extrapolating the D point under the rectangular assumption. ability to compute 3D bounding boxes with low error. The second approach is slightly less accurate due to the grid discretization process, but the error values are acceptable for control. Moreover, the solution can run in real-time on a consumer laptop without a modern GPU. Future works will be centered on implementing a final block of the pipeline to perform classification on the retrieved bounding box, similarly to neural network-based approaches. A second improvement will focus on tracking the state of each obstacle, in such a way, it should be possible to mitigate the bounding box noise, and filter spikes in the heading estimation. ## References * [1]A. Asvadi, C. Premebida, P. Peixoto, and U. Nunes (2016) 3D lidar-based static and moving obstacle detection in driving environments: an approach based on voxels and multi-region ground planes. Robotics and Autonomous Systems83, pp. 299-311. External Links: Document, Link Cited by: SSI. * [2]A. Asvadi, C. Premebida, P. Peixoto, and U. 
Nunes (2016) 3D lidar-based static and moving obstacle detection in driving environments: an approach based on voxels and multi-region ground planes. Robotics and Autonomous Systems83, pp. 299-311. External Links: Document, Link Cited by: SSI. * [3]A. Asvadi, C. Premebida, P. Peixoto, and U. Nunes (2016) 3D lidar-based static and moving obstacle detection in driving environments: an approach based on voxels and multi-region ground planes. Robotics and Autonomous Systems83, pp. 299-311. External Links: Document, Link Cited by: SSI. * [4]A. Asvadi, C. Premebida, P. Peixoto, and U. Nunes (2016) 3D lidar-based static and moving obstacle detection in driving environments: an approach based on voxels and multi-region ground planes. Robotics and Autonomous Systems83, pp. 299-311. External Links: Document, Link Cited by: SSI. * [5]A. Asvadi, C. Premebida, P. Peixoto, and U. Nunes (2016) 3D lidar-based static and moving obstacle detection in driving environments: an approach based on voxels and multi-region ground planes. Robotics and Autonomous Systems83, pp. 299-311. External Links: Document, Link Cited by: SSI. * [6]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [7]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [8]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [9]A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard (2013) OctoMap: an efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots. External Links: Link Cited by: SSI. * [10]A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard (2013) OctoMap: an efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots. External Links: Link Cited by: SSI. * [11]A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard (2013) OctoMap: an efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots. External Links: Link Cited by: SSI. * [12]A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard (2013) OctoMap: an efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots. External Links: Link Cited by: SSI. * [13]A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard (2013) OctoMap: an efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots. External Links: Link Cited by: SSI. * [14]A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard (2013) OctoMap: an efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots. External Links: Link Cited by: SSI. * [15]A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard (2013) OctoMap: an efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots. External Links: Link Cited by: SSI. * [16]A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. 
Burgard (2013) OctoMap: an efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots. External Links: Link Cited by: SSI. * [17]A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard (2013) OctoMap: an efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots. External Links: Link Cited by: SSI. * [18]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [19]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [20]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [21]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [22]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [23]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [24]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [25]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [26]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [27]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [28]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. 
External Links: Document, Link Cited by: SSI. * [29]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [30]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [31]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [32]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [33]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [34]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [35]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [36]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [37]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697-12705. External Links: Document, Link Cited by: SSI. * [38]A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019)
One of the main components of an autonomous vehicle is the obstacle detection pipeline. Most prototypes, both from research and industry, rely on lidars for this task. Pointcloud information from lidar is usually combined with data from cameras and radars, but the backbone of the architecture is mainly based on 3D bounding boxes computed from lidar data. To retrieve an accurate representation, sensors with many planes, e.g., greater than 32 planes, are usually employed. The returned pointcloud is indeed dense and well defined, but high-resolution sensors are still expensive and often require powerful GPUs to be processed. Lidars with fewer planes are cheaper, but the returned data are not dense enough to be processed with state of the art deep learning approaches to retrieve 3D bounding boxes. In this paper, we propose two solutions based on occupancy grid and geometric refinement to retrieve a list of 3D bounding boxes employing lidar with a low number of planes (i.e., 16 and 8 planes). Our solutions have been validated on a custom acquired dataset with accurate ground truth to prove its feasibility and accuracy.
Give a concise overview of the text below.
223
arxiv-format/2401_14504v1.md
# Learning When to See for Long-term Traffic Data Collection on Power-constrained Devices Ruixuan Zhang\\({}^{1}\\), Wenyu Han\\({}^{1}\\), Zilin Bian\\({}^{1}\\), Kaan Ozbay\\({}^{1,*}\\), Chen Feng\\({}^{1,*}\\) \\({}^{1}\\) Tandon School of Engineering, New York University, NY, USA, 10012. * Corresponding Authors emails: {k0772, cfeng}@nyu.edu ## I Introduction Over the past few decades, Intelligent Transportation Systems (ITS) have experienced significant growth and advancement. The remarkable achievements of ITS can be largely attributed to the development of infrastructure and hardware for accessing, collecting, and processing real-world data. Within this context, surveillance cameras have emerged as a vital and extensively employed element of modern ITS, particularly due to the rapid progress in video processing and communication technology. Leveraging the valuable insights ingrained within high-dimensional image data, the utilization of collected video-based data presents immense possibilities for traffic analysis and encompasses various applications, including the detection of hazardous traffic scenarios such as collisions [1] and deteriorating road conditions [2], as well as traffic counting [3] and traffic state estimation [4]. Of greater significance, video-based data goes beyond extracting vehicle-related information and allows for capturing data regarding road users like pedestrians and cyclists, which traditional vehicle-focused sensors are difficult to provide [5]. This opens up possibilities for various applications residing in crowd management and public safety, such as social distancing analysis during COVID-19 [6]. Among the many effective traffic data-collection systems that employ surveillance cameras, the Caltrans PeMS1 and the New York City traffic sensing program2 are two notable examples, where the Caltrans PeMS system gathers various real-time highway data, covering an extensive directional distance of over 41,000 miles, with more than 18,000 stations and 46,000 detectors, and the New York City traffic sensing program deploys over 2,000 surveillance cameras positioned at intersections across the city to monitor urban traffic conditions. Undoubtedly, these expansive data collection systems offer the research community and policymakers access to invaluable data that was previously unavailable. Nevertheless, we have identified three key challenges in the existing video-based data collection systems that limit the general public's accessibility and hinder the broader impact of leveraging these advanced technologies in ITS: Footnote 1: [http://pems.dot.ca.gov](http://pems.dot.ca.gov) Footnote 2: [https://webcams.nyctmc.org/map](https://webcams.nyctmc.org/map) * Expensive infrastructure limits the reach of existing surveillance systems, leaving certain underrepresented areas, like lower-income neighborhoods, being excluded from the monitoring and management efforts of transportation agencies. * The lack of spatial flexibility in infrastructure-based surveillance systems hampers response to urgent traffic monitoring demands (i.e., temporary work zones and special events) and hinders quick evaluation and adaptation for expansion requirements. * The current data collection strategies that depend on infrastructure and necessitate a stable external power supply are impractical in situations where power accessibility is not guaranteed. 
To address the limitations of hardware and the necessity for efficient data collection strategies, we present a learning-based data collection framework that empowers rapidly deployable lightweight devices to perform video-based data collection and extend their lifespan through the use of battery power. Specifically, the framework aims to address the inherent trade-off between extending the lifespan of the power-constrained devices and the resulting performance degradation caused by prolonged usage. The framework consists of three modularized components, namely prediction, control, and estimation. We endorse the concept of predictive control, wherein the DRQN controller leverages prediction results from the RNN predictor to actively determine the timing for the next data collection activity. Additionally, an RNN estimator is employed to refine the prediction results, aimingto minimize the disparity between the obtained and actual data profiles. The proposed framework is evaluated using a real-world traffic dataset, demonstrating its superiority over the baselines. To the best of our knowledge, there are currently no existing framework-level solutions for enabling long-term (self-maintained for at least a week) video data collection in power-constrained contexts. ## II Related Work In this section, we review the existing work on enabling video processing and data collection under power-constrained schemes, which can be broadly classified into two categories: (1) reducing the computational load of neural networks for video processing, and (2) optimizing the sampling strategies. **Efficient deep network backbone**. The progress in single-board computers has made it possible to run modern deep learning models on affordable devices equipped with dedicated GPU (graphics processing unit) or TPU (tensor processing unit), such as Nvidia Jetson Nano and Google Coral Dev Board. However, due to the limited onboard RAM and inferencing capabilities of these edge devices, most state-of-the-art image processing models (object detection, object tracking, crowd density estimation, etc.) cannot be directly deployed, and a significant accuracy drop is observed if limiting power supply [7]. Numerous efforts have been made to develop lightweight models for power-constrained devices. Through the exploration of network parameters such as depth, width, and resolution, it is possible to reduce the number of parameters and floating point operations (FLOPs) while preserving accuracy, as demonstrated in the works of SqueezeNet [8], GoogleLeNet [9], and EfficientNet [10]. Additionally, MobileNets [11] pioneers the use of depth-wise separable convolutions to reduce the number of convolution operations, which has since become a fundamental component in subsequent studies. **Adaptive Sampling**. Another effective approach is to conserve power through sparse yet strategic samplings. The approach known as adaptive sampling, which involves guiding the sampling process in a sequential manner based on information from previous observations [12]. This allows for more efficient data collection by focusing resources on areas of interest or significance, while reducing the sampling rate in less informative regions. Superior performance compared to conventional (uniform) sampling schemes has been demonstrated using either filter-based approaches [13][14] or recent learning-based approaches [15][16]. 
Adaptiveness in data sampling can be determined by considering criteria such as information gain [15] event detection [17], and uncertainties reduction [18]. Signal reconstruction from adaptive sampling has also been investigated [19][20]. However, our laboratory experiments and the findings presented in [21], utilizing the Google Coral Dev Board, demonstrate that no existing lightweight object detection neural networks can sustain continuous video processing at a frequency of 30 Hz for more than 48 hours on a 10,000 mAh Li-ion battery, when deployed at a plaza with moderate pedestrian density. It is worth noting that while experiments in [22] demonstrate the potential for energy savings through strategic sensor activation, there is currently _no existing research specifically addressing data collection using adaptive sampling in the transportation domain_. Acknowledging the limitations and gaps in the existing literature, we introduce a modularized framework that is independent of specific video processing methods and offers flexibility to incorporate various adaptive sampling algorithms. Furthermore, the accuracy of data collection is enhanced through post-estimation techniques. This framework is suitable for various types of traffic data and is designed to operate on low-cost lightweight devices with limited battery power resources, like Google Coral Dev Board and Nvidia Jetson Nano. ## III Methodology In this section, we delve into the proposed solution, including problem definition, framework architecture, and the design of the framework's functionality. Throughout the entire paper, we will treat sampling as making video-based observations from deep neural networks (e.g., pedestrian detection and counting) for the sake of simplicity. ### _Problem statement_ This paper addresses the challenge of optimizing the timing of observations for a lightweight device with limited battery capacity. We assume that the number of available observations is proportional to the remaining battery power. The goal is to allocate observation opportunities in a manner that maximizes the capture of information and enables the reconstruction of periods where no observations are made. We design a two-stage framework. The first stage, referred to as the \"initialization\" phase, involves operating the device to gather continuous observations within a limited time period solely for the recording purpose, without any background algorithm running. The length of this stage is exclusively determined by the battery capacity and remains uninfluenced by the particular scenes encountered. In the second stage, depending on the remaining battery capacity, the device can either be equipped with a new battery or continue using the existing one. During this stage, the device transitions into \"power-saving\" mode, strategically making Fig. 1: Workflow of the proposed two-stage framework. observations using the previously collected data to extend its lifespan. The workflow of the proposed framework in this work can be summarized as depicted in Fig. 1. #### Iii-A1 Prediction The objective of the prediction module is to forecast the future based on past observations. Let \\(\\mathcal{P}\\) denotes a predictor, \\(X^{t}\\) represents the data observed at time \\(t\\), and \\(\\bar{X}^{t+k}\\) denotes the prediction at time \\(t+k\\). A prediction task maps \\(K^{\\prime}\\) historical data to \\(K\\) future data is given by Eq. (1). 
\\[[X^{t-K^{\\prime}+1},\\cdots,X^{t}]\\xrightarrow{\\mathcal{P}}[\\bar{X}^{t+1},\\cdots, \\bar{X}^{t+K}] \\tag{1}\\] In the context of time progression, accurate prediction is essential as the available information can become outdated and biased. This presents challenges for the controller in making informed decisions. To address this, we utilize an RNN predictor that is capable of capturing location-specific patterns, enhancing its sensitivity to the unique characteristics of each location. By incorporating this predictor into our framework, we aim to improve the accuracy and reliability of decision-making processes. Let \\(D\\) denote the existing database generated from infrastructures, and \\(\\mathbf{X^{h}}\\) represent the data collected by the device on-site in the first stage. We apply conditional training and inference, incorporating \\(\\mathbf{X^{h}}\\) as part of the input, as illustrated in Eq. (2). \\[[X^{t},\\mathbf{X^{h}}]\\xrightarrow{\\mathcal{P}^{D}}[\\bar{X}^{t+1},\\cdots,\\bar{ X}^{t+K}] \\tag{2}\\] In this manner, the predictor can utilize both the existing database \\(D\\) and the local historical data \\(\\mathbf{X^{h}}\\), allowing it to effectively handle limited on-site data while considering similarities across locations. We train the RNN predictor \\(f_{\\mathbf{\\theta}^{pred}}\\), parameterized by \\(\\mathbf{\\theta}^{pred}\\), to predict the entire future data profile using \\(\\mathbf{X^{h}}\\) as input. The prediction model is trained using the loss function shown in Eq. 3, where \\(Y\\) represents the ground truth data to be collected. \\[\\mathcal{L}(\\mathbf{\\theta}^{pred})=\\big{|}\\big{|}(Y-f_{\\mathbf{\\theta}^{pred}}( \\mathbf{X^{h}}))\\big{|}\\big{|}_{2}^{2} \\tag{3}\\] #### Iii-A2 Control In this study, the selection of observation times is treated as a sequential decision-making problem, as future state-action pairs are influenced by past decisions. To incorporate temporal dependencies and find an observation policy, we employ a Deep Recurrent Q-Network (DRQN), which combines a Long Short-Term Memory (LSTM) [23] and a Deep Q-Network, to learn the Q-value function by integrating information from sequential inputs over time. Reinforcement Learning [24] is generally defined as a Markov Decision Process (MDP) problem, denoted by a tuple \\(\\{\\mathcal{S},\\mathcal{A},\\mathcal{P},\\mathcal{R}\\}\\). At each step, the agent observes a state \\(s\\in\\mathcal{S}\\), selects an action \\(a\\in\\mathcal{A}\\), and observes the next state \\(s^{\\prime}\\in\\mathcal{S}\\). The reward is determined by the function \\(\\mathcal{R}(s,a,s^{\\prime})\\), which assigns a reward value to the state-action transition. In the DRQN approach, the Q-function \\(q_{\\pi}(s,a)\\approx q_{\\pi}(s,a;\\mathbf{\\theta})\\) is approximated using neural networks with parameters \\(\\mathbf{\\theta}\\). The network parameters are optimized by minimizing the loss function defined in Eq. 4. \\[\\mathcal{L}(\\mathbf{\\theta})=\\mathbb{E}_{(s,a,r,s^{\\prime})\\sim\\mathcal{B}}\\Big{[} (y-Q(s,a;\\mathbf{\\theta}))^{2}\\Big{]} \\tag{4}\\] Here \\(\\mathcal{B}\\) denotes the replay memory buffer containing experience \\((s,a,r,s^{\\prime})\\), and \\(y=r+\\gamma\\max_{a^{\\prime}}\\hat{Q}(s^{\\prime},a^{\\prime};\\mathbf{\\theta}^{-})\\), where \\(\\hat{Q}(s^{\\prime},a^{\\prime};\\mathbf{\\theta}^{-})\\) is the output of the target network and \\(Q(s,a;\\mathbf{\\theta})\\) is the output of the evaluation network. 
Our design incorporates the principles of Model Predictive Control (MPC) [25], although we use a neural network for prediction instead of a dynamics model. The concept of conditional training, which is employed in the prediction module, is also applied here. The detailed design is outlined as follows: * _State_: \\(s=[X^{t},\\bar{X}^{(t+1):(t+K)},t_{local},t_{global},O_{ava},\\mathbf{X^{h}}]\\), represents the data collected at time \\(t\\) through observations, \\(\\bar{X}^{(t+1):(t+K)}\\) denotes the prediction from time \\(t+1\\) to \\(t+K\\), \\(t_{local}\\) indicates the index of the current time \\(t\\) in each run, \\(t_{global}\\) corresponds to the 24-hour time in the real world, \\(O_{ava}\\) denotes the remaining observations, and \\(\\mathbf{X^{h}}\\) represents the local historical data collected in the first stage. * _Action_: \\(a\\in\\{1,2,\\cdots,K\\}\\), represents the number of time steps from the current time \\(t\\) to the next observation. This choice of action results in a finite action space, while still providing the flexibility to handle long time horizons. * _Reward_: \\(r(s,a,s^{\\prime})=-r_{accuracy}-w_{1}\\cdot r_{similarity}-w_{2}\\cdot r_{waste}\\), where \\(r_{accuracy}\\) quantifies the prediction accuracy from time \\(t\\) to the next observation at time \\(t+a\\), \\(r_{similarity}\\) measures the similarity between the prediction and the ground truth, \\(r_{waste}\\) penalizes the agent for leaving observations unused, and weights \\(w_{1}\\) and \\(w_{2}\\) are weights that determine the importance of each component of the reward. By employing DRQN, the controller evaluates a sequence of states leading up to time \\(t\\), enabling it to ascertain the best timing for the subsequent observation. #### Iii-A3 Estimation When the device terminates due to running out of available observations, the collected data profile consists of a combination of observed values and predictions between two consecutive observations, denoted as \\(\\mathbf{\\hat{X}}=[\\cdots,X^{a_{i}},\\bar{X}^{(a_{i}+1):(a_{i}+a_{i+1}-1)},X^{a_{ i}+a_{i+1}},\\cdots]\\), where \\(a_{i}\\) represents the action taken at each decision iteration until using up all \\(\\mathcal{O}\\) observation opportunities, with \\(i=1,2,\\cdots,\\mathcal{O}\\). Given the ground truth of the data to be collected, the estimation becomes a supervised learning problem aimed at calibrating the prediction using the actual observed data. In this problem, the input to the estimation model is the data collected by the device along with the predicted values between the observed data points, and the label is the corresponding ground truth. We train another RNN estimator \\(f_{\\mathbf{\\theta}^{est}}\\), parameterized by \\(\\mathbf{\\theta}^{est}\\), to perform post estimation with the collected data. The network is trained using the loss function shown in Eq. 5, where \\(Y\\) represents the ground truth of the data to be collected. \\[\\mathcal{L}(\\mathbf{\\theta}^{est})=\\big{|}\\big{|}Y-\\mathbf{\\hat{X}}\\big{|}_{2}^{2} \\tag{5}\\] ### _Summary_ In this framework, each module plays a distinct role. The prediction module generates predictions that are incorporated as part of the state input for the control module. The control module utilizes this information to make decisions on observation times. Finally, the estimation module calibrates the data profile obtained from the control and prediction modules. 
Specifically, we employ a predictor based on the RNN architecture, utilizing fully connected LSTM hidden units. The design of the predictor follows the Encoder-decoder framework introduced in [26]. Our DRQN architecture is based on the design presented in [27]. The estimator shares the same overall architecture as the predictor. The energy consumption related to neural network inference depends on how often they are invoked. In this framework, each module is limited to being called no more than once per time step, which leads to a minimal energy impact in comparison to the continuous energy drain caused by hours of camera recording. ## IV Experiment While the primary focus of the proposed framework is on processing videos obtained from surveillance cameras, it is also capable of handling low-dimensional time series data processed from these videos. The framework can effectively work with various types of time series data, including occupancy, speed, flow, and more. Additionally, it is applicable to different subjects such as vehicles, pedestrians, and cyclists, making it versatile for diverse scenarios and domains. ### _Dataset_ Due to the lack of video data collected from power-constrained devices, without losing generality, we adopt a broader perspective by employing time series data obtained from the current infrastructure. We treat time series data as the processed outcomes of raw videos using computer vision algorithms, e.g., object detection. Traffic3 dataset [28] is used for experiments, which includes hourly collected of highway occupancy rates at 861 locations from July 1, 2016, to July 2, 2018, on San Francisco Bay area freeways from PeMS. This study does not exclusively focus on or confine itself to highway occupancy data. Instead, it showcases the viability of the proposed approach for a range of traffic data collection endeavors, such as collecting pedestrian/cyclist data in urban neighborhoods. Furthermore, we intentionally retain outliers in the data to demonstrate the robustness of our models. The total 861 locations in the data source are randomly divided into three sets for training, validation, and testing using a ratio of 0.7:0.2:0.1. The 17,544 data points from each location are subsequently divided into non-overlapping sub time series, with each sub-series consisting of 216 data points (equivalent to 9-day data). We take the first 48 data points (equivalent to 2-day data) as historical data collected in the first stage and aim to estimate the occupancy rates for the subsequent 168 data points (equivalent to 7-day coverage). Footnote 3: [https://github.com/thuml/Autoformer](https://github.com/thuml/Autoformer) ### _Settings_ We create a demanding scenario where the device has a limited opportunity to make hourly observations, allowing it to spend at most \\(\\frac{1}{6}\\) of the total time. This corresponds to \\(\\mathcal{O}=168\\times\\frac{1}{6}=28\\) observation opportunities over the next 7 days. This constraint enables the device to extend its lifespan by a factor of 6. Following each hourly observation captured by the onboard camera, the device performs real-time processing of the raw video data using state-of-the-art artificial intelligence techniques on-board, such as object detection. Our objective is to generate accurate descriptions of the 7-day data based on the limited observations, ensuring that the final outputs capture the underlying patterns as effectively as possible. 
#### Iv-B1 Predictor As a baseline for the prediction module, we consider a classical dynamics-based method called the Autoregressive (AR) model combined with the Kalman filter. The order selection for the AR model is determined as 4 based on autocorrelation tests. This baseline, denoted as AR(4)\\({}_{kal}\\), is widely recognized for its effectiveness in time series forecasting. Our proposed RNN predictor, denoted as LSTM\\({}_{pred}\\), comprises two recurrent layers with 128 LSTM units in both the encoder and decoder. LSTM\\({}_{pred}\\) is trained by providing the input data of the first 2 days and predicting the subsequent 7 days. We apply teacher-forcing tricks [29] during training, with an initial learning rate of \\(1e^{-4}\\). #### Iv-B2 Controller We pick the uniform observation policy, where observations are taken at at equal intervals across the entire time horizon, as the baseline control policy against the proposed DRQN. In this study, we construct a DRQN network consisting of three fully connected layers, followed by a recurrent layer, and another fully connected layer that maps the hidden states to the action space. We set the prediction window \\(K=12\\), length of history state-action pair stored in the memory replay buffer \\(L_{m}=12\\), and the size of replay buffer \\(\\mathcal{B}\\) is 5,000. The exploration rate \\(\\epsilon\\) decays from 1 to 0.1. The reward function calculates \\(r_{accuracy}=\\frac{1}{a}\\sum_{i=1}^{a}\\|\\bar{X}_{pred}^{t+i}-X_{gt}^{t+i}\\|_{2}\\), and \\(r_{similarity}=DTW(\\bar{X}^{t+1:t+i},X^{t+1:t+i})_{gt}\\) is calculated using Dynamic Time Warping [30] implemented by _tslearn_ package. The last term for unused observations is calculated as \\(r_{unused}=\\mathcal{O}-O_{ava}^{T}\\), where \\(\\mathcal{O}\\) is the total available number of observation hours at the beginning and \\(O_{ava}^{T}\\) is the number observation hours available at the end of the desired lifespan \\(T\\). Additionally, we set \\(w_{1}=1\\) and \\(w_{2}=10\\). #### Iv-B3 Estimator Two estimation methods are considered in the experiments: (1) Gaussian process regression (GPR) implemented using _SciPy_ package; (2) Recurrent Neural Network, LSTM\\({}_{est}\\), which shares the same architecture as LSTM\\({}_{pred}\\) but with an input layer dimension of 216 (48+168) and hidden layers of 256. #### Iv-B4 Baselines To evaluate the effect of three modules, we put forward three baseline configurations, including (1) Uniform + GPR: Uniform observation policy and GPR estimator; (2) Uniform + LSTM\\({}_{est}\\): Uniform observation policy and the LSTM-based estimator; (3) AR(4)\\({}_{kal}\\) + Uniform + LSTM\\({}_{est}\\): the predictor is an Auto-Regressive model with a Kalman filter, combined with the Uniform observation policy and the LSTM-based estimator. We will compare the baseline configurations with our proposed configuration of LSTM\\({}_{pred}\\) + DRQN + LSTM\\({}_{est}\\). Input data for all modules in all experiments are normalized in the same magnitude. ### _Long-term data collection performance comparison_ Table I shows the comparison of different configurations. 
They are evaluated based on three commonly used metrics for accuracy metrics, as well as a novel metric that is specifically relevant to the transportation community, including: (1) Mean Absolute Error (MAE), (2) Mean Absolute Percentage Error (MAPE), and (3) Root Mean Squared Error (RMSE), and the last metric (4) Coverage: this metric represents the cumulative sum of all observed values, reflecting the intuition that a higher quantity of content in observations can potentially yield more valuable information. All data, including missing (zeros) values, are used in calculating these metrics, with the exception of MAPE for numerical reasons. Our proposed configuration consistently outperforms the other configurations across all metrics, indicating the effectiveness of our framework and the chosen configuration (Fig. 3). Throughout the experiments, we observed that although AR(4)\\({}_{kal}\\) generally performs satisfactorily, it can encounter significant errors or even fail in certain instances where the historical data used to fit the AR model deviates significantly from the actual data. This discrepancy can lead to exaggerated errors and ultimately cause the DRQN controller to fail due to numerical issues in loss backpropagation. Fig. 2 demonstrates a specific example where the combination of AR(4)\\({}_{kal}\\) and the Uniform policy fails to produce meaningful results. This type of failure is not acceptable in the context of long-term data collection, regardless of its low probability, as real-world data cannot always be assumed to be stationary. In contrast, our learning-based predictor, LSTM\\({}_{pred}\\), shows robustness in scenarios where AR(4)\\({}_{kal}\\) fails. The failure cases encountered in the AR(4)\\({}_{kal}\\) + Uniform + LSTM\\({}_{est}\\) configuration can provide insights into why it performs even worse than its variant without the AR(4)\\({}_{kal}\\) predictor, as these failure cases may have contaminated the estimator's parameters with extremely high loss values. This highlights the significance of the prediction module in determining the overall performance evaluation. ### _Effect of control policy and estimation_ In order to better understand the impact of the control policy and the estimator, we conducted two sets of ablation studies, and the experiment settings are as follows: * LSTM\\({}_{pred}\\) + DRQN v.s. LSTM\\({}_{pred}\\) + Uniform. * LSTM\\({}_{pred}\\) + DRQN + LSTM\\({}_{est}\\) v.s. LSTM\\({}_{pred}\\) + Uniform + LSTM\\({}_{est}\\). Table II shows the comparison between the configurations of LSTM\\({}_{pred}\\) + DRQN and LSTM\\({}_{pred}\\) + Uniform. It can be observed that DRQN results in better performance in all metrics due to its ability to actively search for the optimal observation time that minimizes the prediction error. An illustrative example depicted in Fig. 4 shows that DRQN tends to make observations at points where the prediction errors are either significant at the current step or total deviation in the future, such as peaks or bottoms. This efficient utilization of observations helps minimize errors stemming from predictions and consequently enhances overall accuracy performance. 
\\begin{table} \\begin{tabular}{c c c c c} \\hline \\hline & **RMSE \\({}^{a}\\)** & **MAE \\({}^{a}\\)** & **MAPE \\({}^{b}\\)** & **Coverage** \\\\ \\hline LSTM\\({}_{pred}\\)+DRQN & **0.0282** & **0.0156** & **120.4\\%** & **1.721** \\\\ LSTM\\({}_{pred}\\)+Uniform & 0.0320 & 0.0182 & 149.0\\% & 1.512 \\\\ \\hline \\hline \\end{tabular} \\end{table} TABLE II: Performance comparison for using predictor and controller only. \\begin{table} \\begin{tabular}{c c c c c} \\hline \\hline **Metric** & **Uniform+GPR** & **Uniform+LSTM\\({}_{est}\\)** & **AR(4)\\({}_{kal}\\)+Uniform +LSTM\\({}_{est}\\)** & **LSTM\\({}_{pred}\\)+DRQN+LSTM\\({}_{est}\\)** \\\\ \\hline RMSE (\\(\\downarrow\\)) \\({}^{a}\\) & 0.0318 & 0.0223 & 0.0223 & **0.0212** \\\\ MAE (\\(\\downarrow\\)) \\({}^{a}\\) & 0.0176 & 0.0119 & 0.121 & **0.0115** \\\\ MAPE (\\(\\downarrow\\)) \\({}^{b}\\) & 74.0\\% & 60.7\\% & 63.0\\% & **53.3\\% (-12.20\\%)** \\\\ Coverage (\\(\\uparrow\\)) \\({}^{a}\\) & 1.512 & 1.512 & 1.512 & **1.721 (+13.82\\%)** \\\\ \\hline \\hline \\end{tabular} \\({}^{a}\\)evaluated including missing data; \\({}^{b}\\)excluded zero values \\end{table} TABLE I: Performance comparison for traffic occupancy rate data collection. Our proposed learning-based configuration achieves the best performance in all metrics and much better generalization compatibility. \\(\\uparrow\\): the higher the better, \\(\\downarrow\\): the lower the better. Fig. 2: Prediction of AR(4)\\({}_{kal}\\) and LSTM\\({}_{pred}\\) following Uniform observation policy. LSTM\\({}_{pred}\\) can work with historical data that will fail AR(4)\\({}_{kal}\\). \\begin{table} \\begin{tabular}{c c c c c} \\hline \\hline & **RMSE \\({}^{a}\\)** & **MAE \\({}^{a}\\)** & **MAPE \\({}^{b}\\)** & **Coverage** \\\\ \\hline LSTM\\({}_{pred}\\)+DRQN & **0.0282** & **0.0156** & **120.4\\%** & **1.721** \\\\ LSTM\\({}_{pred}\\)+Uniform & 0.0320 & 0.0182 & 149.0\\% & 1.512 \\\\ \\hline \\hline \\end{tabular} \\end{table} TABLE III: Performance comparison of DRQN and Uniform control policy given the predictor and estimator using LSTM\\({}_{pred}\\) and LSTM\\({}_{est}\\). The impact of incorporating the estimation module is shown in Table III. It is observed that DRQN continues to outperform Uniform in the majority of metrics, although the performance gap narrows down, which means LSTM\\({}_{pred}\\) + Uniform + LSTM\\({}_{est}\\) gains more improvement by adding the estimator. To investigate why the addition of an estimator has a milder impact on the performance of DRQN compared to the Uniform policy, we visualize the action distribution obtained using DRQN, as depicted in Fig. 5. It reveals that, firstly, the allocation of observations per day decreases progressively over time, indicating that the DRQN controller assigns greater importance to the initial days compared to the later ones. This can be attributed to the fact that a general LSTM predictor has the ability to quickly learn location-specific patterns. Therefore, having more observations during the initial days facilitates faster adaptation to the local data patterns and improves the prediction accuracy for the far future. One consequence is that as the predictor becomes increasingly accurate and calibrated with the observations from the initial days, it requires fewer observations for reliable predictions. 
This could be one of the reasons why Uniform observations show greater improvements compared to DRQN after adding the estimator: Uniformly distributed observations provide guidance throughout the entire time horizon, while DRQN observations are concentrated in the initial days and may not contribute significantly to the estimation towards the end of the time horizon. Secondly, the distribution of observations within each day exhibits a two-peak pattern, suggesting a higher likelihood of observations being allocated to morning and afternoon peak hours. Remarkably, the DRQN controller learns this behavior without any explicit guidance from the reward function. This demonstrates the ability of DRQN to identify crucial time intervals for maximizing performance metrics such as accuracy and information coverage, resulting in improved outcomes. Another notable finding is that while the estimator module appears to have the most significant impact on improving accuracy performance compared to the other two modules, we believe that these three modules should function together as a cohesive system, and controllers like DRQN can Fig. 4: Observations are strategically taken either when there is a significant prediction error or at representative points, such as peaks. Fig. 5: The distribution of observations allocated in each day by the DRQN controller. The y-axis represents the ratio of the number of observations assigned at a specific time to the total number of test instances. Each day starts at 0:00 and ends at 23:00. Fig. 3: The proposed LSTM\\({}_{pred}\\)+DRQN+LSTM\\({}_{est}\\) configuration predicts the start and end of the peak hours. It can generate smooth predictions even without new observations. bring benefits not only to accuracy but also to implicit utility in domain-specific metrics, such as information coverage in transportation. ## V Conclusion This paper presents a modularized framework designed to facilitate long-term data collection on power-constrained devices. By integrating prediction, control, and estimation modules, the framework effectively extends the device's lifespan while maintaining reasonable performance. Real-world data experiments indicate the effectiveness of the proposed framework and its configuration. The effect of each module is thoroughly examined and analyzed. Future work involves testing the framework in complex urban scenarios and conducting field experiments for real-world validation. ## Acknowledgment This work has been supported by C2SMART, a Tier 1 University Transportation Center at New York University, and the United States National Science Foundation grant #2238968. The views presented in this paper are those of the authors alone. ## References * [1]E. P. Iijina, D. Chand, S. Gupta, and K. Goutham (2019) Computer vision-based accident detection in traffic surveillance. In 2019 10th International conference on computing, communication and networking technologies (ICCCNT), pp. 1-6. Cited by: SSI. [MISSING_PAGE_POST] . Tan and Q. Le (2019) Efficient: rethinking model scaling for convolutional neural networks. In International conference on machine learning, pp. 6105-6114. Cited by: SSII-A. * [41]M. Muller (2007) Dynamic time warping. Information retrieval for music and motion, pp. 69-84. Cited by: SSII-A. * [42]M. Casares and S. Velipasar (2011) Adaptive methodologies for energy-efficient object detection and tracking with battery-powered embedded smart cameras. IEEE Transactions on Circuits and Systems for Video Technology21 (10), pp. 1438-1452. 
Cited by: SSII-A. * [43]M. Casares and S. Velipasar (2011) Adaptive methodologies for energy-efficient object detection and tracking with battery-powered embedded smart cameras. IEEE Transactions on Circuits and Systems for Video Technology21 (10), pp. 1438-1452. Cited by: SSII-A. * [44]M. Casares and S. Velipasar (2011) Adaptive methodologies for energy-efficient object detection and tracking with battery-powered embedded smart cameras. IEEE Transactions on Circuits and Systems for Video Technology21 (10), pp. 1438-1452. Cited by: SSII-A. * [45]M. Casares and S. Velipasar (2011) Adaptive methodologies for energy-efficient object detection and tracking with battery-powered embedded smart cameras. IEEE Transactions on Circuits and Systems for Video Technology21 (10), pp. 1438-1452. Cited by: SSII-A. * [46]M. Casares (2011) A survey of energy-efficient object detection and tracking with battery-powered embedded smart cameras. IEEE Transactions on Circuits and Systems for Video Technology21 (10), pp. 1438-1452. Cited by: SSII-A. * [47]M. Casares and S. Velipasar (2011) Adaptive methodologies for energy-efficient object detection and tracking with battery-powered embedded smart cameras. IEEE Transactions on Circuits and Systems for Video Technology21 (10), pp. 1438-1452. Cited by: SSII-A. * [48]M. Casares and S. Velipasar (2011) Adaptive methodologies for energy-efficient object detection and tracking
Collecting traffic data is crucial for transportation systems and urban planning, and is often more desirable through easy-to-deploy but power-constrained devices, due to the unavailability or high cost of power and network infrastructure. The limited power means an inevitable trade-off between data collection duration and accuracy/resolution. We introduce a novel learning-based framework that strategically decides observation timings for battery-powered devices and reconstructs the full data stream from sparsely sampled observations, resulting in minimal performance loss and a significantly prolonged system lifetime. Our framework comprises a predictor, a controller, and an estimator. The predictor utilizes historical data to forecast future trends within a fixed time horizon. The controller uses the forecasts to determine the next optimal timing for data collection. Finally, the estimator reconstructs the complete data profile from the sampled observations. We evaluate the performance of the proposed method on PeMS data by an RNN (Recurrent Neural Network) predictor and estimator, and a DRQN (Deep Recurrent Q-Network) controller, and compare it against the baseline that uses Kalman filter and uniform sampling. The results indicate that our method outperforms the baseline, primarily due to the inclusion of more representative data points in the profile, resulting in an overall 10% improvement in estimation accuracy. Source code will be publicly available.
Condense the content of the following passage.
261
arxiv-format/2308_11492v1.md
A LiDAR-Inertial SLAM Tightly-Coupled with Dropout-Tolerant GNSS Fusion for Autonomous Mine Service Vehicles Yusheng Wang,, Yidong Lou, Weiwei Song, Bing Zhan, Feihuang Xia and Qigeng Duan Manuscript submitted Dec 10, 2022. This work was supported by the Joint Foundation for Ministry of Education of China under Grant 6141A0211907 (_Corresponding author_: Yidong Lou).Yusheng Wang is with the GNSS Research Center, Wuhan University, 129 Lucoyu Road, Wuhan 430079, China and the CHC NAVIGATION, 6 Huanglongshandong Road, Building 2, 430073, Wuhan, China and the Beijing Lishedachian Co., Ltd, 5 Yheyguan Road, Beijing 100871, China (email: [email protected]).Yidong Lou and Weiwei Song are with the GNSS Research Center, Wuhan University, 129 Lucoyu Road, Wuhan 430079, China (email: [email protected]; [email protected]). ## I Introduction The continuous spread of COVID-19 has promoted a growing demand of robotics in a great variety of scenes, from hospitals, construction sites, assembly plants, to mines. This surge of interest is motivated by a wide range of unmanned applications, such as autonomous service robots and robotaxis. The deployment of autonomous mine vehicles is of particular interest since the potential benefits include access to unreachable or dangerous locations and monitoring personnel in unsafe areas. These peculiarities will have affirmative impact on the mine operation, production, and safety. Localization and environment perception are the essential capabilities for autonomous mine vehicles operation. In typical GPS-denied underground mine environments, many researches have proposed to use simultaneously localization and mapping (SLAM) to solve these problems [1, 2]. Unfortunately, most mature SLAM approaches have undesirable performance when deployed in real-life mine environments: the poor illumination renders visual-SLAM systems unreliable, the slippery terrain makes the wheel odometry inaccurate, and the explosion-proof requirements limit the large deployment of wireless sensors such as UWB and RFID. As light detection and ranging (LiDAR) sensors are less sensitive to illumination variations and provide direct, high-fidelity, long-range 3D measurements, they have been widely accepted for odometry estimation in the past decade. Typically, LiDAR odometry algorithms estimate the ego motion of the vehicle through registering consecutive LiDAR frames. When it comes to perceptually-challenging mine environments, the presence of self-repetitive and symmetric areas increases the difficulty of constraining the relative motion along the main shaft of the mine tunnel. Techniques employed to mitigate this issue consist of observability analysis [3], degeneracy detection and mitigation [4, 5] and the integration of other sources of measurements, such as inertial data from an IMU. The challenges of autonomous mine vehicles SLAM extend to engineering implementation. SLAM algorithms must operate onboard with limited computational budget, and deliver vehicle pose estimations with low latency regularly. Moreover, these SLAM systems are required to withstand intermittent sensor measurements and recover from transitory faulty states. In this paper, we present a LiDAR odometry system that enables robust and accurate state estimation and environment reconstruction for autonomous mine vehicles. Concretely, the contributions of this paper include: 1. We develop a LiDAR-inertial system that incorporates the Bing Zhan is with the CHC NAVIGATION, 599 Gaqing Road, Building D, 201702, Shanghai, China. 
(email: [email protected]) Feihuang Xia is with the Beijing Lishedachian Co., Ltd, 5 Yheyuan Road, Qigeng Duan is with the Beijing Lishedachian Co., Ltd, 5 Yheyuan Road, Beijing 100871, China and the Department of Geography and Resource Management, The Chinese University of Hongkong, Hong Kong Special Administrative Region, Hong Kong 999077, China. (email: [email protected]). information from two LiDARs, an IMU, and wheel odometers using error-state Kalman Filter (ESKF) and graph optimization. Instead of using the commonly used iterated closet point (ICP) or feature points for registration, we merge laser scans through surfel fusion. * To fully compensate the largely accumulated drifts inside the tunnel, we develop a loop closure aided re-initialization method after long periods of GPS-dropouts. In that case, an estimation of the accumulated drift during the GPS outages is provided, then the errors both inside and outside the tunnel can be well eliminated. * The proposed pipeline is thoroughly evaluated in various mine tunnel environments across a long-time span. The results show our system drifts only 1.86 m after travelling up to 6.6 km (5 km tunnel). The reminder of this paper is organized as follows. Section II reviews the relevant scholarly works. Section III gives an overview of the proposed system. Section IV presents the detailed graph optimization process applied in our system, followed by the experimental results described in Section V. Finally, Section VI concludes this paper and demonstrates future research directions. ## II Related Work Prior works on point cloud registration and LiDAR SLAM in tunnel-like environments are extensive. In this section, we briefly review scholarly works on these two aspects. ### _Point Cloud Registration_ The point cloud registration calculates the frame-to-frame displacement and matches the consecutive scans based thereon. It can be broadly classified into three different categories: point based, feature based and mathematical property based methods. The point based methods can be treated as dense approaches since they make full use of points from raw LiDAR scans. On the other hand, the feature-based approaches are regarded as sparse methods, as they only employ a select number of points for tracking. Furthermore, the mathematical property methods take advantage of statistical models, and transform the discrete representations of a single scan into a continuous distribution. Many point based methods are the variations of ICP [6], which is an iterative two-step process. The algorithm first establishes the correspondences between the source and target point clouds. Then a transformation is calculated to reduce the distance between all corresponding points. Both step is repeated until reaching preset criteria. Considering its large computation burden with increased point cloud numbers, many approaches have tried to reduce its cost for real-time operation [7, 8, 9]. The feature based methods receive growing interest in recent years due to their simplicity and relatively lower computational complexity. Through extracting and matching feature points on planar surface and edges from the current and previous scan, the relative motion can be estimated accordingly. Similar to the Dilution of Precision (DOP) concept in the field of satellite navigation, the feature distribution also has a great influence to the state estimation results. The frame-to-frame registration is prone to fail once the environment is mostly planar. 
The feature based methods have received growing interest in recent years due to their simplicity and relatively low computational complexity. By extracting and matching feature points on planar surfaces and edges from the current and previous scans, the relative motion can be estimated accordingly. Similar to the dilution of precision (DOP) concept in the field of satellite navigation, the feature distribution also has a great influence on the state estimation results. The frame-to-frame registration is prone to fail once the environment is mostly planar. Therefore, some methods [10, 11, 12] use surfels as small planar features and register new points by minimizing the point-to-plane residuals. In this paper, we add the degeneracy analysis of [5] to the surfel matching to further improve the matching accuracy inside tunnels, where the state estimation problem is solved only along the well-conditioned directions.

The normal distribution transform (NDT) [13] is a widely used mathematical property based method. NDT divides the 3D space into small cells and calculates the local probability density function (PDF) in each cell. Then the point-to-distribution correspondences are computed within a scan pair to find the optimal transformation. NDT aims to maximize the likelihood of all points described by the normal distributions. This reduces the memory consumption as well as the computation time for nearest-neighbor searches.

### _LiDAR SLAM in Tunnel-like Environments_

Current LiDAR SLAM systems have proved accurate and robust enough for many scenarios, so here we mainly focus on the handling of degeneracy. LiDAR SLAM usually produces a large drift error in scenes with textureless surfaces or repeated structures, such as indoor environments or tunnels. Researchers have proposed adding auxiliary sensors, degeneracy analysis, and geometric structures to cope with this problem.

Adding sensors means introducing additional constraints. Cameras have proved to be a good complementary sensor in some LiDAR-degenerated districts [14, 15, 16], but they are still inaccurate in perceptually-degraded mine tunnels. As an environment-insensitive sensor, ultra-wideband (UWB) accumulates less error and has attracted more and more interest in recent years [17]. However, UWB anchors need to be deployed densely along the mine tunnel, which does not satisfy the explosion-proof regulations of most mines. As discussed in our previous work [18], a LiDAR with a limited field of view (FoV) but high point cloud density is less likely to fail in degenerated districts. On the other hand, a spinning LiDAR with 360° FoV provides consistent state estimation under irregular movement and better loop closing. In this paper, we leverage the advantages of both LiDARs in the system design.

One of the early works on degeneracy analysis was proposed by Zhang et al. [5], which leverages the minimum eigenvalue of the information matrix to determine system degeneracy. However, this metric is difficult to interpret because of its unclear physical meaning. Zhen et al. define a localizability vector by projecting the information matrix into its eigenspace and model the degeneration with a frictionless force closure [19]. Tagliabue et al. [20] also use the smallest eigenvalue of the point-to-plane cost to indicate the least observable direction; instead of resolving the degeneracy within the LiDAR-inertial SLAM directly, their system switches to other parallel-running odometry algorithms [21] when the metric falls below a self-defined threshold. Degeneracy analysis based methods have been widely deployed in tunnel exploration tasks [20, 22]. We also introduce this degeneracy analysis into our system to further improve accuracy.
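The eigenvalue test underlying these degeneracy-analysis methods is compact enough to sketch. The snippet below is a hedged illustration in the spirit of [5], not its actual implementation: it eigen-decomposes a 6×6 Gauss-Newton information matrix with Eigen and builds a projector onto the well-conditioned subspace; the eigenvalue threshold is a tuning value.

```cpp
// Degeneracy check: flag directions of the information matrix
// A = J^T J whose eigenvalue falls below a threshold, and build a
// projector that restricts updates to the well-conditioned subspace.
#include <Eigen/Dense>

struct DegeneracyResult {
  Eigen::Matrix<double, 6, 6> projector;  // keeps well-conditioned directions
  bool degenerate = false;
};

DegeneracyResult analyzeDegeneracy(const Eigen::Matrix<double, 6, 6>& JtJ,
                                   double eig_threshold) {
  Eigen::SelfAdjointEigenSolver<Eigen::Matrix<double, 6, 6>> es(JtJ);
  const auto& V = es.eigenvectors();  // columns = principal directions
  DegeneracyResult out;
  out.projector.setZero();
  for (int i = 0; i < 6; ++i) {
    if (es.eigenvalues()(i) >= eig_threshold)
      out.projector += V.col(i) * V.col(i).transpose();  // keep this direction
    else
      out.degenerate = true;  // at least one ill-conditioned direction
  }
  return out;
}
// Usage: dx_safe = result.projector * dx;  // zero out degenerate motion
```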
Man-made environments often exhibit strong structural regularity in the form of lines, surfaces, and objects. These geometric features have been widely exploited in LiDAR SLAM to improve state estimation accuracy [23, 24, 25]. Zhou et al. adopt planar constraints to optimize plane parameters in the back-end [26], achieving promising results in a single-layer indoor environment. Zhou et al. [27] propose to use principal component analysis (PCA) to extract sphere features along with planar, edge, and ground features. Their results show that the spherical features can improve system stability in highway scenarios.

## III Tightly Coupled LiDAR-Inertial SLAM

The pipeline of the proposed system is visualized in Fig. 1. The scans from the two LiDARs are sent to separate state estimation modules, each of which estimates the full LiDAR state by registering the surfels in a scan to the map via a tightly coupled ESKF. The corresponding weights are then calculated, and the estimates are integrated with the GNSS measurements. Once the GNSS signal becomes available again after a long dropout, the re-initialization module is awakened to further optimize the trajectory and the mapping result.

Before diving into the details of the methods, we first define the frames and notations used throughout this article in TABLE I. We denote \((\cdot)^{B}_{L}\) as the transformation from the LiDAR frame to the IMU frame. Besides, we follow the "boxplus" and "boxminus" operations, \(\boxplus\) and \(\boxminus\), defined in [28] to parameterize the state error on a manifold. For a manifold \(\mathcal{M}\) of dimension \(n\), we have

\[\mathcal{M}=\mathrm{SO}(3):\quad\mathbf{R}\boxplus\mathbf{r}=\mathbf{R}\,\mathrm{Exp}(\mathbf{r}),\quad\mathbf{R}_{1}\boxminus\mathbf{R}_{2}=\mathrm{Log}\big(\mathbf{R}_{2}^{-1}\mathbf{R}_{1}\big);\]
\[\mathcal{M}=\mathbb{R}^{n}:\quad\mathbf{a}\boxplus\mathbf{b}=\mathbf{a}+\mathbf{b},\quad\mathbf{a}\boxminus\mathbf{b}=\mathbf{a}-\mathbf{b}.\]

The wheel odometer measures the relative translation between consecutive sampling instants,

\[\hat{\mathbf{p}}_{O_{k}}^{O_{k+1}}=\mathbf{p}_{O_{k}}^{O_{k+1}}+\mathbf{n}_{\mathbf{p}^{O}},\tag{6}\]

where \(\mathbf{n}_{\mathbf{p}^{O}}\) is also zero-mean white Gaussian noise. Based thereupon, we can derive the kinematic model as

\[\dot{\mathbf{R}}_{B_{k}}^{W}=\mathbf{R}_{B_{k}}^{W}\big[\hat{\boldsymbol{\omega}}_{k}-\mathbf{b}_{\omega_{k}}-\mathbf{n}_{\omega}\big]_{\wedge},\quad\dot{\mathbf{p}}_{B_{k}}^{W}=\mathbf{v}_{B_{k}}^{W},\quad\dot{\mathbf{v}}_{B_{k}}^{W}=\mathbf{R}_{B_{k}}^{W}\big(\hat{\mathbf{a}}_{k}-\mathbf{b}_{a_{k}}-\mathbf{n}_{a}\big)+\mathbf{g}^{W},\tag{7}\]

with \([\cdot]_{\wedge}\) the skew-symmetric matrix operator, \(\mathbf{b}_{\omega}\) and \(\mathbf{b}_{a}\) the slowly varying IMU biases modeled as random walks, and \(\mathbf{g}^{W}\) the gravity vector in the world frame.

### _Iterated Kalman Filter_

For each LiDAR input, we employ an iterated Kalman filter to estimate the system state, utilizing the state transition model (6)-(7) and the measurement model (11). By setting the process noise to zero, we can perform the forward propagation upon receiving IMU data, using the error-state dynamic model [29]:

\[\hat{\mathbf{x}}_{i+1}=\hat{\mathbf{x}}_{i}\boxplus\big(\Delta t\,\mathbf{f}(\hat{\mathbf{x}}_{i},\mathbf{u}_{i},\mathbf{0})\big),\qquad\tilde{\mathbf{x}}_{i+1}=\mathbf{F}_{\tilde{\mathbf{x}}}\tilde{\mathbf{x}}_{i}+\mathbf{F}_{\mathbf{w}}\mathbf{w}_{i}.\tag{12}\]

Here \(\hat{\mathbf{x}}_{0}=\bar{\mathbf{x}}_{k-1}\), where \(\bar{\mathbf{x}}_{i}\) is the optimal state estimate of the LiDAR scan at \(t_{i}\). The matrices \(\mathbf{F}_{\tilde{\mathbf{x}}}\) and \(\mathbf{F}_{\mathbf{w}}\) are given in (13), and \(\mathbf{A}(\mathbf{u})^{-1}\) follows the definition in [30]:

\[\mathbf{A}(\mathbf{u})^{-1}=\mathbf{I}-\frac{1}{2}[\mathbf{u}]_{\wedge}+\big(1-\alpha(\|\mathbf{u}\|)\big)\frac{[\mathbf{u}]_{\wedge}^{2}}{\|\mathbf{u}\|^{2}},\qquad\alpha(m)=\frac{m}{2}\frac{\cos(m/2)}{\sin(m/2)}.\tag{14}\]
The propagated covariance \(\bar{\mathbf{P}}_{i}\) can be calculated by

\[\bar{\mathbf{P}}_{i+1}=\mathbf{F}_{\tilde{\mathbf{x}}}\bar{\mathbf{P}}_{i}\mathbf{F}_{\tilde{\mathbf{x}}}^{T}+\mathbf{F}_{\mathbf{w}}\mathbf{Q}_{i}\mathbf{F}_{\mathbf{w}}^{T};\qquad\bar{\mathbf{P}}_{0}=\bar{\mathbf{P}}_{k-1},\tag{15}\]

where \(\mathbf{Q}_{i}\) is the covariance of the noise \(\mathbf{w}_{i}\). We need to compensate for the relative point cloud motion before the points are combined with the propagated state \(\hat{\mathbf{x}}_{i}\) and covariance \(\bar{\mathbf{P}}_{i}\) to produce an optimal state update. This process is adapted from [31], where (7) is propagated backward as

\[\check{\mathbf{x}}_{i-1}=\check{\mathbf{x}}_{i}\boxplus\big(-\Delta t\,\mathbf{f}(\mathbf{x}_{i},\mathbf{u}_{i},\mathbf{0})\big).\tag{16}\]

The backward propagation uses the left IMU and wheel odometer measurements as input to compute the relative pose between each point sampling time and the scan end time. After this motion compensation, we can treat all points within a scan as sampled at the same time. We can then derive the residual of point \(j\) at the \(\kappa\)-th iteration as

\[\mathbf{z}_{j}^{\kappa}=\mathbf{h}_{j}(\hat{\mathbf{x}}_{i}^{\kappa},\mathbf{0})=\mathbf{n}_{j}^{T}\left(\mathbf{T}_{B_{i}}^{W}\mathbf{T}_{L}^{B_{i}}\hat{\mathbf{p}}_{j}^{L}-\mathbf{p}_{j}\right),\tag{17}\]

and the total measurement noise satisfies \(\mathbf{n}_{j}^{T}\mathbf{T}_{B_{i}}^{W}\mathbf{T}_{L}^{B_{i}}\mathbf{n}_{j}^{L}\sim\mathcal{N}(\mathbf{0},\mathbf{R}_{j})\). Combining the prior distribution from the forward propagation, \(\mathbf{x}_{i}\boxminus\hat{\mathbf{x}}_{i}\), and the measurement model, we can derive the posterior distribution of the state \(\mathbf{x}_{i}\) and its maximum a posteriori (MAP) form:

\[\min_{\mathbf{x}_{i}}\Bigg(\|\mathbf{x}_{i}\boxminus\hat{\mathbf{x}}_{i}\|_{\bar{\mathbf{P}}_{i}}^{2}+\sum_{j=1}^{m}\|d_{j}-\mathbf{H}_{j}(\mathbf{x}_{i}\boxminus\hat{\mathbf{x}}_{i})\|_{\mathbf{R}_{j}}^{2}\Bigg).\tag{18}\]

Let \(\mathbf{H}=\big[\mathbf{H}_{1}^{\kappa T},\mathbf{H}_{2}^{\kappa T},\ldots,\mathbf{H}_{m}^{\kappa T}\big]^{T}\), \(\mathbf{R}=\mathrm{diag}(\mathbf{R}_{1},\mathbf{R}_{2},\ldots,\mathbf{R}_{m})\), \(\mathbf{P}=(\mathbf{J}^{\kappa})^{-1}\bar{\mathbf{P}}_{i}(\mathbf{J}^{\kappa})^{-T}\), and \(\mathbf{z}^{\kappa}=\big[\mathbf{z}_{1}^{\kappa T},\mathbf{z}_{2}^{\kappa T},\ldots,\mathbf{z}_{m}^{\kappa T}\big]^{T}\). This MAP problem can be solved by the iterated Kalman filter:

\[\mathbf{K}=(\mathbf{H}^{T}\mathbf{R}^{-1}\mathbf{H}+\mathbf{P}^{-1})^{-1}\mathbf{H}^{T}\mathbf{R}^{-1},\]
\[\hat{\mathbf{x}}_{i}^{\kappa+1}=\hat{\mathbf{x}}_{i}^{\kappa}\boxplus\big(-\mathbf{K}\mathbf{z}^{\kappa}-(\mathbf{I}-\mathbf{K}\mathbf{H})(\mathbf{J}^{\kappa})^{-1}(\hat{\mathbf{x}}_{i}^{\kappa}\boxminus\hat{\mathbf{x}}_{i})\big),\tag{19}\]

where \(\mathbf{K}\) is the Kalman gain, \(\mathbf{H}\) is the Jacobian matrix of the measurement model, and \(\mathbf{J}^{\kappa}\) is the partial derivative of \((\hat{\mathbf{x}}_{i}^{\kappa}\boxplus\tilde{\mathbf{x}}_{i}^{\kappa})\boxminus\hat{\mathbf{x}}_{i}\) with respect to \(\tilde{\mathbf{x}}_{i}^{\kappa}\) evaluated at zero. This process repeats until convergence, i.e., \(\|\hat{\mathbf{x}}_{i}^{\kappa+1}\boxminus\hat{\mathbf{x}}_{i}^{\kappa}\|<\epsilon\); the optimal state and covariance estimates are then

\[\bar{\mathbf{x}}_{i}=\hat{\mathbf{x}}_{i}^{\kappa+1},\qquad\bar{\mathbf{P}}_{i}=(\mathbf{I}-\mathbf{K}\mathbf{H})\mathbf{P}.\tag{20}\]

The state update is then used to transform each scan point into the global frame, and the points are inserted into the map.
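For readers who prefer code to notation, the following sketch mirrors the update (19)-(20) for a plain vector state, so that \(\boxplus\) and \(\boxminus\) reduce to + and -; the manifold bookkeeping and the Jacobian \(\mathbf{J}^{\kappa}\) are deliberately omitted. This is an illustrative helper, not the filter used in the paper.

```cpp
// One iterated measurement update in the spirit of (19)-(20) for a
// Euclidean state.
#include <functional>
#include <Eigen/Dense>

using Eigen::MatrixXd;
using Eigen::VectorXd;

VectorXd iteratedUpdate(const VectorXd& x_prop, const MatrixXd& P,
                        const std::function<VectorXd(const VectorXd&)>& h,
                        const std::function<MatrixXd(const VectorXd&)>& H_of,
                        const VectorXd& z_meas, const MatrixXd& R,
                        int max_iters = 5, double eps = 1e-6) {
  VectorXd x = x_prop;
  const MatrixXd I = MatrixXd::Identity(x.size(), x.size());
  for (int k = 0; k < max_iters; ++k) {
    MatrixXd H = H_of(x);                // relinearize at the current iterate
    VectorXd innov = z_meas - h(x);      // measurement residual
    // Gain of (19): K = (H^T R^-1 H + P^-1)^-1 H^T R^-1.
    MatrixXd K = (H.transpose() * R.inverse() * H + P.inverse())
                     .ldlt()
                     .solve(H.transpose() * R.inverse());
    // Relinearized update: the second term pulls the prior (x - x_prop)
    // back in, as in (19).
    VectorXd dx = K * innov - (I - K * H) * (x - x_prop);
    x += dx;
    if (dx.norm() < eps) break;          // convergence test of (20)
  }
  return x;  // the posterior covariance would be (I - K H) P
}
```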
### _Graph Optimization_

We construct a pose graph at the back-end to integrate pose information from the two LiDAR odometries, the inertial odometry, GNSS, and detected loop closures. This state estimation process can be formulated as a maximum a posteriori (MAP) problem. Given the measurements \(\mathbf{z}_{k}\) and the history of states \(\mathbf{x}_{k}\), the MAP problem can be written as

\[\mathbf{x}_{k}^{*}=\underset{\mathbf{x}_{k}}{\operatorname{argmax}}\ \mathrm{p}(\mathbf{x}_{k}|\mathbf{z}_{k})\propto\mathrm{p}(\mathbf{x}_{0})\,\mathrm{p}(\mathbf{z}_{k}|\mathbf{x}_{k}).\tag{21}\]

If the measurements are conditionally independent, then (21) can be solved through least squares minimization:

\[\boldsymbol{\chi}^{*}=\underset{\mathbf{x}_{k}}{\operatorname{argmin}}\sum_{l=1}^{k}\|\boldsymbol{r}_{l}\|^{2},\tag{22}\]

where \(\boldsymbol{r}_{l}\) is the residual between the predicted and measured values. To decrease system memory usage and increase computational efficiency, we employ a sliding window to keep a relatively steady number of nodes in the local graph.
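A minimal sketch of such a back-end using GTSAM [37] (the library we use for optimization) is shown below. The factor choices, noise sigmas, and the `GPSFactor` stand-in for the GNSS residual are illustrative placeholders for the actual factors in (23), not our production configuration.

```cpp
// Pose-graph sketch: odometry chain as BetweenFactors plus unary GNSS
// position factors, optimized with Levenberg-Marquardt.
#include <utility>
#include <vector>
#include <gtsam/geometry/Pose3.h>
#include <gtsam/inference/Symbol.h>
#include <gtsam/navigation/GPSFactor.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

using gtsam::symbol_shorthand::X;

gtsam::Values optimizeWindow(
    const std::vector<gtsam::Pose3>& lidar_odom,                  // keyframe poses
    const std::vector<std::pair<size_t, gtsam::Point3>>& gnss) {  // (keyframe, fix)
  gtsam::NonlinearFactorGraph graph;
  gtsam::Values initial;

  auto prior_noise = gtsam::noiseModel::Diagonal::Sigmas(
      (gtsam::Vector(6) << 1e-2, 1e-2, 1e-2, 1e-1, 1e-1, 1e-1).finished());
  auto odom_noise = gtsam::noiseModel::Diagonal::Sigmas(
      (gtsam::Vector(6) << 1e-2, 1e-2, 1e-2, 5e-2, 5e-2, 5e-2).finished());
  auto gnss_noise = gtsam::noiseModel::Isotropic::Sigma(3, 0.1);

  graph.add(gtsam::PriorFactor<gtsam::Pose3>(X(0), lidar_odom[0], prior_noise));
  initial.insert(X(0), lidar_odom[0]);

  for (size_t k = 1; k < lidar_odom.size(); ++k) {
    // Relative-pose factor from the LiDAR odometry chain.
    gtsam::Pose3 rel = lidar_odom[k - 1].between(lidar_odom[k]);
    graph.add(gtsam::BetweenFactor<gtsam::Pose3>(X(k - 1), X(k), rel, odom_noise));
    initial.insert(X(k), lidar_odom[k]);
  }
  // GNSS fixes enter as unary position factors on matching keyframes.
  for (const auto& [k, fix] : gnss)
    graph.add(gtsam::GPSFactor(X(k), fix, gnss_noise));

  return gtsam::LevenbergMarquardtOptimizer(graph, initial).optimize();
}
```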
Given a sliding window containing \(k\) keyframes, \(\boldsymbol{X}=[\bar{\mathbf{x}}_{1}^{T},\bar{\mathbf{x}}_{2}^{T},\ldots,\bar{\mathbf{x}}_{k}^{T}]^{T}\), we maximize the likelihood of the measurements, and the optimal states can be acquired by solving the MAP problem

\[\min_{\boldsymbol{X}}\Bigg\{\|\mathbf{r}_{p}\|^{2}+\mathcal{W}_{inertial}\sum_{l=1}^{N_{I}}\|\mathbf{r}_{I_{l}}\|^{2}+\mathcal{W}_{avia}\sum_{i=1}^{N_{L}^{avia}}\mathbf{r}_{L_{i}}^{avia}+\mathcal{W}_{rs}\sum_{i=1}^{N_{L}^{rs}}\mathbf{r}_{L_{i}}^{rs}+\mathcal{W}_{GNSS}\sum_{i=1}^{N_{g}}\|\mathbf{r}_{g_{i}}\|^{2}\Bigg\},\tag{23}\]

where \(\mathbf{r}_{p}\) is the prior factor marginalized by the Schur complement [32, 33], \(\mathbf{r}_{I_{l}}\) is the residual of the IMU-odometer preintegration [18], and \(\mathbf{r}_{L_{i}}^{avia}\) and \(\mathbf{r}_{L_{i}}^{rs}\) are the residuals of the Avia and RS-16 LiDAR odometries. Finally, the GNSS constraint is denoted by \(\mathbf{r}_{g_{i}}\). Note that we use manually set values for the LiDAR odometry covariances and directly use the GNSS covariance predicted from the raw measurements. Since the residuals are expressed in different frames, we unify their expression in the inertial frame using the calibrated sensor extrinsics, such that

\[\mathbf{r}_{g_{i}}=\mathbf{R}_{W}^{B}(\mathbf{p}^{W_{i}}-\mathbf{p}^{W_{i-1}}-\mathbf{p}_{W}^{B}),\tag{24}\]

where the extrinsic \(\mathbf{T}_{W}^{B}=\{\mathbf{R}_{W}^{B},\mathbf{p}_{W}^{B}\}\) transforms the GNSS pose into inertial coordinates. We denote \(N_{I}\), \(N_{L}^{avia}\), \(N_{L}^{rs}\), and \(N_{g}\) as the numbers of the four factor types within the sliding window, and \(\mathcal{W}\) with a subscript denotes the respective weighting factor. We assume the short-term inertial preintegration result is accurate and set it as the reference for calculating the other weighting factors. For the short period from \(k\) to \(k+1\), we denote \(\mathbf{p}_{I_{k}}^{I_{k+1}}\) and \(\mathbf{p}_{L_{k}}^{L_{k+1}}\) as the pose estimates of the inertial and LiDAR odometry. The LiDAR odometry weighting factor can then be expressed as

\[\mathcal{W}_{L}=\Bigg(1-\Bigg(\frac{\big\|\mathbf{p}_{L_{k}}^{L_{k+1}}\big\|-\big\|\mathbf{p}_{I_{k}}^{I_{k+1}}\big\|}{\big\|\mathbf{p}_{I_{k}}^{I_{k+1}}\big\|}\Bigg)^{2}\Bigg)\mathcal{W}_{inertial}.\tag{25}\]

The \(\mathcal{W}_{GNSS}\), on the other hand, is a combination of the DOP value, the satellite number, and the real-time kinematic (RTK) solution status.
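The weighting logic can be summarized in a few lines. The sketch below implements (25) directly; the GNSS weighting function is only an assumed heuristic, since the exact combination rule of DOP, satellite count, and RTK status is not spelled out above.

```cpp
// Factor weighting sketch: (25) for the LiDAR odometry weight, with
// the IMU/odometer preintegration as reference displacement.
#include <algorithm>

double lidarWeight(double lidar_disp, double inertial_disp,
                   double w_inertial) {
  if (inertial_disp <= 0.0) return 0.0;       // degenerate reference
  double rel = (lidar_disp - inertial_disp) / inertial_disp;
  double w = (1.0 - rel * rel) * w_inertial;  // Eq. (25)
  return std::max(w, 0.0);  // clamp: strong disagreement => distrust LiDAR
}

enum class RtkStatus { kFixed, kFloat, kSingle, kNone };

// Assumed heuristic, not the paper's rule: scale by RTK status and by
// satellite geometry (1/DOP), requiring at least four satellites.
double gnssWeight(double dop, int num_sats, RtkStatus status) {
  double base = 0.0;
  switch (status) {
    case RtkStatus::kFixed:  base = 1.0;  break;
    case RtkStatus::kFloat:  base = 0.3;  break;
    case RtkStatus::kSingle: base = 0.05; break;
    case RtkStatus::kNone:   return 0.0;  // no usable solution
  }
  double geom = (num_sats >= 4 && dop > 0.0) ? std::min(1.0, 1.0 / dop) : 0.0;
  return base * geom;
}
```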
### _Re-initialization_

The autonomous mine service vehicle starts in an open district, then enters the mine tunnel, travels inside for a long time, and finally leaves the tunnel. The vehicle state may therefore have accumulated a significant amount of drift inside the tunnel. If we directly fuse the drifted LiDAR odometry and the GNSS measurements, the accumulated drift may not be fully compensated, or may be over-compensated, leading to extra pose estimation errors. Therefore, we propose a two-step re-initialization process for drift elimination.

1. _Loop detection_: Once the vehicle leaves the tunnel, the GNSS signal becomes available again. This awakens an iris loop detection thread based on [33], with the global pose and map updated upon each detected loop. Note that we employ only the 360° spinning LiDAR for loop detection.
2. _Full recovery_: After a loop is found and the RTK reaches a fixed solution, we add the global measurements to the pose graph as presented in the graph optimization subsection above. After this full recovery, the estimated state is again aligned with the global frame.

### _Hardware and Software Level Verification_

The hardware-level verification is conducted at the data preprocessing stage and includes data stream existence, frequency, and individual sensor checks. The data stream existence test determines whether the required data inputs exist. Since the LiDAR-inertial odometry has a filter-based structure, it fails immediately when input from either the IMU or the LiDAR is missing. Therefore, each of our LiDAR-inertial odometries reinitializes and restarts when either the IMU or the LiDAR stream is lost for one second; otherwise, the filter-based system may generate a large and unrecoverable drift, as visualized in Fig. 3. The data frequency test follows the same idea. The system sets the stream with the lowest frequency as the primary input and continuously monitors the counts of the other data streams between two consecutive frames; e.g., with the LiDAR as the primary input (10 Hz), approximately twenty IMU readings (200 Hz) should be found between two successive LiDAR scans. Once this criterion has been violated for thirty seconds, the system sends a warning to the user interface for a manual check, e.g., a yellow warning sign on the central control screen. The individual verification is mainly for the LiDAR sensors. Since the mine service vehicle operates in a narrow tunnel, once the vehicle faces directly toward a wall, the LiDAR-inertial odometry may fail against the textureless and flat terrain. Therefore, we monitor the Euclidean distances of the points within each scan: if 70% of the points lie within two meters of the LiDAR, the current frame is discarded for pose estimation. Once this state persists for more than ten seconds, the system temporarily switches to inertial odometry.

The software-level test is performed on the parallel pose estimation modules to remove clearly wrong results. We set the maximum speed of the vehicle to 30 km/h and verify whether the displacement of each odometry exceeds this limit; e.g., once the displacement between two successive LiDAR odometry outputs (10 Hz) is beyond 2.0 m, it is discarded for pose graph construction, since it is clearly a wrong pose estimate. Similarly, we use the steering angle information to monitor the individual yaw estimates.

Fig. 3: Illustration of the influence of temporary IMU data loss on the LiDAR-inertial system. The real-time mapping shows a sudden vertical drift after a 0.6 s IMU data loss.
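A compact sketch of these checks is given below. The thresholds mirror the text (1 s stream timeout, roughly twenty IMU samples per scan, 70% near-point ratio, 2.0 m displacement gate), while the class layout and names are illustrative.

```cpp
// Health-monitoring sketch for the hardware/software-level checks.
#include <cstddef>

struct HealthMonitor {
  double last_imu_t = 0.0, last_lidar_t = 0.0;

  // Reinitialize the filter if either stream is silent for one second.
  bool streamLost(double now) const {
    return (now - last_imu_t > 1.0) || (now - last_lidar_t > 1.0);
  }
  // ~20 IMU samples (200 Hz) expected between successive scans (10 Hz);
  // persistent violation (30 s) triggers a user-interface warning.
  bool frequencyOk(int imu_count_between_scans) const {
    return imu_count_between_scans >= 15;  // assumed tolerance margin
  }
  // Discard a scan if 70% of its points lie within two metres
  // (vehicle facing a flat wall).
  bool scanUsable(std::size_t n_near, std::size_t n_total) const {
    return n_total > 0 &&
           static_cast<double>(n_near) / static_cast<double>(n_total) < 0.7;
  }
  // 30 km/h speed limit: at 10 Hz odometry, a 2.0 m step between two
  // successive outputs is clearly wrong and is dropped.
  bool displacementOk(double step_metres) const { return step_metres <= 2.0; }
};
```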
## IV Experiments

To evaluate the performance of the proposed method, we conducted experiments in the Madiliang mine of Ordos, China. As visualized in Fig. 4(a), the mine service vehicle transports food, staff workers, and other necessities between the ground office and the underground mine face. The underground mine tunnel is around 2.5 km long, with many branches along the path to the mine face, as pictured in Fig. 4(b). We collected the dataset utilizing several autonomous mine service vehicles, as shown in Fig. 4(c). Each vehicle carries one Robosense RS-16 spinning LiDAR with a 360° × 30° FoV and one Livox Avia non-repetitive scanning LiDAR with a 70.4° × 77.2° FoV. We use an ASENSING INS570D integrated navigation unit to provide GNSS and inertial measurements. We also collect the wheel encoder readings of the two rear wheels. The localization ground truth is provided by a MAPSTNAV POS620 high-precision integrated navigation system with fiber optic gyros. The POS620 supports a precise post-processing procedure, which can achieve centimeter-level localization accuracy in long tunnels. In addition, we manually set up several check points inside the tunnel (on the wall or on the floor) using a total station to further verify the localization and mapping accuracy.

All our algorithms are implemented in C++ and executed in Ubuntu Linux using ROS [34]. We use an onboard computer with two NVIDIA Jetson Xavier NX modules for real-time processing in the vehicle. Since all our sensors are hardware-synchronized, we record the GNSS timestamp for each SLAM pose output. We can then directly evaluate the localization accuracy through timestamp matching.

Fig. 4: Visualization of the mine service vehicle in (a) and an example inner view of the mine tunnel in (b).
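A sketch of this timestamp-matching evaluation is shown below; the exact-match tolerance and container choice are illustrative, and a production evaluator would interpolate the ground truth between samples.

```cpp
// Evaluation sketch: match SLAM poses to ground truth by GNSS
// timestamp and report the 3D RMS and maximum position errors.
#include <cmath>
#include <cstddef>
#include <map>
#include <Eigen/Dense>

struct ErrorStats { double rms = 0.0, max = 0.0; };

ErrorStats evaluate(const std::map<double, Eigen::Vector3d>& slam,
                    const std::map<double, Eigen::Vector3d>& truth,
                    double max_dt = 0.005) {  // 5 ms matching tolerance
  double sum_sq = 0.0, max_err = 0.0;
  std::size_t n = 0;
  for (const auto& [t, p] : slam) {
    auto it = truth.lower_bound(t - max_dt);       // nearest-time candidate
    if (it == truth.end() || std::abs(it->first - t) > max_dt) continue;
    double e = (p - it->second).norm();            // 3D position error
    sum_sq += e * e;
    if (e > max_err) max_err = e;
    ++n;
  }
  return {n ? std::sqrt(sum_sq / static_cast<double>(n)) : 0.0, max_err};
}
```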
### _Ground Tests_

The first experiment seeks to evaluate the positioning and mapping accuracy of our system in open-sky environments. We first stay still to wait for the RTK initialization, and our SLAM algorithm automatically aligns the estimation coordinates with the global coordinates at this stage. Since there are currently no open-source algorithms that integrate two LiDARs of different scanning mechanisms, we select three state-of-the-art (SOTA) open-source algorithms, LIO-SAM [35], LiLi-OM [36], and FAST-LIO2 [4], which apply to both spinning and non-repetitive LiDARs, for comparison. For the latter two approaches, we add the same GNSS constraints as in our approach to the back-end utilizing GTSAM [37]. In addition, we record the same GNSS timestamps for the selected approaches for positioning evaluation.

The first sequence is in the open sky, 2.7 km in length with a duration of 417.5 s. We plot the mapping result of the Livox Avia LiDAR in Fig. 5, in which the consistent and clear building edges indicate that our method is globally precise. We also provide four close-up views in Fig. 5 so readers can inspect the local registration accuracy. We denote by INS570D the direct output of the INS570D integrated navigation unit. The 3D root mean square error (RMS) and maximum error (MAX) are computed and reported in TABLE II. Since the RTK status stays at a fixed solution along the whole journey, all the methods are well constrained by the global position measurements and achieve accuracy comparable to the post-processed results. The LiDAR SLAM approaches cannot achieve better accuracy than the INS570D, as the range measuring error of the LiDAR sensors is above 2 cm. In addition, we plot the sequential position and attitude error curves of our approach in Fig. 6. The largest error is in the vehicle moving direction (the x direction), roughly double the errors in the y and z directions; we therefore have reason to believe that the time synchronization among the different sensors is not accurate enough. Besides, we find that the asynchronous communication and delays between ROS nodes influence the performance of the SLAM algorithms: different computers output non-identical results, and the odometry result for the same data input is not completely identical across trials.

The second sequence is half indoor and half outdoor, 320 m in length with a duration of 278 s. As pictured in Fig. 7, the vehicle starts outdoors and slowly drives into the mining truck maintenance garage. The overall mapping result is visualized in Fig. 8, illustrating that the cooperative mapping benefits from both the 360° coverage of the Robosense RS-16 LiDAR and the high density of the Livox Avia LiDAR. To further verify our mapping accuracy, we transform the local coordinates into WGS-84 and project the map onto a satellite image, as visualized in Fig. 9. The trees and building edges are well matched to the background, demonstrating that our method is globally precise. To quantitatively present the localization accuracy of the various methods, we compute the RMS and MAX errors and report them in TABLE II. The advantage of adding the LiDAR sensors is now obvious: most of the approaches show an improvement of around 25% over INS570D. This significant increment happens mainly indoors, where the GPS signal is blocked and pure IMU mechanization cannot hold for even one minute. The LiDAR SLAM then acts as a strong pose constraint during the GNSS outage, which significantly improves the localization accuracy.

Figure 5: The Livox Avia mapping result of ground test sequence one. The top view of the overall map is pictured in the middle; (a), (b), (c), and (d) visualize the mapping in detail: (a) shows the mine trucks, (b) the dormitory for miners, (c) the mine conveyor belt, and (d) a crossing.

Figure 6: The 3D positioning and 3D attitude error curves of our method for ground test sequence one.

Figure 7: The trajectory of our approach plotted onto a satellite image.

### _Underground Tunnel Tests_

The second experiment seeks to evaluate the positioning and mapping accuracy of our system in the mine tunnel. As visualized in Fig. 10, the mine service vehicle stays still outside the tunnel for RTK and coordinate initialization. It then enters the tunnel, travels for more than 1700 s, and finally leaves the tunnel; the overall duration is 2137 s. The global map is plotted in Fig. 11, in which the top view demonstrates that our map is highly consistent horizontally, and the side view illustrates that it is also highly consistent vertically. In addition, we present some of the mapping details both inside and outside the mine tunnel; the clear and vivid structures on the walls and the ground demonstrate that our mapping is locally precise.

The underground tunnel is dominated by flat walls and ground with low texture and repetitive patterns, making it one of the most challenging scenarios for SLAM algorithms. We observe that all the selected methods fail to provide complete state estimation results throughout the tunnel: they either "stop" at certain areas or fail completely with large errors. In contrast, our approach provides seamless and accurate pose estimation along the whole path. As pictured in Fig. 12, the GNSS-IMU-odometer tightly coupled output of the INS570D provides continuous poses regardless of the environment variations. However, the GNSS measurements merely give a one-off correction when the satellite signal becomes available, and the accumulated errors are not corrected where the satellite signal is unavailable. Our approach, on the other hand, utilizes the detected loop and the GNSS positioning to perform the one-off correction, after which the errors accumulated along the trajectory are corrected by ICP-based map matching. In this way, we can correct the errors spread along the trajectory. To further present the superiority of our system, we plot the absolute error over distance in Fig. 13 for detailed reference.
We can directly infer that our method achieves far higher accuracy than the INS570D. The reason is twofold. Firstly, the mine service vehicle has an explosion-proof design with rubber tires, which easily leads to wheel slippage in the mine tunnel; the wheel odometer is therefore inaccurate, especially when the vehicle enters or leaves tunnel branches. Secondly, the accumulated errors are corrected only once, when the GNSS signal becomes available, leaving the remaining errors unresolved. As shown at distance 6600 m in Fig. 13, the errors are corrected with many outliers observable. Our loop detection and re-initialization process, in contrast, smooths the whole trajectory bi-directionally. The RMS and MAX errors of our system are 2.465 m and 16.861 m.

Fig. 8: The cooperative mapping result of the two LiDARs in the middle, with color coded by height. (a) and (b) show examples of indoor and outdoor mapping. (c) and (d) present the advantage of the cooperative mapping, where both coverage and density are ensured.

Fig. 9: The mapping result projected onto a satellite image, with color coded by height.

Fig. 10: The trajectory of our approach plotted onto a satellite image.

### _Consistency Tests_

The third experiment seeks to evaluate the consistency of our system across different journeys and datasets. We use GTSAM to perform pose graph optimization, which may generate non-identical pose estimation results even for the same dataset across different trials. Consistency measures the similarity of the resulting paths, which strongly characterizes the GNSS-dropout performance. Besides, the vehicles need to transport staff or necessities to given branch tunnels, so global consistency is of vital importance for such tasks. We therefore check the consistency across different platforms and trials. In addition to the onboard computer, we use a laptop with an Intel i7-10510U CPU and 16 GB RAM for comparison. We perform three runs on each computer for the same dataset, with the parameters unchanged across trials. The trajectories of the different trials are visualized in Fig. 14; the paths do not change much over the various trials. The maximum trajectory-to-trajectory error is below 20 cm, which is acceptable for cross-platform vehicle navigation.

### _Ablation Study_

In this experiment, we aim to understand the contributions of the different sensors and factors. We define the corresponding notations in TABLE II for illustration. The first test seeks to understand the contribution of the different LiDARs. We employ a tunnel-only sequence for illustration and plot the individual trajectories in Fig. 15. Ours w/o Livox fails shortly after entering the tunnel and generates large errors. This is due to the flat tunnel walls: even the extracted surfels are repetitive, and the point-to-plane pose estimation is inaccurate. On the contrary, ours w/o RS survives in this scenario.

Fig. 11: Top view of the global map in (a); the four dashed white circles indicate the tunnel branches where the vehicle enters and transports staff workers. (b), (c), and (d) are detailed views of the cooperative mapping result: (b) shows the ground view, (c) the tunnel entrance, and (d) a tunnel branch. (e) is the side view of the global map.

Fig. 12: The trajectory comparison of our method and INS570D w.r.t. the post-processed ground truth.

Fig. 13: The absolute translational errors over distance.
The reason is twofold. Firstly, the surfels extracted from the surrounding tunnel walls are highly similar and are harmful for pose estimation. The restricted FoV of the Livox Avia is an advantage here: the observable features are mainly in front of the vehicle, and the harmful features are largely omitted. Secondly, the Avia has a higher point density than the RS-16. The increased density leads to more observable features in a single scan, so slight environment changes can be detected. Such changes include road signs, holes in the wall, and road curbs in the tunnel, which can provide strong constraints, as discussed in our previous work [38]. These two effects can be interpreted through the DOP concept from the field of GNSS, where each extracted surfel is viewed as a satellite: the satellite distribution is represented by the surfel distribution, and the Avia attains a lower DOP due to its better distribution. Therefore, the pose estimation utilizing the Avia has an optimal solution. However, the RS-16 is still indispensable to the system. As visualized in the two insets of Fig. 15, when the RS-16 is excluded from pose estimation, the system generates slight errors under irregular motion (sharp turning or forward-backward movement). The omnidirectional view of the RS-16 effectively constrains the pose estimation, especially in the yaw direction. To further reveal this effect, we plot two maps around the same tunnel branch in Fig. 16. It is evident that the global map is consistent and the local map is clear when the RS-16 LiDAR is utilized for pose estimation.

The second test seeks to reveal the effectiveness of surfel-based scan matching. We implement a LOAM-like [39] feature-point based scan matching variant of our system, denoted ours w/o Surfel. In addition, we treat feature points with large intensity variations as edge points, as presented in our previous work [18]. Since surfel-based scan matching with the RS-16 also suffers from the degeneration mentioned in the last paragraph, we select only the Avia LiDAR for illustration. We select two scenarios, one on the ground and one in the tunnel, for comparison. The extracted surfels and feature points in the different scenarios are visualized in Fig. 17. The edge and planar points have a good distribution on the ground, which provides strong pose constraints in all directions. However, the planar points have a bad distribution in the tunnel, where only the lateral motion is constrained, leading to partial unobservability of the longitudinal direction. On the other hand, the extracted surfels have a uniform distribution both on the ground and in the tunnel, which should give better pose estimation results. We utilize a tunnel-only sequence to verify this assumption. As pictured in Fig. 18, the vehicle starts at the middle of the tunnel, travels to the last tunnel branch, returns to the start point, and finally leaves the tunnel. Although ours w/o Surfel maintains consistent and accurate pose estimation for more than 1 km in the tunnel, it fails to maintain this performance back on the ground due to degeneration problems. The surfel-based scan matching, on the other hand, finishes the whole journey and achieves a 40% lower return-to-start-point error, as visualized in Fig. 18, consistent with our hypothesis.
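For concreteness, the following sketch shows one common way to extract a surfel from a voxel via PCA, in the spirit of the surfel features used here; the minimum point count and the planarity ratio are assumed tuning values, not the parameters of our system.

```cpp
// Surfel-extraction sketch: fit a plane to the points of a voxel by
// eigen-decomposing their covariance; the voxel becomes a surfel when
// the smallest eigenvalue (the "thickness") is small relative to the
// in-plane spread.
#include <optional>
#include <vector>
#include <Eigen/Dense>

struct Surfel {
  Eigen::Vector3d center;
  Eigen::Vector3d normal;  // eigenvector of the smallest eigenvalue
};

std::optional<Surfel> fitSurfel(const std::vector<Eigen::Vector3d>& pts) {
  if (pts.size() < 5) return std::nullopt;  // assumed minimum support
  Eigen::Vector3d mean = Eigen::Vector3d::Zero();
  for (const auto& p : pts) mean += p;
  mean /= static_cast<double>(pts.size());
  Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
  for (const auto& p : pts) cov += (p - mean) * (p - mean).transpose();
  cov /= static_cast<double>(pts.size());
  // Eigenvalues come back in ascending order.
  Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(cov);
  // Planarity test with an assumed 0.1 ratio threshold.
  if (es.eigenvalues()(0) > 0.1 * es.eigenvalues()(1)) return std::nullopt;
  return Surfel{mean, es.eigenvectors().col(0)};
}
// A point-to-plane residual as in (17) is then n^T (T p - center).
```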
The third test seeks to understand the contribution of the GNSS re-initialization stage. The sequence used is the same as in the underground tunnel test. We denote by ours w/o RE our system utilizing GNSS pose graph optimization without the re-initialization stage. When re-initialization is not included, our system directly integrates the pseudorange and carrier-phase measurements into the joint state estimation when the integer ambiguity is at a fixed solution. Although the maintained pose graph is updated upon the GNSS measurements, the accumulated errors are not eliminated completely, as pictured in Fig. 19(a). This is because ours w/o RE uses the same coordinate transformation matrix along the whole journey, which is acceptable when GNSS is always available or only temporarily unavailable. However, the impact is not negligible in the case of longer dropouts, where the SLAM algorithm drifts during the absence of GNSS measurements. A large deviation then exists between the pose estimates of SLAM and GNSS, leading to not fully compensated states, as visualized in Fig. 19(a). If we manually increase the weight of the GNSS measurements, the outdoor drift can be fully corrected; however, the trajectory is then not coherent, with several bulges caused by this "forced" optimization, as visualized in Fig. 19(b). This inconsistent state also leads to large map distortion, especially in the roll direction, where the tunnel is rotated by more than 30 degrees.

Figure 14: The consistency test across different platforms and trials; the three insets denote the start, middle, and end of the journey.

Figure 15: The trajectory comparison of the LiDAR-contribution ablation study. The two insets denote the errors caused by irregular motions of ours w/o RS.

Figure 16: Visualization of the mapping difference without the Robosense RS-16 LiDAR. (a) is the result of our approach, whereas (b) is from ours w/o RS. The insets in (a) and (b) present the mapping details of a tunnel branch.

Figure 17: Visualization of the extracted feature points in (a) and surfels in (b); the top two are on the ground, while the bottom two are in the tunnel. The red and green points in (a) are the extracted planar and edge points. The markers in (b) are the extracted surfels.

Figure 18: The trajectory comparison of surfel-based and feature-point based LiDAR SLAM. The three insets denote the starting area, a tunnel branch, and places where the feature-point based approach has large outliers.

Figure 19: The trajectory comparison of our approach with and without the GNSS re-initialization stage. Ours w/o RE has a lower GNSS weight in (a), where outdoor drifts are not fully compensated. The GNSS weight is higher in (b), where many bulges exist on the path; the upper inset in (b) shows an example of such bulges.

### _Runtime Efficiency_

Since our system is designed to provide state estimation results to autonomous mine service vehicles, runtime efficiency is of prime concern for real-world deployment. We therefore drive the vehicle into the tunnel, travel for 4067 s, and record the per-module time consumption along the journey. Note that our algorithm only utilizes two of the six cores of the ARM Carmel CPU (the internal CPU of the Xavier NX). We plot three key processes of the LiDAR odometry in Fig. 20(a). The preprocessing stage includes outlier and distortion removal, downsampling, and voxelization. The surfel extraction and association stages denote the time consumption of surfel feature extraction and the surfel matching process.
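Per-module timings like those in Fig. 20 can be gathered with a simple scoped timer; the sketch below is an illustrative instrumentation helper, not part of the SLAM pipeline itself.

```cpp
// Scoped wall-clock timer that prints and accumulates per-stage
// runtimes (stage names are illustrative).
#include <chrono>
#include <cstdio>
#include <string>
#include <unordered_map>
#include <utility>

class ScopedTimer {
 public:
  explicit ScopedTimer(std::string stage)
      : stage_(std::move(stage)), t0_(std::chrono::steady_clock::now()) {}
  ~ScopedTimer() {
    double ms = std::chrono::duration<double, std::milli>(
                    std::chrono::steady_clock::now() - t0_).count();
    totals()[stage_] += ms;  // accumulate for end-of-run averages
    std::printf("[%s] %.2f ms\n", stage_.c_str(), ms);
  }
  static std::unordered_map<std::string, double>& totals() {
    static std::unordered_map<std::string, double> t;
    return t;
  }
 private:
  std::string stage_;
  std::chrono::steady_clock::time_point t0_;
};

// Usage inside the odometry loop:
//   { ScopedTimer t("preprocess"); /* outlier & distortion removal */ }
//   { ScopedTimer t("surfel_extraction"); /* ... */ }
//   { ScopedTimer t("association"); /* surfel matching */ }
```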
The LiDAR odometry reaches real-time performance even on this embedded ARM CPU, with an average runtime of 27.20 ms. Since real-time map rendering is too time-consuming for this power-limited platform, we disable the visualization thread and plot the state optimization time usage in Fig. 20(b). The time used for optimization increases continuously along the journey due to the growing pose graph. Besides, when the vehicle drives out of the tunnel, the loop detection and re-initialization thread is invoked, leading to a large increase after 3500 s. The average time consumption inside the tunnel is 8.12 ms, whereas that after leaving the tunnel is 29.57 ms. Note that the time consumption of the IMU/odometer preintegration is negligible, and we do not consider it in the experiment.

We also record the CPU and memory usage along the journey. Since our LiDAR odometry and state optimization run in different threads, we continuously record the per-thread statistics, as shown in Fig. 21. The CPU and memory usage of the LiDAR odometry thread is almost steady for the entire journey; with 16 GB of total memory, the average memory consumption of the LiDAR odometry is 1.31 GB. The GTSAM-based graph optimization, on the other hand, occupies steadily more computational resources due to the increasing graph size.

## V Conclusion

In this paper, we proposed a localization and mapping framework for autonomous mine service vehicles, achieving accurate and robust pose estimation in such scenes. Our system integrates measurements from two LiDARs, an IMU, wheel odometers, and optionally a GNSS receiver in a tightly-coupled manner. The front-end consists of two parallel-running ESKF-based LiDAR-inertial odometries. Different from common algorithms utilizing feature points, we extract surfel elements for scan-to-scan registration. Pose results from the different estimation engines are jointly optimized at the back-end using pose graph optimization. To fully alleviate the drift accumulated over long GNSS dropouts in the tunnel, we utilize a loop-detection based re-initialization process for state alignment. The proposed method has been extensively validated in real-world mine environments, with acceptable accuracy in most scenarios. In addition, our system has been successfully deployed on several autonomous mine service vehicles for state estimation.

There are several directions for future research. Check-point based mapping evaluation in tunnels is desirable, as it helps to understand the system's mapping performance. Another research direction concerns developing a safer and easier-to-use platform for large-scale deployment; many engineering aspects, such as an explosion-proof shell and long-duration continuous operation, need to be considered.

Fig. 20: The time consumption of the LiDAR odometry part of our system in (a) and the state optimization part in (b).

Fig. 21: The CPU and memory usage of the LiDAR odometry thread and the state optimization thread in (a) and (b).

## Acknowledgment

We would like to thank the Suzhou plusgo Co., Ltd and the Tsinghua University Suzhou Automotive Research Institute for program support and data collection.
## References

* D. He, W. Xu, and F. Zhang, "Embedding manifold structures into Kalman filters," arXiv e-prints, 2021.
* W. Xu and F. Zhang, "FAST-LIO: A fast, robust LiDAR-inertial odometry package by tightly-coupled iterated Kalman filter," IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 3317-3324, 2021.
* C. Hertzberg, R. Wagner, U. Frese, and L. Schröder, "Integrating generic sensor fusion algorithms with sound state representations through encapsulation of manifolds," Information Fusion, vol. 14, no. 1, pp. 57-77, 2013.
* C. Park, P. Moghadam, J. L. Williams, S. Kim, S. Sridharan, and C. Fookes, "Elasticity meets continuous-time: Map-centric dense 3D LiDAR SLAM," IEEE Transactions on Robotics, vol. 38, no. 2, pp. 978-997, 2021.
* C. Park, S. Kim, P. Moghadam, C. Fookes, and S. Sridharan, "Probabilistic surfel fusion for dense LiDAR mapping," in Proceedings of the IEEE International Conference on Computer Vision Workshops, 2017, pp. 2418-2426.
* J. Lin, C. Zheng, W. Xu, and F. Zhang, "R2LIVE: A robust, real-time, LiDAR-inertial-visual tightly-coupled state estimator and mapping," arXiv preprint arXiv:2102.12400, 2021.
* L. Zhou, S. Wang, and M. Kaess, "π-LSAM: LiDAR smoothing and mapping with planes," in 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 5751-5757.
* J. Zhang, M. Kaess, and S. Singh, "On degeneracy of optimization-based state estimation problems," in 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 809-816.
* J. Zhang and S. Singh, "Laser-visual-inertial odometry and mapping with high robustness and low drift," Journal of Field Robotics, vol. 35, no. 8, pp. 1242-1264, 2018.
* J. Zhang and S. Singh, "LOAM: Lidar odometry and mapping in real-time," in Robotics: Science and Systems, 2014.
**Yusheng Wang** received the B.Eng. degree in navigation engineering from Wuhan University, Wuhan, China, in 2016, and the M.S. degree in geodesy engineering from Stuttgart University, Stuttgart, Germany, in 2018. He is currently working towards the Ph.D. degree with the GNSS Research Center, Wuhan University, under the supervision of Prof. Yidong Lou. He worked as a software engineer at Wuhan In-Driving Co., Ltd from 2018 to 2020, where he was in charge of the localization department. He is a co-founder of Beijing Lishedachuan Co., Ltd and also a senior SLAM engineer at CHCNAV. His research interests include sensor fusion, SLAM in industrial applications, and computer vision.

**Yidong Lou** received the B.Eng. degree in geodesy engineering from Wuhan University, Wuhan, China, in 2001, the M.S. degree in geodesy engineering from Wuhan University in 2004, and the Ph.D. degree in geodesy engineering from Wuhan University in 2008. He is currently a professor with the GNSS Research Center, Wuhan University. His research focuses on the theoretical methods and software of GNSS real-time high-precision data processing, as well as meteorological applications. His research findings have been successfully applied to the major project "National Beidou Ground-based Augmentation System Development and Construction", which has improved the wide-area real-time positioning accuracy of Beidou from the meter level to the centimeter level, supporting the innovative applications of a nationwide network service.
**Weiwei Song** received the B.Eng., M.S., and Ph.D. degrees in geodesy engineering from Wuhan University, Wuhan, China, in 2004, 2007, and 2011, respectively. He is currently a professor with the GNSS Research Center, Wuhan University. His research interests include sensor fusion in industrial applications and high-precision GNSS localization.

**Bing Zhan** received the B.Eng. degree in navigation engineering from Wuhan University, Wuhan, China, in 2016. He then worked as a product manager at CHCNAV, where he is now a senior product manager in the monitoring department.

**Feihuang Xia** received the B.Sc. degree from Zhejiang University, Zhejiang, China, in 2017 and the Ph.D. degree from Peking University in 2021. He is currently with Beijing Lishedachuan Co., Ltd. His research interests are manifold mathematics and SLAM.

**Qigeng Duan** received the B.Eng. degree from the China University of Geosciences, Beijing, China, in 2017. He is currently working towards the Ph.D. degree with the Department of Geography and Resource Management, The Chinese University of Hong Kong, under the supervision of Prof. Hui Lin. He worked as a software engineer at NavInfo Co., Ltd from 2017 to 2018. He is the founder of Beijing Lishedachuan Co., Ltd. His research interests include SLAM, 3D reconstruction, and digital twins.
Multi-modal sensor integration has become a crucial prerequisite for real-world navigation systems. Recent studies have reported successful deployments of such systems in many fields. However, navigation tasks in mine scenes remain challenging due to satellite signal dropouts, degraded perception, and observation degeneracy. To solve this problem, we propose a LiDAR-inertial odometry method in this paper, utilizing both a Kalman filter and graph optimization. The front-end consists of multiple parallel-running LiDAR-inertial odometries, where the laser points, IMU, and wheel odometer information are tightly fused in an error-state Kalman filter. Instead of the commonly used feature points, we employ surface elements (surfels) for registration. The back-end constructs a pose graph and jointly optimizes the pose estimation results from the inertial odometry, the LiDAR odometries, and the global navigation satellite system (GNSS). Since the vehicle operates for a long time inside the tunnel, the accumulated drift may not be fully compensated by the GNSS measurements alone. We therefore leverage a loop-closure based re-initialization process to achieve full alignment. In addition, the system robustness is improved by handling data loss, stream inconsistency, and estimation errors. The experimental results show that our system tolerates long-period degeneracy well through the cooperation of different LiDARs and surfel registration, achieving meter-level accuracy even for tens of minutes of operation during GNSS dropouts.

Index Terms—SLAM, multi-modal fusion, mine service vehicle.
# Do we need dense matter equation of state in curved spacetime for neutron stars?

Jianing Li, Tao Guo, Jiaxing Zhao, and Lianyi He

Department of Physics, Tsinghua University, Beijing 100084, China

November 5, 2021

## I Introduction

Neutron stars could be among the densest objects in our universe. Their masses are estimated to lie between the Chandrasekhar limit \(1.4\ M_{\odot}\) and \(2.16\ M_{\odot}\) [1; 2; 3], with \(M_{\odot}\) the solar mass. The most massive neutron star observed so far is the one in the binary system PSR J0740+6620, consisting of a neutron star and a white dwarf [4]. The mass of the neutron star in this binary system is reported to be \(2.08\pm 0.07\ M_{\odot}\), close to the upper mass limit [5]. Neutron stars are regarded as natural laboratories for studying the many-body physics of dense strong-interaction matter. They are usually thought to be composed of neutron matter at a few times the nuclear saturation density, with a small amount of protons and leptons to ensure charge neutrality and beta equilibrium. It is also conjectured that, with increasing matter density, deconfined quark matter may emerge in the core of neutron stars [6]. A strangeness component may also appear, such as hyperons and even strange quark matter [7; 8; 9].

While numerous observations of neutron stars have been accumulated, plenty of unresolved issues remain. For example, the emergence of hyperons seems inevitable if the matter becomes sufficiently dense. However, this softens the equation of state (EoS) and makes the largest mass predicted by theory smaller than the Chandrasekhar limit. This is the so-called hyperon puzzle [10], which may originate from the obscurity of the interactions in the many-hyperon system [11; 12]. It is also debated whether quark matter exists in neutron stars [2; 13]. This is related to the theoretical issue of the transition from hadronic matter to quark matter. To resolve these issues, one of the top priorities is to compute an accurate EoS of dense strong-interaction matter. On the theoretical side, plenty of phenomenological models of the nuclear force [14; 15; 16; 17] have been used to predict the EoS of dense nuclear matter. Quantum field theory is also a powerful tool for calculating the EoS of relativistic dense matter [18].

On the other hand, to predict the structure of (static) neutron stars, we solve the Tolman-Oppenheimer-Volkoff (TOV) equation [19; 20], with the dense matter EoS as an input. However, the EoS of dense matter is usually computed using quantum many-body theory in flat spacetime, and the possible curved-spacetime effect induced by the strong gravitation in neutron stars is not taken into account at all. It therefore seems discordant to put a dense matter EoS computed in flat spacetime into the TOV equation. A question naturally arises: Can we use the dense matter EoS computed in flat spacetime to study neutron stars? If the curved-spacetime effect really influenced the dense matter EoS, it would increase the complexity of studying dense matter through neutron stars. Recently, some works have reported that a dense matter EoS computed using quantum field theory in curved spacetime would make a big difference [21; 22].
They found that the gravitational time dilation effect leads to a significant increase of the maximum mass of neutron stars. In this work, however, we clarify that to study the hydrostatic equilibrium of dense matter within the framework of general relativity and relativistic fluid dynamics, the grand canonical EoS of dense matter, i.e., the pressure \\(p\\) as a function of the temperature \\(T\\) and the chemical potential \\(\\mu\\), \\(p(T,\\mu)\\), should be the same as that computed in flat spacetime. We show that this is a requirement from the local thermodynamic relations and the conservation of the energy and momentum of the relativistic fluid. The gravitation influences the pressure \\(p\\) only through enhancing the temperature \\(T\\) and the chemical potential \\(\\mu\\), known as Tolman's law [23; 24] and Klein's law [25]. Hence the theoretical framework of the TOV equation and relativistic fluid dynamics with an EoS determined in flat spacetime is self-consistent. The paper is organized as follows. In Sec. II we prove that the dense matter EoS used to study the hydrostatic equilibrium in a static and spherical star should be the same as that in flat spacetime. We generalize the proof to general static spacetime in Sec. III. In Sec. IV, we convert the TOV equation into a grand canonical version so that the grand canonical EoS can be used as a direct input. We demonstrate the solution and visualize the gravitational effect on the baryon chemical potential by using the Walecka model. We summarize in Sec. V. The natural units \\(c=\\hbar=k_{\\rm B}=1\\) are used throughout. ## II The grand canonical EoS in local thermal equilibrium Consider isolated dense matter in hydrostatic equilibrium. A curved spacetime is created according to general relativity and we assume that it is spherically symmetric. The line element \\(\\mathrm{d}s^{2}=g_{\\mu\\nu}\\mathrm{d}x^{\\mu}\\mathrm{d}x^{\\nu}\\) can be written as \\[\\mathrm{d}s^{2}=-e^{2\\Phi(r)}\\mathrm{d}t^{2}+e^{2\\Psi(r)}\\mathrm{d}r^{2}+r^{2}\\mathrm{d}\\theta^{2}+r^{2}\\sin^{2}\\theta\\mathrm{d}\\phi^{2}. \\tag{1}\\] The spacetime metric reads explicitly \\[g_{tt} =-e^{2\\Phi(r)},\\quad g_{rr}=e^{2\\Psi(r)},\\] \\[g_{\\theta\\theta} =r^{2},\\quad g_{\\phi\\phi}=r^{2}\\sin^{2}\\theta,\\] \\[g_{\\mu\\nu} =0\\quad\\mathrm{for}\\ \\mu\\neq\\nu. \\tag{2}\\] The dense matter can be described by relativistic fluid dynamics. For a relativistic fluid, the energy-momentum tensor can be written as \\[T_{\\mu\\nu}=pg_{\\mu\\nu}+\\left(p+\\varepsilon\\right)U_{\\mu}U_{\\nu}+\\pi_{\\mu\\nu}, \\tag{3}\\] with \\(p\\) the isotropic pressure and \\(\\varepsilon\\) the proper energy density. For hydrostatic equilibrium, the transport terms in \\(\\pi_{\\mu\\nu}\\) do not contribute and can be neglected from now on. The velocity four-vector \\(U^{\\mu}\\) is defined so that \\(g^{\\mu\\nu}U_{\\mu}U_{\\nu}=-1\\). Since the fluid is at rest, we take \\[U_{t}=\\sqrt{-g_{tt}}=e^{\\Phi},\\quad U_{r}=U_{\\theta}=U_{\\phi}=0. \\tag{4}\\] The matter profile and the spacetime metric can be determined by solving Einstein's field equation \\(G_{\\mu\\nu}=8\\pi GT_{\\mu\\nu}\\), with \\(G\\) the gravitational constant. Computing the Einstein tensor \\(G_{\\mu\\nu}\\) we obtain a number of equations [26; 27]. The \\(tt\\)-component gives \\[e^{2\\Psi(r)}=\\left(1-\\frac{2Gm}{r}\\right)^{-1}, \\tag{5}\\] where \\[m(r)\\equiv\\int_{0}^{r}4\\pi r'^{2}\\varepsilon(r')\\,\\mathrm{d}r' \\tag{6}\\] can be interpreted as the total mass contained inside radius \\(r\\).
The \\(rr\\)-component gives \\[\\frac{\\mathrm{d}\\Phi\\left(r\\right)}{\\mathrm{d}r}=\\frac{e^{2\\Psi(r)}}{r^{2}}G\\left(m+4\\pi r^{3}p\\right). \\tag{7}\\] The third equation can be derived from the \\(\\theta\\theta\\)-component or \\(\\phi\\phi\\)-component. However, it is more convenient to use the continuity equation of the energy-momentum tensor, \\(\\nabla_{\\mu}T^{\\mu\\nu}=0\\), which is guaranteed by Einstein's field equation. The only nontrivial equation is given by the \\(\\nu=1\\) (or \\(\\nu=r\\)) component. A direct calculation gives \\[\\nabla_{\\mu}T^{\\mu 1}=e^{-2\\Psi(r)}\\left[\\frac{\\mathrm{d}p}{\\mathrm{d}r}+(p+\\varepsilon)\\frac{\\mathrm{d}\\Phi\\left(r\\right)}{\\mathrm{d}r}\\right]. \\tag{8}\\] Summarizing the above results, we finally arrive at the famous Tolman-Oppenheimer-Volkoff equation \\[\\frac{\\mathrm{d}p}{\\mathrm{d}r}=-\\frac{G(p+\\varepsilon)(m+4\\pi r^{3}p)}{r^{2}\\left(1-\\frac{2Gm}{r}\\right)}. \\tag{9}\\] This equation is normally solved by using the dense matter EoS of the form \\(p=p(\\varepsilon)\\) as an input. However, in this form, it is not quite clear whether and how the gravitational effect on the EoS should be taken into account. Actually, we normally use the dense matter EoS determined in flat spacetime or on Earth. On the other hand, theorists are good at computing the grand canonical EoS \\(p=p(T,\\mu)\\) by using finite temperature field theory in flat spacetime [18]. Some recent works have tried to compute the grand canonical EoS based on the statistical mechanics of quantum fields in curved spacetime [21; 22]. Within their approach, the gravitational time dilation effect explicitly influences the grand canonical EoS, i.e., \\(p(T,\\mu)\\) is different at different positions in the gravitational field. This leads to a significant increase of the maximum mass limit of neutron stars, in contrast to previous predictions based on the dense matter EoS determined in flat spacetime. In the following, we will discuss how the local thermodynamic relations and the energy-momentum conservation in fluid dynamics (as guaranteed by Einstein's field equation) constrain the grand canonical EoS \\(p(T,\\mu)\\), or, in other words, what kind of EoS \\(p(T,\\mu)\\) is compatible with the TOV equation and local thermal equilibrium. The dense matter described by relativistic fluid dynamics is composed of many sufficiently small fluid elements. On the other hand, each small fluid element should contain sufficiently many degrees of freedom so that the thermodynamic limit and local thermal equilibrium can be reached. In curved spacetime, each fluid element is described by local thermodynamic variables in the local rest frame of the fluid element. According to the equivalence principle, these local thermodynamic variables should obey the fundamental laws of thermodynamics [23; 24; 25; 28; 29; 30]. Assuming that the relativistic fluid may carry several conserved charges, we introduce the corresponding chemical potentials \\(\\mu_{1},\\mu_{2},\\ldots\\), denoted by \\(\\{\\mu_{i}\\}\\) for convenience. The grand canonical EoS of a fluid element at the position \\((r,\\theta,\\phi)\\) can be formally expressed as \\[p=p(T,\\{\\mu_{i}\\};r). \\tag{10}\\] Here we first assume that the EoS may be different at different positions in curved spacetime, i.e., that gravitation directly influences the EoS. Because of the isotropy, the explicit position dependence can be realized only through the radius \\(r\\).
Note that \\(T\\) and \\(\\mu_{i}\\) are the local temperature and chemical potentials of the fluid element located at the position \\((r,\\theta,\\phi)\\), i.e., \\[T=T(r),\\quad\\mu_{i}=\\mu_{i}(r). \\tag{11}\\] If local thermal equilibrium is reached, the local thermodynamic quantities should satisfy the fundamental thermodynamic relation \\[\\varepsilon=Ts+\\sum_{i}\\mu_{i}n_{i}-p, \\tag{12}\\] where the entropy density \\(s\\) and the number density \\(n_{i}\\) can be evaluated from the EoS, \\[s=\\frac{\\partial p(T,\\{\\mu_{i}\\};r)}{\\partial T},\\quad n_{i}=\\frac{\\partial p(T,\\{\\mu_{i}\\};r)}{\\partial\\mu_{i}}. \\tag{13}\\] For a relativistic fluid in hydrostatic equilibrium, the conservation of energy and momentum, \\(\\nabla_{\\mu}T^{\\mu\\nu}=0\\), gives \\[\\frac{\\mathrm{d}p}{\\mathrm{d}r}=-(p+\\varepsilon)\\frac{\\mathrm{d}\\Phi}{\\mathrm{d}r}. \\tag{14}\\] Using the fundamental thermodynamic relation (12), we arrive at \\[\\frac{\\mathrm{d}p}{\\mathrm{d}r}=-\\left(Ts+\\sum_{i}\\mu_{i}n_{i}\\right)\\frac{\\mathrm{d}\\Phi}{\\mathrm{d}r}. \\tag{15}\\] Further using the thermodynamic relation (13), we obtain a functional equation \\[\\frac{\\mathrm{d}p}{\\mathrm{d}r}=\\frac{\\partial p(T,\\{\\mu_{i}\\};r)}{\\partial T}\\left(-T\\frac{\\mathrm{d}\\Phi}{\\mathrm{d}r}\\right)+\\sum_{i}\\frac{\\partial p(T,\\{\\mu_{i}\\};r)}{\\partial\\mu_{i}}\\left(-\\mu_{i}\\frac{\\mathrm{d}\\Phi}{\\mathrm{d}r}\\right), \\tag{16}\\] which is valid at arbitrary radius \\(r\\). On the other hand, the standard chain rule gives \\[\\frac{\\mathrm{d}p}{\\mathrm{d}r}=\\frac{\\partial p(T,\\{\\mu_{i}\\};r)}{\\partial r}+\\frac{\\partial p(T,\\{\\mu_{i}\\};r)}{\\partial T}\\frac{\\mathrm{d}T}{\\mathrm{d}r}+\\sum_{i}\\frac{\\partial p(T,\\{\\mu_{i}\\};r)}{\\partial\\mu_{i}}\\frac{\\mathrm{d}\\mu_{i}}{\\mathrm{d}r}. \\tag{17}\\] Comparing the above two equations at arbitrary radius \\(r\\), we find \\[\\frac{\\partial p(T,\\{\\mu_{i}\\};r)}{\\partial r}=0. \\tag{18}\\] Thus we conclude that _the grand canonical EoS does not depend explicitly on the position_. Meanwhile, we obtain two other relations \\[\\frac{\\mathrm{d}T}{\\mathrm{d}r}=-T\\frac{\\mathrm{d}\\Phi}{\\mathrm{d}r},\\] \\[\\frac{\\mathrm{d}\\mu_{i}}{\\mathrm{d}r}=-\\mu_{i}\\frac{\\mathrm{d}\\Phi}{\\mathrm{d}r}. \\tag{19}\\] The solution of these two equations can be expressed as \\[T(r)=T_{\\infty}e^{-\\Phi(r)},\\] \\[\\mu_{i}(r)=\\mu_{i}^{\\infty}e^{-\\Phi(r)}. \\tag{20}\\] They are nothing but Tolman's law for the temperature [23; 24] and Klein's law for the chemical potentials [25] in a static gravitational field. Here the constants \\(T_{\\infty}\\) and \\(\\mu_{i}^{\\infty}\\) are interpreted as the temperature and chemical potentials measured by an observer at infinity (\\(r\\rightarrow\\infty\\)). The above laws also guarantee that the fugacities \\(z_{i}=\\exp(\\mu_{i}/T)\\) are position independent, as required by vanishing heat flow and diffusion for a system in local thermal equilibrium [31]. Summarizing the above results, we conclude that the grand canonical EoS should be uniform in a curved spacetime created by dense matter in hydrostatic equilibrium, i.e., \\[p=p(T,\\{\\mu_{i}\\}). \\tag{21}\\] The gravitation influences the pressure only through the redshift of the temperature and chemical potentials, i.e., Tolman's law and Klein's law.
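As a quick consistency check, one can substitute the solution (20) back into (19): \\[\\frac{\\mathrm{d}T}{\\mathrm{d}r}=\\frac{\\mathrm{d}}{\\mathrm{d}r}\\left[T_{\\infty}e^{-\\Phi(r)}\\right]=-T_{\\infty}e^{-\\Phi(r)}\\frac{\\mathrm{d}\\Phi}{\\mathrm{d}r}=-T\\frac{\\mathrm{d}\\Phi}{\\mathrm{d}r},\\] and identically \\(\\mathrm{d}\\mu_{i}/\\mathrm{d}r=-\\mu_{i}\\,\\mathrm{d}\\Phi/\\mathrm{d}r\\) for the chemical potentials, so the exponential redshift profiles are indeed the general solution.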
Since the spacetime is asymptotically flat, we can determine the grand canonical EoS \\(p(T,\\{\\mu_{i}\\})\\) at \\(r\\rightarrow\\infty\\); that is, _the grand canonical EoS can be essentially determined in flat spacetime_. In previous works [21; 22], the gravitational effect is attributed to the gravitational potential \\(\\Phi(r)\\), i.e., the gravitational time dilation. Since \\(\\Phi(r)\\) is a single-valued function of \\(r\\), it is equivalent to assume that the EoS may depend explicitly on \\(\\Phi\\), i.e., \\[p=p(T,\\{\\mu_{i}\\};\\Phi). \\tag{22}\\] The conservation of energy and momentum, Eq. (14), can be rewritten as \\[\\frac{\\mathrm{d}p}{\\mathrm{d}\\Phi}=-p-\\varepsilon. \\tag{23}\\] Using the thermodynamic relations, we obtain \\[\\frac{\\mathrm{d}p}{\\mathrm{d}\\Phi}=-T\\frac{\\partial p(T,\\{\\mu_{i}\\};\\Phi)}{\\partial T}-\\sum_{i}\\mu_{i}\\frac{\\partial p(T,\\{\\mu_{i}\\};\\Phi)}{\\partial\\mu_{i}}. \\tag{24}\\] On the other hand, the standard chain rule gives \\[\\frac{\\mathrm{d}p}{\\mathrm{d}\\Phi}=\\frac{\\partial p(T,\\{\\mu_{i}\\};\\Phi)}{\\partial T}\\frac{\\mathrm{d}T}{\\mathrm{d}\\Phi}+\\sum_{i}\\frac{\\partial p(T,\\{\\mu_{i}\\};\\Phi)}{\\partial\\mu_{i}}\\frac{\\mathrm{d}\\mu_{i}}{\\mathrm{d}\\Phi}+\\frac{\\partial p(T,\\{\\mu_{i}\\};\\Phi)}{\\partial\\Phi}. \\tag{25}\\] Therefore, we have the following identities \\[\\frac{\\partial p(T,\\{\\mu_{i}\\};\\Phi)}{\\partial\\Phi}=0,\\] \\[\\frac{\\mathrm{d}T}{\\mathrm{d}\\Phi}=-T,\\quad\\frac{\\mathrm{d}\\mu_{i}}{\\mathrm{d}\\Phi}=-\\mu_{i}. \\tag{26}\\] The first identity indicates that _the grand canonical EoS does not depend explicitly on \\(\\Phi\\)_. The second and the third equations give the same results as in Eq. (20). We note that the grand canonical EoS computed in previous works [21; 22], which shows an explicit dependence on the gravitational potential \\(\\Phi\\), _is not compatible with the TOV equation and local thermodynamic relations_. ## III Generalization to arbitrary static spacetime Even though the configuration of dense matter in hydrostatic equilibrium is normally spherically symmetric, the results in Sec. II can be generalized to arbitrary static spacetime. Consider a general static curved spacetime. The line element \\(\\mathrm{d}s^{2}=g_{\\mu\\nu}\\mathrm{d}x^{\\mu}\\mathrm{d}x^{\\nu}\\) can be expressed as (\\(x^{0}\\equiv t\\)) \\[\\mathrm{d}s^{2}=g_{00}\\mathrm{d}t^{2}+g_{ij}\\mathrm{d}x^{i}\\mathrm{d}x^{j}. \\tag{27}\\] The metric functions \\(g_{00}\\) and \\(g_{ij}\\) are independent of time but depend in an arbitrary way on the spatial coordinates \\(x^{k}\\) (\\(k=1,2,3\\)). For hydrostatic equilibrium, we take \\[U_{0}=\\sqrt{-g_{00}},\\ \\ \\ U_{1}=U_{2}=U_{3}=0. \\tag{28}\\] We again consider the conservation of energy and momentum, \\(\\nabla_{\\mu}T^{\\mu\\nu}=0\\). The \\(\\nu=k\\) (\\(k=1,2,3\\)) component gives \\[\\frac{\\mathrm{d}p}{\\mathrm{d}x^{k}}=-(p+\\varepsilon)\\frac{\\mathrm{d}\\ln\\sqrt{-g_{00}}}{\\mathrm{d}x^{k}}. \\tag{29}\\] Here we use \\(\\mathrm{d}/\\mathrm{d}x^{k}\\) to denote the derivative with respect to the spatial coordinates, so that it can be distinguished from that with respect to the temperature and chemical potentials. The grand canonical EoS of a fluid element at the position (\\(x^{1},x^{2},x^{3}\\)) can be formally expressed as \\[p=p(T,\\{\\mu_{i}\\};\\{x^{k}\\}).
\\tag{30}\\] Using the local thermodynamic relations, we obtain \\[\\frac{\\mathrm{d}p}{\\mathrm{d}x^{k}}=\\frac{\\partial p(T,\\{\\mu_{i}\\};\\{x^{k}\\})}{\\partial T}\\left(-T\\frac{\\mathrm{d}\\ln\\sqrt{-g_{00}}}{\\mathrm{d}x^{k}}\\right)+\\sum_{i}\\frac{\\partial p(T,\\{\\mu_{i}\\};\\{x^{k}\\})}{\\partial\\mu_{i}}\\left(-\\mu_{i}\\frac{\\mathrm{d}\\ln\\sqrt{-g_{00}}}{\\mathrm{d}x^{k}}\\right). \\tag{31}\\] On the other hand, the standard chain rule gives \\[\\frac{\\mathrm{d}p}{\\mathrm{d}x^{k}}=\\frac{\\partial p(T,\\{\\mu_{i}\\};\\{x^{k}\\})}{\\partial x^{k}}+\\frac{\\partial p(T,\\{\\mu_{i}\\};\\{x^{k}\\})}{\\partial T}\\frac{\\mathrm{d}T}{\\mathrm{d}x^{k}}+\\sum_{i}\\frac{\\partial p(T,\\{\\mu_{i}\\};\\{x^{k}\\})}{\\partial\\mu_{i}}\\frac{\\mathrm{d}\\mu_{i}}{\\mathrm{d}x^{k}}. \\tag{32}\\] Again, comparison of the two results gives the following identities \\[\\frac{\\partial p(T,\\{\\mu_{i}\\};\\{x^{k}\\})}{\\partial x^{k}}=0,\\] \\[\\frac{\\mathrm{d}T}{\\mathrm{d}x^{k}}=-T\\frac{\\mathrm{d}\\ln\\sqrt{-g_{00}}}{\\mathrm{d}x^{k}},\\] \\[\\frac{\\mathrm{d}\\mu_{i}}{\\mathrm{d}x^{k}}=-\\mu_{i}\\frac{\\mathrm{d}\\ln\\sqrt{-g_{00}}}{\\mathrm{d}x^{k}}. \\tag{33}\\] The first identity indicates that _the grand canonical EoS should not depend on the spatial position explicitly in a general static spacetime_. The second and the third identities give Tolman's law and Klein's law in a general static spacetime, \\[T(\\{x^{k}\\})=\\frac{T_{\\infty}}{\\sqrt{-g_{00}(\\{x^{k}\\})}},\\] \\[\\mu_{i}(\\{x^{k}\\})=\\frac{\\mu_{i}^{\\infty}}{\\sqrt{-g_{00}(\\{x^{k}\\})}}. \\tag{34}\\] If the spacetime is asymptotically flat, as is normally the case, this means that the grand canonical EoS \\(p(T,\\{\\mu_{i}\\})\\) is the same as that determined in flat spacetime; otherwise it is not consistent with local thermodynamic relations. In fact, using the flat-spacetime EoS as an input is correct from the viewpoint of general relativity. As shown in Sec. II, the TOV equation is a tensor equation derived from the conservation law of the energy-momentum tensor, and the pressure in the TOV equation is the one appearing in the energy-momentum tensor. The equivalence principle states that any physical law expressed in the form of a tensor equation in special relativity holds in the same form in general relativity [26; 27]. Therefore, the EoS imported into the TOV equation does not need any general-relativistic correction; such corrections ought to be absorbed into the covariant derivative terms instead. Although this validity can be deduced from general relativity, Eq. (33) can only be derived from equilibrium thermodynamics. ## IV Neutron stars from \\(p(\\mu)\\) EoS The TOV equation (9) is convenient to solve if the EoS of the form \\(p=p(\\varepsilon)\\) is known. However, as field theoretical methods normally determine the grand canonical EoS \\(p=p(T,\\{\\mu_{i}\\})\\) directly, it is more convenient to use Eq. (19) and arrive at an alternative version of the TOV equation. For a cold star, we can set \\(T=0\\) and obtain \\[\\begin{cases}\\frac{\\mathrm{d}\\mu_{i}}{\\mathrm{d}r}=-\\mu_{i}\\frac{\\mathrm{d}\\Phi}{\\mathrm{d}r},\\\\ \\frac{\\mathrm{d}\\Phi}{\\mathrm{d}r}=\\frac{G(m+4\\pi r^{3}p)}{r^{2}\\left(1-\\frac{2Gm}{r}\\right)}.\\end{cases} \\tag{35}\\] If we are not interested in the gravitational potential \\(\\Phi\\), we can write \\[\\frac{\\mathrm{d}\\mu_{i}}{\\mathrm{d}r}=-\\frac{G\\mu_{i}(m+4\\pi r^{3}p)}{r^{2}\\left(1-\\frac{2Gm}{r}\\right)}.
\\tag{36}\\] This version can be conveniently solved if the grand canonical EoS \\(p=p(\\{\\mu_{i}\\})\\) at zero temperature is known. The energy density in the mass function \\(m(r)\\) can be expressed as \\[\\varepsilon(\\{\\mu_{i}\\})=\\sum_{i}\\mu_{i}\\frac{\\partial p(\\{\\mu_{i}\\})}{\\partial\\mu_{i}}-p(\\{\\mu_{i}\\}). \\tag{37}\\] For neutron stars, only one chemical potential, the baryon chemical potential \\(\\mu_{\\mathrm{B}}\\), serves as the thermodynamic variable in the EoS. In the following, we adopt the Walecka model [32] to describe the dense matter in neutron stars and demonstrate the solution of the TOV equation (35) with the grand canonical EoS. As clarified in Sec. II, we only need to calculate the EoS from the model in flat spacetime. The Lagrangian density of the Walecka model is given by \\[\\begin{split}\\mathcal{L}_{\\mathrm{W}}=&\\sum_{\\mathrm{N=n,p}}\\bar{\\psi}_{\\mathrm{N}}\\left(i\\gamma^{\\mu}\\partial_{\\mu}-m_{\\mathrm{N}}+g_{\\sigma}\\sigma-g_{\\omega}\\gamma^{\\mu}\\omega_{\\mu}\\right)\\psi_{\\mathrm{N}}\\\\ &+\\frac{1}{2}\\left(\\partial_{\\mu}\\sigma\\partial^{\\mu}\\sigma-m_{\\sigma}^{2}\\sigma^{2}\\right)-U(\\sigma)\\\\ &-\\frac{1}{4}F^{\\mu\\nu}F_{\\mu\\nu}+\\frac{1}{2}m_{\\omega}^{2}\\omega^{\\mu}\\omega_{\\mu},\\end{split} \\tag{38}\\] where \\(F_{\\mu\\nu}=\\partial_{\\mu}\\omega_{\\nu}-\\partial_{\\nu}\\omega_{\\mu}\\) and \\(\\psi_{\\mathrm{N}}\\) (\\(\\mathrm{N=n,p}\\)) denote the nucleon fields with mass \\(m_{\\mathrm{N}}\\). In the present model, isospin symmetry is assumed for the sake of simplicity. The scalar \\(\\sigma\\) meson with mass \\(m_{\\sigma}\\) and the vector \\(\\omega\\) meson with mass \\(m_{\\omega}\\) are introduced to describe the long-range attraction and the short-range repulsion of the nuclear force. This is realized by the two meson-nucleon coupling terms with coupling constants \\(g_{\\sigma}\\) and \\(g_{\\omega}\\). We add a phenomenological potential \\(U(\\sigma)=\\frac{1}{3}bm_{\\mathrm{N}}\\left(g_{\\sigma}\\sigma\\right)^{3}+\\frac{1}{4}c\\left(g_{\\sigma}\\sigma\\right)^{4}\\) to fit the empirically known properties of nuclear matter [18]. To describe the neutron star matter, electrons and even muons should be introduced to guarantee \\(\\beta\\)-equilibrium and charge neutrality. We thus add a term for leptons, \\[\\mathcal{L}_{\\mathrm{lep}}=\\sum_{l=\\mathrm{e},\\mu}\\bar{\\psi}_{l}\\left(i\\gamma^{\\mu}\\partial_{\\mu}-m_{l}\\right)\\psi_{l}. \\tag{39}\\] The partition function of the model can be expressed in the imaginary-time path integral formalism, \\[\\begin{split}\\mathcal{Z}&=\\int\\prod_{\\alpha}[d\\psi_{\\alpha}][d\\bar{\\psi}_{\\alpha}][d\\sigma][d\\omega_{\\mu}]\\\\ &\\times\\exp\\left\\{\\int_{0}^{\\beta}\\mathrm{d}\\tau\\int\\mathrm{d}^{3}\\mathbf{x}\\left[\\mathcal{L}_{\\mathrm{W}}+\\mathcal{L}_{\\mathrm{lep}}+\\sum_{\\alpha}\\mu_{\\alpha}\\psi_{\\alpha}^{\\dagger}\\psi_{\\alpha}\\right]\\right\\},\\end{split} \\tag{40}\\] where \\(\\alpha\\) denotes the fermion species, i.e., \\(\\alpha=\\mathrm{n,p,e},\\mu\\), and \\(\\beta=1/T\\). The four chemical potentials \\(\\mu_{\\alpha}\\) are not independent; they are constrained by \\(\\beta\\)-equilibrium. It is equivalent to introduce only the baryon number \\(\\mathrm{B}=\\sum_{\\mathrm{N=n,p}}\\psi_{\\mathrm{N}}^{\\dagger}\\psi_{\\mathrm{N}}\\) and the electric charge \\(\\mathrm{Q}=\\sum_{s=\\mathrm{p,e},\\mu}\\psi_{s}^{\\dagger}\\psi_{s}\\), which are conserved in the presence of the weak interaction.
Thus we can express the four chemical potentials \\(\\mu_{\\alpha}\\) in terms of the baryon chemical potential \\(\\mu_{\\mathrm{B}}\\) and the electric chemical potential \\(\\mu_{\\mathrm{Q}}\\) as \\[\\mu_{\\mathrm{n}}=\\mu_{\\mathrm{B}},\\quad\\mu_{\\mathrm{p}}=\\mu_{\\mathrm{B}}+\\mu_{\\mathrm{Q}},\\quad\\mu_{\\mathrm{e}}=\\mu_{\\mu}=\\mu_{\\mathrm{Q}}. \\tag{41}\\] For neutron stars mainly composed of neutrons, the electric chemical potential \\(\\mu_{\\mathrm{Q}}\\) is negative. The partition function and the thermodynamic quantities can be conveniently computed within the mean field approximation [18]. At finite density, the nucleons act as sources in the equations of motion for the meson fields, which indicates that finite density generates nonzero expectation values for the scalar and vector meson fields. In the mean field approximation, the effective potential for the static and uniform meson fields \\(\\sigma\\) and \\(\\omega_{0}\\) is given by \\[\\mathcal{V}_{\\mathrm{eff}}(\\sigma,\\omega_{0};T,\\mu_{\\mathrm{B}},\\mu_{\\mathrm{Q}})=\\frac{1}{2}m_{\\sigma}^{2}\\sigma^{2}+U(\\sigma)-\\frac{1}{2}m_{\\omega}^{2}\\omega_{0}^{2}-\\frac{T}{V}\\sum_{n,\\mathbf{k}}\\sum_{\\alpha}\\ln\\det\\left[\\mathcal{S}_{0\\alpha}^{-1}(ik_{n},\\mathbf{k})+\\Sigma_{\\alpha}(\\sigma,\\omega_{0})\\right], \\tag{42}\\] where \\(\\mathcal{S}_{0\\alpha}^{-1}=(ik_{n}+\\mu_{\\alpha})\\gamma^{0}-\\mathbf{\\gamma}\\cdot\\mathbf{k}-m_{\\alpha}\\) is the inverse of the thermal Green's function of free nucleons and leptons, with \\(k_{n}=(2n+1)\\pi T\\) (\\(n\\in\\mathbb{Z}\\)). The quantities \\(\\Sigma_{\\alpha}\\) are defined as \\(\\Sigma_{\\mathrm{n}}=\\Sigma_{\\mathrm{p}}=g_{\\sigma}\\sigma-g_{\\omega}\\gamma^{0}\\omega_{0}\\) and \\(\\Sigma_{\\mathrm{e}}=\\Sigma_{\\mu}=0\\). The physical values of the classical meson fields \\(\\bar{\\sigma}\\) and \\(\\bar{\\omega}_{0}\\) are determined by minimizing the effective potential, \\[\\frac{\\partial\\mathcal{V}_{\\mathrm{eff}}(\\sigma,\\omega_{0})}{\\partial\\sigma}=0,\\quad\\frac{\\partial\\mathcal{V}_{\\mathrm{eff}}(\\sigma,\\omega_{0})}{\\partial\\omega_{0}}=0. \\tag{43}\\] These extremum equations determine \\(\\bar{\\sigma}\\) and \\(\\bar{\\omega}_{0}\\) as functions of the temperature and chemical potentials. Once this is done, the thermodynamic quantities can be obtained. The pressure is given by \\[p(T,\\mu_{\\rm B},\\mu_{\\rm Q})=-\\mathcal{V}_{\\rm eff}(\\bar{\\sigma},\\bar{\\omega}_{0};T,\\mu_{\\rm B},\\mu_{\\rm Q}). \\tag{44}\\] For neutron star matter, electric charge neutrality requires that the net electric charge density vanish, that is, \\[\\frac{\\partial p(T,\\mu_{\\rm B},\\mu_{\\rm Q})}{\\partial\\mu_{\\rm Q}}=0. \\tag{45}\\] Hence the electric chemical potential \\(\\mu_{\\rm Q}\\) is not an independent thermodynamic variable. We should solve the above equation to obtain \\(\\mu_{\\rm Q}=\\mu_{\\rm Q}(T,\\mu_{\\rm B})\\). Therefore, only \\(T\\) and \\(\\mu_{\\rm B}\\) are independent thermodynamic variables and the grand canonical EoS takes the form \\(p=p(T,\\mu_{\\rm B})\\). In static neutron stars, they satisfy Tolman's law \\(T=T_{\\infty}e^{-\\Phi}\\) and Klein's law \\(\\mu_{\\rm B}=\\mu_{\\rm B}^{\\infty}e^{-\\Phi}\\). The temperature of a stable neutron star is typically low and we can set \\(T=0\\). The zero temperature EoS \\(p(\\mu_{\\rm B})\\) is evaluated in Appendix A.
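To make the zero-temperature evaluation concrete, the following minimal Python sketch (not from the original paper) numerically solves the gap equations (A4) together with the charge-neutrality condition (A9) of Appendix A and returns \\(p(\\mu_{\\rm B})\\) via (A1). The SciPy root-finder and the initial guess are illustrative assumptions; a careful implementation would track multiple solution branches and verify convergence.

```python
import numpy as np
from scipy.optimize import fsolve

# Walecka-model parameters quoted in Appendix A (natural units, MeV).
mN, msig, momg = 939.0, 550.0, 783.0
me, mmu = 0.511, 105.66
gsig, gomg = 8.685, 8.646
b, c = 7.950e-3, 6.952e-4

def p0(mu, m):
    """Free Fermi-gas pressure at T = 0, Eq. (A2)."""
    if abs(mu) <= m:
        return 0.0
    k = np.sqrt(mu**2 - m**2)
    return (abs(mu)*(2*mu**2 - 5*m**2)*k
            + 3*m**4*np.arccosh(abs(mu)/m)) / (24*np.pi**2)

def n0(mu, m):
    """Number density, Eq. (A5)."""
    if abs(mu) <= m:
        return 0.0
    return np.sign(mu)*(mu**2 - m**2)**1.5 / (3*np.pi**2)

def nsc(mu, m):
    """Scalar density, Eq. (A5)."""
    if abs(mu) <= m:
        return 0.0
    k = np.sqrt(mu**2 - m**2)
    return m*(abs(mu)*k - m**2*np.arccosh(abs(mu)/m)) / (2*np.pi**2)

def U(s):   # phenomenological sigma potential
    return b*mN*(gsig*s)**3/3.0 + c*(gsig*s)**4/4.0

def dU(s):  # its derivative U'(sigma)
    return b*mN*gsig*(gsig*s)**2 + c*gsig*(gsig*s)**3

def conditions(x, muB):
    """Gap equations (A4) plus charge neutrality (A9)."""
    s, w, muQ = x
    mst = mN - gsig*s                          # effective nucleon mass (A3)
    mun, mup = muB - gomg*w, muB + muQ - gomg*w
    eq_sigma = msig**2*s + dU(s) - gsig*(nsc(mun, mst) + nsc(mup, mst))
    eq_omega = momg**2*w - gomg*(n0(mun, mst) + n0(mup, mst))
    eq_charge = n0(mup, mst) + n0(muQ, me) + n0(muQ, mmu)   # n_Q = 0
    return eq_sigma, eq_omega, eq_charge

def pressure(muB, guess=(30.0, 20.0, -80.0)):
    """Zero-T neutron-star-matter pressure p(mu_B), Eq. (A1), in MeV^4.
    The initial guess is illustrative; convergence should be checked."""
    s, w, muQ = fsolve(conditions, guess, args=(muB,))
    mst = mN - gsig*s
    mun, mup = muB - gomg*w, muB + muQ - gomg*w
    return (p0(mun, mst) + p0(mup, mst) + p0(muQ, me) + p0(muQ, mmu)
            - 0.5*msig**2*s**2 - U(s) + 0.5*momg**2*w**2)
```

For example, `pressure(1100.0)` would return the pressure at \\(\\mu_{\\rm B}=1100\\) MeV; scanning \\(\\mu_{\\rm B}\\) and tabulating the pairs \\((\\mu_{\\rm B},p)\\) yields the grand canonical EoS used in the TOV integration below.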
We use this EoS to solve the TOV equation (35) and visualize the gravitational effect on the baryon chemical potential \\(\\mu_{\\rm B}\\). Briefly speaking, starting with a given local baryon chemical potential \\(\\mu_{\\rm B}^{c}\\) at the core (\\(r=0\\)), we obtain the profiles of the pressure and the baryon chemical potential from the TOV equation (36). The pressure decreases to zero at the surface, which determines the mass \\(M\\) and radius \\(R\\) of the neutron star and hence the mass-radius relation as displayed in Fig. 1. The gravitational potential at the surface is then known as \\(\\Phi_{\\rm s}=\\Phi(R)=\\frac{1}{2}\\ln(1-2GM/R)\\). The baryon chemical potential satisfies Klein's law \\(\\mu_{\\rm B}(r)=\\mu_{\\rm B}^{\\infty}e^{-\\Phi(r)}\\). The constant \\(\\mu_{\\rm B}^{\\infty}\\) can be determined as \\(\\mu_{\\rm B}^{\\infty}=\\mu_{\\rm B}^{\\rm s}e^{\\Phi_{\\rm s}}\\), where \\(\\mu_{\\rm B}^{\\rm s}\\) is the baryon chemical potential at the surface. Note that \\(\\mu_{\\rm B}^{\\rm s}\\) is purely determined by the EoS via \\(p(\\mu_{\\rm B}^{\\rm s})=0\\), i.e., it is the critical chemical potential that separates the vacuum and the matter phase. With the known profile of the baryon chemical potential \\(\\mu_{\\rm B}(r)\\) computed from the TOV equation (36), the profile of the gravitational potential \\(\\Phi(r)\\) in the interior of neutron stars can be determined. The result is shown in Fig. 2. To visualize how the gravitation enhances the baryon chemical potential in the interior of neutron stars, we display the baryon chemical potential \\(\\mu_{\\rm B}^{c}\\) at the core, the baryon chemical potential \\(\\mu_{\\rm B}^{\\rm s}\\) at the surface, and the redshifted one \\(\\mu_{\\rm B}^{\\infty}\\) in Fig. 3. It is interesting to see that while a large central baryon density \\(n_{\\rm B}^{c}\\) dramatically enhances the gravitational effect at the core, the gravitational potential at the surface, \\(\\Phi_{\\rm s}\\), is almost a constant for sufficiently large central density (\\(\\Phi_{\\rm s}\\simeq-0.5\\) for \\(n_{\\rm B}^{c}>1\\) fm\\({}^{-3}\\) in this model). As a result, the redshifted chemical potential \\(\\mu_{\\rm B}^{\\infty}\\) also reaches a plateau at large central density, regardless of the large baryon chemical potential \\(\\mu_{\\rm B}^{c}\\) at the core. The difference between \\(\\mu_{\\rm B}^{\\infty}\\) and \\(\\mu_{\\rm B}^{c}\\) (or between \\(\\mu_{\\rm B}^{\\rm s}\\) and \\(\\mu_{\\rm B}^{c}\\)) can be understood as an enhancement purely induced by the gravitational effect.

Figure 1: The mass-radius relation of neutron stars (a) and the relation between the neutron star mass and the central baryon density (b) calculated from the Walecka model.

Figure 2: The gravitational potential \\(\\Phi\\) in the interior of neutron stars computed from the Walecka model. (a) \\(\\Phi\\) at the surface and at the core as functions of the central baryon density \\(n_{\\rm B}^{c}\\). The colored dashed lines with arrows denote the corresponding baryon chemical potentials \\(\\mu_{\\rm B}^{c}\\) at the core. (b) The profile \\(\\Phi(r)\\) (thin dashed lines) in the interior of neutron stars with different central densities. The thick black line denotes the potential at the surface.
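For illustration (not part of the original paper), the integration procedure just described can be sketched in a few lines of Python. The unit-conversion constants, the first-order Euler stepping, and the finite-difference evaluation of \\(\\varepsilon=\\mu\\,\\mathrm{d}p/\\mathrm{d}\\mu-p\\) from Eq. (37) are simplifying assumptions; a production solver would tabulate and interpolate the EoS and use a higher-order integrator.

```python
import numpy as np

# Unit conversions in natural units (c = hbar = 1, energies in MeV);
# values rounded to ~4 digits (assumed adequate for a sketch).
G      = 6.709e-45     # Newton's constant in MeV^-2
MEV_KM = 1.9733e-16    # 1 MeV^-1 expressed in km
MSUN   = 1.116e60      # solar mass in MeV

def solve_tov(p_of_mu, muB_core, dr_km=0.01):
    """Integrate the grand canonical TOV equation (36) outwards from the
    core until the pressure drops to zero. p_of_mu: zero-T EoS p(mu_B)
    in MeV^4, e.g. `pressure` from the Walecka sketch above.
    Returns (M in solar masses, R in km)."""
    def eps(mu, h=1e-4):
        # Eq. (37): energy density from the grand canonical EoS
        dpdmu = (p_of_mu(mu*(1 + h)) - p_of_mu(mu*(1 - h))) / (2*h*mu)
        return mu*dpdmu - p_of_mu(mu)

    dr = dr_km / MEV_KM                 # radial step in MeV^-1
    r, m, mu = dr, 0.0, muB_core        # start slightly off r = 0
    while p_of_mu(mu) > 0.0:
        p, e = p_of_mu(mu), eps(mu)
        # Eq. (36) for d(mu_B)/dr, together with dm/dr = 4 pi r^2 eps
        dmu = -G*mu*(m + 4*np.pi*r**3*p) / (r**2*(1 - 2*G*m/r)) * dr
        m  += 4*np.pi*r**2*e*dr
        mu += dmu
        r  += dr
    return m/MSUN, r*MEV_KM

# Example with an illustrative central chemical potential of 1300 MeV:
# M, R = solve_tov(pressure, 1300.0)
```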
## V Summary

In summary, we have shown that to study the hydrostatic equilibrium of dense matter within the framework of general relativity and relativistic fluid dynamics, the EoS of dense matter should be that determined by many-body theories or experiments in flat spacetime, so that it is compatible with local thermodynamic relations and the conservation of energy and momentum. This is also guaranteed by the equivalence principle. We demonstrate this explicitly for the grand canonical EoS, which can be computed directly from field theoretical methods. As a by-product, we demonstrate an alternative way to solve the TOV equation with the grand canonical EoS. In this approach, the enhancement of the baryon chemical potential inside the neutron star can be self-consistently regarded as a gravitational effect. This generalization may provide a way to extract the grand canonical EoS of dense matter via deep learning [33; 34]. ###### Acknowledgements. This work was supported by the National Natural Science Foundation of China (Grant No. 11890712) and the National Key R&D Program (Grant No. 2018YFA0306503). ## Appendix A EoS in the Walecka model At zero temperature, the grand canonical EoS of dense matter in the Walecka model (without charge neutrality) can be evaluated as \\[p\\left(\\mu_{\\rm B},\\mu_{\\rm Q}\\right)=\\sum_{{\\rm N}={\\rm n,p}}p_{0}\\left(\\mu_{\\rm N}^{*},m_{\\rm N}^{*}\\right)+\\sum_{l={\\rm e},\\mu}p_{0}\\left(\\mu_{l},m_{l}\\right)-\\frac{1}{2}m_{\\sigma}^{2}\\bar{\\sigma}^{2}-U(\\bar{\\sigma})+\\frac{1}{2}m_{\\omega}^{2}\\bar{\\omega}_{0}^{2}, \\tag{A1}\\] where the function \\(p_{0}(\\mu,m)\\) is defined as \\[p_{0}\\left(\\mu,m\\right)=\\frac{1}{24\\pi^{2}}\\Bigg{[}\\left|\\mu\\right|\\left(2\\mu^{2}-5m^{2}\\right)\\sqrt{\\mu^{2}-m^{2}}+3m^{4}\\,{\\rm arccosh}\\left(\\frac{\\left|\\mu\\right|}{m}\\right)\\Bigg{]}\\Theta(\\left|\\mu\\right|-m). \\tag{A2}\\] The effective masses \\(m_{\\rm N}^{*}\\) and chemical potentials \\(\\mu_{\\rm N}^{*}\\) are defined as \\[m_{\\rm N}^{*}=m_{\\rm N}-g_{\\sigma}\\bar{\\sigma},\\quad\\mu_{\\rm N}^{*}=\\mu_{\\rm N}-g_{\\omega}\\bar{\\omega}_{0}. \\tag{A3}\\] Note that the radiative correction from the vacuum contribution has been neglected. The meson condensates \\(\\bar{\\sigma}\\) and \\(\\bar{\\omega}_{0}\\) are determined by the following gap equations, \\[m_{\\omega}^{2}\\bar{\\omega}_{0}-\\sum_{{\\rm N}={\\rm n,p}}g_{\\omega}n_{0}\\left(\\mu_{\\rm N}^{*},m_{\\rm N}^{*}\\right)=0,\\] \\[m_{\\sigma}^{2}\\bar{\\sigma}+U^{\\prime}(\\bar{\\sigma})-\\sum_{{\\rm N}={\\rm n,p}}g_{\\sigma}n_{\\rm s}\\left(\\mu_{\\rm N}^{*},m_{\\rm N}^{*}\\right)=0, \\tag{A4}\\] which minimize the effective potential \\(\\mathcal{V}_{\\rm eff}(\\sigma,\\omega_{0})\\).
Here the functions \\(n_{0}(\\mu,m)\\) and \\(n_{\\rm s}(\\mu,m)\\) are defined as \\[n_{0}\\left(\\mu,m\\right)=\\frac{\\left(\\mu^{2}-m^{2}\\right)^{3/2}}{3\\pi^{2}}\\Theta(\\left|\\mu\\right|-m)\\,{\\rm sgn}(\\mu),\\] \\[n_{\\rm s}\\left(\\mu,m\\right)=\\frac{m}{2\\pi^{2}}\\left[\\left|\\mu\\right|\\sqrt{\\mu^{2}-m^{2}}-m^{2}\\,{\\rm arccosh}\\left(\\frac{\\left|\\mu\\right|}{m}\\right)\\right]\\Theta(\\left|\\mu\\right|-m). \\tag{A5}\\] The energy density \\(\\varepsilon\\) can be evaluated as \\[\\varepsilon\\left(\\mu_{\\rm B},\\mu_{\\rm Q}\\right)=\\sum_{{\\rm N}={\\rm n,p}}\\varepsilon_{0}\\left(\\mu_{\\rm N}^{*},m_{\\rm N}^{*}\\right)+\\sum_{l={\\rm e},\\mu}\\varepsilon_{0}\\left(\\mu_{l},m_{l}\\right)+\\frac{1}{2}m_{\\sigma}^{2}\\bar{\\sigma}^{2}+U(\\bar{\\sigma})+\\frac{1}{2}m_{\\omega}^{2}\\bar{\\omega}_{0}^{2}, \\tag{A6}\\] where the function \\(\\varepsilon_{0}(\\mu,m)\\) reads \\[\\varepsilon_{0}\\left(\\mu,m\\right)=\\frac{1}{8\\pi^{2}}\\Bigg{[}\\left|\\mu\\right|\\left(2\\mu^{2}-m^{2}\\right)\\sqrt{\\mu^{2}-m^{2}}-m^{4}\\,{\\rm arccosh}\\left(\\frac{\\left|\\mu\\right|}{m}\\right)\\Bigg{]}\\Theta(\\left|\\mu\\right|-m). \\tag{A7}\\] The baryon density \\(n_{\\rm B}\\) and the electric charge density \\(n_{\\rm Q}\\) are given by \\[n_{\\rm B}\\left(\\mu_{\\rm B},\\mu_{\\rm Q}\\right)=\\sum_{{\\rm N}={\\rm n,p}}n_{0}\\left(\\mu_{\\rm N}^{*},m_{\\rm N}^{*}\\right),\\] \\[n_{\\rm Q}\\left(\\mu_{\\rm B},\\mu_{\\rm Q}\\right)=n_{0}\\left(\\mu_{\\rm p}^{*},m_{\\rm p}^{*}\\right)+\\sum_{l={\\rm e},\\mu}n_{0}\\left(\\mu_{\\rm Q},m_{l}\\right). \\tag{A8}\\] For neutron star matter, we should impose electric charge neutrality, \\[n_{\\rm Q}\\left(\\mu_{\\rm B},\\mu_{\\rm Q}\\right)=0. \\tag{A9}\\] Thus the electric chemical potential is not an independent thermodynamic variable and should be solved as \\(\\mu_{\\rm Q}=\\mu_{\\rm Q}(\\mu_{\\rm B})\\). The grand canonical EoS of neutron star matter is the relation between the pressure and the baryon chemical potential, \\(p=p(\\mu_{\\rm B})\\). The pressure \\(p(\\mu_{\\rm B})\\) and the baryon density \\(n_{\\rm B}(\\mu_{\\rm B})\\) can be numerically evaluated for given model parameters, as shown in Fig. 4. In the calculation, the model parameters are set as follows. The particle masses are taken as \\(m_{\\rm n}=m_{\\rm p}=m_{\\rm N}=939\\) MeV, \\(m_{\\sigma}=550\\) MeV, \\(m_{\\omega}=783\\) MeV, \\(m_{\\rm e}=0.511\\) MeV, and \\(m_{\\mu}=105.66\\) MeV. The coupling constants are chosen as \\(g_{\\sigma}=8.685\\), \\(g_{\\omega}=8.646\\), \\(b=7.950\\times 10^{-3}\\), and \\(c=6.952\\times 10^{-4}\\) to fit the empirically known properties of nuclear matter [18].

Figure 3: The baryon chemical potentials \\(\\mu_{\\rm B}^{c}\\) at the core, \\(\\mu_{\\rm B}^{\\rm s}\\) at the surface, and \\(\\mu_{\\rm B}^{\\infty}\\) at infinity. (a) The three chemical potentials corresponding to the neutron stars on the mass-radius curve. (b) The three chemical potentials as functions of the central baryon density.

## References

* (1) N. K. Glendenning, _Compact Stars: Nuclear Physics, Particle Physics, and General Relativity_ (Springer Science & Business Media, New York, 2012). * (2) P. A. Mazzali, F. K. Ropke, S. Benetti, and W. Hillebrandt, Science **315**, 825 (2007). * (3) L. Rezzolla, E. R. Most, and L. R. Weih, Astrophys. J. Lett. **852**, L25 (2018). * (4) H. T.
Cromartie _et al._ (NANOGrav Collaboration), Nat. Astron. **4**, 72 (2020). * (5) E. Fonseca, H. T. Cromartie, T. T. Pennucci, P. S. Ray, A. Y. Kirichenko, S. M. Ransom, P. B. Demorest, I. H. Stairs, Z. Arzoumanian, L. Guillemot, _et al._, Astrophys. J. Lett. **915**, L12 (2021). * (6) E. Annala, T. Gorda, A. Kurkela, J. Nattila, and A. Vuorinen, Nat. Phys. **16**, 907 (2020). * (7) I. Bombaci, A. Drago, D. Logoteta, G. Pagliara, and I. Vidana, Phys. Rev. Lett. **126**, 162702 (2021). * (8) J. R. Ellis, J. I. Kapusta, and K. A. Olive, Nucl. Phys. B **348**, 345 (1991). * (9) K. Masuda, T. Hatsuda, and T. Takatsuka, Astrophys. J. **764**, 12 (2013). * (10) I. Bombaci, J. Phys. Soc. Jpn. Conf. Proc. **17**, 101002 (2017). * (11) K. Masuda, T. Hatsuda, and T. Takatsuka, Eur. Phys. J. A **52**, 65 (2016). * (12) W.-Z. Jiang, B.-A. Li, and L.-W. Chen, Astrophys. J. **756**, 56 (2012). * (13) R. Somasundaram, I. Tews, and J. Margueron, arXiv:2112.08157 [nucl-th]. * (14) S. Nishizaki, T. Takatsuka, and Y. Yamamoto, Prog. Theor. Phys. **105**, 607 (2001). * (15) S. Nishizaki, T. Takatsuka, and Y. Yamamoto, Prog. Theor. Phys. **108**, 703 (2002). * (16) M. Baldo, G. F. Burgio, and H. J. Schulze, Phys. Rev. C **61**, 055801 (2000). * (17) R. Machleidt, Phys. Rev. C **63**, 024001 (2001). * (18) J. I. Kapusta and C. Gale, _Finite-Temperature Field Theory: Principles and Applications_ (Cambridge University Press, Cambridge, England, 2011). * (19) J. R. Oppenheimer and G. M. Volkoff, Phys. Rev. **55**, 374 (1939). * (20) R. C. Tolman, Phys. Rev. **55**, 364 (1939). * (21) G. M. Hossain and S. Mandal, J. Cosmol. Astropart. Phys. **02** (2021) 026. * (22) G. M. Hossain and S. Mandal, Phys. Rev. D **104**, 123005 (2021). * (23) R. C. Tolman, Phys. Rev. **35**, 904 (1930). * (24) R. Tolman and P. Ehrenfest, Phys. Rev. **36**, 1791 (1930). * (25) O. Klein, Rev. Mod. Phys. **21**, 531 (1949). * (26) S. Weinberg, _Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity_ (John Wiley and Sons, New York, 1972). * (27) S. M. Carroll, _Spacetime and Geometry: An Introduction to General Relativity_ (Cambridge University Press, Cambridge, England, 2019). * (28) U. Aydemir and J. Ren, arXiv:2201.00025 [gr-qc]. * (29) J. A. S. Lima, A. Del Popolo, and A. R. Plastino, Phys. Rev. D **100**, 104042 (2019). * (30) H.-C. Kim and Y. Lee, Phys. Rev. D **105**, L081501 (2022). * (31) W. Israel, Ann. Phys. (N.Y.) **100**, 310 (1976). * (32) J. D. Walecka, Ann. Phys. (N.Y.) **83**, 491 (1974). * (33) Y. Fujimoto, K. Fukushima, and K. Murase, J. High Energy Phys. **03**, 273. * (34) S. Soma, L. Wang, S. Shi, H. Stocker, and K. Zhou, J. Cosmol. Astropart. Phys. **08**, 071.

Figure 4: The grand canonical EoS \\(p(\\mu_{\\rm B})\\) and the corresponding baryon density \\(n_{\\rm B}(\\mu_{\\rm B})\\) of cold neutron star matter computed from the Walecka model.
# Study on Smoothing Browser in Multi-View Virtual Space Based on Panorama

Li Yi-jing (School of Civil Engineering and Architecture, NanChang University, 999 XueFu Road, HongGuTan New Developed Area, NanChang 330031, China - [email protected])

###### Abstract

In a scene that is not open, people cannot watch the scene behind a barrier from a single viewpoint, and cross-cutting in the multi-viewpoint space can make up for this shortcoming. The panoramic technology of a single perspective is mature, but the study of multiple perspectives is comparatively lacking. In this paper a multi-view virtual scene, in which the number of viewpoints can be extended, is built from cylindrical single-viewpoint spaces and bidirectional transitional chains between perspectives. The author designed a smooth roaming approach in the multi-viewpoint virtual space by using image-matching and line-extraction methods, implemented a generation platform for single-view panoramas and transitional images between viewpoints as well as a multi-viewpoint scene browser platform, and obtained good visual effects in the smooth transition between viewpoints.

Keywords: Panoramic image; Multi-viewpoints; Smoothing browser; Image Stitching

## 1 Introduction

Constructing panoramic images with image-based rendering (IBR) is an important research topic in computer vision, image processing, and other areas. This technology constructs a virtual scene from real images and provides people with an immersive visual experience. Panoramic image construction has already entered more and more industries because of advantages such as fast rendering, easy modeling, and photorealism; these industries include site construction, real estate exhibition, virtual tours, and hotel exhibitions. A panoramic image can express the complete surrounding information of a scene from one fixed viewpoint and reflects a three-dimensional perspective of the space; it is the basic unit of a virtual scene. But in many cases there are objects in a real scene that prevent people from watching the scene behind them, so people must browse all parts of the scene through several perspective spaces, because cross-cutting in the multi-viewpoint space lets the user browse every corner of the entire scene. The effective organization of these viewpoint spaces, which allows users to view the virtual scene smoothly, is an important issue, and the key to this problem is the structure among all of these viewpoint spaces. Currently, panorama authoring software only achieves smooth browsing within a single perspective; between viewpoint spaces, most systems use hot-spot switching: the user clicks a button, and the software automatically jumps from one viewpoint space to another. This approach is fast, but it loses the smooth transition between two perspective spaces, leaving the user without realism or a sense of direction. In this paper, an effective construction approach for a multi-perspective virtual scene is proposed. The virtual scene is built from cylindrical single-viewpoint spaces and bidirectional transitional chains between perspectives. Replacing the texture mapping of the cylindrical scene describes different virtual spaces, while changing the transitional images on a transitional chain simulates the visual effect a person would see when roaming from one perspective to another. To achieve the effect of smooth roaming, the experiment uses line extraction and matching, and other image processing methods.

## 2 The Construction Approach of the Multi-Viewpoint Virtual Scene

To allow every corner to be browsed, a scene with several obstructions often needs more than one perspective: each perspective space maps to a corresponding panoramic image, within which users can look around. Moreover, to achieve smooth roaming in the whole scene, a bidirectional transitional chain must be constructed between two adjacent viewpoints. Several transitional nodes are distributed on these chains. When the user browses along the chain to one of these nodes, the virtual scene switches to the corresponding transitional image, so that the user feels closer to the next perspective. At these nodes the user is only allowed to look in the direction of advance. A minimal sketch of this organization is given below.
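As an illustration (not part of the original paper), the organization just described can be expressed as a small Python data structure; all file names and scale values are hypothetical placeholders:

```python
from dataclasses import dataclass, field

@dataclass
class TransitionalNode:
    image: str    # transitional image displayed at this node
    scale: float  # scale of this image relative to the chain start

@dataclass
class Viewpoint:
    panorama: str  # cylindrical panorama texture for this viewpoint space
    # neighbour viewpoint id -> ordered transitional nodes towards it
    chains: dict = field(default_factory=dict)

# Two viewpoint spaces joined by a bidirectional transitional chain.
vp1, vp2 = Viewpoint("pano1.jpg"), Viewpoint("pano2.jpg")
vp1.chains["vp2"] = [TransitionalNode("t12_a.jpg", 1.4),
                     TransitionalNode("t12_b.jpg", 2.1)]
vp2.chains["vp1"] = [TransitionalNode("t21_a.jpg", 1.4),
                     TransitionalNode("t21_b.jpg", 2.1)]
```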
Figure 1: The transitional chain construction between two view spaces.

When the user moves forward from perspective 1 to perspective 2, the scene he sees does not change as long as he does not change the viewing direction; what changes is the distance to the scene at perspective 2, which appears as a change of image scale. Therefore, the important aspects of browsing between perspectives are the user's advancement and the replacement of transitional images, and the relationship between these images is determined by the distance from the scene at which each was collected. The reasons for interpolating transitional nodes on the chain are: (1) The distance between two scenes may be so long that the resolution of a transitional image is not high enough after magnification. Accordingly, when a user advances from one viewpoint to another without any transitional images, the roaming is hard to keep smooth because of the significant differences in image quality. (2) The scene the user faces has more than one object, and these objects are not in the same plane. This results in different distortions of the images after central projection, since the images are collected at different distances from the scene; the farther apart two collection positions are, the greater the image distortion after projection. Therefore, if the distance between two scenes is long, transitional nodes should be interpolated on the chain to shorten the distance between collection positions, improve the image quality, reduce the image distortion, and make the roaming smooth.

## 3 Technology in System Implementation

### Panorama Construction

Panorama representation has four formats: plane, sphere, cube, and cylinder. The cylindrical panorama is the most widely used among them and is the approach this experiment adopts. One reason is that this kind of panorama is easy to collect and store in a computer, because a cylinder spreads easily onto a plane. The first step of the virtual scene's construction is the production of a cylindrical panorama.

(1) Image collection. Image collection mainly consists of 360-degree omnidirectional overlapping sequential image collection at a single spot, and single-image collection along the transition direction at several transition nodes.

(2) Image projection. Because the images at one perspective are collected by rotating the camera, they have different central projection planes, and the same object on different images has different projection distortion (Fig. 2). If those sequential images were merged directly, there would be local distortion, wrecking the coherence of the objects in the real scene. To avoid the distortion and keep the spatial relationships of the images realistic, the images, each with its own projection plane, must be projected onto a standard projection surface, which is the cylinder (Fig. 2); a minimal sketch of this projection is given at the end of this subsection.

(3) Image matching, stitching, and smoothing. The leading tasks of this part are template matching based on feature points and finding the stitching line through homonymous points. If two images are simply overlapped without any processing, there will be an obvious stitching seam, so the seam must be eliminated by smoothing. The smoothing method chosen here is a gradual weighted mean, which produces the smoothing effect.

Through the above steps, the originally collected sequential images can be made into a panoramic image. Fig. 3 shows the single-viewpoint spatial scene in front of the library of WuHan University.
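A minimal sketch of the cylindrical projection step (an illustration, not the paper's implementation), assuming a pinhole camera with focal length \\(f\\) in pixels and the principal point at the image centre; the nearest-neighbour resampling is a simplification, as real stitchers interpolate bilinearly:

```python
import numpy as np

def cylindrical_warp(img, f):
    """Project a planar image onto a cylinder of radius f (the focal
    length in pixels). img: HxW or HxWx3 uint8 array."""
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    theta = (xs - cx) / f          # azimuth angle on the cylinder
    hcyl = (ys - cy) / f           # normalised height on the cylinder
    # back-project each cylinder point to the original image plane
    X = np.tan(theta)
    Y = hcyl / np.cos(theta)
    px = (f * X + cx).round().astype(int)
    py = (f * Y + cy).round().astype(int)
    ok = (px >= 0) & (px < w) & (py >= 0) & (py < h)
    out[ys[ok], xs[ok]] = img[py[ok], px[ok]]
    return out
```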
### Obtaining Transitional Images

The collection of transitional images is not restricted to equidistant shooting: the shooting points simply lie along the transition direction without strict alignment, and the distance between any two shooting points is unknown. Consequently, the position of an object differs between images, and the scale relationship between images is unknown. These problems hinder the smoothing effect when the viewer moves forward on the transitional chain, so the important task is finding the scale and position relationships between any two images by image processing.

(1) Scale relationship between transitional images. Transitional nodes distributed on the chain should follow two principles: the collection order and the scale of the transitional images. When the user moves forward along the chain in the browsing system, the image he faces is magnified. When the image is magnified to the scale of the next transitional image, the browsing system interpolates a transitional node on the chain and replaces the former image with the next one. The photographed scene does not change along the transitional chain and the spatial geometric relationships are consistent, so the scale relationship between transitional images can be obtained from the ratio of the distances between two pairs of homonymous lines (Fig. 4); a small sketch of this computation is given below.

Figure 3: A panorama of a library of WuHan University.

Figure 2: The adjacent original images and the images after cylindrical projection.
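The scale computation can be sketched as follows (an illustration, not the paper's implementation): each image supplies the same pair of homonymous lines, and the scale is the ratio of the perpendicular distances between the pair in the two images. A robust system would average over many line pairs; all line parameters below are made-up values.

```python
import numpy as np

def line_pair_distance(line_a, line_b):
    """Perpendicular distance between two roughly parallel image lines,
    each given as (point_on_line, direction) with 2-D numpy arrays."""
    (pa, da), (pb, _) = line_a, line_b
    normal = np.array([-da[1], da[0]], dtype=float)
    normal /= np.linalg.norm(normal)
    return abs(np.dot(np.asarray(pb, float) - np.asarray(pa, float), normal))

def scale_between(pair_in_img1, pair_in_img2):
    """Scale of image 2 relative to image 1 from one pair of homonymous
    lines matched in both images (cf. Fig. 4)."""
    d1 = line_pair_distance(*pair_in_img1)
    d2 = line_pair_distance(*pair_in_img2)
    return d2 / d1

# Illustrative usage: two vertical lines seen in both images.
img1_lines = ((np.array([100., 50.]), np.array([0., 1.])),
              (np.array([160., 50.]), np.array([0., 1.])))
img2_lines = ((np.array([90., 40.]), np.array([0., 1.])),
              (np.array([174., 40.]), np.array([0., 1.])))
print(scale_between(img1_lines, img2_lines))  # -> 1.4
```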
# SSF-Net: A Spatial-Spectral Features Integrated Autoencoder Network for Hyperspectral Unmixing

Bin Wang, Huizheng Yao, Dongmei Song, Jie Zhang, and Han Gao

Manuscript received 13 July 2023; revised 19 September 2023; accepted 17 October 2023. Date of publication 25 October 2023; date of current version 22 December 2023. This work was supported in part by the Natural Science Foundation of Shandong Province under Grant ZR2022MD015, in part by the Key Program of Joint Fund of the National Natural Science Foundation of China and Shandong Province under Grant U1906217 and Grant U22A20586, in part by the National Natural Science Foundation of China under Grant 41701513, Grant 61371189, and Grant 41772350, and in part by the Key Research and Development Program of Shandong Province under Grant 2019GXG10133. _(Corresponding author: Dongmei Song.)_ The authors are with the College of Oceanography and Space Informatics, China University of Petroleum (East China), Qingdao 266580, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]). Digital Object Identifier 10.1109/JSTARS.2023.3327549

## I Introduction

A hyperspectral image (HSI) can capture detailed spectral information of ground objects in hundreds of continuous bands from visible light to short-wave infrared and even wider spectral intervals, while also recording the spatial distribution of ground objects. Because of this rich spectral information, HSI has received a great deal of attention, especially in the fields of military investigation, target tracking, target identification, environmental monitoring, etc. [1, 2, 3, 4, 5]. However, due to the limitation of spatial resolution and the complex diversity of natural land surfaces, mixed pixels [6] are common in HSI. Mixed pixels are composed of a variety of pure material spectra, and their existence has an enormous impact on the accuracy of hyperspectral remote sensing applications [7]. To better address this problem, the hyperspectral unmixing (HU) technique is often used to decompose mixed pixels into a series of different pure material spectra (endmembers) and the coverage ratios (abundances) of the endmembers [8]. Currently, HU has been widely used in mineral detection [9, 10] and agricultural detection [11, 12]. HU models can be simply classified into two categories: the linear mixing model (LMM) [1] and the nonlinear mixing model (NLMM) [13]. In this regard, the LMM is based on the assumption that the electromagnetic wave energy received by the sensor does not undergo secondary scattering during transmission, i.e., the spectrum of a mixed pixel is a linear combination of multiple pure spectra (endmembers) of the ground objects according to certain proportions (abundances). Moreover, considering the physical mechanism in HU, the abundance needs to satisfy the abundance nonnegativity constraint (ANC) and the abundance sum-to-one constraint (ASC) [14]. The NLMM is often used to describe the intricate interactions between scattered light from multiple materials within a scene [1]. Although the NLMM is more in line with the actual transmission of electromagnetic waves, it requires consideration of numerous complex factors in implementation.
Given the explicit physical mechanism and the relatively straightforward solving process of the LMM, the simulation of mixed spectra can be achieved efficiently. Therefore, this study focuses on LMM-based HU. Traditional unmixing methods can be mainly categorized into geometric-based, statistical-based, and sparse regression-based unmixing methods. Among the geometry-based unmixing methods, the typical representatives include N-finder (N-FINDR) [15] and vertex component analysis (VCA) [16]. N-FINDR projects pixels into the feature space to form a simplex, where the endmembers are efficiently selected by identifying the pixels that constitute the maximum-volume simplex. VCA instead iteratively projects the pixels onto a direction orthogonal to the subspace formed by the already identified endmembers, where the new endmember corresponds to the extremum of the projection. Due to the complexity and variety of natural surfaces, it is a formidable challenge to identify pure pixels in remote sensing images. Furthermore, geometry-based unmixing methods tend to fall into local optima in HSI with highly mixed ground objects. In contrast, statistical-based unmixing methods are able to obtain the global optimum, such as the unmixing methods with a Bayesian framework [17] and nonnegative matrix factorization (NMF) [18]. Bayesian methods can effectively incorporate _a priori_ information into the unmixing process, thus improving the accuracy of unmixing [19, 20]. Due to the advantages of learning part-based representations, NMF has become a prominent research focus in the field of HU. Notably, NMF can simultaneously capture the endmembers and abundances of HSI after completing the unmixing task [21]. Currently, many NMF-based HU methods have been proposed, mainly focusing on improving the unmixing accuracy through the incorporation of regularization constraints or the integration of spectral and spatial information [22, 23]. Besides, there is also unmixing by nonnegative tensor factorization [24, 25] to minimize the information loss in the unmixing process. Furthermore, there are model-inspired network-based unmixing approaches [26, 27, 28] that make the unmixing process more physically interpretable by combining it with a physical model. Finally, the sparse regression-based unmixing methods [29] effectively estimate the endmembers and their corresponding abundances in HSI using _a priori_ knowledge of a known spectral library. Although the sparse regression-based unmixing methods can mitigate the adverse effects of inaccurate endmember extraction, they cannot be widely applied to HSI acquired in complex environments due to the poor transferability of the spectral library. Deep learning has attracted much attention in the field of HU owing to its powerful feature representation ability. In particular, the autoencoder (AE) and its variants have been applied to HU with excellent unmixing results. Up to now, AE-based unmixing networks can be simply classified into two categories: pixel-level unmixing networks and spatial-level unmixing networks. Typical representatives of pixel-level unmixing networks include EndNet [30], uDAS [31], DAEN [32], MUNet [33], CyCU-Net [34], and EGU-Net [35]. Among them, EndNet forms an unmixing network through two AEs and uses spectral angle distance (SAD) and K-L divergence as loss functions.
uDAS reduces the effect of noise on unmixing by introducing denoising constraints into the AE and enhances the abundance estimation by introducing the so-called \\(l_{21}\\)-norm into the decoder [31]. To enhance the robustness of the model, DAEN first processes the outliers and noise in the data by using a stacked autoencoder (SAE) and then feeds the processed data into a variational AE to obtain more accurate unmixing results. MUNet constructs a multimodal unmixing network by additionally introducing LiDAR data, which improves the unmixing performance by integrating the elevation differences of the LiDAR data into the HSI. CyCU-Net uses a cyclic consistency network structure with two cycle-connected AEs to reduce the information loss during image processing, thus enhancing the unmixing ability of the network. EGU-Net reduces the influence of spectral variability (SV) on the unmixing results by introducing pseudoendmembers and utilizes the unmixing information derived from pseudoendmembers to guide the unmixing process to improve network performance. With the rise of CNNs in the field of computer vision, many spatial-level unmixing networks have emerged as well. Typically, CNNAEU [36] introduces CNNs into AE-based unmixing networks, omitting the use of any pooling or upsampling operations to maximize the retention of spatial information from HSI. SSCC-Net [37] utilizes both spectral and spatial information to train spatial AE networks and spectral AE networks, respectively, in an end-to-end manner. Notably, DeepTrans [38] was the first to use a transformer in combination with the convolutional AE for HU. Although the above methods achieve remarkable performance in HU tasks, they still have some limitations. The pixel-level unmixing networks mainly utilize the spectral information of pixels for unmixing without considering the spectral differences between pseudoendmembers. The spatial-level unmixing networks enlarge the receptive field by introducing convolutional operations to capture spatial contextual information but neglect the spatial differences between different ground objects. Although the joint spectral-spatial networks utilize the contextual information of the spectral and spatial features in HSI, they still do not take into account the effects of spectral differences between the pseudoendmembers or the spatial heterogeneity of the distributed ground objects. Therefore, how to make full use of the spatial discrepancy in HSI and the spectral difference between pseudoendmembers to maximize the accuracy of HU has become an urgent challenge to overcome. To this end, this article proposes a novel spatial-spectral features integrated AE HU network, called SSF-Net, which integrates spectral and spatial attention mechanisms within the AE unmixing framework to effectively extract the spatial discrepancy in HSI and the spectral difference between pseudoendmembers in an unsupervised manner. Compared with current unmixing methods that only use spectral information or neighboring pixel information, SSF-Net can better extract the feature difference information between different ground objects, thus significantly improving the unmixing accuracy. Specifically, the main contributions of this study can be summarized as follows. 1. SSF-Net improves the unmixing performance by exploiting the spatial difference information in HSI and the spectral difference information between pseudoendmembers.
To the best of our knowledge, this study is the first to use DL to investigate the multifeature-fusion unmixing task.

2. A two-branch feature fusion module incorporating a spatial-spectral attention mechanism is built into SSF-Net to address the underutilization of spatial-spectral information. The module extracts the relevant spatial-spectral information by integrating spatial and channel attention, thus improving the accuracy of the unmixing.

3. An unsupervised learning approach is adopted, so the training process no longer depends on labeled data. The network model learns and extracts features from the data autonomously, which improves its generalization ability.

The rest of this article is organized as follows. Section II briefly describes the principles of AE-based unmixing. Section III describes the network structure of SSF-Net in detail. Section IV presents the results of SSF-Net on several datasets. Finally, Section V concludes this article.

## II AE-Based Unmixing Model

This section introduces the LMM-based unmixing model, which can usually be expressed as

\\[Y=EA+N \\tag{1}\\]

where \\(Y\\in\\mathbb{R}^{l\\times n}\\) is the HSI expressed as a two-dimensional (2-D) matrix, \\(l\\) is the number of bands, and \\(n\\) is the number of pixels. \\(E\\in\\mathbb{R}^{l\\times p}\\) is the endmember matrix representing the \\(p\\) endmembers in the HSI, \\(A\\in\\mathbb{R}^{p\\times n}\\) is the abundance matrix corresponding to the \\(p\\) endmembers, and \\(N\\in\\mathbb{R}^{l\\times n}\\) is an additive noise matrix. Since the abundances represent the proportions of the different feature types in each mixed pixel, each abundance vector \\(a_{j}\\) must also satisfy the abundance nonnegativity constraint (ANC) and the abundance sum-to-one constraint (ASC)

\\[\\begin{cases}a_{i,j}\\geq 0\\\\ \\sum_{i=1}^{p}a_{i,j}=1.\\end{cases} \\tag{2}\\]

This study focuses on AE-based HU, which simultaneously obtains the endmember matrix \\(E\\) and the abundance matrix \\(A\\) in an unsupervised manner using the powerful learning and characterization capabilities of deep neural networks. As illustrated in Fig. 1, an AE typically consists of an encoder and a decoder.

_Encoder_: The encoder is typically a multilayer network structure that converts the input hyperspectral data \\(\\left\\{y_{i}\\right\\}_{i=1}^{n}\\in\\mathbb{R}^{l}\\) into hidden-layer data denoted by \\(h_{i}\\):

\\[h_{i}=f_{E}\\left(y_{i}\\right)=f\\left(W^{(e)T}y_{i}+b^{(e)}\\right) \\tag{3}\\]

where \\(f(\\,\\cdot\\,)\\) is the activation function of the encoder, and \\(W^{(e)}\\) and \\(b^{(e)}\\) are the weight and bias of the \\(e\\)th encoder layer.

_Decoder_: The decoder converts the hidden-layer data \\(h_{i}\\) back to the original input data, denoted by \\(\\left\\{\\hat{y}_{i}\\right\\}_{i=1}^{n}\\in\\mathbb{R}^{l}\\):

\\[\\hat{y}_{i}=f_{D}\\left(h_{i}\\right)=W^{(d)T}h_{i} \\tag{4}\\]

where \\(W^{(d)}\\) is the weight of the \\(d\\)th decoder layer. A minimal sketch of this encoder-decoder formulation is given below.
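The following is a minimal sketch of this formulation, assuming PyTorch; for brevity it collapses the multilayer encoder of (3) into a single linear layer followed by a softmax, and all names (`LinearAE`, `n_bands`, `n_end`) are illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn

class LinearAE(nn.Module):
    """Minimal LMM autoencoder: encoder -> abundances, linear decoder -> E*a."""

    def __init__(self, n_bands: int, n_end: int):
        super().__init__()
        # Encoder: maps a pixel spectrum y_i to p abundance activations, cf. (3).
        # The softmax enforces the ANC and ASC constraints of (2).
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, n_end),
            nn.Softmax(dim=-1),
        )
        # Decoder: a single bias-free linear layer whose weight matrix plays
        # the role of the endmember matrix E, cf. (1) and (4).
        self.decoder = nn.Linear(n_end, n_bands, bias=False)

    def forward(self, y: torch.Tensor):
        a = self.encoder(y)        # abundances, shape (batch, p)
        y_hat = self.decoder(a)    # reconstruction E a_i, shape (batch, l)
        return y_hat, a

model = LinearAE(n_bands=198, n_end=4)
y = torch.rand(16, 198)                # 16 dummy pixel spectra
y_hat, a = model(y)
print(y_hat.shape, a.sum(dim=-1))      # abundance vectors sum to one
```

After training, the decoder weight matrix can be read out directly as the estimated endmember matrix.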
Conventionally, the mean square error (MSE) in (5) is employed to quantify the reconstruction error of an AE. However, when dealing with hyperspectral data, additional error metrics, such as the SAD in (6), also need to be taken into consideration when evaluating the reconstruction accuracy:

\\[L_{\\mathrm{AE-MSE}}\\left(\\hat{y}_{i},y_{i}\\right)=\\frac{1}{n}\\sum_{i=1}^{n}\\left\\|\\hat{y}_{i}-y_{i}\\right\\|^{2} \\tag{5}\\]

\\[L_{\\mathrm{AE-SAD}}\\left(\\hat{y}_{i},y_{i}\\right)=\\cos^{-1}\\left(\\frac{\\hat{y}_{i}^{\\mathrm{T}}y_{i}}{\\left\\|\\hat{y}_{i}\\right\\|_{2}\\left\\|y_{i}\\right\\|_{2}}\\right). \\tag{6}\\]

Considering the inherent advantages of the AE, such as its simple training process, flexible stacking of layers, and unsupervised learning paradigm, this study adopts the AE for HU. The encoder transforms the input HSI data into abstract features and converts them into abundance maps subject to the ANC and ASC constraints; the decoder, fully compliant with the LMM, reconstructs the abundance maps back into HSI data. At this point, the decoder weights can be interpreted as the desired endmembers. The workflow of AE-based HU is illustrated in Fig. 2, wherein the network estimates abundances and endmembers simultaneously. Notably, the entire AE unmixing process is conducted in an unsupervised manner, which effectively mitigates the issue of insufficient labeled samples in HSI data [39].

## III Proposed Method

In this study, a spatial-spectral features integrated AE network, abbreviated as SSF-Net, is proposed for HU; its overall structure is shown in Fig. 3. The network consists of two parts: a spatial-spectral feature fusion encoder and a decoder. The former fuses the deep-level features extracted from both the HSI data and the pseudoendmembers to fully exploit the spatial and spectral characteristics inherent in the original data. The latter employs a commonly used decoder architecture to reconstruct the HSI from the extracted abundances. Within the encoder, the in-depth integration of spectral and spatial features is achieved by a dedicated module known as the spatial-spectral fusion module (SSFM). Through the SSFM, the fused high-level features encompass the intrinsic attributes of the pseudoendmember spectra as well as the spatial global information of the HSI, which synergistically enhances the unmixing accuracy of the network.

Fig. 1: Schematic diagram of an AE. The abstract features are first obtained by encoding the input data, and then the abstract features are decoded to reconstruct the input data.

Fig. 2: Workflow of AE-based HU.

To endow the network with abundant spectral features, a regional VCA endmember extraction method is employed to acquire the pseudoendmember spectra from the HSI. Specifically, the HSI data are first partitioned into several subpatches with a certain overlap rate, and the pseudoendmembers of each subpatch are extracted using the VCA algorithm. Subsequently, the _K_-means clustering algorithm is utilized to eliminate duplicate pseudoendmembers, aggregating all remaining candidates into \\(K\\) clusters [35]; the pseudoendmembers are then obtained by computing the centers of each cluster. The number of subpatches and the value of \\(K\\) can be determined by referring to the literature [40]. A sketch of this extraction pipeline is given below.
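Below is a sketch of this regional extraction pipeline, assuming a VCA routine `vca(pixels, p)` that returns `p` endmember spectra is available (e.g., from a third-party implementation); the patch size, stride, and `K` are illustrative values rather than the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def regional_pseudoendmembers(hsi, vca, p, patch=20, stride=10, K=50):
    """hsi: (H, W, L) cube; returns a (K, L) array of pseudoendmember spectra."""
    H, W, L = hsi.shape
    candidates = []
    # Partition the image into overlapping subpatches and run VCA on each.
    for r in range(0, H - patch + 1, stride):
        for c in range(0, W - patch + 1, stride):
            pixels = hsi[r:r + patch, c:c + patch].reshape(-1, L)
            candidates.append(vca(pixels, p))   # p candidate endmembers
    candidates = np.vstack(candidates)
    # K-means merges near-duplicate candidates; the cluster centers then
    # serve as the pseudoendmembers fed to the spectral branch.
    km = KMeans(n_clusters=K, n_init=10).fit(candidates)
    return km.cluster_centers_
```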
In this study, the \\(K\\) value is deliberately set to about 20% of all hyperspectral pixels based on several trial experiments. It should be noted that the pseudoendmembers obtained by the above process can reduce the influence of SV on the unmixing results because they contain rich spectral information of the ground features, perturbation information, and a certain amount of noise. The following sections describe the SSF-Net framework in detail.

### _Spatial-Spectral Feature Fusion Encoder_

To fully exploit the information contained in the HSI data, the spatial-spectral fusion encoder is designed as a dual-branch structure. The encoder consists of a spectral branch and a spatial branch, which encode the input data from different views to capture the high-level spectral and spatial features in the HSI. In the spectral branch, the spectral features of the pseudoendmembers are extracted by three consecutive feature extraction blocks. Within each block, a \\(1\\times 1\\) convolution compresses the spectral information of the pseudoendmembers, followed by an activation function (such as Sigmoid, ReLU, or LeakyReLU). Since the ReLU activation function may lead to the problem of neuron invalidation [41], the LeakyReLU activation function is employed in this network. Moreover, a batch normalization strategy is introduced to alleviate problems such as gradient vanishing and gradient exploding, as well as to improve the overall computational speed of the network; to mitigate overfitting, dropout is added at the end of each block. The high-level spectral features obtained from the pseudoendmembers through the three feature extraction blocks of the spectral branch are denoted as \\(F_{\\text{spe}}\\). Moreover, considering the strong correlation between pixels and their surrounding scenes in HSI [37, 42, 43], \\(3\\times 3\\) convolutions are incorporated in the feature extraction blocks of the spatial branch to effectively capture the spatial information of neighboring pixels; the resulting high-level spatial features are denoted as \\(F_{\\text{spa}}\\). A sketch of the shared feature-extraction block is given below.

To effectively combine the spectral features from the pseudoendmembers with the spatial features from the HSI, the SSFM is constructed in this study. This module enhances the network model's ability to utilize and integrate spectral and spatial information, improving unmixing accuracy. As shown in Fig. 4, the overall structure of the SSFM contains a channel attention mechanism and a spatial attention mechanism, which are described in detail in the following.

Fig. 3: Network structure of the proposed SSF-Net.

Fig. 4: Framework of SSFM.
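The block below sketches the shared feature-extraction block of the two branches (convolution, batch normalization, LeakyReLU, and dropout, as described above), assuming PyTorch; the channel sizes and dropout rate are illustrative assumptions.

```python
import torch.nn as nn

def feature_block(in_ch: int, out_ch: int, k: int, p_drop: float = 0.2):
    """k=1 for the spectral branch, k=3 for the spatial branch."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2),
        nn.BatchNorm2d(out_ch),   # alleviates vanishing/exploding gradients
        nn.LeakyReLU(0.1),        # avoids the neuron-invalidation issue of ReLU
        nn.Dropout2d(p_drop),     # mitigates overfitting
    )

# Three consecutive blocks per branch, as in the text (channel sizes assumed).
spectral_branch = nn.Sequential(
    feature_block(156, 128, k=1),
    feature_block(128, 64, k=1),
    feature_block(64, 32, k=1),
)
```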
1. _Channel Attention Mechanism:_ During HU, the pseudoendmembers bear a remarkably high resemblance to the pure endmembers. Therefore, in the absence of pure endmembers, the spectral differences between pseudoendmembers can substitute for the spectral differences between pure endmembers. In view of this, a channel attention mechanism is introduced to enhance or suppress the channel features that respond to the differences between pseudoendmembers, thereby improving the accuracy of abundance estimation during HU. The workflow of the channel attention module is shown in Fig. 5(a).

First, the input spectral feature \\(F_{\\text{spe}}\\in\\mathbb{R}^{l\\times H\\times W}\\) is processed by global average pooling and global maximum pooling to obtain the average-pooled feature \\(F_{\\text{avg}}^{c}\\in\\mathbb{R}^{l\\times 1\\times 1}\\) and the max-pooled feature \\(F_{\\text{max}}^{c}\\in\\mathbb{R}^{l\\times 1\\times 1}\\), respectively. Then, to capture the association between \\(F_{\\text{avg}}^{c}\\) and \\(F_{\\text{max}}^{c}\\), they are separately passed through a shared multilayer perceptron (MLP) and the outputs are summed. Finally, the channel attention map \\(M_{C}(F_{\\text{spe}})\\) is output by virtue of the sigmoid function. To reduce the number of parameters, the hidden layer size of the MLP is set to \\(l/r\\), where \\(r\\) is the compression ratio; multiple sets of experiments showed that with \\(r=16\\) the resulting weight operator significantly improves the network's representation of abundance. The channel attention module can be expressed as follows:

\\[M_{C}\\left(F_{\\text{spe}}\\right)=\\delta\\left[\\text{MLP}\\left(\\text{Maxpool}\\left(F_{\\text{spe}}\\right)\\right)\\oplus\\text{MLP}\\left(\\text{Avgpool}\\left(F_{\\text{spe}}\\right)\\right)\\right]=\\delta\\left[\\text{MLP}\\left(F_{\\text{max}}^{c}\\right)\\oplus\\text{MLP}\\left(F_{\\text{avg}}^{c}\\right)\\right] \\tag{7}\\]

where \\(\\delta\\) represents the sigmoid function, and \\(\\oplus\\) refers to elementwise addition. A sketch of this module follows.
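A sketch of the channel attention module in (7), assuming PyTorch; the reduction ratio `r = 16` follows the text, while the remaining names are illustrative.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        # Shared MLP applied to both pooled descriptors; hidden size l/r.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // r),
            nn.ReLU(),
            nn.Linear(channels // r, channels),
        )

    def forward(self, f_spe: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = f_spe.shape
        f_avg = self.mlp(f_spe.mean(dim=(2, 3)))   # global average pooling
        f_max = self.mlp(f_spe.amax(dim=(2, 3)))   # global max pooling
        m_c = torch.sigmoid(f_avg + f_max)         # cf. (7)
        return m_c.view(b, c, 1, 1)                # channel attention map
```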
2. _Spatial Attention Mechanism:_ The complex distribution of real-world environments and the susceptibility of the HSI imaging process to interference from external factors cause the spectral profiles of pixels in the HSI to be affected by SV [44, 45], so pixels from different regions contribute very differently to the unmixing of the HSI. SV often causes the spectral curves of some pixels to deviate from the ideal spectral curves, thereby affecting the accuracy of HU. To this end, a spatial attention mechanism is introduced to enhance or suppress the importance of pixels in different regions during HU, improving the unmixing ability of the network. The workflow of the spatial attention mechanism is shown in Fig. 5(b).

First, the input spatial features \\(F_{\\text{spa}}\\in\\mathbb{R}^{l\\times H\\times W}\\) are subjected to global average pooling and global max pooling, yielding the average-pooled feature \\(F_{\\text{avg}}^{s}\\in\\mathbb{R}^{1\\times H\\times W}\\) and the max-pooled feature \\(F_{\\text{max}}^{s}\\in\\mathbb{R}^{1\\times H\\times W}\\), respectively. These two feature maps are then concatenated along the channel dimension, yielding the fused feature \\(F_{A-M}^{s}\\in\\mathbb{R}^{2\\times H\\times W}\\), which serves for calculating the spatial weights. Subsequently, \\(F_{A-M}^{s}\\) is processed by a 2-D convolution with a kernel size of \\(7\\times 7\\), yielding the spatial weight operator, which is finally converted into the spatial attention map \\(M_{S}(F_{\\text{spa}})\\) by the sigmoid activation function. The spatial attention module can be formulated as follows:

\\[M_{S}\\left(F_{\\text{spa}}\\right)=\\delta\\left[f^{7\\times 7}\\left(\\text{Concat}\\left(\\text{Maxpool}\\left(F_{\\text{spa}}\\right);\\text{Avgpool}\\left(F_{\\text{spa}}\\right)\\right)\\right)\\right]=\\delta\\left[f^{7\\times 7}\\left(\\text{Concat}\\left(F_{\\text{max}}^{s};F_{\\text{avg}}^{s}\\right)\\right)\\right] \\tag{8}\\]

where \\(f^{7\\times 7}\\) denotes the 2-D convolution with a kernel size of \\(7\\times 7\\), Concat represents the concatenation operation that stacks the feature maps along the channel dimension, and \\(\\delta\\) refers to the sigmoid function.

3. _Fusion of Spectral-Spatial Features:_ The SSFM facilitates the comprehensive mining and utilization of spectral and spatial information by fusing features through the channel attention mechanism and the spatial attention mechanism in concert, thus significantly improving unmixing accuracy. The former evaluates the importance of channels based on the spectral features to improve the abundance estimation accuracy, while the latter elevates the significance of different pixels in space to obtain better endmember extraction results. The fusion process of the SSFM can be formulated as follows:

\\[F_{\\text{fused}}=M_{C}\\left(F_{\\text{spe}}\\right)\\otimes\\left[F_{\\text{spa}}\\oplus\\left(F_{\\text{spa}}\\otimes M_{S}\\left(F_{\\text{spa}}\\right)\\right)\\right] \\tag{9}\\]

where \\(\\otimes\\) and \\(\\oplus\\) denote elementwise multiplication and addition, respectively, and \\(F_{\\text{fused}}\\) refers to the fused features. Furthermore, the abundance maps are obtained by applying a \\(3\\times 3\\) convolution to \\(F_{\\text{fused}}\\), where the number of abundance maps is consistent with the number of endmembers. A softmax function is used immediately after the convolution layer to ensure that the abundance results satisfy the ANC and ASC constraints. A sketch of the spatial attention module and this fusion rule follows.
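A sketch of the spatial attention module in (8) and the fusion rule in (9), assuming PyTorch; the \\(7\\times 7\\) kernel follows the text, and the function names are illustrative.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        # 2-D convolution over the stacked max/avg maps, cf. (8).
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, f_spa: torch.Tensor) -> torch.Tensor:
        f_avg = f_spa.mean(dim=1, keepdim=True)     # (B, 1, H, W)
        f_max = f_spa.amax(dim=1, keepdim=True)     # (B, 1, H, W)
        stacked = torch.cat([f_max, f_avg], dim=1)  # (B, 2, H, W)
        return torch.sigmoid(self.conv(stacked))    # spatial map M_S

def ssfm(f_spe, f_spa, channel_att, spatial_att):
    """Fusion of (9): F_fused = M_C(F_spe) * (F_spa + F_spa * M_S(F_spa))."""
    return channel_att(f_spe) * (f_spa + f_spa * spatial_att(f_spa))
```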
### _Decoder_

The decoder reconstructs the input pixels by integrating the estimated abundances and the corresponding endmembers:

\\[\\hat{y}_{i}=f\\left(W^{(d)}\\hat{a}_{i}\\right)=\\hat{E}\\hat{a}_{i}. \\tag{10}\\]

In the equation above, \\(\\left\\{\\hat{y}_{i}\\right\\}_{i=1}^{n}\\in\\mathbb{R}^{l}\\) are the reconstructed pixels, \\(\\hat{E}\\in\\mathbb{R}^{l\\times p}\\) represents the estimated endmember matrix, and \\(\\left\\{\\hat{a}_{i}\\right\\}_{i=1}^{n}\\in\\mathbb{R}^{p}\\) denotes the generated abundance vector.

Fig. 5: Framework of the attention mechanisms. (a) Channel attention. (b) Spatial attention.

Notably, to reduce the training time, VCA is employed to initialize the weights \\(W^{(d)}\\) of the decoder.

### _Objective Function_

To achieve the best possible training results, the loss function of the SSF-Net model is designed to consist of SAD and MSE terms. The SAD exhibits spectral scale invariance, as it evaluates the similarity between two spectral curves by calculating the angle between the target spectrum and the reference spectrum; a smaller angle indicates greater similarity. The SAD is calculated as follows:

\\[J_{\\text{SAD}}=\\frac{1}{n}\\sum_{i=1}^{n}\\arccos\\frac{\\langle\\hat{y}_{i},y_{i}\\rangle}{\\|\\hat{y}_{i}\\|\\cdot\\|y_{i}\\|} \\tag{11}\\]

where \\(y_{i}\\) and \\(\\hat{y}_{i}\\) denote the input and reconstructed pixel data, respectively, and \\(n\\) denotes the total number of pixels. Although the SAD can improve the accuracy of endmember extraction, it is prone to larger errors in abundance estimation because it considers only the scale invariance of endmembers. The MSE is therefore also introduced into the objective function to ensure that the network obtains more accurate abundances:

\\[J_{\\text{MSE}}=\\frac{1}{n}\\sum_{i=1}^{n}\\|\\hat{y}_{i}-y_{i}\\|^{2}. \\tag{12}\\]

To strive for better unmixing results, the overall loss function of the network is defined as a weighted combination of the SAD error and the MSE error:

\\[L=\\alpha J_{\\text{SAD}}+\\beta J_{\\text{MSE}} \\tag{13}\\]

where \\(\\alpha\\) and \\(\\beta\\) are hyperparameters of the loss function.

## IV Experiments

In this section, a comparative experimental analysis against state-of-the-art HSI unmixing methods is carried out to demonstrate the advantages of SSF-Net. The algorithms selected for comparison include three classical methods, namely, fully constrained least-squares unmixing (FCLSU) [14], multilayer nonnegative matrix factorization (MLNMF) [46], and spatial group sparsity regularized nonnegative matrix factorization (SGSNMF) [47], and five deep-learning-based methods, namely, SNMF-Net [48], uDAS [31], DAEU [49], CNNAEU [36], and CyCU-Net [34]. These methods are widely recognized and highly representative in the field of HU. To ensure fairness in the experiments, VCA [16] is first adopted to generate the initial endmembers for all comparison algorithms.

### _Data Description_

#### IV-A1 Synthetic Dataset

The synthetic dataset is composed of five spectral curves randomly selected from the ASTER spectral library, as curated by Jin et al. [50]. It consists of \\(60\\times 60\\) pixels, with a total of 200 spectral bands spanning from 0.4 to 14 \\(\\mu\\)m, and the abundance maps follow a Dirichlet distribution. To simulate the endmember variability in real HSI data, the dataset is constructed with asphalt as the background and the remaining four endmembers randomly scattered in the corners. Moreover, to enhance the realism of the synthetic hyperspectral data, Gaussian noise with different signal-to-noise ratios (SNRs) is added. The data contain five endmembers in total: limestone, conifer, basalt, concrete, and asphalt. Fig. 6(a) shows the RGB true-color image corresponding to this data area.

#### IV-A2 Samson Dataset

The Samson dataset was acquired by the SAMSON sensor. The image consists of \\(952\\times 952\\) pixels with a total of 156 spectral bands ranging from 0.401 to 0.889 \\(\\mu\\)m. Since the original image is too large, an area of \\(95\\times 95\\) pixels starting from pixel (252, 332) of the original image is cropped out as the experimental data. The cropped data contain three endmembers: Soil, Tree, and Water. Fig. 6(b) presents the corresponding RGB true-color image.

#### IV-A3 Jasper Dataset

The Jasper data were collected by the airborne visible/infrared imaging spectrometer sensor. The image is \\(512\\times 614\\) pixels and contains 224 bands with a spectral range from 0.38 to 2.50 \\(\\mu\\)m. Since the original image is too large, only a cropped subimage of \\(100\\times 100\\) pixels is used in this experiment, with its first pixel starting from position (252, 332) of the original image.
After removing some of the bands affected by high water-vapor concentration and atmospheric effects, only 198 channels are retained in these data, which contain four endmembers: Road, Soil, Water, and Tree. Fig. 6(c) shows the RGB true-color image corresponding to this data area.

#### IV-A4 Urban Dataset

The Urban data were obtained by the Hyperspectral Digital Imagery Collection Experiment (HYDICE) sensor over an urban area at Copperas Cove, Texas, USA. The image consists of \\(307\\times 307\\) pixels and contains 210 bands with a spectral range from 0.4 to 2.5 \\(\\mu\\)m. Only 162 bands are retained after removing the bands affected by high water-vapor concentration and atmospheric effects. These HSI data contain five endmembers: Asphalt, Grass, Tree, Roof, and Dirt. Fig. 6(d) presents the corresponding RGB true-color image.

Fig. 6: RGB image of four datasets. (a) Synthetic. (b) Samson. (c) Jasper. (d) Urban.

### _Experimental Settings_

#### IV-B1 Hyperparameter Settings

The SSF-Net model is implemented in the PyTorch framework with an i9-9900K CPU and an NVIDIA 2080 8GB GPU as the hardware platform. During training, the Adam optimizer is used to update the network parameters, with the learning rate set to \\(1\\times 10^{-3}\\). To further improve the network accuracy, a learning-rate decay strategy is adopted, that is, the learning rate is decayed once every 40 epochs of training, and the maximum number of iterations is set to 800.

#### IV-B2 Evaluation Metrics

To evaluate the unmixing performance of the network, two metrics are used in the experiments: the root-mean-square error (RMSE) and the SAD, defined as follows:

\\[L_{\\text{RMSE}}\\left(a_{i},\\hat{a}_{i}\\right)=\\sqrt{\\frac{1}{N}\\sum_{1}^{N}\\left\\|\\hat{a}_{i}-a_{i}\\right\\|_{2}^{2}} \\tag{14}\\]

\\[L_{\\text{SAD}}\\left(m_{i},\\hat{m}_{i}\\right)=\\arccos\\left(\\frac{\\langle m_{i},\\hat{m}_{i}\\rangle}{\\|m_{i}\\|_{2}\\|\\hat{m}_{i}\\|_{2}}\\right) \\tag{15}\\]

where \\(a_{i}\\) and \\(\\hat{a}_{i}\\) represent the true and generated abundance vectors, respectively, and \\(m_{i}\\) and \\(\\hat{m}_{i}\\) denote the true and extracted endmembers, respectively. A sketch of the overall training objective and these two metrics is given below.
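The snippet below sketches the overall training objective (13) and the two evaluation metrics (14) and (15), assuming PyTorch; since the text leaves \\(\\alpha\\) and \\(\\beta\\) unspecified, the default values here are placeholders.

```python
import torch

def sad(x, y, eps=1e-8):
    """Spectral angle between paired spectra, cf. (11) and (15)."""
    cos = (x * y).sum(-1) / (x.norm(dim=-1) * y.norm(dim=-1) + eps)
    return torch.acos(cos.clamp(-1 + eps, 1 - eps))

def total_loss(y_hat, y, alpha=1.0, beta=1.0):
    """Weighted combination of SAD and MSE, cf. (13)."""
    j_sad = sad(y_hat, y).mean()                   # (11)
    j_mse = ((y_hat - y) ** 2).sum(-1).mean()      # (12)
    return alpha * j_sad + beta * j_mse

def rmse(a_true, a_est):
    """Abundance root-mean-square error, cf. (14)."""
    return torch.sqrt(((a_est - a_true) ** 2).sum(-1).mean())
```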
### _Experimental Result and Analysis_

#### IV-C1 Synthetic Dataset

The quantitative RMSE and SAD results for each endmember on the synthetic dataset obtained by the different algorithms are presented in Table I. Meanwhile, Figs. 7 and 8 show the abundance maps and corresponding endmember results extracted by the different algorithms on the synthetic dataset. On the synthetic dataset, SGSNMF achieves better unmixing results than FCLSU and MLNMF: it uses spatial information to divide the HSI into spatial groups and incorporates a spatial-group sparsity constraint into nonnegative matrix factorization, which results in a superior decomposition structure. Although SNMF-Net is constructed on a nonnegative matrix model with \\(l_p\\) sparsity constraints, its failure to take spatial information into account leaves its performance on the synthetic data without significant advantages. The uDAS algorithm achieves promising results in abundance estimation by incorporating denoising constraints and attaching certain physical constraints. DAEU, on the other hand, focuses only on spectral information and ignores spatial information. Although CNNAEU and CyCU-Net consider spatial information, they ignore the SV between endmembers, which leads to their unsatisfactory performance on the synthetic dataset. The experimental results of these algorithms demonstrate the importance of taking both spectral and spatial information into consideration to obtain accurate HU results. In contrast, the proposed SSF-Net achieves superior results on the synthetic data, which proves its superiority in the unmixing task.

Fig. 7: Abundance maps of five materials from the synthetic data obtained by different algorithms.

To verify the robustness of the proposed SSF-Net, Gaussian noise with SNR values varying from 20 to 40 dB is added to the synthetic dataset; the corresponding quantitative results are presented in Table II. In general, the unmixing accuracy of all the algorithms tends to decrease as the noise increases. The classical algorithm SGSNMF performs well on the synthetic dataset. Benefiting from its denoising processing, the uDAS network exhibits minimal variation in unmixing accuracy under the different noise conditions. In contrast, SNMF-Net, CNNAEU, and CyCU-Net perform poorly in high-noise situations. Most importantly, the proposed SSF-Net attains remarkable accuracy in both abundance estimation and endmember extraction under all noise levels, which fully demonstrates its effectiveness and robustness.

#### IV-C2 Samson Dataset

The RMSE and SAD results for each endmember on the Samson dataset obtained by the different algorithms are presented in Table III. Figs. 9 and 10 show the abundance maps and corresponding endmember results extracted by the different algorithms on the Samson dataset. The Samson dataset is characterized by a relatively uniform spatial distribution of the different materials, so it is widely regarded as a relatively simple unmixing dataset, and all the algorithms achieve relatively good results. However, despite its promising results on the synthetic dataset, the performance of SGSNMF on the Samson dataset is subpar. This discrepancy can be attributed to the distribution complexity of the ground objects and the non-Gaussian nature of the noise in real scenes. Moreover, the unmixing accuracy of all the classical algorithms is lower than that of the deep learning models.

#### IV-C3 Jasper Dataset

The RMSE and SAD results for each endmember on the Jasper dataset obtained by the different algorithms are presented in Table IV. Figs. 11 and 12 show the abundance maps and the corresponding endmember results extracted by the different algorithms on this dataset. The distribution of ground objects in the Jasper dataset is more complex than in the Samson dataset. As shown in Fig. 11, algorithms such as FCLSU, MLNMF, SGSNMF, SNMF-Net, and uDAS fail to accurately extract all the roads, with some roads misclassified as water bodies, which affects the final unmixing accuracy. In contrast, the deep learning algorithms identify the roads better and thus generate more accurate abundance maps. As can be seen in Table IV, the proposed SSF-Net achieves excellent results on the Jasper dataset: all accuracy evaluation metrics except SAD_Tree reach the optimal level, indicating the excellent performance of SSF-Net in the unmixing task.

#### IV-C4 Urban Dataset

The RMSE and SAD results for each endmember on the Urban dataset obtained by the different algorithms are presented in Table V.
Figs. 13 and 14 show the abundance maps and corresponding endmember results extracted by the different algorithms on the Urban dataset. Among the four experimental datasets, the Urban dataset is the most heavily mixed in terms of the mixing degree of ground objects. Visually, the abundance maps generated by the proposed SSF-Net exhibit the highest degree of similarity to the real abundance maps, and quantitatively SSF-Net achieves the best accuracy in terms of Mean_SAD and Mean_RMSE. By comparing the experimental results on multiple datasets, the proposed SSF-Net consistently outperforms the other methods in generating more accurate and reliable estimates of endmembers and abundances, which fully demonstrates the remarkable superiority of the SSF-Net network in HU tasks.

Fig. 9: Abundance maps of three materials from the Samson data obtained by different algorithms.

Fig. 10: Comparison of endmembers between SSF-Net (blue curves) and the corresponding GT (red curves) on the Samson dataset.

### _Computational Cost_

Table VI presents the run times of all the comparison methods on the different datasets. The classical methods FCLSU, MLNMF, and SGSNMF have relatively simple computational processes and thus low time overhead. In contrast, deep learning models usually run significantly longer than classical methods owing to their complex network structures and large numbers of parameters. Among the deep learning models, SNMF-Net, uDAS, and DAEU are pixel-level unmixing networks; since they unmix pixel by pixel, their processing time increases with the number of pixels. CNNAEU and CyCU-Net are spatial-level unmixing networks, which capture spatial context information by introducing a receptive field; however, it is also this receptive field that causes their time overhead to significantly exceed that of the pixel-level networks. Overall, the time expenditure of the proposed method lies between those of the pixel-level and spatial-level unmixing network models mentioned above. The proposed model also builds the SSFM module into the encoder; its spatial and spectral attention mechanisms help to take into account the spatial contextual information in the HSI data and the spectral disparity information between the pseudoendmembers. Despite the increase in runtime caused by the attention mechanisms, the proposed method significantly improves the unmixing accuracy.

Fig. 11: Abundance maps of four materials from the Jasper Ridge data obtained by different algorithms.

Fig. 12: Comparison of endmembers between SSF-Net (blue curves) and the corresponding GT (red curves) on the Jasper dataset.

### _Ablation Analysis_

To verify the effectiveness of each module in the SSF-Net network, ablation experiments on the spectral attention module and the spatial attention module were conducted on the Urban dataset. As can be seen from Table VII, when both the spectral attention module in the spectral branch and the spatial attention module in the spatial branch are removed, the performance of the network becomes the worst, which indicates to a certain extent that the potential information contained in the HSI data is not fully exploited.

Fig. 13: Abundance maps of five materials from the Urban data obtained by different algorithms.

Fig. 14: Comparison of endmembers between SSF-Net (blue curves) and the corresponding GT (red curves) on the Urban dataset.
In the SSF-Net network, the addition of the spectral attention module enhances the network's ability of abundance estimation, while the spatial attention module improves the network's endmember extraction accuracy. It is worth emphasizing that the spectral attention module plays a crucial role in enhancing the abundance estimation capability mainly by focusing on the distinct disparities between different spectral features, whereas the spatial attention module exhibits higher sensitivity to the differences between different spatial regions, which significantly enhances the endmember extraction ability of the network. The results of the ablation experiments show that the joint use of the spectral and spatial attention modules in SSF-Net mines the high-dimensional features of the HSI more effectively, thus obtaining better unmixing results.

### _Discussion_

A quantitative analysis of the experimental results on the four datasets shows that SSF-Net delivers clearly superior unmixing performance compared with the other methods. Meanwhile, the complexity of the spatial distribution of features in real images far exceeds that of the synthetic dataset, resulting in the poor performance of some NMF-based unmixing methods on the real datasets. SNMF-Net, built by unrolling an \\(l_p\\) sparsity constrained NMF model, has high physical interpretability and therefore achieves higher accuracy than the other NMF methods on the real datasets; however, because it performs unmixing only at the pixel level without introducing spatial information, its unmixing accuracy is still not particularly good. DAEU, another pixel-level unmixing method, pays more attention to endmember extraction, and a good endmember extraction result further enhances the abundance estimation, so it also achieves relatively excellent unmixing accuracy in these comparative experiments. Although CNNAEU introduces spatial information, its loss function considers only the SAD, which makes its unmixing results inferior to those of the other DL unmixing networks. The accuracy of CyCU-Net is poor because its receptive field does not cover the complete image, depriving it of wide-range information and long-range dependencies. Most importantly, the method proposed in this study achieves the optimal accuracy on the real datasets, and its unmixing results can be perceived as the closest to the ground truth in terms of visualization. This further confirms the excellence of the proposed network model in the unmixing task.

## V Conclusion

In this study, a convolutional AE HU network called SSF-Net is proposed that ingeniously integrates both spatial and spectral features. The network first employs a regional VCA algorithm to extract the pseudoendmembers from the HSI data. It then utilizes a spatial attention module along with a spectral attention module to learn, respectively, the spatial difference information contained within the HSI data and the spectral difference information among the pseudoendmembers, in such a way that the network makes the best use of the information inherent in the HSI data, resulting in more reasonable and superior unmixing results.
Experiments on synthetic and real hyperspectral datasets confirm the effectiveness of the SSF-Net network proposed in this article, which attains higher unmixing accuracy than other state-of-the-art HU methods. The proposed SSF-Net is built on the LMM; however, considering the complexity of the hyperspectral imaging process, the NLMM is better suited to describing its imaging principles. Our future research therefore aims to develop more general and powerful NLMM-based unmixing networks that integrate spatial-spectral features. Meanwhile, the introduction of LiDAR data to aid HU has been shown to be feasible; designing a network architecture that fuses the spatial-spectral features of HSI with those extracted from LiDAR point clouds can thus help to address the unmixing issue more effectively.

## References

* [1] J. M. Bioucas-Dias et al., "Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches," _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 5, no. 2, pp. 354-379, Apr. 2012, doi: 10.1109/JSTARS.2012.2194696.
* [2] X. Mei, Y. Ma, C. Li, F. Fan, J. Huang, and J. Ma, "Robust GBM hyperspectral image unmixing with superpixel segmentation based low rank and sparse representation," _Neurocomputing_, vol. 275, pp. 2783-2797, Jan. 2018, doi: 10.1016/j.neucom.2017.11.052.
* [3] J. Jiang, J. Ma, Z. Wang, C. Chen, and X. Liu, "Hyperspectral image classification in the presence of noisy labels," _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 2, pp. 851-865, Feb. 2019, doi: 10.1109/TGRS.2018.2861992.
* [4] D. Manolakis, C. Siracusa, and G. Shaw, "Hyperspectral subpixel target detection using the linear mixing model," _IEEE Trans. Geosci. Remote Sens._, vol. 39, no. 7, pp. 1392-1409, Jul. 2001, doi: 10.1109/36.934072.
* [5] J. Ma, H. Zhou, J. Zhao, Y. Gao, J. Jiang, and J. Tian, "Robust feature matching for remote sensing image registration via locally linear transforming," _IEEE Trans. Geosci. Remote Sens._, vol. 53, no. 12, pp. 6469-6481, Dec. 2015, doi: 10.1109/TGRS.2015.2441954.
* [6] Q. Jin et al., "Gaussian mixture model for hyperspectral unmixing with low-rank representation," in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, Yokohama, Japan, 2019, pp. 294-297, doi: 10.1109/IGARSS.2019.8898410.
* [7] D. Hong, N. Yokoya, J. Chanussot, J. Xu, and X. X. Zhu, "Joint and progressive subspace analysis (JPSA) with spatial-spectral manifold alignment for semisupervised hyperspectral dimensionality reduction," _IEEE Trans. Cybern._, vol. 51, no. 7, pp. 3602-3615, Jul. 2021, doi: 10.1109/TCYB.2020.3028931.
* [8] N. Keshava and J. F. Mustard, "Spectral unmixing," _IEEE Signal Process. Mag._, vol. 19, no. 1, pp. 44-57, Jan. 2002, doi: 10.1109/79.794727.
* [9] P. Poulet, B. L. Ehlmann, J. F. Mustard, M. Vincendon, and Y. Langevin, "Modal mineralogy of planetary surfaces from visible and near-infrared spectral data," in _Proc. 2nd Workshop Hyperspectral Image Signal Process.: Evol. Remote Sens. (WHISPERS)_, Reykjavik, Iceland, 2010, pp. 1-4, doi: 10.1109/WHISPERS.2010.5594898.
* [10] D. A. Roberts, M. Gardner, R. Church, S. Ustin, G. Scheer, and R. O. Green, "Mapping chaparral in the Santa Monica Mountains using multiple endmember spectral mixture models," _Remote Sens. Environ._, vol. 65, no. 3, pp. 267-279, Sep. 1998, doi: 10.1016/S0034-4257(98)00037-6.
Chanussot, \"Using high-resolution airborne and satellite imagery to assess crop growth and yield variability for precision agriculture,\" _Proc. IEEE_, vol. 101, no. 3, pp. 582-592, Mar. 2013, doi: 10.1109/PROC.2012.2196249. * [12] M. Teke, H. S. Deveel, O. Halligolu, S. Z. Gurbuz, and U. Sakarya, \"A short survey of hyperspectral remote sensing applications in agriculture,\" in _Proc. 6th Int. Conf. Recent Adv. Space Technol._, Istanbul, Turkey, 2013, pp. 171-176, doi: 10.1109/RAST.2013.6581194. * [13] R. Heylen, M. Parenette, and P. Gader, \"A review of nonlinear hyperspectral unmixing methods,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 7, no. 6, pp. 1844-1868, Jun. 2014, doi: 10.1109/JS-TARS.2014.2320576. * [14] D. C. Heinz and C.-H. Chang, \"Fully constrained least squares linear spectral mixture analysis method for material quantification in hyperspectral imagery,\" _IEEE Trans. Geosci. Remote Sens._, vol. 39, no. 3, pp. 529-545, Mar. 2001, doi: 10.1109/36.911111. * [15] M. E. Winter, \"N-FINDR: An algorithm for fast autonomous spectral end-member determination in hyperspectral data,\" _Proc. SPIE_, vol. 3753, pp. 266-275, 1999, doi: 10.1117/12.366289. * [16] J. M. P. Nascimento and J. M. B. Dias, \"Vertex component analysis: A fast algorithm to umink hyperspectral data,\" _IEEE Trans. Geosci. Remote Sens._, vol. 43, no. 4, pp. 898-910, Apr. 2005, doi: 10.1109/TGRS.2005.844293. * [17] N. Dobigeon, S. Moussaoui, M. Coulon, J.-Y. Tourneret, and A. O. Hero, \"Joint Bayesian endmember extraction and linear unmixing for hyperspectral imagery,\" _IEEE Trans. Signal Process._, vol. 57, no. 11, pp. 4355-4368, Nov. 2009, doi: 10.1109/TSP.2009.2025797. * [18] L. Miao and H. Qi, \"Endmember extraction from highly mixed data using minimum volume constrained nonnegative matrix factorization,\" _IEEE Trans. Geosci. Remote Sens._, vol. 45, no. 3, pp. 765-777, Mar. 2007, doi: 10.1109/TGRS.2006.888466. * [19] O. Eches, N. Dobigeon, C. Mailhes, and J.-Y. Tourneret, \"Bayesian estimation of linear mixtures using the normal compositional model: Application to hyperspectral imagery,\" _IEEE Trans. Image Process._, vol. 19, no. 6, pp. 1403-1413, Jun. 2010, doi: 10.1109/TIP.200.2042993. * [20] J. M. P. Nascimento and J. M. Bioucas-Dias, \"Hyperspectral unmixing based on mixtures of Dirichlet components,\" _IEEE Trans. Geosci. Remote Sens._, vol. 50, no. 3, pp. 863-878, Mar. 2012, doi: 10.1109/TGRS.2011.2163941. * [21] S. Jia and Y. Qian, \"Constrained nonnegative matrix factorization for hyperspectral unmixing,\" _IEEE Trans. Geosci. Remote Sens._, vol. 47, no. 1, pp. 161-173, Jan. 2009, doi: 10.1109/TGRS.2008.2002882. * [22] F. Zhu, Y. Wang, B. Fan, S. Xiang, G. Meng, and C. Pan, \"Spectral unmixing via data-guided sparsity,\" _IEEE Trans. Image Process._, vol. 23, no. 12, pp. 5412-5427, Dec. 2014, doi: 10.1109/TIP.2014.2363423. * [23] R. Huang, X. Li, and L. Zhao, \"Spectral-spatial robust nonnegative matrix factorization for hyperspectral unmixing,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 10, pp. 8235-8254, Oct. 2019, doi: 10.1109/TGRS.2019.2919616. * [24] Y. Qian, F. Xiong, S. Zeng, J. Zhou, and Y. Y. Tang, \"Matrix-vector nonnegative tensor factorization for blind unmixing of hyperspectral imagery,\" _IEEE Trans. Geosci. Remote Sens._, vol. 55, no. 3, pp. 1776-1792, Mar. 2017, doi: 10.1109/TGRS.2016.2633279. * [25] F. Xiong, Y. Qian, J. Zhou, and Y. Y. Tang, \"Hyperspectral unmixing via total variation regularized nonnegative tensor factorization,\" _IEEE Trans. Geosci. 
* [25] F. Xiong, Y. Qian, J. Zhou, and Y. Y. Tang, "Hyperspectral unmixing via total variation regularized nonnegative tensor factorization," _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 4, pp. 2341-2357, Apr. 2019, doi: 10.1109/TGRS.2018.2872888.
* [26] Y. Qian, F. Xiong, Q. Qian, and J. Zhou, "Spectral mixture model inspired network architectures for hyperspectral unmixing," _IEEE Trans. Geosci. Remote Sens._, vol. 58, no. 10, pp. 7418-7434, Oct. 2020, doi: 10.1109/TGRS.2020.2982490.
* [27] F. Xiong, J. Zhou, M. Ye, J. Lu, and Y. Qian, "NMF-SAE: An interpretable sparse autoencoder for hyperspectral unmixing," in _Proc. IEEE Int. Conf. Acoust., Speech Signal Process._, Toronto, ON, Canada, 2021, pp. 1865-1869, doi: 10.1109/ICASSP39728.2021.9414084.
* [28] Y. Qian, F. Xiong, M. Ye, and J. Zhou, "Model-inspired deep neural networks for hyperspectral unmixing," in _Advances in Hyperspectral Image Processing Techniques_, C.-I Chang, Ed. Hoboken, NJ, USA: Wiley, 2022, pp. 363-403, doi: 10.1002/9781119687788.ch13.
* [29] W. He, H. Zhang, and L. Zhang, "Hyperspectral unmixing using total variation regularized reweighted sparse non-negative matrix factorization," in _Proc. IEEE Int. Geosci. Remote Sens. Symp._, Beijing, China, 2016, pp. 7034-7037, doi: 10.1109/IGARSS.2016.7730834.
* [30] S. Ozkan, B. Kaya, and G. Bozdagi Akar, "EndNet: Sparse autoencoder network for endmember extraction and hyperspectral unmixing," _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 1, pp. 482-496, Jan. 2019, doi: 10.1109/TGRS.2018.2856929.
* [31] Y. Qu and H. Qi, "uDAS: An untied denoising autoencoder with sparsity for spectral unmixing," _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 3, pp. 1698-1712, Mar. 2019, doi: 10.1109/TGRS.2018.2868690.
* [32] Y. Su, J. Li, A. Plaza, A. Marinoni, P. Gamba, and S. Chakravortty, "DAEN: Deep autoencoder networks for hyperspectral unmixing," _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 7, pp. 4309-4321, Jul. 2019, doi: 10.1109/TGRS.2018.2890633.
* [33] Z. Han, D. Hong, L. Gao, J. Yao, B. Zhang, and J. Chanussot, "Multimodal hyperspectral unmixing: Insights from attention networks," _IEEE Trans. Geosci. Remote Sens._, vol. 60, 2022, Art. no. 5524913, doi: 10.1109/TGRS.2022.3155794.
* [34] L. Gao, Z. Han, D. Hong, B. Zhang, and J. Chanussot, "CyCU-Net: Cycle-consistency unmixing network by learning cascaded autoencoders," _IEEE Trans. Geosci. Remote Sens._, vol. 60, 2022, Art. no. 5503914, doi: 10.1109/TGRS.2021.3064958.
* [35] D. Hong et al., "Endmember-guided unmixing network (EGU-Net): A general deep learning framework for self-supervised hyperspectral unmixing," _IEEE Trans. Neural Netw. Learn. Syst._, vol. 33, no. 11, pp. 6518-6531, Nov. 2022, doi: 10.1109/TNNLS.2021.3082289.
* [36] B. Palsson, M. O. Ulfarsson, and J. R. Sveinsson, "Convolutional autoencoder for spectral-spatial hyperspectral unmixing," _IEEE Trans. Geosci. Remote Sens._, vol. 59, no. 1, pp. 535-549, Jan. 2021, doi: 10.1109/TGRS.2020.2992743.
* [37] L. Qi, F. Gao, J. Dong, X. Gao, and Q. Du, "SSCU-Net: Spatial-spectral collaborative unmixing network for hyperspectral images," _IEEE Trans. Geosci. Remote Sens._, vol. 60, 2022, Art. no. 5407515, doi: 10.1109/TGRS.2022.3150970.
* [38] P. Ghosh, S. K. Roy, B. Koirala, B. Rasti, and P. Scheunders, "Hyperspectral unmixing using transformer network," _IEEE Trans. Geosci. Remote Sens._, vol. 60, 2022, Art. no. 5535116, doi: 10.1109/TGRS.2022.3196057.
* [39] C. Cui, Y. Zhong, X. Wang, and L. Zhang, "Realistic mixing miniature scene hyperspectral unmixing: From benchmark datasets to autonomous unmixing," _IEEE Trans. Geosci. Remote Sens._, vol. 61, 2023, Art. no. 5502515, doi: 10.1109/TGRS.2023.3236677.
* [40] B. Somers, M. Zortea, A. Plaza, and G. P. Asner, "Automated extraction of image-based endmember bundles for improved spectral unmixing," _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 5, no. 2, pp. 396-408, Apr. 2012, doi: 10.1109/JSTARS.2011.2181340.
* [41] B. Xu, N. Wang, T. Chen, and M. Li, "Empirical evaluation of rectified activations in convolutional network," 2015, arXiv:1505.00853.
* [42] O. Eches, N. Dobigeon, and J.-Y. Tourneret, "Enhancing hyperspectral image unmixing with spatial correlations," _IEEE Trans. Geosci. Remote Sens._, vol. 49, no. 11, pp. 4239-4247, Nov. 2011, doi: 10.1109/TGRS.2011.2140119.
* [43] P. V. Giampouras, K. E. Themelis, A. A. Rontogiannis, and K. D. Koutroumbas, "Simultaneously sparse and low-rank abundance matrix estimation for hyperspectral image unmixing," _IEEE Trans. Geosci. Remote Sens._, vol. 54, no. 8, pp. 4775-4789, Aug. 2016, doi: 10.1109/TGRS.2016.2551327.
* [44] J. Theiler, A. Ziemann, S. Matteoli, and M. Diani, "Spectral variability of remotely sensed target materials: Causes, models, and strategies for mitigation and robust exploitation," _IEEE Geosci. Remote Sens. Mag._, vol. 7, no. 2, pp. 8-30, Jun. 2019, doi: 10.1109/MGRS.2019.2890997.
* [45] R. A. Borsoi et al., "Spectral variability in hyperspectral data unmixing: A comprehensive review," _IEEE Geosci. Remote Sens. Mag._, vol. 9, no. 4, pp. 223-270, Dec. 2021, doi: 10.1109/MGRS.2021.3071158.
* [46] R. Rajabi and H. Ghassemian, "Spectral unmixing of hyperspectral imagery using multilayer NMF," _IEEE Geosci. Remote Sens. Lett._, vol. 12, no. 1, pp. 38-42, Jan. 2015.

Huizheng Yao received the B.S. degree in remote sensing science and technology from Shandong Agricultural University, Tai'an, China, in 2021. He is currently working toward the master's degree in surveying and mapping engineering with the School of Marine and Spatial Information, China University of Petroleum (East China), Qingdao, China.

Jie Zhang received the B.S. and M.S. degrees in mathematics from Inner Mongolia University, Hohhot, China, in 1984 and 1987, respectively, and the Ph.D. degree in applied mathematics from Tsinghua University, Beijing, China, in 1993. He is a Professor with the College of Oceanography and Space Informatics, China University of Petroleum (East China), Qingdao, China, and the Laboratory of Marine Physics and Remote Sensing, Ministry of Natural Resources, First Institute of Oceanography, Qingdao. He has a broad interest in marine physics and remote sensing applications. His research mainly focuses on the SAR retrieval of ocean dynamic processes and the SAR detection of marine targets, ocean hyperspectral remote sensing, high-frequency surface-wave radar ocean detection techniques, and the integration of marine remote sensing application systems. He has served as a member of multiple domestic/international committees and as a principal investigator/co-investigator of many projects from the National Science Foundation of China, the State High-Tech Development Plan (863), and other funding agencies. He has supervised nearly 40 Ph.D. students and has authored or coauthored more than 200 articles.

Han Gao (Member, IEEE) received the B.S. and M.S. degrees in geodesy and surveying engineering and the Ph.D. degree in photogrammetry and remote sensing from Central South University, Changsha, China, in 2015, 2018, and 2022, respectively.
Since 2022, he has been a Lecturer with the College of Oceanography and Space Informatics, China University of Petroleum (East China), Qingdao, China. His research interests include crop and ocean remote sensing, time-series polarimetric SAR image processing, and pattern recognition. Dr. Gao is a Reviewer for _Remote Sensing of Environment_, _IEEE Geoscience and Remote Sensing Letters_, and several other international journals in the remote sensing and image processing field.
Abstract: In recent years, deep learning has received tremendous attention in the field of hyperspectral unmixing (HU) due to its powerful learning capabilities. In particular, the unsupervised unmixing method based on the autoencoder (AE) has become a research hotspot. Most current AE unmixing networks focus mainly on the information of pixels and their neighborhoods in the image; however, they make insufficient use of the spatial heterogeneity and the spectral differences of endmembers in hyperspectral image (HSI) data. To this end, an AE HU network named SSF-Net is proposed for fusing spatial-spectral features. The network first extracts pseudoendmember information from the HSI using a regional vertex component analysis algorithm. Then, a dual-branch feature fusion module incorporating a spatial-spectral attention mechanism is constructed to make full use of the information in the HSI data, thereby improving the network's unmixing performance. It is worth noting that SSF-Net can fuse spatial-spectral information and utilize different attention maps to obtain more significant spectral difference information and more discriminative spatial difference information about the scene. Experimental results on synthetic and real datasets demonstrate that the proposed SSF-Net outperforms state-of-the-art unmixing algorithms.

Index Terms: Attention, autoencoder (AE), deep learning (DL), feature fusion, hyperspectral unmixing (HU).
# Soil Resource Appraisal of Emirate of Dubai for Optimum Landuse Planning

B. R. M. Rao, M. A. Fyzee, G. Sujatha, M. M. Waddkar
National Remote Sensing Agency, Hyderabad - 500037, [email protected]

## 1 Introduction

There is evidence to show that a majority of land resources the world over are under pressure and are undergoing degradation at an unacceptable rate. The situation in Dubai Emirate is no different. Moreover, Dubai being located in an arid desert belt, it is highly sensitive to a number of critical environmental issues. Soil is one such important issue, as it is a non-renewable natural resource. Soil is in fact at the heart of terrestrial ecology and is vital to our very existence. Information on soils with regard to their nature, extent, and spatial distribution, along with their potential and limitations, is required for a variety of uses, namely agriculture, engineering, sanitation, recreation, and landscaping. In addition, such information is also required for modelling and environmental impact analysis. Therefore, it is imperative that we manage and conserve soils judiciously to meet the growing need for food, fodder, fibre, and fuel. For this purpose, we must have in-depth knowledge of the different soils: their morphology, characterization, and behaviour, the kind and degree of problems they present, and their extent and distribution in the landscape. This can be achieved through soil survey and mapping; the scale of mapping, however, depends on the purpose and the type of terrain. Soil surveys can be carried out using the modern technology of spaceborne remote sensing. This technique has in fact proved to be a powerful tool, because it enables the study of resources in the spatial domain in a time- and cost-effective manner. Therefore, remote sensing techniques are now operationally used for studying soil resources. A survey of the literature reveals that satellite data from LANDSAT MSS/TM, SPOT, IRS LISS-I, II, III, and IV, PAN, IKONOS, etc. have been used to map soils at scales from 1:250,000 to 1:12,500. The information on soil resources was generated through an understanding of the spectral response pattern of soils (Westin and Frazee, 1976; Dwivedi, 1985). Studies with Landsat TM, SPOT, and IRS satellites set the trend of rapid development and wider acceptability of remote sensing applications in soil resources study (Biswas, 1987; Frazier and Cheng, 1989; NRSA and AIS&LUS, 1986). Similar studies have been conducted using SPOT HRV (Agbu, 1991) and Indian Remote Sensing Satellite (IRS-1A, IRS-1B, IRS-1C/1D) Linear Imaging Self Scanning sensor (LISS-I, LISS-II, LISS-III) data (Rao _et al._, 1998; Rao _et al._, 2001). The National Remote Sensing Agency, Hyderabad, has prepared soil maps at 1:50,000 scale on an operational basis (NRSA 1995, NRSA 1996, NRSA 2001, NRSA 2002) in various parts of the country for various user departments such as Agriculture and Command Area Authorities. The soil mapping was carried out for specific purposes such as land capability classification, land irrigability assessment, and optimum land use planning. To address the critical environmental issues, the UAE government and the Dubai Municipality have taken various measures; they have, in fact, considered environmental protection one of the prerequisites for development. In this endeavor, the National Remote Sensing Agency, Hyderabad, has prepared an inventory of the soil resources of Dubai Emirate at the request of Dubai Municipality.

## 2 Study Area

The project area (Dubai Emirate) covers an area of 4000 sq km.
The overall climate of the Emirates is subtropical, warm, and arid. Air temperatures range between 35°C and 50°C at mid-day from May to October and between 20°C and 35°C at mid-day during the winter months. In the interior of the desert, the highest ground temperatures during summer rise to 70°C and the lowest may fall below 6°C during the winter months. The average annual rainfall of the Emirate is less than 100 mm, occurring mostly during the winter months. Some monsoon showers are also received during the summer months on the east coast and in the mountain belt that forms the watershed between the Arabian Gulf and the Gulf of Oman. The rainfall, however, is very erratic and varies extremely both from year to year and from place to place. Some moisture also condenses in the form of fog and dew, especially in the coastal belts. Strong winds and sand storms are also of common occurrence throughout the Emirate; they are especially frequent and severe during the summer months. Sand dunes are the dominant feature of the landscape over most of the Emirate.

## 3 Methodology

The soil maps of Dubai and Hatta were prepared at 1:25,000 scale using Indian Remote Sensing Satellite data (IRS-1D LISS-III and IRS-P6 LISS-IV). The soils of the study area were classified as per USDA (2003) up to the soil series and association level. Besides the satellite data, published reports and climatic data were also used. Essentially, a soil survey consists of the systematic examination, description, classification, and mapping of the soils of an area, and it comprises a group of interlinked operations involving:

* Preliminary visual interpretation of satellite data
* Fieldwork to study important characteristics of soils and associated land characteristics such as landform, natural vegetation, slope, etc.
* Laboratory analysis to support and supplement the field observations
* Correlation and classification of soils into defined taxonomic units
* Mapping, that is, establishing and drawing the boundaries of the different kinds of soils on a standard geographical base map

### Preliminary visual interpretation

The pre-field interpretation consisted of monoscopic visual interpretation of IRS-1D LISS-III and IRS-P6 LISS-IV data at 1:25,000 scale based on standard remote sensing techniques, using image characteristics such as tone, texture, pattern, shape, size, and association in conjunction with the collateral information available in the form of published maps and reports. On-screen visual interpretation was carried out on the satellite data, which were subjected to different image enhancement techniques so as to derive maximum information. The correlations thus observed were validated in the field. Having delineated the broad physiographic units and the underlying parent material, further divisions within these units were made based on land use/land cover, slope, erosion, drainage pattern, and image elements such as tone, texture, size, pattern, shape, and association. A tentative interpretation key in terms of lithology, physiography, land use/land cover, erosion/salinity/alkalinity hazards, and image elements was developed. Sample strips representing ample variation in the delineated physiographic units were selected for field verification, and their locations were transferred onto the base map for precise location on the ground.
### Field work

A field visit was undertaken in Dubai to study important characteristics of soils and associated land features for mapping soils. A preliminary study of the landform, geology, climate and vegetation of the study area was undertaken. After these preliminary studies, the survey work was taken up with the objective of studying soils under natural conditions and preparing a mapping legend based on soil properties. A detailed soil-site study was undertaken in each soil-mapping unit by general traversing and by collecting surface soil, minipit and soil profile observations at intervals depending on soil variability. The soil profiles/pedons (a vertical cut from the surface down to the hard rock from which the soil is formed gives the soil profile, within which several successive characteristic layers can be identified) were studied by digging pits of approximately 1 x 0.5 x 1 m in dimension (length, width, depth) at representative areas. Each of these layers (horizons) was studied for various morphological features such as colour, texture, structure, consistency, etc. The depth to bedrock or a compact layer was determined. External features such as slope, erosion and surface stones were also noted. The frequent profile sampling enabled determination of the depth of the various horizons and also of the horizons of gains (illuvial) and losses (eluvial). Observations on land use and land cover were also noted.

### Laboratory analysis

Laboratory analysis was carried out on the soil samples collected during the fieldwork. All the physical and chemical analyses were carried out in the lab as per standard international procedures.

### Post field interpretation

Preliminarily interpreted soil boundaries from IRS-P6 were modified using the field information, and the final thematic details were transferred onto the base map. Finally, the soils were classified in the light of soil morphological features and soil physical and chemical properties as described in the soil survey procedure (USDA, 2003). Thus, the landscape map was converted into a soil-scape map in terms of soil series and/or associations thereof.

## 4 Results and discussion

### Soils and their classification

The relationship between the physiography of an area and its soils has been widely recognized, as the factors involved in the physiographic processes correspond closely to those of soil formation. This relationship between landscape features and soil conditions makes it possible to predict the nature and distribution pattern of different soils. The present soil-scape is the result of the different geomorphic processes that have taken place in the past and modified the soil-scape into its present manifestation. Sufficient observations in the form of profiles and minipits have been recorded for the study. Based on the variations in soil and site characteristics, 26 soil series have been identified in the Dubai area and 13 series in the Hatta area.

### Description of soils

The soils are generally coarse, sandy, highly calcareous and undeveloped. They are deficient in organic matter. Soils in the coastal belt and in low-lying areas and depressions are highly saline, whereas the soils in the interior of the desert are either saline or sodic. The major landscapes identified in the study area are the coastal plain, the lower aeolian plain and the upper aeolian plain.
These major landscape units were further subdivided into different physiographic units, such as beach, tidal flats/mudflats, salt flats (young and old) and dunes over the coastal plain. The lower aeolian plain has low sand dunes, longitudinal dunes, interdunal flat areas (sandy, saline and sodic), dunal complex areas, residual hills and a linear ridge. The upper aeolian plain has dunal complexes, interdunal flats (sandy and sodic) and low sand dunes. The soil map of Dubai is shown in Figure 1. Table 1 shows the different soil mapping units encountered in the study area along with their description and areal extent. In the hilly area of Hatta, the major physiographic units identified are the structural valley region, the piedmont area, residual hills, denudational hills (peridotite/dunite/gabbro), denudational hills (limestone/dolomite/marble) and structural hills (peridotite/dunite/gabbro interbedded). The soil temperature regime of Dubai is hyperthermic, and the soil moisture regime is aridic/torric. In general, all the soils of Dubai are calcareous. As the study area falls in an arid region, the soils occurring there will normally have an aridic (torric) moisture regime. From the analysis of all the soil samples collected during the ground truth, it can be inferred that all the soils of the Dubai and Hatta areas are calcareous.

Figure 1: Soil map of Dubai Emirate

### The land capability units in Dubai

The land capability grouping for the study area has been made by grouping the inherent soil characteristics, external land features and environmental factors following the criteria laid down in the soil survey manual (All India Soil and Land Use Survey, 1970). In the Dubai area four land capability classes were identified, and they are discussed below.

Land capability Class IV: These lands have severe limitations due to shallowness, gravel and stone. The major limitation of this land capability unit for carrying out any agricultural practice in the study area is the climate; besides, the soils also have shallow depth and severe root zone limitations. The units identified under subclass IVes are soil mapping units 8, 9, 12 and 14, comprising the soil series Al Awir 2, Margham, Al Murquab, Al Labhab 1, Al Labhab 2, Al Faqa, Margham 2, Margham 3 and Hatta 4. The total area covered under capability Class IV is 68589 hectares, accounting for 17% of the study area. These lands are marginally suitable for agriculture; as the average annual rainfall of Dubai is extremely low (<100 mm per year), agriculture can be taken up only with the support of assured irrigation.

Land capability Class VI: These lands are non-arable due to limitations of severe soil erosion. The areas earmarked under this category include the low sand dunes, which are subjected to severe wind erosion. The soil mapping units included under this category are 5, 6, 7, 10, 12 and 15, which include the soil series Jebel Ali 6, Al Lisaili, Jumeirah, Al Awir 1, Um Nahad, Emirates Rd. 2, Emirates Rd. 1, Tawi Nizwa 1, Tawi Nizwa 2, Hatta 1, Hatta 2 and Hatta 3. The area covered under this subclass (VIes) is 259372 hectares, accounting for 66% of the study area in Dubai.

Land capability Class VII: These lands are also non-arable due to limitations of steep slopes and shallow soil depth caused by soil erosion. The soil series included under this category is Tawi Nizwa 4. The area covered under this subclass is 36 hectares, accounting for 0.09% of the study area.
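For transparency in the areal accounting, the short sketch below recomputes the percentage shares of the two largest capability classes from the areas quoted above, assuming the stated 4000 sq km project area as the denominator; the variable names are ours, and the small differences from the quoted shares reflect rounding and the exact mapped total in the source.

```python
# A minimal sketch, assuming the 4000 sq km project area as the
# denominator (1 sq km = 100 ha); names and layout are illustrative.
TOTAL_HA = 4000 * 100

class_areas_ha = {
    "IVes": 68589,   # reported as about 17 % of the study area
    "VIes": 259372,  # reported as about 66 %
}

for subclass, area in class_areas_ha.items():
    share = 100.0 * area / TOTAL_HA
    print(f"Class {subclass}: {area} ha -> {share:.1f} % of the study area")
```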
\\begin{table}
\\begin{tabular}{|c|l|p{160pt}|l|c|r|}
\\hline
**Unit** & **Soil series** & **Description** & **Classification** & **Capability** & **Area (ha)** \\\\
\\hline
 & Al Faqa & & & & \\\\
\\hline
14 & Margham 2 & Brown to yellowish brown, shallow, moderately well drained, fine sand, sodic; associated with soils which are yellowish brown, very shallow, well drained, loamy sand, sodic, coupled with sand dunes. & Sandy skeletal (sodic) & IVes & 3907 \\\\
\\hline
15 & Tawi Nizwa 1 & Strong brown, very deep, excessively drained, fine sand, sodic; associated with soils which are strong brown, moderately deep, well drained, fine sand, sodic, occurring over gently sloping dunes associated with interdunal flats. & Typic (sodic) & VIes & 19946 \\\\
\\hline
16 & Tawi Nizwa 3 & Reddish brown, very deep, excessively drained, fine sand, sodic, with surface covered by iron and manganese concretions, occurring over piedmont covered with sand dunes. & Typic (sodic) & VIIIe & 1513 \\\\
\\hline
17 & Tawi Nizwa 4 & Brown, shallow, well drained, fine sand, sodic, generally occurring over hillside slopes (8-15%) with sand dune cover. & Rock outcrops, sandy skeletal (sodic) & VIIes & 36 \\\\
\\hline
18 & Linear Ridge & & & & \\\\
\\hline
19 & Barchans & & & & \\\\
\\hline
20 & Hatta 1 & Brown, very shallow, well drained, loamy sand, skeletal and stony, surface covered with stones (>75%); associated with brown, very shallow, well drained, gravelly sand, stony, occurring over the structural valley region. & Loamy skeletal, Typic & VIes & 1289 \\\\
\\hline
21 & Hatta 2 & Brown, very shallow, well drained, sand, stony; associated with brown, very shallow, well drained, loamy sand, skeletal soils with surface covered with stones (>75%), occurring over the structural valley region. & Sandy skeletal, Typic & VIes & 552 \\\\
\\hline
22 & Hatta 4 & Dusky red, moderately deep, silty clay loam, moderately well drained, gravelly, surface covered with stones (40-75%). & Loamy skeletal, Typic & IVes & 12 \\\\
\\hline
 & Hatta 5 & Dark grayish brown, very shallow, sandy clay loam, well drained, gravelly, with surface cover of stones (>75%), occurring over wadi areas. & Loamy skeletal, Fluventic & VIes & 280 \\\\
\\hline
24 & Hatta 7 & Brown, very shallow to shallow, well drained, loam, skeletal soils with surface covered with stones (>75%), occurring over the piedmont area. & Loamy skeletal, Typic & VIes & 54 \\\\
\\hline
\\end{tabular}
\\end{table} Table 1: Soil mapping units encountered in the study area along with their description, classification, land capability subclass and areal extent (ha).
### Salt affected soils

Salt-affected soils are characteristic of arid environments. In regions where precipitation is less than potential evapo-transpiration, the cations released by mineral weathering accumulate because there is not enough rain to leach them away thoroughly. The salt-affected soils occurring in Dubai are either saline or sodic in nature. Besides, the quality of the ground water varies a great deal in the Emirates. Water in the shallow aquifers, derived from annual precipitation or its sub-surface flow from the mountains, contains fewer salts. The ground water derived from the deeper aquifers is generally more brackish. The quality of the ground water gradually improves as one moves away from the coast into the interior. If the more brackish water is used for irrigation purposes, it quickly salinizes the soils.

## 5 Optimal Land Use Plan

During the study, the various problems and potentials of Dubai were identified. The main problems include very low rainfall, extreme variation in temperature, strong winds and sand storms, very coarse-textured soils, poor soil fertility, soil salinity and sodicity, degradation of natural vegetation, and depletion and poor quality of ground water resources. Based on the resource constraints prevalent in Dubai, the critical areas can be managed effectively by adopting measures for the optimum utilization of the available resources. One of the major problems is shifting sand dunes, which occupy large areas in Dubai and therefore present a major problem confronting development. Roads, habitations, cultivated land and forest plantations are liable to be encroached upon by moving sand and sand dunes in most places in the Dubai Emirate. This encroachment by sand is considerably greater where the natural vegetation in the surrounding areas has either been destroyed or depleted (Gupta, 1990). A variety of sand dunes are found in various parts of the Emirate (Mohamed Khan, 2003). They are continually extending, moving or changing their shapes and forms. The measures recommended are control of wind erosion and sand dune stabilization. Other measures that can be adopted for optimal land utilization include afforestation, development of pasture lands/silvipasture, horticulture development, management of salt-affected soils and identification of areas suitable for urbanization.

## 6 Conclusions

The study of the soils of Dubai Emirate using high-resolution multispectral satellite data has provided a detailed natural resource inventory outlining the problems and potentials of the various soils. As the Emirate falls in an extremely arid area, permanent or sustained agriculture is not possible without artificial irrigation.
## References

Agbu, P.A., 1991. Comparisons between spectral mapping units derived from SPOT image texture and field soil map units. Photogrammetric Engineering and Remote Sensing, 57(4), pp. 397-405.

Biswas, R.R., 1987. A soil map through Landsat satellite imagery in part of Auranga catchment in Ranchi and Palamau districts of Bihar, India. International Journal of Remote Sensing, 8(4), pp. 541-543.

Dwivedi, R.S., 1985. A multistage approach to mapping soil resources from remotely sensed data. Soil Survey and Land Evaluation, 5(1), pp. 13-18.

Frazier, B.E. and Cheng, Y., 1989. Remote sensing of soils in the Eastern Palouse region with Landsat Thematic Mapper. Remote Sensing of Environment, 28, pp. 317-325.

Gupta, J.P., 1990. Sand dunes and their stabilisation. In: I.P. Abrol and V.V. Dhruva Narayana (Eds.), Technologies for Wasteland Development, ICAR, New Delhi.

Mohamed Khan, I.R., 2003. Sand and Sand Dune Stabilization in the United Arab Emirates. Emirates Natural History Group.

NRSA, 1995. Soil resource mapping for part of Kurnool District, Andhra Pradesh, India. IMSD project report.

NRSA, 1996. Soil survey and land evaluation for agricultural land use planning in tribal areas of Andhra Pradesh.

NRSA and AIS&LUS, 1986. Utility and relative efficiency of various remote sensing techniques in soil and land use data abstraction, Chitradurga (RS) project, Karnataka, India. Project report.

NRSA, 2001. Management of salt affected soils and rational land use at village level using remote sensing and GIS.

NRSA, 2002. Perspective land use planning of Dadra & Nagar Haveli UT - a remote sensing based approach.

Rao, B.R.M., Sreenivas, K., Fyzee, M.A. and Ravi Sankar, T., 1998. Evaluation of IRS-1C PAN data for mapping soil resources. NNRMS Bulletin, (22), pp. 68-71.

Rao, B.R.M., Fyzee, M.A., Thammappa, S.S. and Ramana, K.V., 2001. Utility of space-borne multispectral data for soil and land irrigability assessment - a case study from the southern part of India. Geocarto International Journal, 16(2), pp. 31-36.

USDA, 2003. Keys to Soil Taxonomy. U.S. Department of Agriculture.

Westin, F.C. and Frazee, C.J., 1976. Landsat data: its use in a soil survey programme. Soil Science Society of America Journal, 40, pp. 81-89.

## Acknowledgements

The project team is grateful to Dr. K. Radhakrishnan, Director, NRSA, and to Dr. P.S. Roy, Deputy Director - RS & GIS AA, NRSA, for providing the necessary technical and logistic support. We thank all the staff of Global Scan Technologies, L.L.C., Dubai, for giving us full technical and logistical support and for their complete involvement in the project during and after the field visit to Dubai.
Information on soils with regard to their nature, extent and spatial distribution, along with their potential and limitations, is required for a variety of uses, namely agricultural development, engineering, sanitation, recreation, aesthetics, etc. The soils of Dubai were mapped using remote sensing satellite data (IRS-P6 LISS-IV) at 1:25,000 scale and were classified up to series level and their associations as per the Keys to Soil Taxonomy (USDA, 2003). In Dubai, 39 soil series have been identified. The soils of the Dubai area are generally coarse-textured (sandy), highly calcareous and undeveloped. The soils of the inland areas are either saline or sodic, whereas the soils in the hilly area of Hatta are characterized by steep side slopes, are devoid of vegetation and are highly calcareous. The other features of the soils occurring in the study area are discussed in this paper. The soils have major limitations of climate and soil characteristics, which can be mitigated by adopting various soil conservation measures like sand dune stabilization, shelter belts, afforestation, etc.

Keywords: soil resource map, remote sensing, landscapes and physiography
# Light Diffusion in the Tropical Dry Forest of Costa Rica

S. Calvo-Rodriguez, G.A. Sanchez-Azofeifa, (calvorod, gasanche)@ualberta.ca

## 1 Introduction

Leaf Area Index (LAI) plays a key role in light interception by the canopy, as the leaf area attenuates and reduces the transmission of radiation to the forest interior. In many forests with closed canopies, only a small fraction (0.5-5%) of the solar radiation incident above the canopy reaches the understory (Chazdon and Pearcy, 1991). In Tropical Dry Forests (TDFs), around 6-22% of the solar radiation incident above the canopy reaches the understory (Lebrija-Trejos et al. 2011) because of the low canopy height, simple vertical stratification and low leaf biomass. TDFs are defined as a vegetation type dominated by deciduous trees (at least 50% of the trees are deciduous), with an annual average temperature of 25\\({}^{\\rm o}\\)C or higher, annual precipitation of 700-2000 mm per year, and a dry season (precipitation less than 100 mm) of three or more months (Sanchez-Azofeifa et al. 2014). In TDFs, LAI varies seasonally, having a maximum value during the growing season when water is available and a minimum value at the end of the dry season (Maass et al. 1995; Kalacska et al. 2005). These forests are characterized by a fast increase from LAI = 0 in the dry season to full canopy coverage (LAI higher than 7) within a few days after the first rains, and a slow but sustained loss of leaves at the end of the rainy season until LAI again reaches a value of 0. Although LAI is important for characterizing ecosystem processes, LAI data are rare for TDFs (Kalacska et al. 2005). Moreover, a systematic bias exists in LAI observations, since most of the data are collected in old-growth tropical rain forests without considering the differences in canopy structure and composition found in secondary forests or the differences that may exist with other ecosystems (Murphy and Lugo, 1986; Weaver and Murphy, 1990; Lean and Rowntree, 1993; Kalacska et al. 2005). Although many instruments exist to characterize LAI from the ground, these methods are often laborious and costly. Unlike most flux sensors that are designed to run in all weather conditions, most ground-based instruments for estimating LAI operate largely under conditions of no precipitation (Wilson and Meyers, 2007), and their observations are punctual in nature, since they are tied to specific field campaigns in which data are collected once without proper temporal follow-up. Measurements of LAI using traditional optical sensors (e.g., the LAI-2000) also require multiple visits to the field under very specific sky conditions, making them unsuitable for inaccessible areas and forests with dense vegetation, as well as for areas where persistent sunny conditions are the norm. Continuous estimations of LAI using remote sensors can be obtained as a function of spectral Vegetation Indices (VIs), such as the Normalized Difference Vegetation Index (NDVI) (Wilson and Meyers, 2007). NDVI is currently the most widely used reflectance vegetation index (Pontailler et al. 2003). NDVI can be a sensitive indicator of the amount and vigor of vegetation, because one of the two wavebands used to estimate it covers the section of the solar spectrum in which Photosynthetically Active Radiation is found (PAR, 400 nm to 700 nm) (Carlson et al. 1994).
In this context, the main objective of this study was to characterize light diffusion through the canopy in a TDF ecosystem at two different levels of ecological succession by integrating optical phenology observations. A successional stage is defined here as the phase in which a given forest is found since its process of functional recovery as a community started. Depending on its level of succession, a forest canopy will have different vertical and horizontal structures, as well as different levels of species richness, functional traits (Quesada et al. 2009) and light attenuation.

## 2 Methods

### Study area

The study was conducted at Santa Rosa National Park, located in the province of Guanacaste, Costa Rica. The park lies in the tropical dry forest and tropical moist forest life zones according to the Holdridge classification (Holdridge, 1967). The area under study specifically spans the early and intermediate stages of succession of TDF, having a bio-temperature not less than 24\\({}^{\\circ}\\)C and average annual precipitation between 1,500 and 2,200 mm (Hartshorn, 1983). Rain falls primarily between early April and early November, and the rest of the year is nearly without rain (Janzen, 1993). The present study was carried out in two phases: from March 2010 to March 2011, field measurements were taken in the early stage of succession, and from March 2013 to March 2015, field measurements were taken in the intermediate stage of succession.

### Experimental Design

The first component of this research involved the deployment of two optical phenology towers. The optical phenology towers are structures with a set of radiation sensors: two pyranometers (measuring the solar radiation flux density) and two photosynthetically active radiation sensors (measuring the radiation only between 400 and 700 nm). Ratios of these measurements were used to derive vegetation indices such as the NDVI. In the intermediate stage of succession, one 20 m tower was deployed in 2013, and in the early stage, one 15 m tower was deployed in 2009. In both cases the sensors are at least 5 meters above the canopy (Pastorello et al. 2011). For the determination of LAI, 3 plots were established around each optical phenology tower. The early-stage plots contain 165 points in total for measuring LAI, and the intermediate-stage plots contain 84 points in total. The measurements were collected every month using a Plant Canopy Analyzer (PCA) LAI-2000, which gives a measurement of the Plant Area Index (PAI) at each point. In the dry season, hemispherical photographs were taken at each sampling point, using a camera with a 180\\({}^{\\circ}\\) (fish-eye) lens, to calculate the Woody Area Index (WAI) (Sanchez-Azofeifa et al. 2009). Finally, we calculated the Leaf Area Index (LAI) by removing the contribution of WAI from the PAI values (Kalacska et al. 2005). Further information about the study site, the deployment of plots, the LAI data and the optical phenology towers can be found in Calvo-Rodriguez (2015).

### Data analysis

Using the information collected by the optical phenology towers in the plots, we calculated the Normalized Difference Vegetation Index (NDVI) for each successional stage using the formulas described in Wilson and Meyers (2007). For this study, only the measurements obtained between 10:00 and 14:00 hours were considered, to avoid interference from bidirectional reflectance effects (Disney et al., 2004).
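As a concrete illustration of the two field-data reductions just described, the hedged sketch below derives a broadband NDVI from the tower's paired pyranometer and PAR fluxes and removes the woody contribution from the LAI-2000 plant area index; the function names and the exact band arithmetic are our assumptions about an implementation in the spirit of Wilson and Meyers (2007), not code from this study.

```python
import numpy as np

def broadband_ndvi(sw_in, sw_out, par_in, par_out):
    # Broadband NDVI from paired pyranometer (total shortwave, W m-2)
    # and PAR fluxes converted to W m-2. Treating the non-PAR part of
    # the shortwave band as a NIR proxy is an assumption.
    r_par = par_out / par_in                        # visible reflectance
    r_nir = (sw_out - par_out) / (sw_in - par_in)   # near-infrared proxy
    return (r_nir - r_par) / (r_nir + r_par)

def leaf_area_index(pai, wai):
    # LAI obtained by removing the woody contribution (WAI, from the
    # hemispherical photographs) from the LAI-2000 plant area index.
    return np.maximum(pai - wai, 0.0)
```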
In addition, we obtained NDVI data derived from MODIS/TERRA (product MOD13Q1, collection 5, 16-day L3 Global 250 m) for all the plots where LAI was measured (ORNL DAAC, 2012). To calculate "K" we used the NDVI data and the LAI data to solve equation (1), adapted from Campbell and Norman (1998) and Wilson and Meyers (2007):

\\[\\text{LAI}=K\\log\\left(\\frac{\\text{NDVI}_{max}-\\text{NDVI}_{min}}{\\text{NDVI}_{max}-\\text{NDVI}_{is}}\\right) \\tag{1}\\]

where NDVI\\({}_{max}\\) corresponds to the average of the NDVI values when the vegetation is dense (peak of the rainy season); NDVI\\({}_{min}\\) is the average of the NDVI measured during the dry seasons, with leaves off; and NDVI\\({}_{is}\\) is the mean of the NDVI values recorded during the rainy season. The LAI value is determined as the average over all the sampling points in each plot. To assess the potential of NDVI to estimate the "K" coefficient, we correlated NDVI with LAI; coefficients of determination (R\\({}^{2}\\)) were used to assess performance. We then calculated the "K" coefficient from equation (1), using the NDVI data derived from the phenology towers and the MODIS satellite together with the LAI data derived from the plots. "K" values were obtained for different days of the year (DOY) for each successional stage. Nonlinear regression analysis was used to assess the variation of the "K" coefficient throughout the year. Using the models derived from the regression curves, we estimated LAI using only the "K" values and the NDVI derived from the optical phenology towers and from the MODIS data. To evaluate the estimation capability of the models, we examined the relationship between the observed LAI data obtained with the LAI-2000 and the LAI estimated using the "K" coefficients derived from the phenology towers and from MODIS NDVI, using least-squares linear regression. The performance of the models was evaluated using the coefficient of determination (R\\({}^{2}\\)) and the Root Mean Squared Error (RMSE): the higher the R\\({}^{2}\\) and the lower the RMSE, the better the accuracy of the model in estimating LAI.

## 3 Results and discussion

The NDVI was correlated with the LAI values in each successional stage using data from MODIS and the optical phenology tower observations. The correlation with LAI (Figure 1) in the early successional stage was R\\({}^{2}\\) = 0.85 for the tower-based NDVI and R\\({}^{2}\\) = 0.87 for the MODIS satellite NDVI. In the intermediate successional stage, the correlation with LAI was R\\({}^{2}\\) = 0.87 for the tower-based NDVI, followed by R\\({}^{2}\\) = 0.73 for the MODIS satellite NDVI. As expected, values of "K" were higher in the intermediate successional stage, lower in the early successional stage, and varied as a function of time (Figure 2). Maximum values occurred in the month of September in both successional stages, and minimum values occurred during the dry season (January-April). Table 1 presents the results of the estimation of the "K" coefficient using equation (1). The average "K" coefficient for the growing season (end of May to beginning of December) was higher than 4 in the intermediate stage of succession and lower than 3 in the early stage of succession.
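To make equation (1) concrete, the following sketch inverts it to recover "K" from paired LAI and NDVI summaries and then reuses the fitted "K" to predict LAI; the base-10 logarithm and the function names are assumptions on our part, and changing the logarithm base only rescales "K" by a constant factor.

```python
import numpy as np

# A minimal sketch, assuming equation (1) as reconstructed above.
def ndvi_ratio(ndvi_is, ndvi_min, ndvi_max):
    return (ndvi_max - ndvi_min) / (ndvi_max - ndvi_is)

def k_coefficient(lai, ndvi_is, ndvi_min, ndvi_max):
    # Invert equation (1) for K given a plot-mean LAI observation.
    return lai / np.log10(ndvi_ratio(ndvi_is, ndvi_min, ndvi_max))

def lai_from_ndvi(k, ndvi_is, ndvi_min, ndvi_max):
    # Forward application of equation (1) with a fitted K.
    return k * np.log10(ndvi_ratio(ndvi_is, ndvi_min, ndvi_max))

# Early-stage tower values from Table 1 give a plausible LAI:
print(lai_from_ndvi(2.90, ndvi_is=0.77, ndvi_min=0.54, ndvi_max=0.79))
# -> about 3.2
```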
Although \"K\" values using MODIS NDVI tended to be lower, there was no significant difference between the vegetation index (MODIS NDVI or Tower NDVI) used to estimate the \"K\" coefficient in the early stage (p=0.073) or in the intermediate stage (p=0.416). Variations can be explained by the lower values of NDVI found in MODIS data. Differences between NDVI values measured from the ground, and determined from satellites have been reported in the literature (Wang et al. 2004; Jenkins et al. 2007; Wilson and Meyers, 2007). Main variations in satellite NDVI data are due to atmospheric effects, strong effects of temporal sampling, remaining cloud cover, seasonality of vegetation and data processing (Wang et al. 2004). Also Himimima et al. (2012) emphasize that ground-based NDVI measurements are acquired at constant viewing angles, while MODIS satellite measurements are acquired with different viewing geometries. Moreover, any comparison with spectral-based NDVI must recognize that the wavelength used for the satellite-derived NDVI (R\\({}_{\\text{rot}}\\): 610nm-680nm, and R\\({}_{\\text{rot}}\\): 820nm-900nm) are quite different from those used in broadband NDVI calculations (R\\({}_{\\text{rot}}\\): 400-700nm, R\\({}_{\\text{rot}}\\): 305-2800nm) (Tittebrand et al. 2009). All the mentioned above might cause discrepancies in \"K\" values derived from ground measurements or satellite data. The strength of the relationship of the observed LAI (LAI-2000) and the estimated LAI derived from tower data (Figure 3) was strong in the early successional stage and in the intermediate \\begin{table} \\begin{tabular}{c c c c c c c} \\hline \\hline **Succession** & **NDVI** & \\begin{tabular}{c} **NDVI** \\\\ **min** \\\\ \\end{tabular} & \\begin{tabular}{c} **NDVI** \\\\ **max** \\\\ \\end{tabular} & \\begin{tabular}{c} **NDVI** \\\\ **max** \\\\ \\end{tabular} & \\begin{tabular}{c} **K** \\\\ **max** \\\\ \\end{tabular} & \\begin{tabular}{c} **K** \\\\ **K** \\\\ \\end{tabular} \\\\ \\hline Early Stage & \\begin{tabular}{c} Tower \\\\ MODIS \\\\ \\end{tabular} & 0.54 & 0.79 & 0.77 & 2.90 & 2.51 \\\\ \\cline{2-7} Intermediate Stage & \\begin{tabular}{c} Tower \\\\ MODIS \\\\ \\end{tabular} successful stage (R\\({}^{2}\\)=0.89 and R\\({}^{2}\\)=0.93 respectively). This relationship was also strong for the observed LAI (LAI-2000) and the estimated LAI derived from MODIS NDVI in the early successful stage and in the intermediate successional stage (R\\({}^{2}\\)=0.88 and R\\({}^{2}\\)=0.93 respectively). For both successional stages, RMSE (Table 2) were also lower using the phenology tower data (RMSE=0.20 in the early stage, and RMSE=0.37 in the intermediate stage). Numerous studies have reported a good relationship between the MODIS NDVI and ground measurements of phenological parameters (Wang et al. 2005; Fontana et al. 2008; Eklundh et al. 2011). Values of NDVI can differ between similar sites, depending on local environmental factors like understory vegetation, litter, soil surface conditions, humidity, and roughness (Lacaze et al. 1996). Overall, values of \"K\" obtained from the NDVI of the optical phenology towers seemed to be more accurate for the estimation of LAI. According to Wilson and Meyers (2007), tower-derived NDVI is less affected by cloudy conditions, because continuous 30-min measurements of incoming solar radiation at the flux towers allows the investigator to directly identify and remove cloudy data points and outliers for the derivation of NDVI. 
Moreover, the treatment of the "K" coefficient and LAI as variables rather than constants over time is an improvement that reduces serious deviations in the LAI estimation in TDFs and, thereby, in estimates of carbon balances and primary productivity. In most studies in temperate environments, only a single dataset is used to establish the (LAI, NDVI) relationship. These temperate-ecosystem relationships also do not take into consideration the seasonal or year-to-year dynamics of deciduous ecosystems, which are fundamentally important for estimating the temporal variability of LAI (Wang et al. 2005).

## 4 Conclusions

The strong correlation between NDVI and LAI demonstrates that it is possible to obtain accurate LAI values based on the spectral features of vegetation derived from phenology towers or satellite data. Furthermore, the "K" coefficients differed between successional stages, indicating sensitivity to structural changes during forest restoration. Once this "K" coefficient is validated, it could be used to obtain accurate and automated LAI values for larger areas in TDF. This research provides information for validating satellite products and calibrating new algorithms for such products.

## Acknowledgements

This work is part of the Research Project TROPI-DRY (Human, Ecological and Biophysical Dimension on Tropical Dry Forest), which is a collaborative research network funded by the Inter American Institute for Global Change Research (IAI) Collaborative Research Network Program (CRN3-025). We acknowledge logistical support by the University of Alberta and the Costa Rica Institute of Technology (ITCR).

## References

* Calvo-Rodriguez, S., 2015. _Ecosystem Services, Forest Characterization, and Light Diffusion of Tropical Dry Forests_ (dissertation). University of Alberta, Edmonton, Canada.
* Campbell, G., Norman, J., 1998. _An Introduction to Environmental Biophysics_. Springer-Verlag, New York, pp. 247-260.
* Carlson, T.N., Gillies, R.R., Perry, E.M., 1994. A method to make use of thermal infrared temperature and NDVI measurements to infer surface soil water content and fractional vegetation cover. _Remote Sensing Reviews_, 9, pp. 161-173.
* Chazdon, R.L., Pearcy, R.W., 1991. The importance of sunflecks for forest understory plants. _BioScience_, 41(11), pp. 760-766.
* Disney, M., Lewis, P., Thackrah, G., Quaife, T., Barnsley, M., 2004. Comparison of MODIS broadband albedo over an agricultural site with ground measurements and values derived from earth observation data at a range of spatial scales. _International Journal of Remote Sensing_, 25(23), pp. 5297-5317.
* Eklundh, L., Jin, H., Schubert, P., Guzinski, R., Heliasz, M., 2011. An optical sensor network for vegetation phenology monitoring and satellite data calibration. _Sensors_, 11, pp. 7678-7709.
* Fontana, F., Rixen, C., Jonas, T., Aberegg, G., Wunderle, S., 2008. Alpine grassland phenology as seen in AVHRR, VEGETATION, and MODIS NDVI time series: a comparison with in situ measurements. _Sensors_, 8(4), pp. 2833-2853.
* Hartshorn, G.S., 1983. Chapter 7: Plants. In D.H. Janzen (Ed.): _Costa Rican Natural History_, University of Costa Rica, San Jose, C.R., pp. 118-157.
* Hmimina, G., Dufrene, E., Pontailler, J.
Y., Delpierre, N., Aubinet, M., Caquet, B., de Grandcourt, A., Burban, B., Flechard, C., Granier, A., Gross, P., Heinesch, B., Longdoz, B., Moureaux, C., Ourcival, J.M., Rambal, S., Saint Andre, L., Soudani, K., 2013. Evaluation of the potential of MODIS satellite data to predict vegetation phenology in different biomes: An investigation using ground-based NDVI measurements. _Remote Sensing of Environment_, 132, pp. 145-158.
* Holdridge, L.R., 1967. _Life Zone Ecology_. Tropical Science Center, San Jose, CR, pp. 40-43.
* Janzen, D.H., 1993. Caterpillar seasonality in a Costa Rican dry forest. In N.E. Stamp and T.M. Casey (Eds.): _Caterpillars: Ecological and Evolutionary Constraints on Foraging_, Chapman & Hall, London, pp. 448-477.
* Jenkins, J.P., Richardson, A.D., Braswell, B.H., Ollinger, S.V., Hollinger, D.Y., Smith, M.L., 2007. Refining light-use efficiency calculations for a deciduous forest canopy using simultaneous tower-based carbon flux and radiometric measurements. _Agricultural and Forest Meteorology_, 143, pp. 64-79.
* Kalacska, M.E.R., Calvo-Alvarado, J.C., Sanchez-Azofeifa, G.A., 2005. Calibration and assessment of seasonal changes in species leaf area in a tropical dry forest in different states of succession. _Tree Physiology_, 25, pp. 733-744.
* Kalacska, M.E.R., Sanchez-Azofeifa, G.A., Calvo-Alvarado, J.C., Rivard, B., Quesada, M., 2005. Effects of season and successional stage on leaf area index and spectral vegetation indices in three Mesoamerican tropical dry forests. _Biotropica_, 37, pp. 486-496.
* Lacaze, B., Caselles, V., Coll, C., Hill, J., Hoff, C., Jong, S. De, Valor, E., 1996. Integrated approaches to desertification mapping and monitoring in the Mediterranean basin. _Final report of the DeMon-1 Project, Space Applications Inst., Environmental Mapping and Modelling Unit_, Brussels, 165 pp.
* Lean, J., Rowntree, P.R., 1993. GCM simulation of the impact of Amazonian deforestation on climate using an improved canopy representation. _Quarterly Journal of the Royal Meteorological Society_, 119, pp. 509-530.
* Lebrija-Trejos, E., Perez-Garcia, E.A., Meave, J.A., Poorter, L., Bongers, F., 2011. Environmental changes during secondary succession in a tropical dry forest in Mexico. _Journal of Tropical Ecology_, 27, pp. 477-489.
* Murphy, P.G., Lugo, A.E., 1986. Ecology of tropical dry forests. _Annual Review of Ecology and Systematics_, 17, pp. 67-88.
* Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC), 2012. _MODIS subsetted land products, Collection 5_. Retrieved from http://dac.ornl.gov/MODIS/modis.html
* Pastorello, G.Z., Sanchez-Azofeifa, G.A., Nascimento, M.A., 2011.
Enviro-Net: From networks of ground-based sensor systems to a web platform for sensor data management. _Sensors_, 11(6), pp. 6454-6479.
* Pontailler, J.Y., Hymus, G.J., Drake, B.G., 2003. Estimation of leaf area index using ground-based remote sensed NDVI measurements: validation and comparison with two indirect techniques. _Canadian Journal of Remote Sensing_, 29(3), pp. 381-387.
* Quesada, M., Sanchez-Azofeifa, G.A., Alvarez-Anorve, M., Stoner, K.E., Avila-Cabadilla, L., Calvo-Alvarado, J.C., Castillo, A., do Espirito-Santo, M.M., Fagundes, M., Fernandes, G.W., Gamon, J., Lopezaraiza-Mikel, M., Lawrence, D., Morellato, L.P.C., Powers, J.S., Neves, F.S., Rosas-Guerrero, V., Sayago, R., Sanchez-Montoya, G., 2009. Succession and management of tropical dry forests in the Americas: Review and new perspectives. _Forest Ecology and Management_, 258, pp. 1014-1024.
* Sanchez-Azofeifa, G.A., Calvo-Alvarado, J.C., do Espirito-Santo, M., Fernandes, G.W., Powers, J., Quesada, M., 2014. Tropical dry forests in the Americas: The Tropi-Dry endeavor. In: Sanchez-Azofeifa, G.A. and Powers, J. (Eds.): _Tropical Dry Forests in the Americas: Ecology, Conservation, and Management_, CRC Press, Boca Raton, FL, pp. 1-16.
* Sanchez-Azofeifa, G.A., Kalacska, M., do Espirito-Santo, M., Fernandes, G.W., Schnitzer, S., 2009. Tropical dry forest succession and the contribution of lianas to wood area index (WAI). _Forest Ecology and Management_, 258(6), pp. 941-948.
* Tittebrand, A., Spank, U., Bernhofer, C.H., 2009. Comparison of satellite- and ground-based NDVI above different land-use types. _Theoretical and Applied Climatology_, 98(1-2), pp. 171-186.
* Wang, Q., Adiku, S., Tenhunen, J., Granier, A., 2005. On the relationship of NDVI with leaf area index in a deciduous forest site. _Remote Sensing of Environment_, 94(2), pp. 244-255.
* Wang, Q., Tenhunen, J., Dinh, N.Q., Reichstein, M., Vesala, T., Keronen, P., 2004. Similarities in ground- and satellite-based NDVI time series and their relationship to physiological activity of a Scots pine forest in Finland. _Remote Sensing of Environment_, 93(1), pp. 225-237.
* Weaver, P.L., Murphy, P.G., 1990. Forest structure and productivity in Puerto Rico's Luquillo Mountains. _Biotropica_, 22, pp. 69-82.
* Wilson, T.B., Meyers, T.P., 2007. Determining vegetation indices from solar and photosynthetically active radiation fluxes. _Agricultural and Forest Meteorology_, 144, pp. 160-179.
Leaf Area Index (LAI) has been defined as the total one-sided leaf area per unit ground area. LAI has an impact on tree growth and recruitment through the interception of light, which in turn affects primary productivity. Even though many instruments exist for estimating LAI from the ground, they are often laborious and costly to run continuously. Measurements of LAI in the field using traditional sensors (e.g., the LAI-2000) require multiple visits to the field under very specific sky conditions, making them unsuitable for inaccessible areas and forests with dense vegetation, as well as areas where persistent sunny conditions are the norm, like tropical dry forests. Within this context, we propose a methodology to characterize light diffusion based on NDVI and LAI measurements taken in the field at two successional stages in the tropical dry forest of Santa Rosa National Park in Costa Rica. We estimate a "K" coefficient characterizing light diffusion by the canopy, based on field NDVI measurements derived from optical phenology instruments and on MODIS NDVI. From the coefficients determined, we estimated LAI values and compared them with ground measurements of LAI. In both successional stages, the ground measurements of LAI showed no significant difference from the tower-derived LAI or from the LAI estimated from MODIS NDVI.

Keywords: leaf area index, spectral vegetation indices, MODIS
# Segment Any Building For Remote Sensing

Lei Li, Computer Science Department of Copenhagen University, [email protected]

## 1 Introduction

The built environment, constituting a diverse spectrum of structures, remains a crucial facet of our urban and rural landscapes. Structures ranging from residential and commercial spaces to industrial facilities play instrumental roles in shaping economic dynamics, facilitating societal interactions, and influencing environmental outcomes. Consequently, the task of building segmentation and subsequent analysis holds paramount significance across an array of disciplines, including but not limited to urban planning, real estate, and disaster management [1]. These analytical processes provide indispensable insights and contribute to both the theoretical understanding and practical applications within these domains, affirming the necessity of building segmentation in contemporary academic research and industry practice. Building segmentation significantly depends on data derived from an array of imaging sources, chiefly encompassing high-resolution aerial photography and remote sensing imagery. Each of these sources offers unique vantage points and insights, which collectively contribute to a holistic understanding of built environments and of related tasks such as forest management [2; 3; 4]. High-resolution aerial photography, for instance, is instrumental in providing intricately detailed depictions of buildings and their immediate surroundings. These close-up views are invaluable for conducting fine-grained analyses that delve into the minutiae of individual structures and their architectural features. On the other hand, remote sensing imagery affords a more macroscopic perspective, capturing expansive urban and rural areas. The broader view provided by such imagery facilitates large-scale analyses and enables comparative studies across extensive geographical regions. Together, these data sources, each with its own strengths, enrich the process of building segmentation by offering different layers of information, ultimately allowing researchers to unearth nuanced understandings of the built environment from multiple scales and perspectives. Despite the valuable insights provided by high-resolution aerial photography and satellite imagery [5], these data sources do present inherent challenges that must be acknowledged. The primary limitation of aerial photography lies in its restricted spatial coverage, rendering it less applicable for expansive geographical analyses. In contrast, satellite imagery, while boasting extensive coverage, often suffers from a relatively lower resolution, potentially compromising the detail of analytical outputs. Both are also susceptible to image quality inconsistencies due to varying atmospheric conditions during data acquisition. Furthermore, significant differences may arise in their optical properties due to variations in camera technologies and configurations, which can impact image colorimetry, contrast, and sharpness. These discrepancies emphasize the need for sophisticated calibration methods and advanced image processing techniques to mitigate potential inaccuracies and maximize the utility of each data source in the field of building segmentation. The distinct characteristics of high-resolution aerial photography and satellite imagery necessitate tailored methodological approaches for each data source.
This requirement emerges from variances in resolution, camera properties, and imaging conditions, which imply that analytical techniques successful with one data type may not achieve equivalent accuracy with another. It is imperative to identify and address these challenges to push forward in building segmentation research and its numerous applications. Understanding these complexities can lead to the development of robust, source-specific analytical models capable of harmonizing data from varied sources to enhance building segmentation accuracy. This approach can potentially lead to innovative breakthroughs in related fields like urban planning, environmental monitoring, and disaster management. Acknowledging the challenges inherent to the differing data sources, and building upon recent advancements in the field of general segmentation, particularly the Segment Anything (SA) [6] method, we adopt a nuanced approach. Our strategy involves the amalgamation of multiple datasets processed through pretrained models, thereby addressing data discrepancies and facilitating mutual learning across various data domains. Our research contributions within the realm of building segmentation are manifold:

Firstly, we harness the robust framework of the Semantic Segmentation Anything (SSA) method, utilizing its capacity for extensive data processing within large models. This aids in augmenting the precision and efficiency of building segmentation tasks, illustrating the value of integrating sophisticated algorithms within such expansive data processes.

Secondly, we confront the issue of inter-data discrepancies through the individual processing of various datasets, thereby encouraging cross-domain learning. This approach not only serves to alleviate the constraints tied to individual data sources but also amplifies the overall learning process through the incorporation of diverse and extensive information.

Lastly, our adapted method exhibits commendable results across an array of datasets, thereby underlining its efficacy and flexibility. This superior performance, regardless of dataset variability, reinforces the potential of our approach in providing general insights within the field of building segmentation.

This article proceeds as follows: an exploration of related work forms the initial focus, providing a contextual understanding of the current state of building segmentation research. This is followed by an in-depth examination of the specific methods employed in our approach and a comprehensive discussion of the corresponding datasets. Subsequently, we present the experimental results, augmented by visualizations to provide a tangible representation of our findings. The article culminates with a discussion and conclusion that encapsulate the core insights derived from our research and their implications for future work in the field.

## 2 Related work

**Image Segmentation.** Image segmentation, as an essential step in image analysis and interpretation, has received considerable attention in academic research over the past few years, with representative architectures such as U-Net [7], SegFormer [8], DeepLab [9], and ConvNext [10]. The body of work spans various techniques and methods, ranging from traditional threshold-based and region-growing methods to more advanced machine learning and deep learning techniques [11]. In particular, the U-Net architecture, first introduced by Ronneberger et al. [12]
in 2015, has been widely adopted for biomedical image segmentation due to its impressive performance, and has since been applied to many other data domains. However, despite the strides made in this field, segmentation remains a challenging task due to issues such as the variability of object shapes and sizes, background clutter, and imaging conditions, thus necessitating continued exploration and innovation in this area.

**Image Data Fusion.** Image data fusion has emerged as a critical process in numerous image segmentation tasks [13; 14; 15; 16], including building segmentation, due to its capacity to combine complementary information from multiple data sources, thereby enhancing the quality and utility of the resulting data. Extensive literature exists concerning the methods and techniques utilized for this purpose. Conventional methods often incorporate mathematical transformations such as Principal Component Analysis (PCA) [13; 17; 18; 19] and Intensity Hue Saturation (IHS) for fusing low-resolution multi-spectral data with high-resolution panchromatic data. Recent years have witnessed a surge in research exploring machine learning and deep learning techniques for image fusion. Channel fusion [20], and particularly Convolutional Neural Networks (CNNs), have been employed for their ability to learn complex and high-level features from multi-source data with shared structure [21]. Despite the significant advancements in this domain, the fusion of image data remains a non-trivial task due to issues like preserving spectral and spatial information and mitigating artifacts in fused images. This ongoing challenge underscores the need for continued research and development of sophisticated fusion techniques tailored to specific segmentation tasks.

**Pre-trained Models.** The use of large pre-trained models [22] as the basis for various specialized tasks is a prevalent strategy in the contemporary machine learning landscape. This approach leverages the broad feature learning capabilities of these models, which have been trained on extensive and diverse datasets, thus providing a robust starting point for a variety of specialized tasks. Models such as BERT [23], GPT, and SA [24], for instance, have shown significant efficacy when fine-tuned for specific tasks like text classification, object detection, and semantic segmentation, amongst others. These models offer the advantage of transfer learning, which allows the application of learned features to new tasks, thereby reducing the need for extensive data and computational resources. In the context of image segmentation tasks, the use of pre-trained models has become increasingly popular. For example, researchers have investigated the use of large pre-trained models like VGG16 and VGG19 [25] for tasks like building segmentation. These models are typically fine-tuned on task-specific data, thus allowing the model to adapt its learned features to the unique characteristics of the new task. Moreover, the use of the prompt-based pre-trained Segment Anything (SA) model [6] for semantic segmentation has been explored in several recent studies. While this approach has yielded promising results, the adaptation of large pre-trained models to new tasks is an ongoing area of research, with ample scope for exploring novel fine-tuning strategies and model architectures.
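To ground the discussion of the conventional fusion transforms mentioned above, here is a hedged sketch of component-substitution pan-sharpening in the spirit of the IHS family; the simple mean-intensity proxy and the function names are illustrative assumptions, not a specific library API or the exact transform used by any work cited here.

```python
import numpy as np

def ihs_fuse(ms_rgb, pan):
    # ms_rgb: (H, W, 3) multispectral image already upsampled to the
    # panchromatic grid; pan: (H, W) panchromatic band. Both are float
    # arrays scaled to [0, 1]. The intensity channel is approximated by
    # the band mean, its high-frequency detail is taken from the pan
    # band, and the detail is injected back into every band.
    intensity = ms_rgb.mean(axis=2)
    detail = pan - intensity
    fused = ms_rgb + detail[..., None]
    return np.clip(fused, 0.0, 1.0)
```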
## 3 Methods

### Problem formulation

As illustrated in Figure 1, the four datasets highlighted in this research, namely MapAI Building [26], the Inria Aerial Image Labeling Benchmark [27], the WHU Building Dataset [7], and FloodNet [1], each bring their unique strengths and nuances to building segmentation tasks. The MapAI Building dataset stands out for its incorporation of laser data and ground truth masks, along with aerial images. It covers diverse building types and environments spanning Denmark and Norway, and the real-world derived data poses unique challenges and authenticity. The Inria Aerial Image Labeling Benchmark excels in its wide geographical coverage and high-resolution aerial imagery. It uniquely tests the generalization capabilities of segmentation techniques across different regions, illumination conditions, and seasons. The WHU Building Dataset, comprising both aerial and satellite imagery, provides a comprehensive depiction across varied scales, geographical locations, and imaging sources. The segmentation task is further complicated by the presence of diverse remote sensing platforms. Lastly, the FloodNet dataset is specialized for disaster management with its UAS-based high-resolution imagery. The dataset uniquely categorizes building and road structures based on their flood status, thus bringing a novel dimension to building segmentation tasks. Each dataset's idiosyncrasies underscore the need for adaptable segmentation methods capable of handling different data types, resolutions, and scenario-specific complexities.

Figure 1: This study utilizes four distinct datasets, each embodying unique areas and scenes, non-intersecting in context; the figure is best viewed in color.

### Data

**MapAI Building [26].** The dataset employed in this research amalgamates aerial images, laser data, and ground truth masks corresponding to building structures, catering to a diverse range of environments and building types. The training dataset is composed of data derived from multiple locations across Denmark, thereby ensuring considerable variability and diversity in the nature of the data. Conversely, the test dataset consists of seven distinct locations in Norway, encompassing both urban and rural environments. It is worth noting that the data originates from real-world scenarios, leading to certain instances where buildings in the aerial images do not align with the corresponding ground truth masks. An additional complexity is the method of generating ground truths in the test dataset using a Digital Terrain Model (DTM), which results in a certain degree of skewness in the building tops in the images compared to the ground truths. In contrast, the training dataset ground truths are generated using a Digital Surface Model (DSM), which effectively circumvents the issue of skewness in the building tops. The full dataset will be released following the competition, thus enabling further examination and research.

**Inria Aerial Image Labeling Benchmark [27].** The dataset under investigation is characterized by extensive coverage, spanning 810 km², which is equally divided for training and testing purposes. It utilizes high-resolution (0.3 m) aerial orthorectified color imagery, which encompasses varied urban landscapes, from densely populated areas like San Francisco's financial district to less dense regions like Lienz in the Austrian Tyrol. The ground truth data comprises two semantic classes, 'building' and 'not building', publicly accessible only for the training subset.
Unique to this dataset is its geographical division across training and testing subsets; training employs imagery from cities like Chicago, while testing uses data from different regions. This structure tests the generalization capabilities of segmentation techniques under diverse conditions, including varied illumination, urban landscapes, and seasons. The dataset's assembly involved merging public-domain imagery and official building footprints, providing a comprehensive depiction of building structures.

**WHU Building Dataset [7].** The WHU building dataset, meticulously curated for this study, incorporates both aerial and satellite imagery of building samples. The aerial component of the dataset comprises over 220,000 distinct building structures, gleaned from aerial images with a fine spatial resolution of 0.075 m, and spans an area of 450 km² in Christchurch, New Zealand. The satellite imagery dataset is bifurcated into two subsets: one encompasses images from diverse cities globally, sourced from multiple remote sensing platforms including QuickBird, the WorldView series, IKONOS, and ZY-3, among others, encapsulating a broad range of geographic and urban contexts. The second subset consists of six contiguous satellite images covering an expanse of 550 km² in East Asia with a ground resolution of 2.7 m. Collectively, the WHU building dataset offers a comprehensive and varied collection of images, affording the opportunity to explore building segmentation across different scales, geographical locations, and imaging sources.

**FloodNet [1].** The FloodNet dataset is a meticulously curated resource aimed at revolutionizing disaster management through the provision of high-resolution and semantically detailed unmanned aerial system (UAS) imagery, specifically in the context of natural disasters such as hurricanes. It leverages the flexible and efficient data collection capabilities of small UAS platforms, namely DJI Mavic Pro quadcopters, which are especially valuable for rapid response and recovery in large-scale and difficult-to-access areas. The dataset was collated in the aftermath of Hurricane Harvey and comprises 2343 images, apportioned into training (approximately 60%), validation (around 20%), and testing (roughly 20%) subsets. The semantic segmentation labels within the dataset are notably comprehensive, covering categories such as background, flooded and non-flooded buildings, flooded and non-flooded roads, water bodies, trees, vehicles, pools, and grass. Despite the wealth of data provided by such UAS platforms, analyzing these large datasets and extracting meaningful information presents a considerable challenge, underscoring the significance of FloodNet's detailed semantic annotation in advancing disaster management research and applications.

### Overview

In our comprehensive methodology, we integrate four distinct datasets: MapAI Building, the Inria Aerial Image Labeling Benchmark, the WHU Building Dataset, and FloodNet. Irrespective of the original pixel resolution disparities, all data are reformatted into a standardized size of 256x256 pixels. Correspondingly, we apply similar alterations to the associated masks, enabling the generation of uniformly dimensioned image-mask patches. We employ SegFormer-B5 [8] as our backbone, a decision underpinned by its robust performance in diverse segmentation tasks. To augment the model's initial capabilities, we incorporate pre-trained parameters, which are instrumental in defining the initial weights of our network, thereby optimizing our model's learning trajectory.
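The patch standardization step just described can be sketched as below; the zero-padding behaviour at image borders and the function names are our assumptions, since the text only fixes the 256x256 patch size.

```python
import numpy as np

def tile_pairs(image, mask, size=256):
    # image: (H, W, C) array; mask: (H, W) array with the same H, W.
    # Pad both to multiples of `size`, then cut aligned image-mask
    # patches so every sample has uniform dimensions.
    h, w = mask.shape
    pad_h, pad_w = (-h) % size, (-w) % size
    image = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)))
    mask = np.pad(mask, ((0, pad_h), (0, pad_w)))
    patches = []
    for y in range(0, image.shape[0], size):
        for x in range(0, image.shape[1], size):
            patches.append((image[y:y + size, x:x + size],
                            mask[y:y + size, x:x + size]))
    return patches
```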
The culmination of our pipeline involves an up-sampling procedure, through which we transform the processed data into a form compatible with the target results. This systematic approach helps ensure the efficacy of our segmentation process and ultimately contributes to the production of efficient outputs.

Figure 2: **The proposed SegAnyBuild framework.** Initially, we align the architectural structures and corresponding images as per the utilized dataset. Subsequently, the entire dataset is homogenized into uniform 256×256 patches accompanied by masks. Feature pre-learning is performed utilizing the pretrained model of Semantic Segmentation Anything (SSA). Ultimately, the architectural structure is delineated via a segmentation network. The entire procedure is executed in an end-to-end manner.

### Network

In this section, we briefly introduce the widely used U-Net structure and the SegFormer network architecture; for details about ConvNeXt, we refer the reader to [28]. The U-Net architecture is a widely recognized model for biomedical image segmentation, characterized by its symmetric contracting and expansive structure. Mathematically, the architecture operates as a series of nonlinear mappings that progressively transform the input image into the output segmentation. Incorporating sequences of convolution operations, max-pooling, upsampling, and a softmax layer, this model is recognized for its effectiveness in detail retention and accurate segmentation.

The SegFormer architecture is an innovative blend of transformer and U-Net-style components, offering a novel approach to semantic segmentation tasks. A core component of the transformer section of the architecture is the self-attention mechanism, which can be represented mathematically as:

\\[\\text{Attention}(Q,K,V)=\\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_{k}}}\\right)V \\tag{1}\\]

In this formulation, the variables \\(Q\\), \\(K\\), and \\(V\\) denote the query, key, and value vectors, respectively, which are extrapolated from the input feature mappings. The parameter \\(d_{k}\\) characterizes the dimensionality of the key vectors. The _softmax_ operation normalizes the attention weights, mandating that their cumulative sum converges to unity, and the term \\(\\frac{1}{\\sqrt{d_{k}}}\\) serves as a scaling factor that is indispensable for a stable learning trajectory.

The _SegFormer_ model adopts the self-attention mechanism at heterogeneous scales, a concept more broadly recognized as multi-head attention. This paradigm is instrumental in amalgamating information from a diverse range of feature scales and can be succinctly delineated mathematically as:

\\[\\text{MultiHead}(Q,K,V)=\\text{Concat}(\\text{head}_{1},\\text{head}_{2},\\ldots,\\text{head}_{n})W_{O} \\tag{2}\\]

where \\(\\text{head}_{i}=\\text{Attention}(QW_{Qi},KW_{Ki},VW_{Vi})\\), with \\(W_{Qi}\\), \\(W_{Ki}\\), and \\(W_{Vi}\\) being parameter matrices, \\(W_{O}\\) the output transformation matrix, and "Concat" the concatenation operation. Through this mechanism, the model aspires to achieve a nuanced comprehension of the input features by seamlessly integrating information across multiple scales. The output of the multi-head attention is subsequently integrated into a feature map, which is processed by a segmentation head to yield the final semantic segmentation output.
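To make Eqs. (1)-(2) concrete, below is a minimal NumPy sketch of scaled dot-product and multi-head attention. All names, shapes, and toy dimensions are illustrative assumptions, not SegFormer's actual implementation (whose efficient self-attention additionally reduces the spatial size of \\(K\\) and \\(V\\), omitted here).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Eq. (1): softmax(Q K^T / sqrt(d_k)) V
    d_k = K.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ V

def multi_head(X, W_Q, W_K, W_V, W_O):
    # Eq. (2): per-head projections, concatenation, then output projection W_O
    heads = [attention(X @ wq, X @ wk, X @ wv)
             for wq, wk, wv in zip(W_Q, W_K, W_V)]
    return np.concatenate(heads, axis=-1) @ W_O

# Toy usage: 8 heads of width 16 over 196 tokens (e.g., a 14x14 patch grid).
rng = np.random.default_rng(0)
n, d, h, d_k = 196, 128, 8, 16
X = rng.standard_normal((n, d))
W_Q = rng.standard_normal((h, d, d_k))
W_K = rng.standard_normal((h, d, d_k))
W_V = rng.standard_normal((h, d, d_k))
W_O = rng.standard_normal((h * d_k, d))
out = multi_head(X, W_Q, W_K, W_V, W_O)  # shape (196, 128)
```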
### Loss function

In the context of semantic segmentation tasks, the cross-entropy loss is often utilized to measure the dissimilarity between the predicted pixel-wise class probabilities and the ground truth labeling. This loss function is crucial for training segmentation models, as it encourages the accurate prediction of the class of each pixel in the image. For a single pixel, the cross-entropy loss is defined as:

\\[L(y,\\hat{y})=-\\sum_{c=1}^{C}y_{c}\\cdot\\log(\\hat{y}_{c}) \\tag{3}\\]

Here, \\(C\\) is the total number of classes, \\(y_{c}\\) represents the ground truth (1 for the true class and 0 for all other classes), and \\(\\hat{y}_{c}\\) denotes the predicted probability of the pixel belonging to class \\(c\\). In our experiment, the classification task was binary, concentrating on differentiating between buildings and non-buildings.

## 4 Experiments

### Metrics

**IoU.** Intersection over Union (IoU), also known as the Jaccard index, is a commonly utilized metric for the quantitative evaluation of segmentation tasks. Mathematically, it is defined as the ratio of the area of intersection to the area of union between the predicted and ground truth segmentation masks:

\\[\\text{IoU}=\\frac{\\text{Area of Intersection}}{\\text{Area of Union}}=\\frac{|A\\cap B|}{|A\\cup B|} \\tag{4}\\]

In this equation, \\(A\\) denotes the set of pixels in the predicted segmentation and \\(B\\) represents the set of pixels in the ground truth. A higher IoU indicates a greater overlap between the predicted and actual regions, implying a more accurate segmentation.

**BIoU.** We also use the Boundary Intersection over Union (BIoU) [29] as an important metric. It extends the concept of traditional IoU by placing emphasis on the boundary pixels of the segmentation mask, quantifying the degree of overlap between the boundaries of the predicted and ground truth segmentation masks:

\\[\\text{BIoU}=\\frac{\\text{Length of Intersection of Boundaries}}{\\text{Length of Union of Boundaries}}=\\frac{|B_{A}\\cap B_{B}|}{|B_{A}\\cup B_{B}|} \\tag{5}\\]

In this equation, \\(B_{A}\\) denotes the set of boundary pixels in the predicted segmentation and \\(B_{B}\\) represents the set of boundary pixels in the ground truth. A higher BIoU signifies that the predicted and actual boundaries align more closely, indicating a more accurate delineation of the object's contours.

### Setting

We executed a rigorous comparative study between network configurations, delineating the performance with and without the integration of a pre-trained model. The results demonstrate that the incorporation of a pre-trained model significantly enhances the network's performance, corroborating its strategic value in model optimization. In the empirical analysis, we utilized the MMSegmentation framework, conducting experiments on an NVIDIA A100 Tensor Core GPU machine. We chose the SegFormer model for our examination, applying the AdamW optimizer. The initial learning rate was set to 0.0006, the betas to (0.9, 0.999), and the weight decay to 0.01. We used a dynamic learning rate strategy, following a polynomial updating policy with a linear warmup phase of 1500 iterations. The warmup ratio was a minute 1e-6, and the power of the policy was set to 1.0, with a minimum learning rate of 0.0. These adjustments occurred at each iteration rather than at the epoch level. We carried out the experiments with a batch size of 32.
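For concreteness, the training settings above can be expressed as an MMSegmentation-style configuration fragment. The numerical values are taken directly from the text, but the exact configuration keys used in our experiments are not reproduced here, so the field names below follow common MMSegmentation/MMCV conventions and should be read as assumptions.

```python
# Illustrative MMSegmentation/MMCV-style config fragment (assumed key names).
optimizer = dict(type='AdamW', lr=6e-4, betas=(0.9, 0.999), weight_decay=0.01)
# Polynomial LR decay applied per iteration, with a linear warmup of 1500 iters.
lr_config = dict(policy='poly', power=1.0, min_lr=0.0, by_epoch=False,
                 warmup='linear', warmup_iters=1500, warmup_ratio=1e-6)
data = dict(samples_per_gpu=32)  # batch size of 32
```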
The backbone is a MixVisionTransformer, a form of Vision Transformer architecture that employs multi-scale inputs. It has four stages, each consisting of a different number of layers and attention heads. The input image is divided into patches of varying sizes at the different stages, with the patch-embedding configuration chosen so that the spatial resolution of the feature maps decreases as we go deeper into the model; the model thus allows for a progressive reduction in spatial dimensions from the lower to the higher stages.

The second part, the decode head, is a SegformerHead. It takes the outputs from the various stages of the backbone as inputs; the number of input channels for each stage's output is specified by `in_channels`. It then outputs a tensor whose number of channels equals `channels`, where each channel corresponds to the prediction map of a specific class. The model uses a cross-entropy loss during training, and the `align_corners` parameter is set to `False` to prevent the misalignment of corners in the bilinear interpolation during upsampling.

### Results

Our study involved a thorough analysis of the MapAI dataset, where we examined the impact of data fusion and pre-trained models on performance. Table 1 presents evidence of a substantial improvement in results when incorporating these techniques. Notably, comparing the last two rows of the table, SegFormer-B5 and SegAnyBuild differ solely in the inclusion of the pre-trained model. In our analysis, we established U-Net, ConvNeXt, and SegFormer as baseline models for comparative purposes. The findings demonstrate that our proposed framework outperforms these baselines, establishing its superiority on the MapAI dataset. This achievement can be attributed to the synergistic effects of data fusion and the utilization of pre-trained models, showcasing the potential of our framework to advance building segmentation within computer vision and machine learning. The empirical evidence substantiates the effectiveness and competitiveness of our approach within the specific context of the MapAI dataset.

Furthermore, we conducted a comprehensive qualitative analysis by testing the model in various locations. The results, as depicted in Figure 3, illustrate the effectiveness of our approach in performing building recognition and segmentation across diverse scenes in both urban and rural areas. This analysis further reinforces the robustness and generalizability of our framework in different geographical contexts. The ability to accurately identify and segment buildings in various settings, including towns and villages, highlights the versatility and practical applicability of our proposed methodology. These findings contribute to the growing body of evidence supporting the efficacy of our approach in addressing real-world challenges in building recognition and segmentation. Our investigation also encompassed testing our approach on various datasets, as demonstrated in Table 2.
The results substantiate the efficacy of joint training through data fusion, coupled with the utilization of pre-trained large models. Notably, our approach consistently achieved favorable outcomes across multiple datasets, thereby showcasing its robustness and generalizability. This ability to achieve strong results on diverse datasets using a single model highlights the superiority and practicality of our proposed methodology. These findings provide empirical evidence of the effectiveness of our approach in addressing the challenges associated with multiple datasets, while emphasizing its potential for broader applications in the field.

Table 1: Performance of different models on the MapAI competition image test set (without post-processing). As baselines we show a standard U-Net [30], ConvNeXt [10], and SegFormer [8]; the last row is our SegAnyBuild.

| Model | IoU | BIoU |
| --- | --- | --- |
| U-Net | 0.7611 | 0.5823 |
| ConvNeXt | 0.7841 | 0.6105 |
| SegFormer-B0 | 0.7632 | 0.5901 |
| SegFormer-B4 | 0.7844 | 0.6116 |
| SegFormer-B5 | 0.7902 | 0.6185 |
| SegAnyBuild | 0.8012 | 0.6213 |

Figure 3: Qualitative segmentation results of our pipeline across various scenes and regions; the red mask is the building prediction.

### Ablation Study

As shown in Table 3, our verification process confirms the feasibility of joint learning by fusing multiple building datasets. This observation arises from the understanding that buildings often exhibit similar texture features across various datasets. Consequently, there is significant potential for enhancing performance by learning deep features that capture these similarities. The fusion of multiple building datasets facilitates the extraction of shared features, enabling the model to leverage the collective knowledge embedded in different datasets. This approach offers a promising avenue for improving performance in building recognition and segmentation tasks. The results of our study provide empirical evidence that joint learning, driven by the fusion of multiple datasets, can effectively enhance performance by leveraging deep, shared features.

### Discussion

#### Diversification of Data Sources.

Our research successfully implemented robust building segmentation across an array of datasets, leveraging both dataset fusion techniques and prompts from pre-existing models. It is, however, crucial to recognize the availability of several other building datasets. When assimilated, these datasets could conceivably bolster building segmentation performance across a multitude of scenarios, especially under demanding circumstances such as low-light scenes, inclement weather, and low-resolution imagery. As depicted in Figure 4, segmentation efficacy wanes in situations marked by intricate architectural nuances or inadequate lighting, such as nocturnal settings. This attenuation can largely be ascribed to two causative factors. First, the restrictive nature of the extant data can circumscribe segmentation performance in the aforementioned arduous scenarios.
Second, discerning the texture features of edifices becomes increasingly onerous in dimly lit or blurry settings, given the attenuated visibility and definition of salient characteristics. Such intricacies accentuate the challenges of executing segmentation under suboptimal conditions, thereby underlining the need for models adept at navigating these constraints.

Table 2: Performance of the SegAnyBuild model on the different datasets.

| Dataset | IoU | BIoU |
| --- | --- | --- |
| MapAI | 0.8012 | 0.6213 |
| INRIA Aerial Image | 0.8265 | 0.6424 |
| WHU Building | 0.8452 | 0.6165 |
| FloodNet | 0.5031 | 0.4012 |

Table 3: Results using self data versus fused data on the MapAI dataset.

| Method | self | fusion | IoU (↑) | BIoU (↑) |
| --- | --- | --- | --- | --- |
| SegAnyBuild-self | ✓ | | 0.7642 | 0.6024 |
| SegAnyBuild-fusion | | ✓ | 0.8012 | 0.6213 |

Figure 4: The segmentation results illustrate two distinct night scenes: on the left, a scenario featuring illuminated street lights, and on the right, a comparatively darker setting devoid of prominent light sources.

#### Incorporation of 3D Knowledge.

It is worth noting that specific datasets, such as the MapAI dataset, proffer invaluable depth information. Harnessing this depth data can amplify the segmentation process, providing additional cues that facilitate a more precise and consistent demarcation of building peripheries.

#### Advancements in Self-supervised Learning.

Delving deeper into the amalgamation of diverse datasets within the ambit of self-supervised learning [1, 20, 31, 32], especially those endowed with depth information, holds considerable promise. This strategy portends significant enhancements in building segmentation capabilities across a wider spectrum of scenarios. Such observations emphasize the latent potential of integrating supplementary datasets and depth insights to refine building segmentation methodologies.

### Conclusion

The present research underscores the indispensable role of datasets culled from diverse sources and the utilization of advanced representation learning models in building segmentation within remote sensing imagery. Our methodological approach, characterized by the strategic fusion of several datasets, not only augments the available informational spectrum for learning but also delivers strong performance across the entire gamut of datasets harnessed. The successful realization of a unified training paradigm stands testament to the robustness of our strategy, potentially ushering in transformative advancements in pivotal sectors such as urban planning, disaster mitigation, and environmental monitoring. Our integration of dataset fusion methodologies and insights from pre-existing models marks a distinct shift from traditional approaches, thereby redefining how building segmentation challenges in remote sensing imagery are addressed.
This research, in essence, lays a solid groundwork for subsequent explorations, opening vistas for potentially groundbreaking innovations and applications in the realm of building segmentation.

## References

* [1] Maryam Rahnemoonfar, Tashnim Chowdhury, Argho Sarkar, Debvrat Varshney, Masoud Yari, and Robin Roberson Murphy. FloodNet: A high resolution aerial imagery dataset for post flood scene understanding. _IEEE Access_, 9:89644-89654, 2021.
* [2] Stefan Oehmcke, Lei Li, Jaime C Revenga, Thomas Nord-Larsen, Katerina Trepekli, Fabian Gieseke, and Christian Igel. Deep learning based 3D point cloud regression for estimating forest biomass. In _Proceedings of the 30th International Conference on Advances in Geographic Information Systems_, pages 1-4, 2022.
* [3] Adrian Boguszewski, Dominik Batorski, Natalia Ziemba-Jankowska, Tomasz Dziedzic, and Anna Zambrzycka. LandCover.ai: Dataset for automatic mapping of buildings, woodlands, water and roads from aerial imagery. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 1102-1110, 2021.
* [4] Jaime C Revenga, Katerina Trepekli, Stefan Oehmcke, Rasmus Jensen, Lei Li, Christian Igel, Fabian Cristian Gieseke, and Thomas Friborg. Above-ground biomass prediction for croplands at a sub-meter resolution using UAV-LiDAR and machine learning methods. _Remote Sensing_, 14(16):3912, 2022.
* [5] Stefan Oehmcke, Lei Li, Jaime Revenga, Thomas Nord-Larsen, Katerina Trepekli, Fabian Gieseke, and Christian Igel. Deep learning based 3D point cloud regression for estimating forest biomass. In _International Conference on Advances in Geographic Information Systems (SIGSPATIAL)_. ACM, 2022.
* [6] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. _arXiv preprint arXiv:2304.02643_, 2023.
* [7] Shunping Ji, Shiqing Wei, and Meng Lu. Fully convolutional networks for multisource building extraction from an open aerial and satellite imagery data set. _IEEE Transactions on Geoscience and Remote Sensing_, 57(1):574-586, 2018.
* [8] Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping Luo. SegFormer: Simple and efficient design for semantic segmentation with transformers. _Advances in Neural Information Processing Systems_, 34:12077-12090, 2021.
* [9] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 40(4):834-848, 2017.
* [10] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A ConvNet for the 2020s. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 11976-11986, 2022.
* [11] Lei Li, Tianfang Zhang, Stefan Oehmcke, Fabian Gieseke, and Christian Igel. BuildSeg: A general framework for the segmentation of buildings. _Nordic Machine Intelligence_, 2(3), 2022.
* [12] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_, pages 234-241. Springer, 2015.
* [13] Tianfang Zhang, Lei Li, Siying Cao, Tian Pu, and Zhenming Peng. Attention-guided pyramid context networks for detecting infrared small target under complex background. _IEEE Transactions on Aerospace and Electronic Systems_, 2023.
* [14] Bowen Cheng, Ishan Misra, Alexander G Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 1290-1299, 2022.
* [15] Lei Li, Tianfang Zhang, Zhongfeng Kang, and Xikun Jiang. Mask-FPAN: Semi-supervised face parsing in the wild with de-occlusion and UV GAN. _Computers & Graphics_, 116:185-193, 2023.
* [16] Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. Mask R-CNN. In _Proceedings of the IEEE International Conference on Computer Vision_, pages 2961-2969, 2017.
* [17] Tianfang Zhang, Lei Li, Christian Igel, Stefan Oehmcke, Fabian Gieseke, and Zhenming Peng. LR-CSNet: Low-rank deep unfolding network for image compressive sensing. _arXiv preprint arXiv:2212.09088_, 2022.
* [18] Pablo Arbelaez, Michael Maire, Charless Fowlkes, and Jitendra Malik. Contour detection and hierarchical image segmentation. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 33(5):898-916, 2010.
* [19] Lei Li. Edge aware learning for 3D point cloud. _arXiv preprint arXiv:2309.13472_, 2023.
* [20] Yicheng Zhang, Lei Li, Li Song, Rong Xie, and Wenjun Zhang. FACT: Fused attention for clothing transfer with generative adversarial networks. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 34, pages 12894-12901, 2020.
* [21] Meng Wu, Lei Li, and Hongyan Li. FASE: Feature-based similarity search on ECG data. In _2019 IEEE International Conference on Big Knowledge (ICBK)_, pages 273-280. IEEE, 2019.
* [22] Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. BEiT: BERT pre-training of image transformers. _arXiv preprint arXiv:2106.08254_, 2021.
* [23] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_, 2018.
* [24] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollar, and Ross Girshick. Segment anything. _arXiv preprint arXiv:2304.02643_, 2023.
* [25] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. _arXiv preprint arXiv:1409.1556_, 2014.
* [26] Sander Jyhne, Morten Goodwin, Per-Arne Andersen, Ivar Oveland, Alexander Salveson Nossum, Karianne Oydegard Ormseth, Mathilde Orstavik, and Andrew C Flatman. MapAI: Precision in building segmentation. 2022.
* [27] Emmanuel Maggiori, Yuliya Tarabalka, Guillaume Charpiat, and Pierre Alliez. Can semantic labeling methods generalize to any city? The Inria aerial image labeling benchmark. In _IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_. IEEE, 2017.
* [28] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A ConvNet for the 2020s. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, 2022.
* [29] Bowen Cheng, Ross Girshick, Piotr Dollar, Alexander C Berg, and Alexander Kirillov. Boundary IoU: Improving object-centric image segmentation evaluation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 15334-15342, 2021.
* [30] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 3431-3440, 2015.
* [31] Changsheng Zhou, Chao Yuan, Hongxin Wang, Lei Li, Stefan Oehmcke, Junmin Liu, and Jigen Peng. Multi-scale pseudo labeling for unsupervised deep edge detection. _Available at SSRN 4425635_.
* [32] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollar, and Ross Girshick. Masked autoencoders are scalable vision learners. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 16000-16009, 2022.
# Regular Braneworlds with Bulk Fluids

Ignatios Antoniadis\\({}^{1,2}\\), Spiros Cotsakis\\({}^{3,4}\\), Ifigeneia Klaoudatou\\({}^{4}\\)

\\({}^{1}\\)Laboratoire de Physique Theorique et Hautes Energies - LPTHE, Sorbonne Universite, CNRS, 4 Place Jussieu, 75005 Paris, France

\\({}^{2}\\)Institute for Theoretical Physics, KU Leuven, Celestijnenlaan 200D, B-3001 Leuven, Belgium

\\({}^{3}\\)Institute of Gravitation and Cosmology, RUDN University, ul. Miklukho-Maklaya 6, Moscow 117198, Russia

\\({}^{4}\\)Research Laboratory of Geometry, Dynamical Systems and Cosmology, University of the Aegean, Karlovassi 83200, Samos, Greece

Footnote 1: [email protected]

Footnote 2: [email protected]

Footnote 3: [email protected]

November 6, 2021

###### Contents

* 1 Introduction
* 2 Setup and field equations
* 3 Energy conditions
* 4 Linear fluid
  * 4.1 Flat brane
    * 4.1.1 Energy conditions
    * 4.1.2 Planck Mass
  * 4.2 The special case \\(\\gamma=-1\\) for a flat brane
  * 4.3 Curved brane
    * 4.3.1 The null energy condition
    * 4.3.2 Localisation of gravity
  * 4.4 The special case \\(\\gamma=-1\\) for a curved brane
* 5 Non-linear fluid
  * 5.1 The case of \\(\\lambda=3/2\\)
  * 5.2 Solutions for general \\(\\lambda\\)
* 6 Conclusions and open questions

## 1 Introduction

In this paper, we review the dynamical evolution and physical properties of a class of higher-dimensional models analysed in [1, 2, 3, 4, 5, 6, 7]. Our motivation for studying higher-dimensional models stems from the fact that they propose alternative approaches towards understanding, and hopefully improving, our view of challenging issues in cosmology and particle physics. In this review specifically, we focus on a class of brane-world models that offers interesting implications for the cosmological constant problem (hereafter abbreviated cc-problem).

The cc-problem arises from the disagreement between predictions from quantum field theories and observations regarding the value of the cosmological constant. In particular, the theoretical quantum corrections to the cosmological constant are naturally some 120 orders of magnitude larger than its observed value. Theoretically, such a huge value of the cosmological constant would imply a correspondingly huge vacuum energy, which would in turn give rise to a highly curved universe, a prediction that is not compatible with observations. To resolve the discrepancy between theory and observations, the bare value of the cosmological constant has to be unreasonably fine-tuned.

A viable approach to this puzzling problem is to re-examine it in the context of higher-dimensional models. A first study towards this way of resolving the cc-problem was explored in [8]. The main idea was that in the framework of higher dimensions (4+2, where the 2 extra spatial dimensions are compactified a la Kaluza-Klein), the cosmological constant could only curve the extra dimensions, leaving the four-dimensional observed universe (almost) flat. Following this scenario, a class of solutions with arbitrary (including zero) values of the observable cosmological constant was found in [8]. However, the question of why the solutions with a vanishing cosmological constant should be singled out was left unanswered, while various factors that could favour these solutions were discussed, such as appropriate quantum corrections and/or additional interactions, or even stability considerations. Later on, the idea of using extra dimensions to attack the cc-problem was revisited in [9, 10, 11, 12, 13, 14, 15].
The setup of these models is based on the idea that the observed universe is modeled by a four-dimensional hypersurface situated at a fixed position along an extra spatial dimension. The whole spacetime is five-dimensional: there are four dimensions of space, of which only three are spanned by the hypersurface, and one dimension of time. Such a hypersurface is called a 3-brane, while the full higher-dimensional spacetime is called the bulk. Branes play an important role in string theory as the designated locations where open strings end, and at the same time they are essential in proving dualities between different versions of the theory. Additional interest in brane-worlds was sparked by a novel proposal towards resolving the hierarchy problem [16, 17, 18, 19, 20, 21] within their context.

In the brane-world scenario, the brane behaves like a thin surface layer embedded in the bulk, and it is associated with a surface energy-momentum tensor that acts as a delta-source of matter. The surface energy-momentum tensor creates a jump in the extrinsic curvature and describes all Standard Model fields. Gravity and some non-standard fields, on the other hand, also experience the fifth spatial dimension and interact with the fields on the brane. In [9, 10], this setup was used to revisit the cc-problem; the proposal there is to explore the possibility of a self-tuning mechanism. To understand this mechanism better, we have to keep in mind that the tension of the brane receives quartically divergent quantum corrections from the vacuum energy; the Israel conditions, which are essentially boundary conditions describing the embedding of the brane in the bulk, transfer these corrections to the curvature of the brane. However, if it is possible to find flat-brane solutions irrespective of the fluctuations of the brane tension, then we could end up with a universe that is self-tuned to a vanishing cosmological constant on the brane.

In [9], the bulk matter is modeled by a scalar field that is minimally coupled to gravity and conformally coupled to the fields on the brane. The conformal coupling is carefully chosen to allow only for a flat brane, in support of the self-tuning mechanism. In [10], on the other hand, the bulk matter can also contain a 5D cosmological constant and a variety of forms of tension. A common feature of both models of [9] and [10] is the emergence of singularities that arise within finite distance from the position of the brane. While investigating the puzzling nature of these finite-distance singularities, it was argued initially in [9] that, on one hand, they can be viewed as acting like a reservoir through which all the vacuum energy is emptied, while, on the other hand, they can serve to successfully compactify the extra dimension. Thus, it was implied that the existence of such singularities serves in achieving both 4D gravity macroscopically and a vanishing cosmological constant. The details of a mechanism that could lead to a smooth transition to 4D dynamics were not further explored. Later work [11], however, showed that the flat-brane solutions of [10] containing finite-distance singularities fail to meet a consistency condition that would ensure that the field equations are globally satisfied. This result was extended in [12] by considering a variety of vacuum configurations. As pointed out in [11, 12], to obtain consistency, the singularities have to be resolved.
One way to achieve this is by introducing extra branes at the positions of the singularities. Unfortunately, embedding more branes entails defining new boundary conditions, which in turn introduces a type of fine-tuning in the model. An alternative way to rectify the singularities is by exploiting the mirror symmetry introduced by the embedding of the brane in the bulk and choosing the parameters of the model appropriately to construct a regular matching solution, by cutting and matching the part of the bulk that does not contain singularities. Again, this leads to further issues, since the matching solution gives an infinite Planck mass, thus failing to localize gravity on the brane. A way to overcome this was explored in [13], with the use of a bulk scalar field with an unorthodox Lagrangian. It was shown that by choosing the range of parameters appropriately and using the matching mechanism mentioned above, it is possible to avoid finite-distance singularities and construct a flat-brane solution that is both regular and, at the same time, suitable for localizing gravity on the brane. However, the derived solution faces stability issues. In [14], the possibility of finding general forms of bulk potentials leading to self-tuned solutions without a finite-distance singularity, valid for a range of brane tensions compatible with localized gravity on the brane, was explored. It was found that finite-distance singularities are generic in self-tuned solutions that localize gravity on the brane; resolving the singularities essentially re-introduces fine-tuning, in accordance with [11, 12]. However, it was suggested that additional fields may help revive the self-tuning mechanism.

In [1, 2, 3, 4, 5, 6, 7], we generalized the work of [9] by modeling the bulk matter with a variety of components, such as an analogue of a perfect fluid \\(p=\\gamma\\rho\\), where the 'pressure' \\(p\\) and the 'density' \\(\\rho\\) are functions of the extra dimension only, an interacting mixture, or, more recently, a non-linear fluid with equation of state \\(p=\\gamma\\rho^{\\lambda}\\). In most of these cases our models also contained a curved brane. We find that the non-linear equation of state \\(p=\\gamma\\rho^{\\lambda}\\) is the most appropriate type of bulk matter for generating regular solutions with fundamental physical properties. Such an equation of state has been studied previously in cosmology for its role in avoiding big-rip singularities during late-time asymptotics [24, 25, 26], obtaining inflationary models with special properties [27], unifying models of dark energy and dark matter [28, 29], and also in the analysis of singularities [30, 31, 32].

The mathematical tools that we used for performing our analysis in [1, 2, 3, 4, 5, 6, 7] include the method of asymptotic splittings [33], which detects all possible asymptotic behaviors of solutions around a singularity, combined with a method for tracing envelopes (cf. [4], Section 2), which are essentially solutions with a smaller number of arbitrary constants but which, nonetheless, play a crucial role in determining the general behavior of solutions. Another tool is the analysis of the asymptotic behaviors of Gaussian hypergeometric functions that arise in the solutions for curved branes, or even for flat branes with a non-linear bulk fluid. In this paper, we overview the basic mathematical details and physical implications of the body of work of [1, 2, 3, 4, 5, 6, 7]. The structure of this paper is as follows.
In Section 2, we give the setup and field equations for our brane-worlds. In Section 3, we study the weak, strong and null energy conditions for the bulk fluid. Next, in Section 4, we overview the case of a linear fluid for a flat, or curved, brane and analyse the corresponding energy conditions, as well as the possibility of localizing gravity on the brane. Then, in Section 5, we study from the same perspective a non-linear fluid, first for \\(\\lambda=3/2\\) and then for general \\(\\lambda\\). Finally, in Section 6, we present our conclusions and discuss open questions.

## 2 Setup and field equations

We study a class of brane-world models that consist of a flat, or curved, 3-brane embedded in a five-dimensional bulk. The bulk metric is given by

\\[g_{5}=a^{2}(Y)g_{4}+dY^{2}, \\tag{2.1}\\]

where \\(g_{4}\\) is the four-dimensional flat, de Sitter or anti de Sitter metric, _i.e._,

\\[g_{4}=-dt^{2}+f_{k}^{2}g_{3}, \\tag{2.2}\\]

with

\\[g_{3}=dr^{2}+h_{k}^{2}g_{2}, \\tag{2.3}\\]

and

\\[g_{2}=d\\theta^{2}+\\sin^{2}\\theta d\\varphi^{2}, \\tag{2.4}\\]

with \\(f_{k}=1,\\cosh(Ht)/H,\\cos(Ht)/H\\) (\\(H^{-1}\\) is the de Sitter (or AdS) curvature radius) and \\(h_{k}=r,\\sin r,\\sinh r\\), respectively, and \\(a(Y)\\) is the warp factor (\\(a(Y)>0\\)), which we simply denote by \\(a\\). In our notation, capital Latin indices take the values \\(A,B,\\cdots=1,2,3,4,5\\), while lowercase Greek indices range as \\(\\alpha,\\beta,\\ldots=1,2,3,4\\), with \\(t\\) being the timelike coordinate, \\((r,\\theta,\\varphi,Y)\\) the remaining spacelike ones, and the 5th coordinate corresponding to \\(Y\\). The 5-dimensional Riemann tensor is defined by the formula

\\[R^{A}_{\\ \\ BCD}=\\partial_{C}\\Gamma^{A}_{\\ \\ BD}-\\partial_{D}\\Gamma^{A}_{\\ \\ BC}+\\Gamma^{M}_{BD}\\Gamma^{A}_{\\ MC}-\\Gamma^{M}_{BC}\\Gamma^{A}_{\\ MD}, \\tag{2.5}\\]

the Ricci tensor is the contraction

\\[R_{AB}=R^{C}_{\\ \\ ACB}, \\tag{2.6}\\]

and the five-dimensional Einstein equations on the bulk space are given by

\\[G_{AB}=R_{AB}-\\frac{1}{2}g_{AB}R=\\kappa_{5}^{2}T_{AB}. \\tag{2.7}\\]

We assume that the bulk is filled with a fluid analogue with energy-momentum tensor of the form

\\[T_{AB}=(\\rho+p)u_{A}u_{B}-pg_{AB}, \\tag{2.8}\\]

where the 'pressure' \\(p\\) and the 'density' \\(\\rho\\) are functions only of the fifth dimension, \\(Y\\), and the velocity vector field is \\(u_{A}=(0,0,0,0,1)\\), that is \\(u_{A}=\\partial/\\partial Y\\), parallel to the \\(Y\\)-dimension. The five-dimensional Einstein equations can then be written as

\\[\\frac{a^{\\prime 2}}{a^{2}}=\\frac{\\kappa_{5}^{2}}{6}\\rho+\\frac{kH^{2}}{a^{2}}, \\tag{2.9}\\]

\\[\\frac{a^{\\prime\\prime}}{a}=-\\frac{\\kappa_{5}^{2}}{6}(\\rho+2p), \\tag{2.10}\\]

where \\(k=\\pm 1\\), and the prime \\((\\ ^{\\prime})\\) denotes differentiation with respect to \\(Y\\). On the other hand, the equation of conservation,

\\[\\nabla_{B}T^{AB}=0,\\]

gives

\\[\\rho^{\\prime}+4\\frac{a^{\\prime}}{a}(\\rho+p)=0. \\tag{2.11}\\]

We assume that the density and pressure of the fluid are related according to the general equation of state

\\[p=\\gamma\\rho^{\\lambda}, \\tag{2.12}\\]

where \\(\\gamma\\) and \\(\\lambda\\) are constants. Inputting (2.12) in (2.9)-(2.11), we find

\\[\\frac{a^{\\prime 2}}{a^{2}}=\\frac{\\kappa_{5}^{2}}{6}\\rho+\\frac{kH^{2}}{a^{2}}, \\tag{2.13}\\]

\\[\\frac{a^{\\prime\\prime}}{a}=-\\frac{\\kappa_{5}^{2}}{6}(\\rho+2\\gamma\\rho^{\\lambda}), \\tag{2.14}\\]

\\[\\rho^{\\prime}+4\\frac{a^{\\prime}}{a}(\\rho+\\gamma\\rho^{\\lambda})=0. \\tag{2.15}\\]
Before solving the system of Eqs. (2.13)-(2.15) and studying its asymptotic behaviors, we find it useful to outline below the possible types of singularity that we will encounter in the next Sections. Denoting by \\(Y_{s}\\) the finite value of \\(Y\\) labeling the position of the singularity, we say that a finite-distance singularity is a

* _collapse_ singularity, if \\(a\\to 0^{+}\\), as \\(Y\\to Y_{s}\\),
* _big-rip_ singularity, if \\(a\\rightarrow\\infty\\), as \\(Y\\to Y_{s}\\).

Depending on the values of \\(k\\), \\(\\gamma\\) and \\(\\lambda\\), the above behaviors may be accompanied by a divergence in the density, or even in the pressure, of the fluid. We emphasize that these singularities are not related to geodesic incompleteness as in standard cosmology, but rather to a pathological behavior of the warp factor. In the absence of finite-distance singularities, we call the solutions _regular_ and include in this category the behaviors of the warp factor given above, provided that these occur _only_ at infinite distance, _i.e._, \\(Y\\rightarrow\\pm\\infty\\). Our study will be completed once we find a solution that has the following fundamental properties:

* it is regular (no finite-distance singularities),
* it satisfies physical conditions, such as energy conditions,
* it leads to a finite Planck mass, thus localizing gravity on the brane.

Since the behaviors of solutions depend strongly on the curvature of the brane, as well as on the linearity/non-linearity of the equation of state, we present the various possibilities in separate Sections. Also, in the next Section, we explain briefly a way to formulate the weak, strong and null energy conditions for our type of bulk matter. Detailed proofs of our results can be found in [4, 6].

## 3 Energy conditions

The weak, strong and null energy conditions are physical requirements that we wish our solutions to fulfill. They can help us single out those solutions of Eqs. (2.13)-(2.15) that are more plausible from a physics point of view. Classically, it is convenient to translate the energy conditions into restrictions imposed on \\(p\\) and \\(\\rho\\). To work out the energy conditions for our type of fluid, we start by noting that in the formulation of the field equations, both the metric given by (2.1) and the bulk fluid described by (2.8) and (2.12) appear as static with respect to the time coordinate \\(t\\), because the evolution is taken with respect to the fifth spatial coordinate \\(Y\\). Using this fact, we can reinterpret our fluid analogue as a real anisotropic fluid having the following energy-momentum tensor,

\\[T_{AB}=(\\rho^{0}+p^{0})u_{A}^{0}u_{B}^{0}-p^{0}g_{\\alpha\\beta}\\delta_{A}^{\\alpha}\\delta_{B}^{\\beta}-p_{Y}g_{55}\\delta_{A}^{5}\\delta_{B}^{5}, \\tag{3.1}\\]

where \\(u^{0}_{A}=(a(Y),0,0,0,0)\\), \\(A,B=1,2,3,4,5\\) and \\(\\alpha,\\beta=1,2,3,4\\). When we combine (2.8) with (3.1), we get the following set of relations,

\\[p_{Y}=-\\rho, \\tag{3.2}\\]

\\[\\rho^{0}=-p, \\tag{3.3}\\]

\\[p^{0}=p. \\tag{3.4}\\]

Note that the last two relations imply that

\\[p^{0}=-\\rho^{0}, \\tag{3.5}\\]

which means that this type of matter satisfies a cosmological-constant-like equation of state. Substituting (3.2)-(3.5) in (3.1), we find that

\\[T_{AB}=-pg_{\\alpha\\beta}\\delta^{\\alpha}_{A}\\delta^{\\beta}_{B}+\\rho g_{55}\\delta^{5}_{A}\\delta^{5}_{B}. \\tag{3.6}\\]

At this point, we can start to formulate the energy conditions.
First, we study the weak energy condition, according to which every future-directed timelike vector \\(v^{A}\\) should satisfy

\\[T_{AB}v^{A}v^{B}\\geq 0. \\tag{3.7}\\]

This condition implies that the energy density should be non-negative for all forms of physical matter [34]. Here we find that it translates to

\\[p\\geq 0, \\tag{3.8}\\]

and

\\[p+\\rho\\geq 0. \\tag{3.9}\\]

Second, we work out the strong energy condition, which states that

\\[\\left(T_{AB}-\\frac{1}{3}Tg_{AB}\\right)v^{A}v^{B}\\geq 0, \\tag{3.10}\\]

for every future-directed unit timelike vector \\(v^{A}\\). In our case, this gives

\\[-p+\\rho\\geq 0, \\tag{3.11}\\]

and

\\[p+\\rho\\geq 0. \\tag{3.12}\\]

Finally, we study the null energy condition, according to which every future-directed null vector \\(k^{A}\\) should satisfy [35]

\\[T_{AB}k^{A}k^{B}\\geq 0. \\tag{3.13}\\]

Here we find that it translates to

\\[p+\\rho\\geq 0. \\tag{3.14}\\]

In later Sections we are going to express these conditions with respect to the values of the parameters \\(\\gamma\\) and \\(\\lambda\\) of the equation of state, as this will enable us to automatically recognize those solutions that are compatible with the energy conditions.

## 4 Linear fluid

In this section, we review the behaviors of solutions for a linear fluid, which can also be viewed as an analogue of a 'perfect' fluid. Inputting \\(\\lambda=1\\) in the field equations (2.13)-(2.14), we find

\\[\\frac{a^{\\prime 2}}{a^{2}}=\\frac{\\kappa_{5}^{2}}{6}\\rho+\\frac{kH^{2}}{a^{2}}, \\tag{4.1}\\]

\\[\\frac{a^{\\prime\\prime}}{a}=-\\frac{\\kappa_{5}^{2}}{6}(1+2\\gamma)\\rho, \\tag{4.2}\\]

while (2.15) gives

\\[\\rho^{\\prime}+4(1+\\gamma)\\frac{a^{\\prime}}{a}\\rho=0. \\tag{4.3}\\]

Naturally, the forms of the solutions of Eqs. (4.1)-(4.3) depend strongly on the values of \\(k\\) and \\(\\gamma\\). We classify the types of solutions and examine each class separately in the following subsections.

### Flat brane

To study the case of a flat brane, we first substitute \\(k=0\\) in (4.1) and find

\\[\\frac{a^{\\prime 2}}{a^{2}}=\\frac{\\kappa_{5}^{2}}{6}\\rho. \\tag{4.4}\\]

Next, we integrate (4.3) to obtain the relation between \\(\\rho\\) and \\(a\\), which reads

\\[\\rho=c_{1}a^{-4(\\gamma+1)}, \\tag{4.5}\\]

with \\(c_{1}\\) an arbitrary constant. We can now substitute (4.5) in (4.4) and integrate to derive the form of the warp factor, \\(a\\),

\\[a=\\left(2(\\gamma+1)\\left(\\pm\\sqrt{\\frac{2Ac_{1}}{3}}Y+c_{2}\\right)\\right)^{1/(2(\\gamma+1))},\\quad\\gamma\\neq-1, \\tag{4.6}\\]

where \\(A=\\kappa_{5}^{2}/4\\). Finally, we input (4.6) in (4.5) to find \\(\\rho\\):

\\[\\rho=c_{1}\\left(2(\\gamma+1)\\left(\\pm\\sqrt{\\frac{2Ac_{1}}{3}}Y+c_{2}\\right)\\right)^{-2}. \\tag{4.7}\\]

Substitution of our \\(k=0\\) solution for \\(a\\) and \\(\\rho\\) in (4.2) shows that the latter equation is also satisfied. Our solution (4.6)-(4.7) holds for all values of \\(\\gamma\\) except \\(\\gamma=-1\\); the case \\(\\gamma=-1\\) is special and is studied separately in a following subsection. For all other values of \\(\\gamma\\), we see that for a flat brane and a linear equation of state there is always a finite-distance singularity located at \\(Y_{s}=\\mp c_{2}\\sqrt{3/(2Ac_{1})}\\).
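Before classifying this singularity, the solution (4.6)-(4.7) can be verified symbolically. The following is an illustrative sympy spot-check of Eqs. (4.4) and (4.2) on the '+' branch, under the simplifying assumption \\(\\gamma>0\\) (so that all quantities are manifestly positive); it is not part of the original analysis in [4].

```python
import sympy as sp

Y, c1, c2, k5 = sp.symbols('Y c1 c2 kappa_5', positive=True)
g = sp.symbols('gamma', positive=True)     # spot-check branch: gamma > 0

A = k5**2 / 4                              # A = kappa_5^2 / 4, as in the text
u = 2*(g + 1)*(sp.sqrt(2*A*c1/3)*Y + c2)   # '+' branch inside Eq. (4.6)
a = u**(1/(2*(g + 1)))                     # warp factor, Eq. (4.6)
rho = c1*u**(-2)                           # density, Eq. (4.7)

eq44 = sp.simplify((sp.diff(a, Y)/a)**2 - k5**2/6*rho)               # Eq. (4.4)
eq42 = sp.simplify(sp.diff(a, Y, 2)/a + k5**2/6*(1 + 2*g)*rho)       # Eq. (4.2)
print(eq44, eq42)  # both should simplify to 0
```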
The nature of the singularity depends on whether \\(\\gamma\\) is less than or greater than \\(-1\\), and can be classified as a collapse type of singularity, with

\\[a\\to 0,\\quad\\rho\\to\\infty,\\quad Y\\to\\mp c_{2}\\sqrt{3/(2Ac_{1})},\\quad\\gamma>-1, \\tag{4.8}\\]

or a big-rip type, with

\\[a\\to\\infty,\\quad\\rho\\to\\infty,\\quad Y\\to\\mp c_{2}\\sqrt{3/(2Ac_{1})},\\quad\\gamma<-1. \\tag{4.9}\\]

Combining this outcome with similar results found in [9] for a massless scalar field, which can also be viewed as a fluid with \\(\\gamma=1\\), we realize that the emergence of finite-distance singularities persists even for this more general type of bulk matter. A next step should therefore be to look for possible ways to rectify these singularities.

In [4], we explored the possibility of avoiding the singularities by constructing a regular matching solution from (4.6) and (4.7). The procedure we followed there is similar to the one used in [13]: we cut and match the part of the bulk that is free from finite-distance singularities. This is indeed possible for an appropriate choice of the range of parameters. In particular, we have examined the following two choices:

* \\(\\gamma<-1\\), \\(c_{2}\\leq 0\\), with the \\(+\\) sign for \\(Y<0\\) and the \\(-\\) sign for \\(Y>0\\). The matching solution then reads

\\[a=\\left(2(\\gamma+1)\\left(-\\sqrt{\\frac{2Ac_{1}}{3}}|Y|+c_{2}\\right)\\right)^{1/(2(\\gamma+1))}, \\tag{4.10}\\]

and

\\[\\rho=c_{1}\\left(2(\\gamma+1)\\left(-\\sqrt{\\frac{2Ac_{1}}{3}}|Y|+c_{2}\\right)\\right)^{-2}, \\tag{4.11}\\]

with the brane placed at the origin \\(Y=0\\). Clearly then, both \\(a\\) and \\(\\rho\\) are non-singular, since the term \\(\\left(-\\sqrt{2Ac_{1}/3}|Y|+c_{2}\\right)\\) is always negative.

* \\(\\gamma>-1\\), \\(c_{2}\\geq 0\\), with the \\(+\\) sign for \\(Y>0\\) and the \\(-\\) sign for \\(Y<0\\). Then the matching solution is

\\[a=\\left(2(\\gamma+1)\\left(\\sqrt{\\frac{2Ac_{1}}{3}}|Y|+c_{2}\\right)\\right)^{1/(2(\\gamma+1))}, \\tag{4.12}\\]

and

\\[\\rho=c_{1}\\left(2(\\gamma+1)\\left(\\sqrt{\\frac{2Ac_{1}}{3}}|Y|+c_{2}\\right)\\right)^{-2}. \\tag{4.13}\\]

Again, \\(a\\) and \\(\\rho\\) are non-singular, since the term \\(\\left(\\sqrt{2Ac_{1}/3}|Y|+c_{2}\\right)\\) is always positive.

We are going to focus only on the solution described by (4.10)-(4.11), which corresponds to \\(\\gamma<-1\\), since it is the only one of the two possibilities that leads to a finite four-dimensional Planck mass. To examine further the adequacy of this solution, we ought to check the boundary conditions that describe the embedding of the brane in the bulk. A natural condition to impose is that the warp factor and energy density be continuous functions. Note that in what follows, by writing \\(c_{i}^{+}\\) (\\(c_{i}^{-}\\)) we refer to the value of an arbitrary constant \\(c_{i}\\) at \\(Y>0\\) (\\(Y<0\\)). The continuity of the warp factor at \\(Y=0\\) leads to the condition

\\[(2(\\gamma+1)c_{2}^{+})^{1/(2(\\gamma+1))}=(2(\\gamma+1)c_{2}^{-})^{1/(2(\\gamma+1))}, \\tag{4.14}\\]

or, since \\(c_{2}^{+}\\) and \\(c_{2}^{-}\\) are real numbers,

\\[c_{2}^{+}=\\pm c_{2}^{-}, \\tag{4.15}\\]

depending on the value of \\(\\gamma\\). Similarly, continuity of the density gives

\\[\\frac{c_{1}^{+}}{(c_{2}^{+})^{2}}=\\frac{c_{1}^{-}}{(c_{2}^{-})^{2}}, \\tag{4.16}\\]

and using (4.15) we find

\\[c_{1}^{+}=c_{1}^{-}. \\tag{4.17}\\]
On the other hand, the jump of the extrinsic curvature \\(K_{\\alpha\\beta}=1/2(\\partial g_{\\alpha\\beta}/\\partial Y)\\) (\\(\\alpha,\\beta=1,2,3,4\\)) is given by

\\[K_{\\alpha\\beta}^{+}-K_{\\alpha\\beta}^{-}=-\\kappa_{5}^{2}\\left(S_{\\alpha\\beta}-\\frac{1}{3}g_{\\alpha\\beta}S\\right), \\tag{4.18}\\]

where the surface energy-momentum tensor \\(S_{\\alpha\\beta}\\) (defined only on the brane and vanishing off the brane) is taken to be

\\[S_{\\alpha\\beta}=-g_{\\alpha\\beta}f(\\rho). \\tag{4.19}\\]

#### 4.1.1 Energy conditions

For the linear fluid \\(p=\\gamma\\rho\\) with \\(\\rho\\geq 0\\), the energy conditions of Section 3 restrict the admissible values of \\(\\gamma\\); combining the weak and strong energy conditions gives

\\[-1\\leq\\gamma\\leq 1,\\]

from which we find that it is possible to have either \\(p=0\\), or

\\[p<0\\quad\\mbox{and}\\quad-1\\leq\\gamma<0, \\tag{4.28}\\]

or

\\[p>0\\quad\\mbox{and}\\quad 0<\\gamma\\leq 1. \\tag{4.29}\\]

Finally, the null energy condition (3.14) for \\(p=\\gamma\\rho\\) gives (4.24), which again implies that

\\[\\gamma\\geq-1. \\tag{4.30}\\]

The inequalities (4.26), (4.28), (4.29) and (4.30) show that the energy conditions restrict \\(\\gamma\\) to be at least greater than or equal to \\(-1\\), which means that the regular solution for \\(\\gamma<-1\\) cannot satisfy the energy conditions.

#### 4.1.2 Planck Mass

In this Section, we will show that the solution (4.10) provides the appropriate range of \\(\\gamma\\) to obtain a finite four-dimensional Planck mass. The value of the four-dimensional Planck mass, \\(M_{p}^{2}=8\\pi/\\kappa\\), is determined by the following integral [13]:

\\[\\frac{\\kappa_{5}^{2}}{\\kappa}=\\int_{-Y_{c}}^{Y_{c}}a^{2}(Y)dY. \\tag{4.31}\\]

For our solution, Eq. (4.10), the above integral becomes [4]

\\[\\int_{-Y_{c}}^{Y_{c}}\\left(2(\\gamma+1)\\left(-\\sqrt{\\frac{2Ac_{1}}{3}}|Y|+c_{2}\\right)\\right)^{1/(\\gamma+1)}dY=\\frac{1}{2(\\gamma+2)}\\sqrt{\\frac{3}{2Ac_{1}}}\\left(2(\\gamma+1)\\left(\\sqrt{\\frac{2Ac_{1}}{3}}Y+c_{2}\\right)\\right)^{(\\gamma+2)/(\\gamma+1)}\\Bigg|_{-Y_{c}}^{0}-\\frac{1}{2(\\gamma+2)}\\sqrt{\\frac{3}{2Ac_{1}}}\\left(2(\\gamma+1)\\left(-\\sqrt{\\frac{2Ac_{1}}{3}}Y+c_{2}\\right)\\right)^{(\\gamma+2)/(\\gamma+1)}\\Bigg|_{0}^{Y_{c}}. \\tag{4.32}\\]

In the limit \\(Y_{c}\\to\\infty\\), we see that the Planck mass remains finite only for

\\[-2<\\gamma<-1, \\tag{4.35}\\]

and takes the form

\\[\\frac{\\kappa_{5}^{2}}{\\kappa}=\\sqrt{\\frac{3}{2Ac_{1}}}\\frac{(2(\\gamma+1)c_{2})^{\\frac{\\gamma+2}{\\gamma+1}}}{\\gamma+2}. \\tag{4.36}\\]

This means that the interval \\((-\\infty,-1)\\) on which the solution (4.10) is defined has to be refined to \\((-2,-1)\\), after taking into account the requirement of a finite Planck mass. Combining this fact with the results of the previous subsection, we conclude that for a flat brane and a linear fluid with \\(\\gamma\\neq-1\\), it is not feasible to construct a regular solution that satisfies both the requirement of a finite Planck mass given by (4.35) _and_ the energy conditions.

### The special case \\(\\gamma=-1\\) for a flat brane

Putting \\(\\gamma=-1\\) in (4.3), we find

\\[\\rho=c_{1}, \\tag{4.37}\\]

where \\(c_{1}\\) is an integration constant. Since \\(\\rho\\geq 0\\) from (4.4), we see that \\(c_{1}\\) has to be non-negative. Substituting (4.37) in (4.4), we find

\\[a(Y)=e^{\\pm\\sqrt{(\\kappa_{5}^{2}/6)c_{1}}Y+c_{2}}, \\tag{4.38}\\]

where \\(c_{2}\\) is an integration constant. We note that this solution has no finite-distance singularities and satisfies the null energy condition trivially (\\(\\gamma=-1\\)).
For a finite four-dimensional Planck mass, we can make the following choice: we choose the \\(+\\) sign for \\(Y<0\\) and the \\(-\\) sign for \\(Y>0\\), and place the brane at \\(Y=0\\). Then the matching solution reads

\\[a(Y)=e^{-\\sqrt{(\\kappa_{5}^{2}/6)c_{1}}|Y|+c_{2}}. \\tag{4.39}\\]

This reduces to the Randall-Sundrum solution of [20] by setting \\(c_{2}=0\\) and \\(c_{1}=-\\Lambda\\), where \\(\\Lambda<0\\) is the bulk cosmological constant in that model. Continuity of the warp factor and density at the position of the brane gives

\\[c_{2}^{+}=c_{2}^{-},\\quad\\text{and}\\quad c_{1}^{+}=c_{1}^{-}. \\tag{4.40}\\]

For simplicity, we can also set \\(c_{2}=0\\). Then, using the junction condition (4.20), we can find the form of the brane tension, which reads

\\[f(\\rho(0))=\\frac{\\sqrt{6c_{1}}}{\\kappa_{5}}, \\tag{4.41}\\]

and we note that the tension is positive. Finally, the four-dimensional Planck mass is determined by (4.31). Here we have

\\[a^{2}(Y)=e^{-2\\sqrt{(\\kappa_{5}^{2}/6)c_{1}}|Y|}, \\tag{4.42}\\]

and using the symmetry of the solution we find

\\[\\frac{\\kappa_{5}^{2}}{\\kappa}=\\int_{-Y_{c}}^{Y_{c}}a^{2}(Y)dY=2\\int_{0}^{Y_{c}}e^{-2\\sqrt{(\\kappa_{5}^{2}/6)c_{1}}Y}dY=-\\frac{\\sqrt{6}}{\\kappa_{5}\\sqrt{c_{1}}}e^{-2\\sqrt{(\\kappa_{5}^{2}/6)c_{1}}Y}\\Big|_{0}^{Y_{c}}. \\tag{4.43}\\]

Taking \\(Y_{c}\\to\\infty\\), we see that the four-dimensional Planck mass remains finite and is proportional to

\\[\\frac{\\sqrt{6}}{\\kappa_{5}\\sqrt{c_{1}}}. \\tag{4.44}\\]

### Curved brane

The impossibility of finding, for any range of \\(\\gamma\\), a flat-brane solution bearing the required physical properties mentioned in the previous subsections led us to investigate further whether the situation could be resolved by allowing for a nonzero brane curvature. Of course, this would inevitably bring back the cc-problem, as mentioned in the introduction. Still, it is worth exploring the impact of the curvature, as this offers a deeper understanding of the factors that govern the dynamics and evolution of the brane-worlds under investigation. Assuming \\(k\\neq 0\\) in Eqs. (4.1) and (4.2) and substituting (4.5) in (4.1), we find

\\[a^{\\prime 2}=\\frac{2}{3}Ac_{1}a^{-2(2\\gamma+1)}+kH^{2}. \\tag{4.45}\\]

For simplicity, we can set \\(C=(2/3)Ac_{1}\\) and keep in mind that the sign of \\(C\\) follows the sign of \\(\\rho\\). Then (4.45) can be written as

\\[a^{\\prime 2}=Ca^{-2(2\\gamma+1)}+kH^{2}. \\tag{4.46}\\]

It automatically follows from (4.46) that the non-negativity of its LHS restricts the acceptable combinations of the signs of \\(C\\) and \\(k\\), and hence the possible asymptotic behaviors of \\(a\\). For example, the case \\(C<0\\) and \\(k<0\\) is an impossible combination. Also, the case \\(C<0\\), \\(k>0\\) is possible only for

\\[\\mbox{dS brane-world:}\\quad 0<a^{-2(2\\gamma+1)}<-\\frac{kH^{2}}{C}, \\tag{4.47}\\]

while the case \\(C>0\\), \\(k<0\\) is allowed only for

\\[\\mbox{AdS brane-world:}\\quad a^{-2(2\\gamma+1)}>-\\frac{kH^{2}}{C}>0. \\tag{4.48}\\]

Clearly, the above inequalities show that both cases are likely to give rise to regular solutions. In particular, for the case \\(C<0\\) and a dS brane with \\(\\gamma>-1/2\\), (4.47) implies that \\(a^{2(2\\gamma+1)}>-C/(kH^{2})>0\\), so that the warp factor \\(a\\) is bounded away from zero, which prevents collapse singularities from happening.
The only way that this case may introduce a finite-distance singularity is by allowing the warp factor to become divergent within a finite distance, thus signalling a big-rip singularity. However, further calculations presented in [6] showed that this singular behaviour is also excluded, which means that for \\(C<0\\) and a dS brane there is indeed a regular solution. On the other hand, for the case \\(C>0\\) and an AdS brane with \\(\\gamma<-1/2\\), (4.48) implies that the warp factor is again bounded away from zero, thus excluding the existence of collapse singularities here as well. However, for this latter case, we have to further restrict \\(\\gamma\\) to the interval \\((-1,-1/2)\\) in order to avoid a finite-distance big-rip singularity [6].

To derive solutions for a curved brane, we can proceed by writing Eq. (4.46) in the form

\\[\\int\\frac{da}{\\sqrt{Ca^{-2(2\\gamma+1)}+kH^{2}}}=\\pm\\int dY. \\tag{4.49}\\]
  * \\(IIIb)\\): \\(-1<\\gamma<-1/2\\), the solution is \\[\\pm Y+C_{2} = -\\frac{C^{\\frac{\\gamma+1}{2\\gamma+1}}}{2(2\\gamma+1)(-kH^{2})^{\\frac{4\\gamma+3}{4\\gamma+2}}}\\sqrt{a^{-2(2\\gamma+1)}+\\frac{kH^{2}}{C}}\\times\\] (4.56) \\[\\times {}_{2}F_{1}\\left(\\frac{4\\gamma+3}{2(2\\gamma+1)},\\frac{1}{2},\\frac{3}{2};1+\\frac{C}{kH^{2}}a^{-2(2\\gamma+1)}\\right).\\]

We can deduce the asymptotic behaviors of the warp factor either directly or indirectly from the above implicit solutions. For example, take the case \\(Ia)\\), valid for a dS brane with \\(C>0\\) and \\(\\gamma<-1/2\\): the argument of the hypergeometric function is \\[z=-\\frac{C}{kH^{2}}a^{-2(2\\gamma+1)}.\\] Note that the power of \\(a\\) is positive, so letting \\(a\\to 0\\) makes \\(z\\to 0\\), which means that the hypergeometric function is convergent and, in fact, approaches one. Then from the LHS of (4.53), we see that \\(Y\\) will approach the finite value \\(\\pm C_{2}\\), which shows that this solution has a finite-distance collapse singularity. A more indirect case is the one described by \\(IIIb)\\). As follows from (4.48), the warp factor is bounded away from zero, so that no collapse singularities exist in this case. We should, however, check whether it is possible to have a divergent warp factor within finite distance. Since the power of \\(a\\) in the argument \\[z=1+\\frac{C}{kH^{2}}a^{-2(2\\gamma+1)}\\] is positive, letting \\(a\\to\\infty\\) sends \\(z\\to-\\infty\\), and one needs the large-\\(|z|\\) asymptotic expansion of the hypergeometric function, Eq. (4.60), which is not reproduced here (see [6]). Substituting (4.60) in (4.56), we deduce that \\(a\\) behaves according to (we denote this below with the symbol \\(\\sim\\)) [6] \\[a^{2(\\gamma+1)}\\sim\\pm Y+C_{2}, \\tag{4.63}\\] and so letting \\(a\\to\\infty\\) gives \\(Y\\to\\pm\\infty\\). This means that the solution (4.56) is indeed regular. We can use the same procedure to determine the singular or regular nature of the remaining solutions. For a convenient overall view of the most important behaviors of the solutions of cases \\(Ia)\\)-\\(IIIb)\\), we use the table below to illustrate them. Each behavior is accompanied by information on the corresponding range of \\(\\gamma\\) and the solution from which it arises. For brevity, regular behaviors like \\(a\\to\\infty\\) as \\(Y\\to\\infty\\) that coexist with finite-distance singularities are not depicted in the table but can be found in [6]. Summarizing, for a curved brane we obtain regular solutions from the following two cases:

* \\(IIb)\\), referring to a dS brane with negative density and \\(\\gamma>-1/2\\);
* \\(IIIb)\\), referring to an AdS brane with positive density and \\(-1<\\gamma<-1/2\\).

We note here that the special case \\(\\gamma=-1\\) is studied separately in a subsequent subsection.

\\begin{table} \\begin{tabular}{l|l|l|l|l} \\hline \\(\\gamma\\) & \\((-\\infty,-1)\\) & \\((-1,-1/2)\\) & \\((-1/2,1/2)\\) & \\((1/2,\\infty)\\) \\\\ \\hline dS, \\(C>0\\) & collapse sing., \\(Ia\\) & collapse sing., \\(Ia\\) & collapse sing., \\(Ib\\) & collapse sing., \\(Ib\\) \\\\ & big-rip sing., \\(Ia\\) & & & \\\\ \\hline dS, \\(C<0\\) & collapse sing., \\(IIa\\) & collapse sing., \\(IIa\\) & _regular_, \\(IIb\\) & _regular_, \\(IIb\\) \\\\ \\hline AdS, \\(C>0\\) & big-rip sing., \\(IIIa\\) & _regular_, \\(IIIb\\) & collapse sing., \\(IIIa\\) & collapse sing., \\(IIIa\\) \\\\ \\hline \\end{tabular} \\end{table} Table 1: Asymptotic behaviors for a curved brane and a linear fluid
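The collapse singularity of case \\(Ia)\\) above can also be illustrated numerically. The snippet below, a sketch with illustrative parameter values (all of \\(\\gamma\\), \\(C\\), \\(kH^{2}\\), \\(C_{2}\\) are assumptions, not values from the text), evaluates the RHS of (4.53) with mpmath and shows \\(Y\\) approaching a finite value as \\(a\\to 0\\):

```python
import mpmath as mp

# Case Ia): dS brane, C > 0, gamma < -1/2. Illustrative values (assumptions):
gamma, C, kH2, C2 = -0.75, 1.0, 1.0, 0.0
p = -2*(2*gamma + 1)          # power of a in the argument z; here p = 1 > 0

def Y_of_a(a):
    z = -(C/kH2) * a**p       # z -> 0 as a -> 0, so 2F1 -> 1
    F = mp.hyp2f1(0.5, -1/(2*(2*gamma + 1)), (4*gamma + 1)/(2*(2*gamma + 1)), z)
    return a/mp.sqrt(kH2) * F - C2

for a in (1e-1, 1e-3, 1e-6):
    print(a, Y_of_a(a))       # Y tends to the finite -C2: a collapse singularity
```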
#### 4.3.1 The null energy condition

The case of curved branes has already proved successful in providing regular solutions, an outcome that was impossible for a flat brane. We still have to check whether the regular solutions can also fulfil the energy conditions as well as the requirement of a finite Planck mass. For a linear fluid, we have already seen that the null energy condition is given by (4.25). For a curved brane it translates to having \\[\\rho\\geq 0\\quad\\mbox{and}\\quad\\gamma\\geq-1,\\quad\\mbox{or},\\quad\\rho\\leq 0\\quad\\mbox{and}\\quad\\gamma\\leq-1. \\tag{4.64}\\] These two conditions may be written equivalently with respect to \\(C\\) instead of \\(\\rho\\) as \\[C\\geq 0\\quad\\mbox{and}\\quad\\gamma\\geq-1, \\tag{4.65}\\] and \\[C\\leq 0\\quad\\mbox{and}\\quad\\gamma\\leq-1. \\tag{4.66}\\] Combining conditions (4.65) and (4.66) with the ranges of \\(\\gamma\\) and \\(C\\) for which the regular solutions of cases \\(IIb)\\) and \\(IIIb)\\) are defined, we see that only the regular solution of \\(IIIb)\\) is compatible with the null energy condition.

#### 4.3.2 Localisation of gravity

To complete our study for a curved brane, we examine in this subsection the requirement of a finite Planck mass. We continue, for illustration purposes, to focus on the regular solution of case \\(IIIb)\\). The 4D Planck mass is given by the integral of Eq. (4.31). The behavior of \\(a^{2}\\) that we need to substitute in Eq. (4.31) can be deduced from (4.63), \\[a^{2}\\sim(|Y|+c_{2})^{\\frac{1}{\\gamma+1}}, \\tag{4.67}\\] after setting \\(\\pm Y=|Y|\\) and positioning the brane at \\(Y=0\\) [6]. It is straightforward to see that integration of \\(a^{2}\\) gives an expression with \\(Y\\) raised to the exponent \\[\\frac{\\gamma+2}{\\gamma+1}, \\tag{4.68}\\] which is positive, since \\(-1<\\gamma<-1/2\\) for the case \\(IIIb)\\). Therefore, the Planck mass is infinite in this case. We note that for a finite Planck mass, we would need \\[-2<\\gamma<-1.\\] As shown in [6], the second regular solution, of case \\(IIb)\\), also fails to give a finite Planck mass. The problem of localizing gravity on the brane persists further in the case of regular matching solutions that can be constructed out of the singular solutions \\(Ia)\\) and \\(Ib)\\). Summarizing, for the case of a curved brane and a linear fluid with \\(\\gamma\\neq-1\\), there exist regular solutions for certain ranges of \\(\\gamma\\); however, these ranges are inconsistent with the requirement of a finite Planck mass (AdS brane with positive density), or with both the null energy condition and the requirement of a finite Planck mass (dS brane with negative density). On the other hand, regular matching solutions constructed by cutting the bulk and gluing the parts that are free from singularities satisfy the null energy condition but fail to localize gravity on the brane [6].

### The special case \\(\\gamma=-1\\) for a curved brane

For \\(\\gamma=-1\\) and a curved brane, we find from (4.3) that \\[\\rho=c_{3}, \\tag{4.69}\\] where \\(c_{3}\\) is an integration constant. Substituting (4.69) and \\(\\gamma=-1\\) in (4.2), we find \\[a^{\\prime\\prime}-\\kappa_{5}^{2}\\frac{c_{3}}{6}a=0. \\tag{4.70}\\] For \\(c_{3}>0\\) the above equation has the general solution \\[a(Y)=c_{1}e^{\\kappa_{5}\\sqrt{c_{3}/6}Y}+c_{2}e^{-\\kappa_{5}\\sqrt{c_{3}/6}Y}, \\tag{4.71}\\] where \\(c_{1}\\) and \\(c_{2}\\) are arbitrary constants. Substitution of (4.71) in (4.1) determines the constant \\(c_{3}\\) in terms of \\(c_{1}\\) and \\(c_{2}\\). Here we find \\[c_{3}=-\\frac{3kH^{2}}{2c_{1}c_{2}\\kappa_{5}^{2}}. \\tag{4.72}\\]
## Non-linear fluid

The problems we faced with the solutions for flat or curved branes analysed above can be resolved by allowing \\(\\lambda\\neq 1\\) in the equation of state (2.12). We briefly review here the main features of solutions for a non-linear equation of state and a flat brane; full details can be found in [7]. We start by substituting \\(k=0\\) in the system (2.13)-(2.15), leading to \\[\\frac{a^{\\prime 2}}{a^{2}} = \\frac{\\kappa_{5}^{2}}{6}\\rho, \\tag{5.1}\\] \\[\\frac{a^{\\prime\\prime}}{a} = -\\frac{\\kappa_{5}^{2}}{6}(2\\gamma\\rho^{\\lambda}+\\rho), \\tag{5.2}\\] and \\[\\rho^{\\prime}+4(\\gamma\\rho^{\\lambda}+\\rho)\\frac{a^{\\prime}}{a}=0. \\tag{5.3}\\] Before solving the system of equations above, we first check the restrictions that the null energy condition imposes on the parameters of a non-linear fluid. Inputting \\(p=\\gamma\\rho^{\\lambda}\\) in the null energy condition (3.14), we obtain \\[\\gamma\\rho^{\\lambda}+\\rho\\geq 0, \\tag{5.4}\\] or, equivalently, \\[\\rho^{\\lambda}(\\gamma+\\rho^{1-\\lambda})\\geq 0. \\tag{5.5}\\] Since \\(\\rho\\geq 0\\) from Eq. (5.1), we see that the null energy condition can be written as \\[\\gamma+\\rho^{1-\\lambda}\\geq 0. \\tag{5.6}\\] To derive a solution of the system of Eqs. (5.1)-(5.3), we integrate the continuity equation (5.3) to find the relation between the warp factor and the density. In the integration process we arrive at a logarithmic term of the form \\(\\ln|\\gamma+\\rho^{1-\\lambda}|\\). To incorporate the null energy condition (5.6) from the beginning, we choose to ignore the absolute value and simply set this term equal to \\(\\ln(\\gamma+\\rho^{1-\\lambda})\\). The resulting relation between \\(\\rho\\) and \\(a\\) is \\[\\rho=(-\\gamma+c_{1}a^{4(\\lambda-1)})^{1/(1-\\lambda)}, \\tag{5.7}\\] where \\[c_{1}=\\frac{\\gamma+\\rho_{0}^{1-\\lambda}}{a_{0}^{4(\\lambda-1)}}, \\tag{5.8}\\] with \\(\\rho_{0}=\\rho(Y_{0})\\), \\(a_{0}=a(Y_{0})\\) being the initial conditions. According to (5.6), this translates to \\(c_{1}\\geq 0\\). To avoid the singularity in the density, with \\(\\rho\\to\\infty\\) for \\(\\lambda>1\\) and \\[a^{4(\\lambda-1)}=\\frac{\\gamma}{c_{1}}, \\tag{5.9}\\] we take, in what follows, \\(\\gamma<0\\). Next, we substitute (5.7) in (5.1) and integrate. We find \\[\\int\\frac{a}{(c_{1}-\\gamma a^{4(1-\\lambda)})^{1/(2(1-\\lambda))}}\\,da=\\pm\\frac{\\kappa_{5}}{\\sqrt{6}}\\int dY. \\tag{5.10}\\] We note that we can evaluate the above integral directly for values of \\(\\lambda\\) that make \\(1/(2(1-\\lambda))\\) a negative integer. These are \\(\\lambda=(n+1)/n\\), with \\(n=2k\\) and \\(k\\) a positive integer. We study a characteristic example of such a choice in the next paragraph.

### The case of \\(\\lambda=3/2\\)

Let us focus first on the simplest case, \\(n=2\\), corresponding to \\(\\lambda=3/2\\). This makes the exponent \\(1/(2(1-\\lambda))\\) in the integral on the LHS of Eq. (5.10) equal to \\(-1\\). It is then straightforward to integrate, and we arrive at the following implicit solution \\[\\pm Y+C_{2}=\\frac{\\sqrt{6}}{\\kappa_{5}}\\left(\\frac{c_{1}}{2}a^{2}-\\gamma\\ln a\\right), \\tag{5.11}\\] where \\(C_{2}\\) is an integration constant.
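As a consistency check, not part of the original text, a few lines of sympy verify that the implicit solution (5.11), together with the density (5.7) for \\(\\lambda=3/2\\), satisfies the constraint (5.1) (we write \\(\\gamma=-g\\) with \\(g>0\\) to encode \\(\\gamma<0\\)):

```python
import sympy as sp

a, c1, kappa5, g = sp.symbols('a c_1 kappa_5 g', positive=True)  # g = -gamma > 0

# Implicit solution (5.11) with the '+' sign and C_2 = 0:
Y_of_a = sp.sqrt(6)/kappa5 * (c1/2*a**2 + g*sp.log(a))

# Density (5.7) for lambda = 3/2: rho = (-gamma + c_1 a^2)^(-2)
rho = (g + c1*a**2)**(-2)

# Constraint (5.1): (a'/a)^2 = kappa_5^2/6 * rho,
# with a' obtained from the inverse-function rule a' = 1/(dY/da).
a_prime = 1/sp.diff(Y_of_a, a)
print(sp.simplify((a_prime/a)**2 - kappa5**2/6*rho))   # -> 0
```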
Looking at solution (5.11), we see that we can have the following asymptotic behaviors \\[a \\to \\infty,\\quad\\rho\\to 0,\\quad p\\to 0,\\quad\\mbox{as}\\quad Y\\to\\pm\\infty \\tag{5.12}\\] \\[a \\to 0^{+},\\quad\\rho\\to 1/\\gamma^{2},\\quad p\\to-1/\\gamma^{2},\\quad\\mbox{as}\\quad Y\\to\\pm\\infty, \\tag{5.13}\\] which show that all pathological behaviors of \\(a\\) become possible only at infinite distance; therefore, this solution is regular. In addition to its good features of regularity and compatibility with the null energy condition, the solution (5.11) also offers the possibility of constructing a matching solution that leads to a finite 4D Planck mass; hence, it embodies all the required physical properties. The matching solution reads [7] \\[|Y|=\\frac{\\sqrt{6}}{\\kappa_{5}}\\left(-\\frac{c_{1}}{2}a^{2}+\\gamma\\ln a-\\frac{\\gamma}{2}-\\gamma\\ln\\sqrt{\\frac{-\\gamma}{c_{1}}}\\right),\\quad 0<a\\leq\\sqrt{\\frac{-\\gamma}{c_{1}}}, \\tag{5.14}\\] with the brane positioned at \\(Y=0\\). To calculate the 4D Planck mass, we first determine from (5.14) the behaviour of \\(a^{2}\\) as \\(Y\\to-\\infty\\), which reads \\[a^{2}\\sim e^{-(\\sqrt{6}\\kappa_{5}/(3\\gamma))Y}. \\tag{5.15}\\] Then, using the symmetry of (5.14), we write the integral (4.31) in the following form \\[\\int_{-Y_{c}}^{Y_{c}}a^{2}(Y)dY=2\\int_{-Y_{c}}^{0}a^{2}(Y)dY\\sim 2\\int_{-Y_{c}}^{0}e^{-(\\sqrt{6}\\kappa_{5}/(3\\gamma))Y}dY=-\\sqrt{6}\\frac{\\gamma}{\\kappa_{5}}(1-e^{(\\sqrt{6}\\kappa_{5}/(3\\gamma))Y_{c}}). \\tag{5.16}\\] Taking \\(Y_{c}\\to\\infty\\) and keeping in mind that we consider only negative values of \\(\\gamma\\), we see that the Planck mass remains finite and is proportional to \\[-\\sqrt{6}\\frac{\\gamma}{\\kappa_{5}}.\\]

### Solutions for general \\(\\lambda\\)

For general \\(\\lambda\\), solving the system (5.1)-(5.3) becomes much more complicated. For a convenient overview, we outline below the types of new solutions that we obtain and comment on their asymptotic behaviors. Full details can be found in [7]. For all values of \\(\\lambda>1\\), the solutions share all the fine qualities previously encountered in the solution for \\(\\lambda=3/2\\). In particular, for \\(\\lambda=1+1/(2k)\\), with \\(k\\) a positive integer, we find the following form of solution \\[\\pm Y+c_{2}=\\frac{\\sqrt{6}}{\\kappa_{5}}\\left(\\sum_{s=0}^{k-1}\\frac{k!}{(k-s)!s!}\\frac{c_{1}^{k-s}}{2-2s/k}a^{2-2s/k}(-\\gamma)^{s}+(-\\gamma)^{k}\\ln a\\right). \\tag{5.17}\\]

## Conclusions and open questions

We have reviewed the effect of the curvature and the equation of state of the bulk fluid on the behavior of solutions of brane-worlds consisting of a 3-brane embedded in a five-dimensional bulk. For a linear equation of state with \\(\\gamma\\neq-1\\) and a flat brane, there is always a finite-distance singularity. The types of singularity that arise in this case are the collapse type, which is determined by a vanishing warp factor and a divergent density and pressure (for \\(\\gamma\\neq 0\\)), or a big-rip type, which is signalled by a divergent warp factor together with divergent density and pressure. The avoidance of such singularities becomes possible only after cutting and matching the part of the bulk that is free from singularities. Still, the solutions we obtain in this way cannot simultaneously satisfy the requirements set by energy conditions _and_ localization of gravity on the brane.
An exception to this result is the case of \\(\\gamma=-1\\), which gives a solution that can be translated to the one arising within the scenario of [20]. For this particular value of \\(\\gamma\\), it is possible to have a regular solution (which is half of AdS\\({}_{5}\\)) that trivially satisfies the null energy condition and at the same time gives a finite four-dimensional Planck mass. For a curved brane and a linear fluid, on the other hand, the situation improves in the sense that regular solutions now become possible for a range of \\(\\gamma\\). Some of the regular solutions can even satisfy the null energy condition; they correspond to AdS branes for \\(\\gamma\\) in the region \\([-1,-1/2)\\). However, the problem of localizing gravity on the brane, met previously in the case of a flat brane, continues to arise, and the only way to overcome it is by compactifying the bulk with a second brane as in [19]. Finally, for a flat brane and a non-linear equation of state, the situation is resolved: we can construct a regular solution consistent with the null energy condition which also successfully localizes gravity on the brane. It is an interesting open question whether such a non-linear equation of state satisfying the null energy condition can be realised using an underlying microscopic description. In this paper, we have also presented results which establish that there is a close connection between 3-brane setups in a five-dimensional bulk and cosmological solutions having a 4-dimensional spatial slice and evolving in proper time. This is accomplished by transforming the \\(Y\\equiv Y_{OLD}\\)-dimension into proper time, \\(Y_{OLD}\\to it_{NEW}\\), and the time-like dimension on the brane into a fifth spatial coordinate, \\(t_{OLD}\\to-iY_{NEW}\\), leaving the remaining three spatial coordinates intact. Therefore, we can map from a braneworld, where everything depends on the transverse bulk space coordinate \\(Y_{OLD}\\), to a cosmological spacetime in \\(4+1\\) dimensions evolving in time. This establishes a possible connection between cosmological phenomena and braneworld properties. In particular, the singular (in the transverse bulk coordinate) brane solutions, or those with some regularity (as discussed here), may correspond to cosmological solutions with special properties. For instance, the entropy of black holes in the braneworld may correspond to the cosmological entropy of standard \\((3+1)\\)-spacetime. We expect to analyse this point in a future publication.

## Acknowledgments

Work partially performed by I.A. as International professor of the Francqui Foundation, Belgium.

## References

* [1] Antoniadis I, Cotsakis S, Klaoudatou I. 2007. Braneworld cosmological singularities. Proceedings of the MG11 Meeting on General Relativity. _World Scientific_ **3** 2054-2056 [arXiv:gr-qc/0701033].
* [2] Antoniadis I, Cotsakis S, Klaoudatou I. 2010. Brane singularities and their avoidance. _Class. Quant. Grav._ **27** 235018 [arXiv:1010.6175 [gr-qc]].
* [3] Antoniadis I, Cotsakis S, Klaoudatou I. 2013. Brane singularities with mixtures in the bulk. _Fortschr. Phys._ **61** 20-49 [arXiv:1206.0090 [hep-th]].
* [4] Antoniadis I, Cotsakis S, Klaoudatou I. 2014. Enveloping branes and braneworld singularities. _Eur. Phys. J. C_ **74** 3192 [arXiv:1406.0611v2 [hep-th]].
* [5] Antoniadis I, Cotsakis S, Klaoudatou I. 2015. Dynamics and asymptotics of brane-worlds. Proceedings of the 13th Marcel Grossmann Meeting on General Relativity.
_World Scientific_ 1859-1861, doi: 10.1142/9789814623995-0299.
* [6] Antoniadis I, Cotsakis S, Klaoudatou I. 2016. Curved branes with regular support. _Eur. Phys. J. C_ **76** 511 [arXiv:1606.09453 [hep-th]].
* [7] Antoniadis I, Cotsakis S, Klaoudatou I. 2021. Regular braneworlds with nonlinear bulk-fluids. _Eur. Phys. J. C_ **81** 8, 771 [arXiv:2106.15669v2 [hep-th]].
* [8] Rubakov VA, Shaposhnikov ME. 1983. Extra space-time dimensions: Towards a solution to the cosmological constant problem. _Phys. Lett. B_ **125** 139.
* [9] Arkani-Hamed N, Dimopoulos S, Kaloper N, Sundrum R. 2000. A small cosmological constant from a large extra dimension. _Phys. Lett. B_ **480** 193-199 [arXiv:hep-th/0001197v2].
* [10] Kachru S, Schulz M, Silverstein E. 2000. Bounds on curved domain walls in 5d gravity. _Phys. Rev. D_ **62** 085003 [arXiv:hep-th/0002121].
* [11] Forste S, Lalak Z, Lavignac S, Nilles HP. 2000. A Comment on Self-Tuning and Vanishing Cosmological Constant in the Brane World. _Phys. Lett. B_ **481** 360-364 [arXiv:hep-th/0002164v2].
* [12] Forste S, Lalak Z, Lavignac S, Nilles HP. 2000. The Cosmological Constant Problem from a Brane-World Perspective. _JHEP_ **09** 034 [arXiv:hep-th/0006139v2].
* [13] Forste S, Nilles HP, Zavala I. 2011. Nontrivial Cosmological Constant in Brane Worlds with Unorthodox Lagrangians. _JCAP_ **07** 007 [arXiv:1104.2570 [hep-th]].
* [14] Csaki C, Erlich J, Grojean C, Hollowood T. 2000. General Properties of the Self-tuning Domain Wall Approach to the Cosmological Constant Problem. _Nucl. Phys. B_ **584** 359-386 [arXiv:hep-th/0004133v2].
* [15] Gubser SS. 2000. Curvature singularities: The good, the bad, and the naked. _Adv. Theor. Math. Phys._ **4** 679-745 [arXiv:hep-th/0002160].
* [16] Antoniadis I. 1990. A possible new dimension at a few TeV. _Phys. Lett. B_ **246** 377-384.
* [17] Arkani-Hamed N, Dimopoulos S, Dvali G. 1998. The hierarchy problem and new dimensions at a millimeter. _Phys. Lett. B_ **429** 263-272 [arXiv:hep-ph/9803315].
* [18] Antoniadis I, Arkani-Hamed N, Dimopoulos S, Dvali G. 1998. New dimensions at a millimeter to a Fermi and superstrings at a TeV. _Phys. Lett. B_ **436** 257-263 [arXiv:hep-ph/9804398].
* [19] Randall L, Sundrum R. 1999. A large mass hierarchy from a small extra dimension. _Phys. Rev. Lett._ **83** 3370-3373 [arXiv:hep-ph/9905221].
* [20] Randall L, Sundrum R. 1999. An alternative to compactification. _Phys. Rev. Lett._ **83** 4690-4693 [arXiv:hep-th/9906064].
* [21] Antoniadis I. 2006. Physics of extra dimensions. _J. Phys. Conf. Ser._ **33** 170-181.
* [22] Israel W. 1966. Singular hypersurfaces and thin shells in general relativity. _Nuovo Cimento B_ **44** 1-14.
* [23] Israel W. 1967. Singular hypersurfaces and thin shells in general relativity. _Nuovo Cimento B_ **48** 463.
* [24] Srivastava SK. 2005. Future universe with \\(w<-1\\) without big smash. _Phys. Lett. B_ **619** 1-4 [arXiv:astro-ph/0407048v4].
* [25] Gonzalez-Diaz PF. 2003. You need not be afraid of phantom energy. _Phys. Rev. D_ **68** 021303 [arXiv:astro-ph/0305559].
* [26] Astashenok AV, Nojiri S, Odintsov SD, Yurov AV. 2012. Phantom Cosmology without Big Rip Singularity. _Phys. Lett. B_ **709** 396-403 [arXiv:1201.4056 [gr-qc]].
* [27] Barrow JD. 1990. Graduated inflationary universes. _Phys. Lett. B_ **235** 40-43.
* [28] Kamenshchik AY, Moschella U, Pasquier V. 2001. An alternative to quintessence. _Phys. Lett. B_ **511** 265-268 [arXiv:gr-qc/0103004].
* [29] Bento MC, Bertolami O, Sen AA. 2002.
Generalized Chaplygin gas, accelerated expansion and dark energy-matter unification. _Phys. Rev. D_ **66** 043507 [arXiv:gr-qc/0202064].
* [30] Cotsakis S, Klaoudatou I. 2005. Future Singularities of Isotropic Cosmologies. _J. Geom. Phys._ **55** 306-315 [arXiv:gr-qc/0409022].
* [31] Cotsakis S, Klaoudatou I. 2007. Cosmological Singularities and Bel-Robinson Energy. _J. Geom. Phys._ **57** 1303-1312 [arXiv:gr-qc/0604029].
* [32] Nojiri S, Odintsov SD, Tsujikawa S. 2005. Properties of singularities in (phantom) dark energy universe. _Phys. Rev. D_ **71** 063004 [arXiv:hep-th/0501025].
* [33] Cotsakis S, Barrow JD. 2007. The dominant balance at cosmological singularities. _J. Phys. Conf. Ser._ **68** 012004.
* [34] Wald RM. 1984. _General Relativity_. University of Chicago Press.
* [35] Poisson E. 2004. _A Relativist's Toolkit_. Cambridge University Press.
* [36] Wang ZX, Guo DR. 1989. _Special Functions_. World Scientific.
We review studies on the singularity structure and asymptotic analysis of a 3-brane (flat or curved) embedded in a five-dimensional bulk filled with a 'perfect fluid' with an equation of state \\(p=\\gamma\\rho\\), where \\(p\\) is the 'pressure' and \\(\\rho\\) is the 'density' of the fluid, depending on the 5th space coordinate. Regular solutions satisfying positive energy conditions in the bulk exist only in the cases of a flat brane for \\(\\gamma=-1\\) or of AdS branes for \\(\\gamma\\in[-1,-1/2)\\). More cases can be found by gluing two regular branches of solutions at the position of the brane. However, only a flat brane for \\(\\gamma=-1\\) leads to a finite Planck mass on the brane and thus localises gravity. In a more recent work, we showed that a way to rectify the previous findings and obtain a solution, for a flat brane and a range of \\(\\gamma\\), that is both free from finite-distance singularities and compatible with the physical conditions of energy and finiteness of the four-dimensional Planck mass is by introducing a bulk fluid component that satisfies a non-linear equation of state of the form \\(p=\\gamma\\rho^{\\lambda}\\) with \\(\\gamma<0\\) and \\(\\lambda>1\\).
# Real-Time and Robust 3D Object Detection Within Roadside LiDARs Using Domain Adaptation

Walter Zimmer\\({}^{1}\\), Marcus Grabler\\({}^{1,2}\\) and Alois Knoll\\({}^{1}\\)

*This research was supported by the Federal Ministry of Education and Research in Germany within the project _AUTOtech.agl_, Grant Number: 011522088U. \\({}^{1}\\)The authors are with the Informatics Faculty, Technical University of Munich (TUM), 85748 Garching-Hochbrueck, Germany [email protected], {grabler, knoll}@tum.de \\({}^{2}\\)Autonomous Reply, Riesstrasse 22, 80992 Munich, Germany [email protected]

## I Introduction

High-quality and balanced data is crucial to achieve high accuracy in deep learning applications. Creating labeled data for roadside LiDARs is a difficult task. Considering the high labor cost of manually labeling 3D LiDAR point clouds, we need to find a solution for dealing with small datasets. Publicly available LiDAR datasets were recorded and labeled from a vehicle perspective, which makes it difficult to apply detectors trained on them to roadside LiDARs. The focus of this work lies in the area of domain adaptation to tackle the domain shift problem. How can a neural network that was trained in one operational design domain (ODD), e.g. an urban area like in the A9 dataset [1], be adapted to a slightly different domain, e.g. an intersection in a different city with different LiDAR sensors and mounting positions? This process is known as transfer learning: training a model on a large dataset (source domain) and fine-tuning it on another dataset (target domain). Another challenge is real-time 3D object detection on roadside LiDARs, i.e. detecting objects at a high frame rate to prevent accidents. This highly depends on the LiDAR type, the rotation rate, and the number of 3D points. The final challenge addressed in this work is robust 3D detection of all traffic participants. Detecting small and occluded objects under different weather conditions and in rare traffic scenarios is a highly important research area for increasing the safety of automated vehicles. We create and open-source a large semi-synthetic roadside dataset with 7,000 labeled point cloud frames (see Fig. 1). This dataset is balanced in terms of object classes and contains high variety, so that objects can be detected in different scenarios and under different environmental conditions. We analyze whether transfer learning from a larger roadside LiDAR dataset, such as the A9 dataset, can improve the model performance on other roadside datasets. The first batch of the published A9 dataset includes 459 manually labeled point cloud frames and contains 3,104 labeled 3D objects. In this work we propose a single-stage 3D object detector, train it in one domain, and finetune it on a different domain. An intersection that is part of the A9 Test Stretch for Autonomous Driving [5, 6, 7] is equipped with five LiDAR sensors (see Fig. 3), in order to provide a real-time digital twin of the traffic. This work provides a domain adaptation solution for the single-LiDAR detection task. The main contributions of this work are summarized as follows:

* We propose a robust single-stage LiDAR-only detector based on _PointPillars_. We introduce five extensions to improve PointPillars and evaluate the performance of the model on the test sets of the A9 dataset [1] and the A11 and D16 datasets from the _Regensburg Next_ project [8].
* We introduce a synthetic data generation module for the CARLA simulator [9] that converts the labels to the OpenLABEL format [10].
* We propose a novel domain adaptation technique, the semi-synthetic data generation method, which decreases the sim-to-real gap, as shown in experiments.
* We create a semi-synthetic dataset, called _proSynthSemi_, with 7,000 labeled LiDAR point cloud frames using the CARLA simulator [9] and train our model on it. In addition, we provide two synthetic datasets, A11 and D16, with 2,581 and 3,315 labeled frames, respectively.
* Experiments show that our _DASE-ProPillars_ model outperforms the _SE-ProPillars_ [11] model by 30.56% 3D mAP on the A9 test set (Car class), while the inference speed is maintained at 45 Hz (22 ms).

Fig. 1: Labeled 3D objects in the semi-synthetic _proSynthSemi_ dataset.

## II Related work

First, we compare one-stage and two-stage methods, types of point cloud representations, single- and multi-frame approaches, supervised and unsupervised methods, as well as center- and anchor-based methods. In the second part, we analyze the importance of data augmentation and the generation of synthetic data to solve the domain shift problem.

### _3D Object Detection Models_

According to the form of feature representation, LiDAR-only 3D object detectors can be divided into four main streams, i.e. point-based, voxel-based, range-view-based and multi-view-based methods [12]. In point-based methods, features maintain the form of point-wise features, either as a sampled subset or as derived virtual points. PointRCNN [13] uses a PointNet++ backbone [14] to extract point-wise features from the raw point cloud and performs foreground segmentation. For each foreground point, it generates a 3D proposal followed by point cloud ROI pooling and a canonical-transformation-based bounding box refinement process. Point-based methods usually have to deal with a huge number of point-wise features, which leads to lower inference speed. To accelerate point-based methods, 3DSSD [15] introduces feature farthest-point-sampling (F-FPS), which computes the feature distance for sampling instead of the Euclidean distance used in traditional distance farthest-point-sampling (D-FPS). The inference speed of 3DSSD is competitive with voxel-based methods. SECOND [16] proposes a sparse convolutional middle extractor [17] to speed up inference time. In PointPillars [2], the point cloud is divided into pillars (vertical columns), which are special voxels without partition along the z-direction. The feature map of pillars is a pseudo-image, so that 2D convolutions can be used. PointPillars runs at 62 FPS using TensorRT. SA-SSD [18] adds a detachable auxiliary network to the sparse convolutional middle layers to predict a point-wise foreground segmentation and a center estimation task, providing point-level supervision. It also proposes a part-sensitive warping (PS-Warp) operation as an extra detection head. It can alleviate the misalignment between predicted boxes and classification confidence maps, since they are generated by two different convolutional layers in the detection head.

Fig. 2: Overview architecture of _DASE-ProPillars_, a LiDAR-only single-stage pillar-based 3D object detector. The detector is based on _PointPillars_ [2], with the following five extensions: 1) data augmentation (shape-aware [3], dropout, upsampling and Gaussian noise); 2) stacked triple attention mechanism [4]; 3) attentive hierarchical middle layers; 4) multi-task detection head; 5) self-ensembling training architecture [3]. The stacked triple attention module extracts features from the semi-synthetic point cloud using the triple attention mechanism, including channel-wise, point-wise, and voxel-wise attention, to enhance the learned features. The pillar feature net turns point-wise features into pillar features and scatters the pillar features into a pseudo image. The hierarchical middle layers perform 2D convolution operations on the pseudo image; hierarchical feature maps are concatenated with attentive addition. Finally, the multi-task head is used for the final prediction, which includes an IoU prediction to alleviate the misalignment between localization accuracy and classification confidence. In addition, we introduce two new training techniques: the shape-aware data augmentation module and the self-ensembling teacher and student training framework.
CIA-SSD [19] designs an IoU-aware confidence rectification module, using an additional convolutional layer in the detection head to make IoU predictions. The predicted IoU value rectifies the classification score. By introducing only one additional convolutional layer, it is more lightweight than SA-SSD. SE-SSD [3] proposes a self-ensembling one-stage post-training framework, where a pre-trained teacher model produces predictions that serve as soft targets in addition to the hard targets from the labels. These predictions are matched with the student's predictions by their IoU and supervised by a consistency loss. Soft targets are closer to the predictions of the student model and therefore help the student model finetune its predictions. The Orientation-Aware Distance-IoU Loss (OD-IoU) is proposed to replace the traditional smooth-\\(L_{1}\\) loss for box regression in the post-training, in order to provide a fresh supervisory signal. This OD-IoU loss emphasizes the orientation of the bounding boxes as well as the alignment of the center points. SE-SSD also designs a shape-aware data augmentation module to improve the generalization ability of the student model. This module performs dropout, swapping and sparsification of points. This data augmentation is applied to the point cloud data the student model is trained on. In this way, by using both a teacher and a student single-stage object detector, the framework can boost the precision of the detector significantly without incurring extra computation during inference. Pyramid R-CNN [20] concentrates on handling the sparsity and non-uniform distribution of point clouds. The authors propose a novel pyramid RoI-head second-stage module that extracts features from sparse points of interest. Utilizing features only inside the RoIs performs well in 2D detection models, mainly for two reasons: first, the input feature map is dense, and second, the collected pixels have large receptive fields. However, in 3D models, the points of interest are sparse and non-uniformly distributed inside the RoIs. Therefore, accurately inferring the sizes and categories of objects becomes hard when collecting features from few individual points without gathering enough information from neighbours. The pyramid RoI head effectively solves this problem by constructing a pyramid grid structure that contains RoI-grid points both inside and outside the RoIs, so that fine-grained shape structures for accurate box refinement as well as large context information for identifying incomplete objects can be captured by grid points inside and outside the RoIs, respectively.
The authors performed experiments on the KITTI dataset and the Waymo Open Dataset [21] and showed that Pyramid R-CNN outperforms other 3D detection models on these two datasets. On the Waymo Open Dataset, their Pyramid-PV model achieves 81.77% L1 mAP. As conventional 3D convolutional backbones in voxel-based 3D detectors are not able to efficiently capture extensive context information, the Voxel Transformer (VoTr) [22] is proposed to resolve this issue by introducing a voxel-based transformer backbone for 3D object detection from point clouds. It consists of a series of sparse voxel modules, which extract features at empty locations and are thus responsible for downsampling the voxel grids, and submanifold voxel modules, which perform multi-head self-attention strictly on non-empty voxels to keep the original 3D structure while increasing the receptive fields. The attention mechanism is split into two components, local attention and dilated attention. Local attention focuses on the neighboring region to preserve detailed information. Dilated attention obtains a large attention range with only a few attending voxels by gradually increasing the search step. VoTr can be applied to single-stage and two-stage detectors. Compared with the backbones of the respective base detectors, the AP results on the KITTI test set for the car class, calculated with 40 recall positions, are slightly increased. Signal miss as well as external and self-occlusion often cause shape misses in disordered point clouds. The Behind the Curtain Detector (BtcDet) [23] deals with this problem by learning object shape priors and estimating the complete object shapes, including the partially occluded points. Points are therefore filled into the labeled bounding boxes, and using the recovered shape miss, the detection results are improved. For the training process, the complete object shapes are approximated using the ground truth labels of the corresponding objects. For cars and cyclists, the object points are mirrored against the middle section plane of the bounding box, and a heuristic H(A,B) determines whether a source object B covers most parts of a target object A. The heuristic also provides points that can fill the target object's shape miss. The detection pipeline of BtcDet is built up as follows: first, the regions of occlusion and signal miss have to be identified after the spherical voxelization of the point cloud. Then, a shape occupancy network estimates the probability of object shape occupancy using the created training targets, which consist of the approximated complete object shapes. The extracted 3D point cloud features are then sent to a region proposal network to generate 3D proposals, which are refined in the last step, the proposal refinement step.

Fig. 3: a) Bird's-eye view (BEV) of the S110 intersection that is part of the A9 test stretch. b) S110 intersection modeled in the CARLA simulator and Unreal Engine. c) Intersection in Regensburg that was used to create the A11 and D16 synthetic datasets. d) Reconstruction of the intersection in Regensburg.

### _Domain Adaptation_

Domain adaptation is a type of transfer learning and aims to transfer knowledge from a source domain, for which annotated data is available, to a target domain, for which no or only little annotated data is available.
Semi-supervised domain adaptation uses a few labeled examples from the target domain to learn a target model, whereas unsupervised domain adaptation exploits only the labeled data from the source domain without any annotated target domain data [24]. Domain adaptation methods can be divided into four different approaches, which are either data-driven, such as domain-invariant data representation, domain mapping, and normalization statistics, or model-driven, like domain-invariant feature learning [25]. With the use of these domain adaptation methods, the gap between the source and the target domain should be mitigated [26]. Wang _et al._ [27] propose a dataset-to-dataset, semi-supervised domain adaptation method for 3D object detection and provide a baseline for 3D object detection adaptation across countries using normalization-statistics domain adaptation methods. As only annotated datasets are considered, few-shot fine-tuning makes it possible to increase the accuracy by selecting 10 labeled scenes from the target domain, which are added to the source domain during training. As car sizes vary across countries, the object sizes of the source domain differ from those of the target domain. Therefore, the already trained object detector is modified so that the predicted box sizes better match the previously determined target statistics. To fit the adjusted box sizes, the corresponding labels are scaled up or shrunk down. New point clouds with associated labels whose sizes are similar to the target domain data are generated. This step is called statistical normalization. ST3D [28] provides a self-training pipeline for unsupervised domain adaptation for 3D object detection from point clouds, where no annotated data in the target domain is available. As object sizes vary across datasets due to the geographical locations in which the data were recorded, ST3D proposes pre-training the 3D detector on the source domain with a random object scaling (ROS) strategy to mitigate the negative effects of source-domain bias. Using the 3D object detector, pseudo labels for the unlabeled target data are generated. The quality-aware triplet memory bank (QTMB) module parses the object predictions into pseudo labels. However, noisy pseudo-labeled objects lead to unreliable supervisory information and instability during self-training. The memory bank updates the pseudo labels, which also serve as labels for subsequent model training. The curriculum data augmentation (CDA) module gradually generates increasingly diverse and potentially hard examples to improve the model. This enables the model to learn from challenging samples while the examples become more difficult during training.

## III Approach

We design a real-time LiDAR 3D object detector (_DASE-ProPillars_) to solve the domain shift problem. The architecture of our _DASE-ProPillars_ model is shown in Fig. 2. **Data Generation.** We created a semi-synthetic dataset (_proSynthSemi_) with 7,000 point cloud frames using the CARLA simulator and train our _DASE-ProPillars_ model on it. Fig. 3 shows the intersection with generated traffic in the CARLA simulator. A simulated LiDAR sensor represents a real Ouster OS1-64 (gen. 2) LiDAR sensor with 64 channels and a range of 120 m. In the simulation, we add Gaussian noise \\(N(\\mu,\\sigma^{2})\\) with mean \\(\\mu=0\\) and standard deviation \\(\\sigma=0.1\\) (see Eq. (1)) to disturb each point along the vector of its raycast: \\[p(z)=\\frac{1}{\\sigma\\sqrt{2\\pi}}e^{-\\frac{(z-\\mu)^{2}}{2\\sigma^{2}}} \\tag{1}\\]
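A minimal numpy sketch of this raycast perturbation is given below; it assumes the sensor sits at the origin of the point cloud, and the function name is ours:

```python
import numpy as np

def perturb_along_ray(points, sigma=0.1, rng=None):
    """Disturb each point along its ray with Gaussian noise N(0, sigma^2).

    points: (N, 3) array of XYZ coordinates, sensor assumed at the origin."""
    rng = rng or np.random.default_rng()
    dist = np.linalg.norm(points, axis=1, keepdims=True)
    dirs = points / np.clip(dist, 1e-6, None)        # unit ray directions
    return points + dirs * rng.normal(0.0, sigma, size=(len(points), 1))
```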
The LiDAR emits 1.31M points per second and runs at 10 Hz (131,000 points per frame). We store the extracted point clouds in .pcd files and the labels in .json files according to the OpenLABEL standard [10]. To get more realistic point clouds, the points of the objects in the simulated point clouds are extracted and inserted into the background of a point cloud captured by a real Ouster OS1-64 (gen. 2) LiDAR that does not contain any objects. Before the simulated object points are inserted into the real background point set, those points in the real point cloud that lie inside the simulated objects, or below them on the ground plane, are cut out. As the height profiles of simulated and real data do not coincide, the z-coordinates of the objects must be adjusted in order to place the objects on the ground plane. We use the _RANSAC_ algorithm [29] to determine the ground plane of the real point cloud and thus calculate the corresponding height profile. Subsequently, a pipeline applying both Gaussian noise and point dropout is run on the background points in order to get more variance in the point clouds. This has the advantage of producing more realistic point clouds while keeping the labeled objects from the simulation. **Normalization.** As the bounding box sizes of the classes differ between the semi-synthetic and the real A9 dataset, the boxes are normalized to an average bounding box size for each class. Since manual labeling can often result in incorrect lengths, widths and heights of the bounding boxes, the normalized bounding box sizes of the synthetic labels are used instead. For synthetic data, the exact dimensions can be extracted directly from the simulation. With normalization, the distinction between individual classes can be improved, which is useful, e.g., for the _Van_ and _Car_ classes. In addition, the normalized sizes are used in domain adaptation to adjust the source domain data to the target domain data. **Voxelization.** We divide the raw point cloud into vertical pillars before feeding them into the neural network. These are special voxels that are not split along the vertical axis. Pillars have the following advantages over voxels: a pillar-based backbone is faster than a voxel-based backbone due to fewer grid cells; time-consuming 3D convolutional middle layers are eliminated and 2D convolutions are used instead; and the bin-size hyperparameter along the z-direction does not need to be tuned manually. If a pillar contains more points than a specified threshold, the points are subsampled to the threshold using farthest point sampling [30]. If a pillar contains fewer points than the threshold, it is padded with zeros to make the dimensions consistent. Due to sparsity, most of the pillars are empty. We record the coordinates of non-empty pillars according to the pillar's center index. Empty pillars are not considered during feature extraction until all pillars are scattered back to a pseudo image for 2D convolution. For the experiments, we set the voxel size to (0.2, 0.2, 6.0) m, where the height (6.0 m) must correspond to the detection range along the z-axis. The maximum number of points per voxel is set to 40; if a voxel contains more, the points are subsampled using farthest point sampling. We also limit the number of voxels to 20,000.
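The following sketch illustrates this pillarization step in numpy. The grid origin and the key encoding are our assumptions (the text only fixes the voxel size, the 40-point cap and the 20,000-pillar cap), points are assumed to lie inside the detection range, and random subsampling stands in for the farthest point sampling used in the paper:

```python
import numpy as np

def pillarize(points, voxel=(0.2, 0.2), x_min=0.0, y_min=-60.0,
              max_pts=40, max_pillars=20000, rng=None):
    """Group points (N, >=3) into vertical pillars; pad or subsample to max_pts."""
    rng = rng or np.random.default_rng()
    ix = np.floor((points[:, 0] - x_min) / voxel[0]).astype(np.int64)
    iy = np.floor((points[:, 1] - y_min) / voxel[1]).astype(np.int64)
    keys = ix * 100_000 + iy                      # flat pillar index
    pillars, coords = [], []
    for key in np.unique(keys)[:max_pillars]:
        pts = points[keys == key]
        if len(pts) > max_pts:                    # subsample overfull pillars
            pts = pts[rng.choice(len(pts), max_pts, replace=False)]
        pad = np.zeros((max_pts, points.shape[1]), dtype=points.dtype)
        pad[:len(pts)] = pts                      # zero-pad short pillars
        pillars.append(pad)
        coords.append((key // 100_000, key % 100_000))
    return np.stack(pillars), np.asarray(coords)  # (P, max_pts, C), (P, 2)
```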
**Stacked Triple Attention.** The _Stacked Triple Attention_ module is used for a more robust and discriminative feature representation. Originally introduced in _TANet_ [4] by Liu _et al._, the stacked triple attention module enhances the learning of hard-to-detect objects and deals better with noisy points. This method can be applied to both voxel- and pillar-based point clouds. The attention mechanism in this module follows the Squeeze-and-Excitation pattern [31]. If channel-wise attention is applied to an input tensor of shape (\\(H\\times W\\times C\\)), a global pooling operation (max pooling) first pools the tensor to shape (\\(1\\times 1\\times C\\)) (the squeeze operation). Then, two fully connected (FC) layers are applied to the squeezed tensor to compute the attention score (the excitation operation). Between the two FC layers, the feature dimension is reduced and then recovered with a reduction ratio, forming a bottleneck structure. After that, a sigmoid function is applied to obtain the attention score. Finally, the (\\(1\\times 1\\times C\\)) tensor is multiplied element-wise with the original (\\(H\\times W\\times C\\)) feature to re-weight it. The input to the module is a \\((P\\times N\\times C)\\) tensor, where \\(P\\) is the number of non-empty pillars, \\(N\\) is the maximum number of points in each pillar, and \\(C\\) is the dimension of the input point-wise feature. At the beginning, we have a 9-dimensional (\\(C=9\\)) feature vector \\((x,y,z,r,x_{c},y_{c},z_{c},x_{p},y_{p})\\), where \\(x,y,z\\) are the coordinates of the point, \\(r\\) is the intensity, \\(x_{c},y_{c},z_{c}\\) are the distances to the arithmetic mean of all points inside the pillar, and \\(x_{p},y_{p}\\) are the offsets of the point from the pillar center. The triple attention (TA) module extracts features inside each pillar, using point-wise, channel-wise and voxel-wise attention. All three attention scores are combined to form the final output feature. To further exploit multi-level feature attention, two triple attention modules are stacked with a structure similar to the skip connections in ResNet [32]. The first module takes the raw 9-dim point cloud features as input, while the second one works on the extracted high-dimensional features. For each TA module, the input is concatenated or summed with the output to fuse more feature information. Each TA module is followed by a fully connected layer to increase the feature dimension. Inside the TA modules, the attention mechanism only re-weights the features but does not increase their dimensions.
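A minimal PyTorch sketch of the channel-wise branch of this Squeeze-and-Excitation pattern, as described above, is shown below; the reduction ratio of 4 is an assumption, and the class name is ours:

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze (global max pooling) -> FC bottleneck (excitation) -> sigmoid
    -> element-wise channel re-weighting of the input feature."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),   # reduce
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),   # recover
            nn.Sigmoid(),                                 # attention score
        )

    def forward(self, x):                           # x: (B, C, H, W)
        s = x.amax(dim=(2, 3))                      # squeeze: (B, C)
        w = self.fc(s).unsqueeze(-1).unsqueeze(-1)  # excitation: (B, C, 1, 1)
        return x * w                                # re-weight channels
```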
**Pillar Feature Net.** We choose _PointPillars_ [2] as our baseline and improve its 3D detection performance at the expense of inference time. _PointPillars_ runs at 42 Hz without TensorRT acceleration. Since there is a trade-off between speed and accuracy, we can further boost the accuracy by incorporating additional modules without sacrificing too much inference speed. The pillar feature net (PFN) shown in Fig. 2 takes pillars as input, extracts pillar features, and scatters the pillars back to a pseudo image for the 2D convolution operations in the middle layers. The pillar feature net acts as an additional feature extractor on top of the stacked triple attention module. The point-wise, pillar-organized features from the stacked TA module, of shape (\\(P\\times N\\times C\\)), are fed to a set of PFN layers. Each PFN layer is a simplified PointNet [33], which consists of a linear layer, BatchNorm [34], ReLU [35], and max pooling. The max-pooled features are concatenated back to the ReLU's output to keep the point-wise feature dimension inside each pillar, until the last PFN layer. The last PFN layer performs the final max pooling and outputs a (\\(P\\times C\\)) feature as the pillar feature. Pillar features are then scattered back to the original pillar locations, forming a (\\(C\\times H\\times W\\)) pseudo image, where \\(H\\) and \\(W\\) are the height and width of the pillar grid. Here the locations of empty pillars are padded with zeros. **Attentive Hierarchical Middle Layers.** We exchange the default backbone of _PointPillars_ with an _Attentive Hierarchical Backbone_ to perform 2D convolution on the pseudo image from the pillar feature net. In the first stage, the spatial resolution of the pseudo image is gradually downsampled by three groups of convolutions. Each group contains three convolutional layers, where the first one has a stride of two for downsampling, and the two subsequent layers act only as feature extractors. After downsampling, deconvolution operations are applied to recover the spatial resolution. Deconvolutional layers (marked with an asterisk) recover the size of the feature maps with stride 2 and element-wise add them to the upper branches. The remaining three deconvolutional layers make all three branches the same size (half of the original feature map). Then the final three feature maps are combined by an attentive addition to fuse both spatial and semantic features. The attentive addition uses the plain attention mechanism: all three feature maps are passed through a convolutional operation and are channel-wise concatenated as attention scores. The softmax function generates the attention distribution, and the feature maps are multiplied with the corresponding distribution weights. The element-wise addition at the end gives the final attention output, a (\\(C\\times H/2\\times W/2\\)) feature map. **Multi-task Head.** The multi-task head outputs the final class (based on a confidence score), the 3D box position (\\(x,y,z\\)), dimensions (\\(l,w,h\\)), rotation (\\(\\theta\\)) and the direction of the detected object. The direction (front/back) is classified to solve the problem that the sine-error loss [36] cannot distinguish flipped boxes. Four convolutional layers operate on the feature map separately. One of the four heads is the IoU prediction head, which predicts an IoU between the ground truth bounding box and the predicted box. It was introduced in CIA-SSD [19] to deal with the misalignment between the predicted bounding boxes and the corresponding classification confidence maps. The misalignment arises mainly because these two predictions come from different convolutional layers. Based on this IoU prediction, we use the confidence function (CF) to correct the confidence map and use the distance-variant IoU-weighted NMS (DI-NMS) module to post-process the predicted bounding boxes. The distance-variant IoU-weighted NMS is designed to deal with long-distance predictions, to better align far bounding boxes with ground truths, and to reduce false-positive predictions. If the predicted box is close to the origin of perspective, we give higher weights to those box predictions with high IoU. If the predicted box is far, we give relatively uniform weights, to get a smoother final box.
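The head structure just described can be sketched in PyTorch as four parallel convolutional branches on the BEV feature map. The kernel size and the anchor/class counts are assumptions for illustration, and the class name is ours:

```python
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Four parallel conv heads: class scores, box regression
    (x, y, z, l, w, h, theta), front/back direction, and IoU prediction."""
    def __init__(self, in_ch, num_anchors=2, num_classes=1):
        super().__init__()
        self.cls = nn.Conv2d(in_ch, num_anchors * num_classes, 1)
        self.box = nn.Conv2d(in_ch, num_anchors * 7, 1)   # 7 box parameters
        self.dir = nn.Conv2d(in_ch, num_anchors * 2, 1)   # front/back
        self.iou = nn.Conv2d(in_ch, num_anchors * 1, 1)   # IoU rectification

    def forward(self, x):                                 # x: (B, C, H, W)
        return self.cls(x), self.box(x), self.dir(x), self.iou(x)
```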
**Data Augmentation.** Data augmentation has proven to be an efficient way to better exploit the training dataset and helps the model generalize. We use the shape-aware data augmentation method proposed by SE-SSD [3]. This module simplifies the handling of partial occlusions, sparsity and different shapes of objects within the same class. Some traditional augmentation methods are also applied before the shape-aware augmentation, e.g., rotation, flipping, and scaling. For the generation of semi-synthetic data, several data augmentation techniques are also implemented to increase the variance of the point clouds: in every second frame, 0-20% of all points are dropped and Gaussian noise with \\(\\sigma=0.2\\) is added to 20-40% of all points. These techniques increase the variance of the point clouds and provide more robust and diverse data. Data augmentation plays an important role in domain adaptation methods, as it is used by several of them [28, 37, 38]. Notably, the point density of the target domain is more important than the point density of the source domain [28]. Consequently, cropping and dropout of points, or point cloud upsampling, are important steps to adjust the number of points of the source set to the target set. Statistics for both the source and the target domain dataset are calculated with respect to the number of points of the total point cloud and, if annotated data is available for the target domain, the average number of points per object. Then, the data augmentation techniques dropout and upsampling are used to match the source domain dataset to the target domain dataset. To better illustrate this effect, a domain adaptation is applied in Sec. IV from the synthetic A9 dataset (source domain) to the A11 and D16 datasets of the _Regensburg Next_ project (target domain). Note that the number of points in the target domain set is lower by a factor of 2.72 compared to the source domain set, whereas the average number of points for the _Car_ class is higher by a factor of 1.83 compared to the source dataset. Due to the different LiDARs used at the two smart intersections, there is more overlap between the four permanently installed Blickfeld Cube 1 LiDARs in the _Regensburg Next_ project. This sensor-to-sensor domain adaptation is considered in more detail in Sec. IV. **Self-Ensembling Training Framework.** We introduce the self-ensembling training framework [3] for post-training: we first train the model without self-ensembling, and then take the pre-trained model as a teacher model to train a student model that has the same network structure. Predictions of the teacher model are used as soft supervision. Combined with the hard supervision from the ground truth, we can provide more information to the student model. The student and teacher models are initialized with the same pre-trained parameters. The soft targets are produced by the teacher SSD after a global transformation that performs translation, flipping and scaling as data augmentation. After the global transformation, shape-aware data augmentation is performed on the input points with the corresponding ground truth annotations (hard targets). The augmented input is fed into the student SSD together with the consistency loss obtained from the teacher and student predictions, in order to align the student predictions with the provided soft targets. Furthermore, the student is supervised with the orientation-aware distance-IoU loss to better exploit the hard targets for bounding box regression.
The overall loss for training the student model consists of \\[\\mathcal{L}_{student}=\\mathcal{L}_{cls}^{s}+\\omega_{1}\\mathcal{L}_{OD-IoU}^{s}+\\omega_{2}\\mathcal{L}_{dir}^{s}+\\mu_{t}\\mathcal{L}_{consist}, \\tag{2}\\] where \\(\\mathcal{L}_{cls}^{s}\\) is the focal loss [39] for box classification, \\(\\mathcal{L}_{OD-IoU}^{s}\\) is the OD-IoU loss for bounding box regression, \\(\\mathcal{L}_{dir}^{s}\\) is the cross-entropy loss for direction classification, \\(\\mathcal{L}_{consist}\\) is the consistency loss (a sum of a bounding box loss and a classification loss), and \\(\\omega_{1}\\), \\(\\omega_{2}\\) and \\(\\mu_{t}\\) are the loss weights. During post-training, the parameters of the teacher model are updated with those of the student model using the exponential moving average (EMA) strategy.
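A minimal PyTorch sketch of this EMA teacher update is shown below; the decay value is an assumption, as the text does not state it:

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    """Exponential-moving-average update of the teacher from the student:
    t <- decay * t + (1 - decay) * s, applied after each training step."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)
```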
Since the average number of points of the total point cloud in the target set is significantly lower (by a factor of 2.72) compared to the source set, only the points of the objects are upsampled. In order to simultaneously reduce the difference in the total number of points of the respective point clouds, drop-out is used to reduce the total number of points of the source set. Table II shows the impact of several data augmentation techniques on a _DASE-ProPillars_ model trained only on the source set and evaluated directly on the target set without any finetuning. Upsampling object points improved the 3D object detection, whereas the subsequent drop-out led to a slight deterioration, contrary to expectations. However, the drop-out rate was set to 0.5, meaning that 50% of all points were dropped. Using a smaller drop-out rate of 0.25, the results improved compared to the higher rate. Table III shows the results for the model trained only on the A11 dataset of the _Regensburg Next_ project compared with the model that was trained on the synthetic A9 dataset and finetuned on the target domain set.

## V Conclusion

In this work we presented our _DASE-ProPillars_ 3D object detector, an improved version of the _PointPillars_ model. We show the generalization ability of our model and make it more robust via domain adaptation. We replace the detection head with a more lightweight multi-task head. We add two training techniques to our baseline: the shape-aware data augmentation module and the self-ensembling training architecture. Sufficient data collection is key to training a model and achieving good accuracy. We conduct several sets of experiments for each module to demonstrate its accuracy and runtime performance. To sum up, the _DASE-ProPillars_ 3D object detector is a significant contribution within the area of LiDAR-based 3D perception to support self-driving vehicles and improve road traffic safety.

## VI Detailed Module Architecture

### _Attentive Hierarchical Middle Layers_

We exchange the default backbone of _PointPillars_ with an _Attentive Hierarchical Backbone_ to perform 2D convolution on the pseudo image from the pillar feature net. Figure 4 depicts the structure of the attentive hierarchical middle layers. In the first stage, the spatial resolution of the pseudo image is gradually downsampled by three groups of convolutions. Each group contains three convolutional layers, where the first one has a stride of two for downsampling, and the two subsequent layers act only as feature extractors. After downsampling, deconvolution operations are applied to recover the spatial resolution. Deconvolutional layers (marked with an asterisk) recover the size of the feature maps with stride 2 and element-wise add them to the upper branches. The remaining three deconvolutional layers make all three branches have the \\begin{table} \\begin{tabular}{|l|c|c|} \\hline Augmentation & 3D mAP & BEV mAP \\\\ \\hline \\hline upsampling + drop-out (0.5) & 49.64 & 61.14 \\\\ \\hline drop-out (0.25) & 60.92 & 69.05 \\\\ \\hline original & 61.72 & 72.83 \\\\ \\hline upsampling & **63.11** & **76.65** \\\\ \\hline \\end{tabular} \\end{table} TABLE II: 3D object detection results of _DASE-ProPillars_ on the test set of the A11 dataset. We report the 3D and BEV mAP of the _Car_ class under an IoU threshold of 0.25, with 40 recall positions, including several data augmentation techniques.
\\begin{table} \\begin{tabular}{|l|c|c|c|c|} \\hline Metric & \\multicolumn{2}{c|}{3D mAP} & \\multicolumn{2}{c|}{BEV mAP} \\\\ \\cline{2-5} IoU threshold & 0.5 & 0.25 & 0.5 & 0.25 \\\\ \\hline \\hline SE-ProPillars & 30.13 & 50.09 & 40.21 & 51.53 \\\\ \\hline DASE-ProPillars (Ours) & **54.38** & **80.65** & **55.10** & **83.38** \\\\ \\hline \\end{tabular} \\end{table} TABLE I: 3D object detection results of _DASE-ProPillars_ on the A9 test set. We report the 3D and BEV mAP of the _Car_ class under 0.5 and 0.25 IoU thresholds, with 40 recall positions. \\begin{table} \\begin{tabular}{|l|c|c|c|c|} \\hline Metric & \\multicolumn{2}{c|}{3D mAP} & \\multicolumn{2}{c|}{BEV mAP} \\\\ \\cline{2-5} IoU threshold & 0.5 & 0.25 & 0.5 & 0.25 \\\\ \\hline \\hline DASE-ProPillars & 23.26 & 89.87 & 27.61 & 93.42 \\\\ \\hline DASE-ProPillars + fine-tuning & **33.75** & **93.49** & **38.81** & **94.79** \\\\ \\hline \\end{tabular} \\end{table} TABLE III: 3D object detection results of _DASE-ProPillars_ on the test set of the A11 dataset. We report the 3D and BEV mAP of the _Car_ class under 0.5 and 0.25 IoU thresholds, with 40 recall positions. same size (half of the original feature map). Then the final three feature maps are combined by an attentive addition to fuse both spatial and semantic features. The attentive addition uses the plain attention mechanism. All three feature maps are passed through a convolutional operation, and the results are channel-wise concatenated as attention scores. The softmax function generates the attention distribution, and the feature maps are multiplied by the corresponding distribution weights. A final element-wise addition gives the attention output, a (\\(C\\times H/2\\times W/2\\)) feature map.

### _Self-Ensembling Training Framework_

In addition, we introduce the self-ensembling training framework [3] to perform post-training: we first train the model shown in Fig. 5 but without the self-ensembling module, and then we take the pre-trained model as a teacher model to train a student model that has the same network structure.

## VII Semi-Synthetic Data Generation

In the CARLA simulator, the frequency of a simulated LiDAR has to be set to 20 Hz due to CARLA time synchronization problems. Using 10 Hz, the generated point cloud is cut off at half a sweep for each frame. Data acquisition at 20 Hz solves this problem and generates a complete point cloud. However, instead of the expected 20 frames per second, only 10 frames per second are captured, which coincides with data acquisition at 10 Hz. With an initial horizontal resolution of \\(2,048\\), the number of points per second therefore has to be doubled to \\(2,621,440\\). To reduce the _sim-to-real_ gap, noise with a standard deviation of \\(0.1\\) and a general drop-off of \\(10\\) % are added to the generated point cloud directly in the acquisition process. The characteristics of the simulated LiDAR are depicted in Tab. IV. Using a semi-synthetic data generation pipeline, a synthetically generated point cloud containing automatically annotated data can be inserted into a point cloud captured in the real world. The main concept of the semi-synthetic data generation is to insert synthetically generated points of an object, for which the annotation is available, into a real point cloud. In this context, one has to ensure that the real-world and the simulated LiDAR cover the same map region. Furthermore, the proportions of the environment must match between the real world and the simulation.
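As a concrete illustration of the simulated sensor configuration summarized in Tab. IV, the following sketch sets up a ray-cast LiDAR through CARLA's Python API. The attribute names follow CARLA 0.9.x and may differ between versions; the host, port, mounting pose and output path are assumptions for illustration.

```python
import carla

# Connect to a running CARLA server (host/port are assumptions).
client = carla.Client("localhost", 2000)
world = client.get_world()

# Configure a ray-cast LiDAR with the characteristics of Tab. IV.
bp = world.get_blueprint_library().find("sensor.lidar.ray_cast")
bp.set_attribute("channels", "64")
bp.set_attribute("range", "120.0")
bp.set_attribute("points_per_second", "2621440")  # doubled, see text
bp.set_attribute("rotation_frequency", "20")      # 20 Hz to avoid truncated sweeps
bp.set_attribute("upper_fov", "22.5")
bp.set_attribute("lower_fov", "-22.5")
bp.set_attribute("noise_stddev", "0.1")           # sim-to-real noise
bp.set_attribute("dropoff_general_rate", "0.1")   # 10% general point drop-off

# Mount the sensor roughly at gantry height (the pose is an assumption).
transform = carla.Transform(carla.Location(x=0.0, y=0.0, z=7.0))
lidar = world.spawn_actor(bp, transform)
lidar.listen(lambda data: data.save_to_disk("frames/%06d.ply" % data.frame))
```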
Using _OpenDRIVE_, roads can be well integrated into the simulation, and GPS-measured points can be transferred to the CARLA simulation, which guarantees that the LiDAR positions in the real world and in the simulation are identical. If these conditions for the simulation are met, the advantages of synthetic and real data can be combined, and only synthetically generated point clouds plus a minimal real point cloud that does not contain any objects are required. To add more variance to the data, multiple real point clouds without any objects can be included. The semi-synthetic data generation can be split up into the following steps: 1. Extraction of synthetic object points from synthetic data 2. Identification of the ground plane via RANSAC 3. Creation of a height profile 4. Removal of ground points 5. Insertion of synthetic object points into the real point cloud 6. Application of data augmentation 7. Adjustment of annotations

## VIII Additional Ablation Studies

Table V lists the 3D, BEV and AOS mAP of the car class on the \\(s110\\_v01\\) dataset under a 0.25 IoU threshold with 40 recall positions, for both the initial baseline without any domain adaptation and the step-by-step enabling of all \\begin{table} \\begin{tabular}{l|c} \\hline Actor attribute & Value \\\\ \\hline Channels & 64 \\\\ \\hline Range & 120 m \\\\ \\hline Points per second & 2,621,440 \\\\ \\hline Rotation rate & 20 Hz \\\\ \\hline Vertical FOV & \\(45^{\\circ}\\) (\\(\\pm 22.5^{\\circ}\\)) \\\\ \\hline Horizontal FOV & \\(360^{\\circ}\\) \\\\ \\hline Noise (std. dev.) & 0.1 \\\\ \\hline Drop-off general rate & 0.1 \\\\ \\hline \\end{tabular} \\end{table} TABLE IV: Characteristics of the simulated OS1-64 (gen. 2). Fig. 4: Left: Structure of the attentive hierarchical middle layers. Right: Structure of the attentive addition operation. domain adaptation modules. It can be seen that including all domain adaptation processes significantly outperforms the initial baseline approach in all metrics, at the same runtime of 20 ms. The transformation of the synthetic to semi-synthetic data increases the BEV and AOS metrics by 51.19 and 35.02, respectively.

## IX Qualitative Evaluation Results

Figure 6 provides several examples of the target domain set including the predictions of the _DASE-ProPillars_ model and the corresponding ground truths. In the first frame, the bus and the motorbike are wrongly detected as cars, and a part of the environment is also predicted as an object, which denotes one false positive. The remaining two frames also each include one false positive, as a bush on the roadside is detected as a car. However, in all frames, all annotated cars are detected correctly, and the rotation also fits accurately for most objects. The visual representation of the predictions emphasizes the results of the proposed _DASE-ProPillars_ object detector.

## References

* [1] M. Abadi, M. * [32] K. He, X. Zhang, S. Ren, and J. Sun, \"Deep residual learning for image recognition,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2016, pp. 770-778. * [33] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, \"Pointnet: Deep learning on point sets for 3d classification and segmentation,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2017, pp. 652-660. * [34] S. Ioffe and C. Szegedy, \"Batch normalization: Accelerating deep network training by reducing internal covariate shift,\" in _International conference on machine learning_. PMLR, 2015, pp. 448-456. * [35] V. Nair and G. E.
Hinton, \"Rectified linear units improve restricted Boltzmann machines,\" in _ICML_, 2010. * [36] Y. Yan, Y. Mao, and B. Li, \"Second: Sparsely embedded convolutional detection,\" _Sensors_, vol. 18, no. 10, p. 3337, 2018. * [37] L. Yi, B. Gong, and T. A. Funkhouser, \"Complete & label: A domain adaptation approach to semantic segmentation of lidar point clouds,\" _CoRR_, vol. abs/2007.08488, 2020. [Online]. Available: [https://arxiv.org/abs/2007.08488](https://arxiv.org/abs/2007.08488) * [38] M. Jaritz, T. Vu, R. de Charette, E. Wirbel, and P. Perez, \"xmuda: Cross-modal unsupervised domain adaptation for 3d semantic segmentation,\" _CoRR_, vol. abs/1911.12676, 2019. [Online]. Available: [http://arxiv.org/abs/1911.12676](http://arxiv.org/abs/1911.12676) * [39] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar, \"Focal loss for dense object detection,\" in _Proceedings of the IEEE international conference on computer vision_, 2017, pp. 2980-2988. * [40] H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom, \"nuscenes: A multimodal dataset for autonomous driving,\" in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2020, pp. 11 621-11 631.
This work aims to address the challenges in domain adaptation of 3D object detection using roadside LiDARs. We design _DASE-ProPillars_, a model that can detect objects in roadside LiDAR data in real-time. Our model uses _PointPillars_ as the baseline model with additional modules to improve the 3D detection performance. To prove the effectiveness of our proposed modules in _DASE-ProPillars_, we train and evaluate the model on two datasets, the open source A9 dataset and the synthetic roadside A11 dataset created within the _Regensburg Next_ project. We run several sets of experiments for each module in the _DASE-ProPillars_ detector that show that our model outperforms the _SE-ProPillars_ baseline on the real A9 test set and a semi-synthetic A9 test set, while maintaining an inference speed of 45 Hz (22 ms) that allows objects to be detected in real-time. We apply domain adaptation from the semi-synthetic A9 dataset to the synthetic A11 dataset from the _Regensburg Next_ project by applying transfer learning and achieve a 3D [email protected] of **93.49%** on the _Car_ class of the target test set using **40 recall positions**.
# Overcoming LLM Challenges using RAG-Driven Precision in Coffee Leaf Disease Remediation

Dr. Selva Kumar S _Department of Computer Science and Engineering_ _B. M. S. College of Engineering_ Bangalore, India [email protected] Affiah Khan Mohammed Ajmal Khan _Department of Computer Science and Engineering_ _B. M. S. College of Engineering_ Bangalore, India [email protected] Imadh Ajaz Banday _Department of Computer Science and Engineering_ _B. M. S. College of Engineering_ Bangalore, India [email protected] Manikantha Gada _Department of Computer Science and Engineering_ _B. M. S. College of Engineering_ Bangalore, India [email protected]

## I Introduction

In precision agriculture, the incorporation of cutting-edge technologies is essential for tackling challenges in disease identification and remediation. One such advancement is YOLO (You Only Look Once), an object detection algorithm that analyzes entire images at once, predicting both bounding boxes and class probabilities in a single pass. It employs anchor boxes to efficiently detect objects of different sizes and shapes. By utilizing a grid-based approach and a deep convolutional neural network backbone, YOLO achieves real-time performance and high accuracy across diverse domains. Through multiple iterations, YOLO has undergone improvements in speed, accuracy, and reliability, catering to applications such as autonomous driving and surveillance. The original YOLOv1, YOLOv2 (YOLO9000), YOLOv3, YOLOv4, and YOLOv5 were the first in the line of YOLO object detection models; the most recent models in the line are YOLOv6, YOLOv7, and YOLOv8. Every iteration has sought to improve speed, precision, and adaptability for a range of uses, such as plant disease detection. Because of their real-time processing capabilities, these YOLO models have proven superior to conventional machine learning and deep learning techniques, making them indispensable tools for early intervention and disease mitigation in agriculture. YOLOv8 is an efficient object detection system renowned for its real-time identification of plant diseases. YOLOv8's speed, processing images in a single pass, proves invaluable for timely disease detection, which is particularly important in the dynamic landscape of agriculture. Despite YOLOv8's effectiveness in object detection, limitations arise with Large Language Models (LLMs) like GPT-3.5. These models, while powerful, exhibit a static nature, leading to inaccuracies in disease diagnosis known as 'hallucination.' In agriculture's ever-changing conditions, the static nature of LLMs impedes their capability to provide accurate and up-to-date information, especially in contexts like the coffee industry, where various factors influence disease outcomes. Through this paper, we propose a solution that addresses the aforementioned issues by conducting a comprehensive review of Large Language Models (LLMs) and their drawbacks. Our aim is to reduce the research gap through a novel approach that mitigates the limitations of LLMs, particularly in the context of precision agriculture. To address these limitations, our research introduces Retrieval Augmented Generation (RAG), which, when integrated with LLMs like GPT-3.5, mitigates these drawbacks by fetching up-to-date, context-specific data from external databases. Serving as a dynamic bridge, RAG minimizes the risk of hallucination, enhancing GenAI application accuracy by incorporating current, domain-specific knowledge.
This approach ensures our model remains informed and significantly improves adaptability and reliability in providing context-aware solutions for precision agriculture. RAG's role in overcoming LLM limitations is central to our research, promising a transformative impact on agricultural practices.

## II Literature Survey

The study [1] utilizes YOLOv3 for identifying plant diseases, leveraging its effective object detection capabilities. By training the model on labeled images of both diseased and healthy plant parts, the model attains high accuracy in detecting prevalent apple tree diseases such as apple scab, cedar apple rust, and black rot. The next study [2] employed YOLOv5 for detecting rice leaf diseases, achieving notable precision (0.83), recall (0.94), and mAP (0.62) scores. YOLOv5 is distinguished as a rapid object detection model with enhanced performance, incorporating a backbone (CSP-Darknet), a neck (PANet), and a head (YOLO layer) for effective detection. Wang, X. et al. [3] proposed the YOLO-Dense network, inspired by DenseNet, which employs dense connections to boost feature extraction and enhance detection accuracy in identifying anomalies in tomatoes. This innovative method achieved an impressive mean average precision (mAP) of 96.41%, demonstrating superior performance compared to other algorithms such as SSD, Faster R-CNN, and the original YOLOv3. Sajitha P. et al. [4] introduced a system that integrates YOLOv7 for leaf disease detection and GPT-3 for providing corrective measures. Jiajun Qing et al., in their paper [5], combine the logical reasoning capabilities of GPT-4 with YOLOPC, a lightweight YOLO variant, to achieve real-time agricultural disease diagnosis. While YOLOPC attains 94.5% accuracy with 75% fewer parameters, GPT-4 demonstrates 90% reasoning accuracy in generating diagnostic reports. Both [4] and [5] face the limitation that the language model might not consistently yield the most accurate answers. Jean Kaddour et al. [6] explore the challenges of LLMs in their paper \"Challenges and Applications of Large Language Models\". The paper examines large language models (LLMs), highlighting challenges like intricate datasets and elevated expenses. It focuses on enhancing LLM behavior and knowledge, discusses fine-tuning methods, and emphasizes the need for comprehensive evaluation. In [7], the authors explore the matter of hallucination in LLMs and propose the LVLM Hallucination Revisor (LURE), an algorithm designed to address object hallucination in Large Vision-Language Models (LVLMs). Evaluation on six open-source LVLMs demonstrates a substantial 23% improvement in general object hallucination metrics compared to prior approaches. Further, Junyi Li et al., through their paper [8], introduce HaluEval, a comprehensive collection of 35,000 hallucinated and normal samples for analyzing and evaluating LLMs. The study presents a two-stage framework for generating and annotating hallucinated samples, revealing that existing LLMs often fail to recognize hallucinations in text and tend to generate such content. [9] provides an overview of the challenges posed by hallucination in LLMs. It delves into the impact of noisy data during pre-training on LLMs' parametric knowledge and the subsequent occurrence of hallucinations. The survey explores various mitigation strategies, including data curation, filtering, and supervised fine-tuning, as well as the use of high-quality reference corpora.
In summary, [7], [8], and [9] collectively reveal that hallucination is a pervasive challenge in LLMs, prompting the exploration of algorithms, benchmarks, and mitigation strategies to enhance their performance and reliability. In [10], the authors address hallucination in Language Models through the creation of the HILT dataset, utilizing 75,000 text passages generated by 15 LLMs. They emphasize the need for continuous updates due to the evolving nature of the field, and present detailed statistics on the factual mirage (FM) and silver lining (SL) categories. Nitin Liladhar Rane et al. [11] explore the multifaceted contributions of large language models, such as ChatGPT, to scientific and research progress across diverse domains. Their work underlines the potential for these models to revolutionize knowledge dissemination while acknowledging the ethical and societal implications and emphasizing the importance of responsible development, deployment, and regulation. In [12], Abdullahi Saka et al. shift the focus to the utilization of GPT models in the construction industry. The authors delve into opportunities, limitations, and a use-case validation involving NLP techniques for processing construction project documents and Building Information Modeling (BIM) data. Abdullahi Saka et al. suggest formulating ethical use policies, exploring novel applications, and researching solutions for GPT model limitations in construction. The paper [13] presents a balanced assessment of large language models (LLMs). It highlights the sophisticated inductive learning and inference capabilities of LLMs, including their capacity to recognize hierarchical syntactic structure and complex semantic relations. Additionally, LLMs have demonstrated potential in tasks such as medical image analysis and diagnostics and predicting the properties of proteins and new molecular structures. However, Shalom Lappin also acknowledges limitations, such as the potential for LLMs to hallucinate plausible-sounding narratives with no factual basis, their susceptibility to adversarial testing, and the need for additional investigation into improving their performance on specific tasks, addressing biases, and developing smaller, more lightweight models. [14] introduces the Retrieval-Augmented Generation (RAG) model, showcasing its state-of-the-art performance in open-domain question answering tasks and emphasizing its ability to generate diverse and factual content. The next paper [15], Self-RAG, presents a framework that enhances large language models (LLMs) through retrieval and self-reflection without compromising creativity. It employs instruction and demonstration pairs, achieving significant improvements in model performance, factuality, and citation accuracy, with future work aiming to address factual errors and enhance self-reflection mechanisms. The authors of [16] examine the incorporation of extensive language models into systems for retrieving information, highlighting potential benefits and challenges, proposing research directions for improvement, and addressing drawbacks such as bias and data requirements. Jiawei Chen et al. [17] establish a standard for assessing the effectiveness of LLMs in retrieval-augmented generation tasks, identifying limitations in noise robustness and information integration abilities, and suggesting directions for improvement.
[18] proposes ARM-RAG, a system leveraging RAG to enhance the problem-solving capabilities of LLMs. It demonstrates superior performance by utilizing Neural Information Retrieval over reasoning chains derived from solving math problems, and suggests avenues for further enhancements. [19], \"InPars,\" introduces a method using large language models (LLMs) for few-shot labeled data generation in information retrieval (IR) tasks, demonstrating superior performance and emphasizing potential with domain-specific training data. Limitations include the lack of pretraining and limited suitability for non-neural retrieval algorithms. The next paper [20], \"Retrieval-based Evaluation for LLMs,\" proposes Eval-RAG, an approach to evaluating LLM-generated texts in the legal domain, outperforming existing methods in correlation with human evaluation and factual error identification. Future work includes refining Eval-RAG, exploring its applicability to other domains, and addressing potential limitations. The authors in [21], \"Retrieval Meets Long Context LLMs,\" compare retrieval-augmented language models (RAG) and long-context LLMs, demonstrating RAG's significant performance improvement in Q&A and summarization tasks. Future work aims to explore combined retrieval and long-context LLMs for enhanced accuracy; limitations are not explicitly mentioned. Zhangyin Feng et al. [22] introduce Retrieval-Generation Synergy Augmented Large Language Models, showcasing an iterative framework that significantly enhances the cognitive reasoning capacity of large language models (LLMs) for knowledge-intensive tasks, particularly answering questions in a broad range of domains. Through experiments on four datasets, the proposed method outperforms previous baselines, demonstrating improved LLM reasoning. [23] focuses on interpretable long-form legal question answering with retrieval-augmented large language models. It presents an end-to-end methodology that leverages a \"retrieve-then-read\" pipeline, employing a retrieval-augmented generator (RAG) approach with LLMs. The authors fine-tune on a task-specific dataset and introduce the Long-form Legal Question Answering (LLeQA) dataset. The authors highlight the positive aspects of this approach, emphasizing its potential for generating syntactically correct answers relevant to legal questions. The last paper [24], on establishing performance baselines in fine-tuning, retrieval-augmented generation, and soft-prompting for non-specialist LLM users, explores ways to improve LLM performance for non-specialist users. It compares unmodified GPT-3.5, fine-tuned GPT-3.5, and RAG using a limited dataset and modest technical skill. RAG stands out as an effective strategy, outperforming fine-tuning and showcasing positive results within the framework of the LayerZero cryptocurrency bridging project. The paper discusses the accessibility of these techniques to non-technical users and emphasizes the positive impact of RAG on LLM performance.

## III Methodology

### _Methodology Workflow_

Our methodology for coffee leaf disease detection and care, illustrated in Fig. 3, embraces a hybrid model. We train a YOLOv8 (You Only Look Once) model for real-time instance segmentation of coffee leaves with a transfer learning approach, enhancing our ability to detect potential disease classes in coffee leaves such as Phoma and Miner, as shown in Figs. 1 and 2 [25].

Fig. 1: Diseased Leaf (Phoma)

Fig. 2: Diseased Leaf (Miner)
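A minimal sketch of this detector-training step, assuming the Ultralytics Python API; the dataset configuration file `coffee_leaves.yaml` and the hyperparameters are illustrative assumptions, not the exact settings of our experiments.

```python
from ultralytics import YOLO

# Start from a COCO-pretrained instance segmentation checkpoint (transfer learning).
model = YOLO("yolov8n-seg.pt")

# Fine-tune on the annotated coffee-leaf dataset (Rust, Miner, Phoma classes).
model.train(data="coffee_leaves.yaml", epochs=100, imgsz=640)

# Run instance segmentation on a new leaf image.
results = model("leaf.jpg")
annotated = results[0].plot()  # numpy array with the predicted masks drawn
```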
Simultaneously, we leverage the natural language understanding capabilities of GPT-3.5 (Generative Pre-trained Transformer 3.5) to generate insightful diagnoses and treatment recommendations for the identified diseases. YOLOv8, developed by Ultralytics and pre-trained on Microsoft's COCO dataset, is a PyTorch-based model renowned for its modern techniques. It incorporates built-in instance segmentation, contributing to improved overall performance. Additionally, we utilize the Retrieval-Augmented Generation (RAG) framework to enhance the role of GPT-3.5 in the generation process. RAG allows us to fetch up-to-date and context-specific data from external databases, providing valuable information to the language model during the generation of responses. This integration of RAG, built into a robust pipeline from image capture to the generation of accurate, prescription-based results, aims to overcome the limitations associated with static Large Language Models (LLMs) and ensure that our coffee leaf disease detection and care system remains adaptable, informed, and accurate in its recommendations. * The workflow commences with the Phytopathology Recognition subsystem, where YOLOv8 processes input images, conducts plant disease identification, and classifies the findings. This involves utilizing the pre-trained YOLOv8 model for real-time instance segmentation and disease detection. The outcomes are then transmitted to the Remediation Assistance subsystem, where the Language Model (LLM), armed with the Retrieval-Augmented Generation (RAG) framework, generates highly contextual suggestions for effective remediation. The vector database becomes integral at this stage, storing and retrieving vector representations of remediation data to enhance the contextual accuracy of the suggestions. * The Integration and Communication subsystem serves as a crucial bridge, establishing robust communication channels between the Phytopathology Recognition and Remediation Assistance components. It defines communication protocols, ensuring secure and reliable data exchange between these subsystems. The Decision Fusion Module operates as a nexus, receiving disease identification results from the Phytopathology Recognition subsystem and seamlessly integrating them with the remediation suggestions. This integration culminates in the generation of a comprehensive recommendation tailored for effective plant disease management. * The User Interface acts as the final frontier, providing users with a platform to interact with the system. Users can input plant images, receive detailed reports on identified diseases, and access suggested remediation measures. This interface facilitates user engagement and feedback, contributing to the continuous improvement of the system. The technical components and considerations encompass a range of elements, from communication interfaces to security measures, thereby shaping the overall advantages and disadvantages of the proposed system.

Fig. 3: Proposed Methodology Workflow

## IV Experimental Setup

### _Dataset Compilation_

We leverage a diverse dataset encompassing multiple sources from Kaggle [25], open-source datasets, and real-time pictures of diseased coffee leaves obtained directly from coffee fields.

### _Dataset Annotation and Augmentation_

The YOLOv8 model relies on annotated datasets for effective training. Our dataset, sourced from various internet repositories and real-time images from coffee fields, undergoes thorough annotation with the class labels (Rust, Miner, and Phoma).
Annotated data is pre-processed, incorporating augmentation techniques like flipping, rotation, and brightness adjustment. This not only enhances dataset diversity but also mitigates the requirement for extra manual annotation effort, streamlining the overall training process.

### _YOLOv8 Model_

The workflow starts by setting up the environment, including the installation of the required YOLOv8 library and dataset preparation. This involves checking and extracting the dataset, renaming necessary files, and organizing paths for efficient data management. Following setup, the training process is initiated, configuring model parameters and training settings. An image is uploaded for segmentation, and the model performs segmentation, saving the output. Lastly, a custom function is employed to display the segmented result, highlighting the versatile application of YOLOv8 across various segmentation tasks.

### _Label Extraction_

The output from the trained YOLOv8 model is used to extract labels with the EasyOCR library. This process involves extracting text from the segmented image, filtering it to include the relevant disease labels 'Rust', 'Miner', and 'Phoma', and subsequently printing the filtered output for analysis.

### _Prompt Generation_

Unlike traditional object detection models, the RAG-LLM requires a different form of input. We generate prompts from the dataset, encapsulating concise and relevant information about the coffee leaf diseases. These prompts act as queries to the language model for generating context-aware and detailed responses.

### _Working of RAG-LLM_

After label extraction, the extracted labels serve as input and are cross-referenced with the vector store. The disease diagnosis information contained in research PDFs acts as the knowledge base. The PDFs undergo text extraction and segmentation into manageable chunks. These chunks are embedded via the OpenAI embeddings API to create a vector store optimized for text embeddings. A conversational chain is then established, allowing users to engage in dialogue and gather insights. This cohesive approach seamlessly integrates document information retrieval with AI-driven conversational capabilities, offering an efficient platform for exploring coffee diseases and their management strategies; the information-retrieval approach thus proves well suited to this use case. The proposed RAG model is built using the LangChain framework to enable seamless integration with LLM-powered Q&A.

### _Challenges with the Dataset and Model Training_

In transitioning to the RAG-LLM, challenges may arise in formulating effective prompts, ensuring the model's contextual understanding, and maintaining coherence in responses. Expert input remains crucial in refining the prompts and validating the model's outputs.
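A minimal sketch of the retrieval step described above; the `embed` helper is a hypothetical stand-in for the OpenAI embeddings API that LangChain wraps in the actual pipeline, and the chunking and prompt wording are simplified.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding helper; plug in an embedding model here."""
    raise NotImplementedError

def build_vector_store(chunks):
    """Embed each PDF chunk once and keep (chunk, vector) pairs."""
    return [(c, embed(c)) for c in chunks]

def retrieve(query, store, k=3):
    """Return the k chunks most similar to the query (cosine similarity)."""
    q = embed(query)
    scored = [(c, float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))))
              for c, v in store]
    return [c for c, _ in sorted(scored, key=lambda s: -s[1])[:k]]

def build_prompt(disease_label, store):
    """Ground the LLM prompt in retrieved remediation chunks."""
    context = "\n".join(retrieve(f"treatment of coffee leaf {disease_label}", store))
    return (f"Context:\n{context}\n\n"
            f"Question: How should {disease_label} on coffee leaves be treated?")
```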
## Conclusion

This research represents a significant advancement in precision agriculture by integrating YOLOv8 and Retrieval Augmented Generation (RAG), offering a novel approach to disease identification and management in agriculture, with a focus on the coffee industry in Karnataka. While YOLOv8 excels in real-time disease detection, challenges persist with Large Language Models (LLMs) like GPT-3.5, particularly inaccuracies known as 'hallucination' that stem from their static nature. To overcome these limitations, our study introduces RAG, which dynamically retrieves context-specific data from external sources, enhancing the accuracy and adaptability of the GenAI application. By incorporating current, domain-specific knowledge, RAG minimizes the risk of hallucination, ensuring more reliable and context-aware solutions for precision agriculture. This innovative approach not only increases answer accuracy but also mitigates hallucinations, promising transformative impacts on agricultural practices. Moreover, we wish to expand our dataset and prioritize meeting farmers and agricultural experts interested in phytopathology research. Cooperation among individuals, groups, organizations, and the government plays a pivotal role in advancing sustainable and effective agricultural systems, helping farmers worldwide, and improving agricultural practices. Furthermore, the extension of the model to include a broader range of disease classes in the two coffee species, Arabica and Robusta, is an avenue for future research. Looking ahead, our work lays the groundwork for ongoing development. The focus on scalability, reliability, and user-friendly design positions this system as a practical tool for positive change in agriculture. As we move forward, our goal is to expand the system's capabilities to address a wider range of diseases. This study, with its emphasis on simplicity and practicality, aims to contribute to the larger vision of sustainable and technologically enhanced food production.

## References

* [1] M.P. Mathew and T.Y. Mahesh (
This research introduces an innovative AI-driven precision agriculture system, leveraging YOLOv8 for disease identification and Retrieval Augmented Generation (RAG) for context-aware diagnosis. Focused on addressing the challenges of diseases affecting the coffee production sector in Karnataka, the system integrates sophisticated object detection techniques with language models to address the inherent constraints associated with Large Language Models (LLMs). Our methodology not only tackles the issue of hallucinations in LLMs but also introduces dynamic disease identification and remediation strategies. Real-time monitoring, collaborative dataset expansion, and organizational involvement ensure the system's adaptability in diverse agricultural settings. The effect of the suggested system extends beyond automation, aiming to secure food supplies, protect livelihoods, and promote eco-friendly farming practices. By facilitating precise disease identification, the system contributes to sustainable and environmentally conscious agriculture, reducing reliance on pesticides. Looking to the future, the project envisions continuous development of RAG-integrated object detection systems, emphasizing scalability, reliability, and usability. This research strives to be a beacon for positive change in agriculture, aligning with global efforts toward sustainable and technologically enhanced food production.

Keywords: YOLOv8, LLM, GPT-3.5, RAG, Precision Agriculture, Object Detection, NLP
## 1 Introduction

The mathematical modeling of epidemics has a long history; it began with the Kermack-McKendrick model [8], introduced in 1927. In this seminal paper, the whole population is divided into Susceptible, Infectious, and Recovered sub-populations. Then, ordinary differential equations are formulated specifying the time evolution of the functions representing these sub-populations. Wavelet analysis is now frequently used to extract information from epidemiological and other time series. Grenfell et al. [7] introduced wavelet analysis for characterizing non-stationary epidemiological time series. Cazelles et al. [1] use Morlet wavelets for applications in epidemiology. Lavrova et al. [10] modeled the disease dynamics caused by Mycobacterium tuberculosis in Russia using a sum of two logistic functions (3) (the bi-logistic model). SARS-CoV-2 initially emerged in China at the end of 2019; after Chinese scientists identified the sequence of the new virus [17], this information was shared with the international community. Since then, many articles have been written and published, describing from different points of view the new SARS-CoV-2 coronavirus and the COVID-19 disease caused by the virus. We will point out only some of them. Fokas et al. [4] used a generalization of the logistic function for forecasting the number of individuals reported to be infected with SARS-CoV-2 in different countries. Krantz et al. [9] proposed a two-phase procedure (combining discrete graphs and Meyer wavelets) for constructing true epidemic growth. A method similar to that of Lavrova et al. [10] was used by E. Vanucci and L. Vanucci [16] for predicting the end date of the Covid-19 disease in Italy. The outline of the present paper is as follows. In Sec. 2, we discuss the basic properties of Riccati's equation, the logistic equation, and the logistic curve. For this purpose, we use Eulerian numbers. Sec. 3 and Sec. 4 are devoted to logistic wavelets. In Sec. 5 we model the cumulative number of persons reported to be infected by SARS-CoV-2 in the United States as a sum of several logistic functions. We use the following convention for the Fourier transform: \\[\\hat{f}(\\xi)=\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^{\\infty}f(x)e^{-i\\xi x}dx, \\tag{1}\\] where \\(f\\in L^{1}(\\mathbb{R})\\cap L^{2}(\\mathbb{R})\\).

## 2 Logistic function and its derivatives

The logistic equation is defined as \\[u^{\\prime}(t)=\\frac{s}{u_{max}}\\,u(u_{max}-u),\\quad u(0)=u_{0}, \\tag{2}\\] where \\(t\\) is time, \\(u=u(t)\\) is the unknown function, and \\(s,u_{max}\\) are constants. The constant \\(u_{max}\\) is called the saturation level. The integral curve \\(u(t)\\) fulfilling the condition \\(0<u(t)<u_{max}\\) is known as the logistic function. After solving (2) we get the logistic function in the following form \\[u(t)=\\frac{u_{max}}{1+e^{-s(t-t_{0})}}, \\tag{3}\\] where \\(t_{0}\\) is its inflection point, which is related to the initial condition \\(u(0)=u_{0}=\\dfrac{u_{max}}{1+e^{st_{0}}}\\); therefore \\(t_{0}=\\dfrac{1}{s}\\log\\Big{(}\\dfrac{u_{max}-u_{0}}{u_{0}}\\Big{)}\\). Equation (2) is a particular case of Riccati's equation with constant coefficients \\[u^{\\prime}(t)=r(u-u_{1})(u-u_{2}). \\tag{4}\\] The constants \\(r\\neq 0,\\ u_{1},\\ u_{2}\\) can in general be real or complex numbers.
If \\(u(t)\\) is a solution of (4), then a formula is known for the \\(n\\)th derivative \\(u^{(n)}(t)\\) (\\(n=2,3,4,\\ldots\\)) of \\(u(t)\\), expressing it as a polynomial in the function \\(u(t)\\) itself: \\[u^{(n)}(t)=r^{n}\\sum_{k=0}^{n-1}\\genfrac{\\langle}{\\rangle}{0.0pt}{}{n}{k}(u- u_{1})^{k+1}(u-u_{2})^{n-k} \\tag{5}\\] where \\(n=2,3,\\ldots\\) and \\(\\genfrac{\\langle}{\\rangle}{0.0pt}{}{n}{k}\\) denotes the Eulerian number (the number of permutations of the set \\(\\{1,2,\\ldots,n\\}\\) having \\(k\\) (\\(k=0,1,2,\\ldots,n-1\\)) permutation ascents; see Graham et al. [6]). The first few Eulerian numbers are given in Table 1. Formula (5) was discussed during the Conference ICNAAM 2006 (September 2006) held in Greece and it appeared, with an inductive proof, in paper [11] (see also [12]). Independently, the formula was considered and proved, with a proof based on generating functions, by Franssens [5]. The polynomial of \\(u\\), of order \\((n+1)\\), appearing on the right-hand side of (5) is known in the literature as a kind of the so-called derivative polynomials. It is easy to see that all \\((n+1)\\) roots of the polynomial are simple and lie in the interval \\([u_{1},u_{2}]\\). The derivative polynomials have recently been intensively studied. \\begin{table} \\begin{tabular}{|c|c c c c c c c c|} \\hline n & \\(\\genfrac{\\langle}{\\rangle}{0.0pt}{}{n}{0}\\) & \\(\\genfrac{\\langle}{\\rangle}{0.0pt}{}{n}{1}\\) & \\(\\genfrac{\\langle}{\\rangle}{0.0pt}{}{n}{2}\\) & \\(\\genfrac{\\langle}{\\rangle}{0.0pt}{}{n}{3}\\) & \\(\\genfrac{\\langle}{\\rangle}{0.0pt}{}{n}{4}\\) & \\(\\genfrac{\\langle}{\\rangle}{0.0pt}{}{n}{5}\\) & \\(\\genfrac{\\langle}{\\rangle}{0.0pt}{}{n}{6}\\) & \\(\\genfrac{\\langle}{\\rangle}{0.0pt}{}{n}{7}\\) \\\\ \\hline 0 & 1 & & & & & & & \\\\ 1 & 1 & 0 & & & & & & \\\\ 2 & 1 & 1 & 0 & & & & & \\\\ 3 & 1 & 4 & 1 & 0 & & & & \\\\ 4 & 1 & 11 & 11 & 1 & 0 & & & \\\\ 5 & 1 & 26 & 66 & 26 & 1 & 0 & & \\\\ 6 & 1 & 57 & 302 & 302 & 57 & 1 & 0 & \\\\ 7 & 1 & 120 & 1191 & 2416 & 1191 & 120 & 1 & 0 \\\\ \\hline \\end{tabular} \\end{table} Table 1: Eulerian numbers

Formula (5) applied to the particular case of the logistic equation (2) is as follows: \\[u^{(n)}(t)=\\left(-\\frac{s}{u_{max}}\\right)^{n}\\ \\sum_{k=0}^{n-1}\\genfrac{\\langle}{\\rangle}{0.0pt}{}{n}{k}u^{k+1}(u-u_{max})^{n-k}. \\tag{6}\\] The polynomial of the variable \\(u\\), of order \\((n+1)\\), on the right-hand side of (6) is uniform in the sense of the following remark. _Remark 1_.: If \\(u_{0}\\) is a root of the polynomial on the right-hand side of (6), i.e., \\[\\sum_{k=0}^{n-1}\\genfrac{\\langle}{\\rangle}{0.0pt}{}{n}{k}u_{0}^{k+1}(u_{0}-u_{max})^{n-k}=0, \\tag{7}\\] then dividing both sides of (7) by \\(u_{max}^{n+1}\\) we get \\[\\sum_{k=0}^{n-1}\\genfrac{\\langle}{\\rangle}{0.0pt}{}{n}{k}\\left(\\frac{u_{0}}{u_{max}} \\right)^{k+1}\\left(\\frac{u_{0}}{u_{max}}-1\\right)^{n-k}=0.\\] Thus \\(u_{0}\\) is a root of the derivative polynomial on the right-hand side of (6) if \\(u_{0}/u_{max}\\) is a root of the polynomial \\[P_{n+1}(u):=(-1)^{n}\\sum_{k=0}^{n-1}\\genfrac{\\langle}{\\rangle}{0.0pt}{}{n}{k}u^{k+1}(u-1)^ {n-k}. \\tag{8}\\] Let us write down, using formula (6) and the notation of (8), the first few derivatives of the logistic function which fulfills equation (2). By Remark 1 we can assume, without loss of generality, that \\(u_{max}=1\\) and \\(s=1\\).
We obtain successively: \\[u^{\\prime}(t)= u(1-u)=-u(u-1)=P_{2}(u),\\] \\[u^{\\prime\\prime}(t)= u(u-1)^{2}+u^{2}(u-1)=P_{3}(u),\\] \\[u^{\\prime\\prime\\prime}(t)= -u(u-1)^{3}-4u^{2}(u-1)^{2}-u^{3}(u-1)=P_{4}(u),\\] \\[u^{(4)}(t)= u(u-1)^{4}+11u^{2}(u-1)^{3}+11u^{3}(u-1)^{2}+u^{4}(u-1)=P_{5}(u),\\] \\[u^{(5)}(t)= -u(u-1)^{5}-26u^{2}(u-1)^{4}-66u^{3}(u-1)^{3}-26u^{4}(u-1)^{2}-u^ {5}(u-1)=P_{6}(u).\\] All roots of the polynomials \\(P_{k}(u)\\) for \\(k=3,4,5,6\\) can be calculated explicitly, so the polynomials can be factored and we get \\[P_{3}(u)= 2u(u-1)\\left(u-\\frac{1}{2}\\right),\\] \\[P_{4}(u)= -6u(u-1)\\left(u-\\frac{1}{2}-\\frac{\\sqrt{3}}{6}\\right)\\left(u- \\frac{1}{2}+\\frac{\\sqrt{3}}{6}\\right),\\] \\[P_{5}(u)= 24u(u-1)\\left(u-\\frac{1}{2}\\right)\\left(u-\\frac{1}{2}-\\frac{ \\sqrt{6}}{6}\\right)\\left(u-\\frac{1}{2}+\\frac{\\sqrt{6}}{6}\\right),\\] \\[P_{6}(u)= -120u(u-1)\\!\\!\\left(u-\\frac{1}{2}-\\frac{\\sqrt{30(15-\\sqrt{105}) }}{60}\\right)\\!\\!\\left(u-\\frac{1}{2}-\\frac{\\sqrt{30(15+\\sqrt{105})}}{60}\\right)\\] \\[\\left(u-\\frac{1}{2}+\\frac{\\sqrt{30(15-\\sqrt{105})}}{60}\\right)\\! \\!\\left(u-\\frac{1}{2}+\\frac{\\sqrt{30(15+\\sqrt{105})}}{60}\\right).\\] Therefore the minimal _positive_ root of the polynomial \\[P_{4}(u)\\ \\ \\mbox{is}\\ \\ \\frac{1}{2}-\\frac{\\sqrt{3}}{6}\\approx 0.211,\\] \\[P_{5}(u)\\ \\ \\mbox{is}\\ \\ \\frac{1}{2}-\\frac{\\sqrt{6}}{6}\\approx 0.0917, \\tag{9}\\] \\[P_{6}(u)\\ \\ \\mbox{is}\\ \\ \\frac{1}{2}-\\frac{\\sqrt{30(15+\\sqrt{105})} }{60}\\approx 0.0413.\\] Thus, by using Remark 1, we see for example that if \\(t_{1}\\) is the smallest time at which \\(u^{\\prime\\prime\\prime}(t_{1})=0\\) (\\(t_{1}\\) is simultaneously a maximum of \\(u^{\\prime\\prime}(t)\\)), then the value of the logistic function at this point is \\(u(t_{1})=0.211\\:u_{max}\\). By (3) we have \\[u(t_{1})=\\frac{u_{max}}{1+e^{-s(t_{1}-t_{0})}}=0.211\\:u_{max},\\] from which we calculate \\[t_{0}-t_{1}=\\frac{1.319}{s}. \\tag{10}\\] Similar conclusions can be drawn for the smallest zero of \\(u^{(4)}(t)\\) (polynomial \\(P_{5}(u)\\)) or \\(u^{(5)}(t)\\) (polynomial \\(P_{6}(u)\\)) using the constants (9).

## 3 Wavelets based on the second derivative of the logistic function

Let the wavelet \\(\\psi_{2}(x)\\) (see Figure 1) be the second derivative of the logistic function \\(u(x)=\\frac{1}{1+e^{-x}}\\). Since \\(u^{\\prime}(x)=-u(u-1)\\), by (5) or directly we get \\[u^{\\prime\\prime}(x)=u(1-u)(1-2u), \\tag{11}\\] and by (11) it follows that the wavelet has the following exact form \\[\\psi_{2}(x)=\\frac{1}{1+e^{-x}}\\Big{(}1-\\frac{1}{1+e^{-x}}\\Big{)}\\Big{(}1-\\frac {2}{1+e^{-x}}\\Big{)}=\\frac{e^{-2x}-e^{-x}}{(1+e^{-x})^{3}}. \\tag{12}\\] Changing the variable \\(u=\\frac{1}{1+e^{-x}},u^{\\prime}(x)=u(1-u)\\) in the following three integrals, we calculate \\[\\int_{-\\infty}^{\\infty}\\psi_{2}(x)dx=\\int_{0}^{1}(1-2u)du=0,\\] \\[\\int_{-\\infty}^{\\infty}|\\psi_{2}(x)|dx=\\int_{0}^{1}|1-2u|du=\\frac {1}{2},\\] \\[\\int_{-\\infty}^{\\infty}(\\psi_{2}(x))^{2}dx=\\int_{0}^{1}u(1-u)(1-2 u)^{2}du=\\frac{1}{30},\\] which proves that \\(\\psi_{2}(x)\\in L^{1}(\\mathbb{R})\\cap L^{2}(\\mathbb{R})\\). In fact \\(\\psi_{2}(x)\\in S(\\mathbb{R})\\) (the space of rapidly decreasing functions on \\(\\mathbb{R}\\)). We will discuss this in the next section.
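As a quick numerical sanity check of these three integrals, one can evaluate (12) directly; a minimal sketch using SciPy (the function names are ours):

```python
import numpy as np
from scipy.integrate import quad

def psi2(x):
    """Second derivative of the logistic function, Eq. (12)."""
    e = np.exp(-x)
    return (e**2 - e) / (1.0 + e)**3

mean_val, _ = quad(psi2, -50, 50)                    # expect 0
l1_norm, _ = quad(lambda x: abs(psi2(x)), -50, 50)   # expect 1/2
l2_norm, _ = quad(lambda x: psi2(x)**2, -50, 50)     # expect 1/30
print(mean_val, l1_norm, l2_norm)  # ~0.0, 0.5, 0.0333...
```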
By \\(\\psi_{2}^{+}(x)\\) we denote the positive part of \\(\\psi_{2}(x)\\), i.e., \\[\\psi_{2}^{+}(x)=\\frac{1}{2}(\\psi_{2}(x)+|\\psi_{2}(x)|).\\] Obviously \\[\\int_{-\\infty}^{\\infty}(\\psi_{2}^{+}(x))^{2}dx=\\frac{1}{2}\\int_{-\\infty}^{ \\infty}(\\psi_{2}(x))^{2}dx=\\frac{1}{60}. \\tag{13}\\] The Fourier transform of \\(\\psi_{2}(x)\\) is as follows: \\[\\hat{\\psi_{2}}(\\xi)=\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^{\\infty}\\psi_{2}(x)e^{-i \\xi x}dx=\\sqrt{\\frac{\\pi}{2}}\\frac{i\\xi^{2}}{\\sinh(\\pi\\xi)}. \\tag{14}\\] It is well known (see [2]) that a wavelet \\(\\psi(x)\\in L^{1}(\\mathbb{R})\\cap L^{2}(\\mathbb{R})\\) should satisfy the following admissibility condition \\[2\\pi\\int_{-\\infty}^{\\infty}|\\xi|^{-1}|\\hat{\\psi}(\\xi)|^{2}d\\xi<\\infty. \\tag{15}\\] We will show that for \\(\\psi_{2}(x)\\) the condition (15) is satisfied, and the integral can even be expressed in closed form in terms of the Riemann zeta function. Namely, using (14) and the following formula from Dwight's Tables [3] (item no. 860.519): \\[\\int_{0}^{\\infty}\\frac{x^{p}}{(\\sinh(ax))^{2}}dx=\\frac{\\Gamma(p+1)}{2^{p-1}a^ {p+1}}\\zeta(p),\\quad a>0,\\;p>1, \\tag{16}\\] we have \\[2\\pi\\int_{-\\infty}^{\\infty}|\\xi|^{-1}|\\hat{\\psi_{2}}(\\xi)|^{2}d\\xi=\\pi^{2} \\int_{-\\infty}^{\\infty}\\frac{|\\xi|^{3}}{(\\sinh(\\pi\\xi))^{2}}d\\xi=\\frac{3\\zeta (3)}{\\pi^{2}}. \\tag{17}\\] We generate a doubly-indexed family of wavelets from \\(\\psi_{2}\\) by dilating and translating, \\[\\psi_{2}^{a,b}(x)=\\frac{1}{\\sqrt{a}}\\psi_{2}\\Big{(}\\frac{x-b}{a}\\Big{)},\\] where \\(a,b\\in\\mathbb{R},\\;a>0\\), and denote by \\(\\psi_{2}^{+\\ a,b}(x)\\) the positive part of \\(\\psi_{2}^{a,b}(x)\\).

## 4 Wavelets based on higher derivatives of the logistic function

Similarly as in the previous section, we define the wavelet \\(\\psi_{n}(x)\\) to be the \\(n\\)th (\\(n=3,4,\\ldots\\)) derivative of the logistic function \\(u(x)=\\frac{1}{1+e^{-x}}\\). Figure 2 shows the graph of the wavelet \\(\\psi_{3}(x)\\). Thus (5) gives \\[u^{(n)}(x)=(-1)^{n}\\sum_{k=0}^{n-1}\\genfrac{\\langle}{\\rangle}{0.0pt}{}{n}{k}u^{k+1}(u-1)^{n-k},\\] and then \\(\\psi_{n}(x)\\) can be explicitly expressed as \\[\\psi_{n}(x)=(-1)^{n}\\sum\\limits_{k=0}^{n-1}\\genfrac{\\langle}{\\rangle}{0.0pt}{}{n}{k}\\Big{(}\\frac{1}{1+e^{-x}} \\Big{)}^{k+1}\\Big{(}\\frac{1}{1+e^{-x}}-1\\Big{)}^{n-k}=\\frac{(-1)^{n}\\sum\\limits_ {k=0}^{n-1}\\genfrac{\\langle}{\\rangle}{0.0pt}{}{n}{k}(-e^{-x})^{n-k}}{(1+e^{-x})^{n+1}}. \\tag{18}\\] By definition, the function \\(\\psi_{n}(x)\\) is an even function for odd \\(n\\) and an odd function when \\(n\\) is even. The numerator of the expression (18) is a polynomial of degree \\(n\\) in the variable \\(e^{-x}\\), while the denominator is of degree \\(n+1\\). Therefore for any polynomial \\(p(x)\\) we have \\(\\lim\\limits_{x\\rightarrow-\\infty}p(x)\\psi_{n}(x)=0\\). Since \\(\\psi_{n}(x)\\) has the symmetry property, also \\(\\lim\\limits_{x\\rightarrow\\infty}p(x)\\psi_{n}(x)=0\\). The last conclusion can also be drawn by multiplying the numerator and the denominator of (18) by \\(e^{(n+1)x}\\). From this, and from the fact that \\(\\psi_{k+1}(x)=\\psi_{k}^{\\prime}(x)\\) for any integer \\(k\\geq 2\\), it follows that \\(\\psi_{n}(x)\\in S(\\mathbb{R})\\), \\((n=2,3,\\ldots)\\). By (14) we have \\[\\hat{\\psi_{n}}(\\xi)=\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^{\\infty}\\psi_{n}(x)e^ {-i\\xi x}dx=\\sqrt{\\frac{\\pi}{2}}\\frac{(i\\xi)^{n-1}\\xi}{\\sinh(\\pi\\xi)}. \\tag{19}\\]
Now using (19) and once again formula (16) we can calculate the integral of the admissibility condition (15) as follows: \\[2\\pi\\int_{-\\infty}^{\\infty}|\\xi|^{-1}|\\hat{\\psi_{n}}(\\xi)|^{2}d\\xi=\\pi^{2}\\int _{-\\infty}^{\\infty}\\frac{|\\xi|^{2n-1}}{(\\sinh(\\pi\\xi))^{2}}d\\xi=\\pi^{2}\\frac{2 \\Gamma(2n)}{2^{2n-2}\\pi^{2n}}\\zeta(2n-1)=\\frac{(2n-1)!}{2^{2n-3}\\pi^{2n-2}} \\zeta(2n-1). \\tag{20}\\] As usual, we generate a doubly-indexed family of wavelets from \\(\\psi_{n}\\) by dilating and translating, \\[\\psi_{n}^{a,b}(x)=\\frac{1}{\\sqrt{a}}\\psi_{n}\\Big{(}\\frac{x-b}{a}\\Big{)},\\] where \\(a,b\\in\\mathbb{R},\\;a>0,\\;n=2,3,\\ldots\\).

Figure 2: Wavelet \\(\\psi_{3}(x)\\)

## 5 Applications for modeling of SARS-CoV-2 virus epidemics

Denote by \\(y_{n}^{*}\\) the total cumulative number of individuals reported to be infected up to the \\(n\\)th day in a country or a region, and by \\(y_{n}\\) the 7-day central moving arithmetic average of the sequence \\(y_{n}^{*}\\), i.e., \\[y_{n}=\\frac{1}{7}\\sum_{i=-3}^{3}y_{n+i}^{*}.\\] We will look, in the sequence \\((y_{n})\\), for points corresponding to the zeros of the second or the third derivative of the logistic function. This is equivalent to detecting the points where the sequence of second differences, \\[\\Delta^{2}y_{n}=y_{n+1}-2y_{n}+y_{n-1},\\] takes a value close to zero or attains a maximum, respectively. We will find these points either directly, by observing the sequence of second differences (\\(\\Delta^{2}y_{n}\\)), or detect them by using the wavelet \\(\\psi_{2}(x)\\) and its positive part \\(\\psi_{2}^{+}(x)\\). From the considerations in Sec. 2 and from (10) it follows that the parameter \\(b\\) should be determined as the point where the sequence (\\(\\Delta^{2}y_{n}\\)) changes sign. The parameter \\(a\\) should be chosen in such a way that the distance between the zero and the maximum of (\\(\\Delta^{2}y_{n}\\)) is approximately \\(1.319a\\). Thus, we obtain two parameters defining the first logistic function (first wave) approximating the time series \\((y_{n})\\). It remains to determine the third parameter of the first wave, i.e., its saturation level \\(y_{max}\\). Assuming that \\((y_{n})\\) initially follows a logistic function \\(y_{n}\\approx y(n)=\\frac{y_{max}}{1+\\exp(-\\frac{n-b}{a})}\\), and since by definition it holds that \\[y^{\\prime\\prime}(x)=\\frac{y_{max}}{a^{3/2}}\\psi_{2}^{a,b}(x),\\] then by (13) we get successively \\[\\sum_{n} \\Delta^{2}y_{n}\\psi_{2}^{+\\ a,b}(n)\\approx\\sum_{n}\\Delta^{2}y(n) \\psi_{2}^{+\\ a,b}(n)\\approx\\int_{-\\infty}^{\\infty}y^{\\prime\\prime}(x)\\psi_{2} ^{+\\ a,b}(x)dx=\\int_{-\\infty}^{\\infty}\\frac{y_{max}}{a^{3/2}}\\psi_{2}^{a,b}(x) \\psi_{2}^{+\\ a,b}(x)dx\\] \\[=\\frac{y_{max}}{a^{3/2}}\\int_{-\\infty}^{b}(\\psi_{2}^{+\\ a,b}(x))^ {2}dx=\\frac{y_{max}}{60a^{3/2}}. \\tag{21}\\] Using (21) we can estimate \\(y_{max}\\) as follows: \\[y_{max}\\approx 60a^{3/2}\\sum_{n}\\Delta^{2}y_{n}\\psi_{2}^{+\\ a,b}(n). \\tag{22}\\] The parameters \\(a\\) and \\(b\\) can also be estimated by locally maximizing the sum on the left-hand side of (21). Thus we find, in the sequence \\(\\Delta^{2}y_{n}\\), the pattern that best corresponds to the positive part of the wavelet \\(\\psi_{2}^{a,b}\\). To avoid the situation in which the next wave, immediately following the previous one, could distort our findings, we use here the positive part \\(\\psi_{2}^{+\\ a,b}\\), not the whole wavelet \\(\\psi_{2}^{a,b}\\).
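A minimal sketch of this estimation step, combining the 7-day smoothing, the second differences, the rule \\(t_{0}-t_{1}=1.319/s\\) and the saturation-level estimate (22); the variable names are ours and the wave-detection logic is deliberately simplified (it assumes a completed first wave without noise-induced early sign changes):

```python
import numpy as np

def psi2(x):
    """Second derivative of the logistic function, Eq. (12)."""
    e = np.exp(-x)
    return (e**2 - e) / (1.0 + e)**3

def estimate_first_wave(y_star):
    """Estimate (a, b, y_max) of the first logistic wave in a cumulative case series."""
    y = np.convolve(y_star, np.ones(7) / 7.0, mode="valid")  # 7-day moving average
    m = np.arange(1, len(y) - 1)
    d2 = y[2:] - 2.0 * y[1:-1] + y[:-2]  # second differences at interior points
    i0 = int(np.argmax(d2 < 0))          # first sign change of the 2nd differences
    b = m[i0]
    t_max = m[int(np.argmax(d2[:i0]))]   # maximum of the 2nd differences before b
    a = (b - t_max) / 1.319              # zero-to-maximum distance is about 1.319*a
    w = psi2((m - b) / a) / np.sqrt(a)   # wavelet psi_2^{a,b} sampled at the days
    y_max = 60.0 * a**1.5 * np.sum(d2 * np.maximum(w, 0.0))  # Eq. (22)
    return a, b, y_max
```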
The saturation level of the first wave can also be estimated as twice the value of the sequence \\((y_{n})\\) at the point where (\\(\\Delta^{2}y_{n}\\)) changes sign (the inflection point), or as the value of \\((y_{n})\\) at the maximum of (\\(\\Delta^{2}y_{n}\\)) multiplied by \\(1/0.211\\) (the zero of the third derivative). Having found the values of the parameters \\(a\\), \\(b\\), and \\(y_{max}\\) for the first wave, we create a new time series by subtracting the first wave from \\(y_{n}\\), i.e., \\[z_{n}=y_{n}-\\frac{y_{max}}{1+\\exp(-\\frac{n-b}{a})},\\] and with the sequence \\(z_{n}\\) we proceed in the same way as with \\(y_{n}\\), calculating successive logistic waves. After this we use the nonlinear Generalized Reduced Gradient method to optimize the values of the saturation levels (but not the \\(a\\)'s and \\(b\\)'s). All data were collected from the [https://www.worldometers.info/coronavirus/](https://www.worldometers.info/coronavirus/) platform.

### The spread of infections in the United States

Let us use the theory to build a model for the total cumulative number of individuals reported to be infected by SARS-CoV-2 in the USA. We assumed an observation period of 189 days, from March 13 (\\(n=1\\), the first day when the number of cases exceeded \\(2,000\\)) to September 17, 2020 (\\(n=189\\)). All calculations were performed in Excel. Using the above-described procedure we obtained the approximating function as a sum of the following waves \\[f(x)=\\frac{630,913}{1+\\exp(-\\frac{x-25}{6.6})}+\\frac{1,085,184}{1+\\exp(-\\frac{ x-52}{9})}+\\frac{3,288,916}{1+\\exp(-\\frac{x-126}{14.6})}+\\frac{2,846,457}{1+ \\exp(-\\frac{x-174}{23.3})}. \\tag{23}\\] Figure 3 shows the scaled positive part of the wavelet \\(\\psi_{2}^{+\\ a,b}\\), \\(a=6.6,b=25\\), fitted to the second differences (\\(\\Delta^{2}y_{n}\\)). Figure 4 shows the total cumulative number of individuals reported to be infected by SARS-CoV-2 in the USA in the period March 13 (\\(n=1\\)) - September 17 (\\(n=189\\)) (blue points) and the approximating function \\(f(x)\\) (23) (red points). The MAD (Mean Absolute Deviation) value of the approximation equals \\[\\mbox{MAD}=\\frac{1}{189}\\sum_{n=1}^{189}|y_{n}-f(n)|=20,882.53.\\]

Figure 4: Approximation of the total number of infections in the USA, blue points - observed values, red line - approximating function \\(f(x)\\) (23)

Figure 3: Scaled first wave’s wavelet \\(\\psi_{2}^{+\\ a,b}\\), \\(a=6.6,b=25\\), fitted to the data

We have calculated the SMAPE (Symmetric Mean Absolute Percent Error) value in two versions: \\[\\text{SMAPE}_{1} =\\frac{2}{189}\\sum_{n=1}^{189}\\frac{|y_{n}-f(n)|}{y_{n}+f(n)}=0.08217,\\] \\[\\text{SMAPE}_{2} =\\frac{\\sum\\limits_{n=1}^{189}|y_{n}-f(n)|}{\\sum\\limits_{n=1}^{189 }(y_{n}+f(n))}=0.003679.\\] The error \\(\\text{SMAPE}_{1}\\) strongly depends on the initial values of the time series.
If we omit the first 19 observations, then the error calculated for the shorter period of 170 days from April 1 (\\(n=20\\)) to September 17 reduces to \\[\\text{SMAPE}_{1}=\\frac{2}{170}\\sum_{n=20}^{189}\\frac{|y_{n}-f(n)|}{y_{n}+f(n)}= 0.01195.\\]

### The spread of infections in the United Kingdom

The same procedure applied to the total cumulative number of individuals reported to be infected by SARS-CoV-2 in the United Kingdom for the period of 201 days from March 13 (\\(n=1\\)) to September 29 (\\(n=201\\)) gives the following approximating function (see Figure 5) \\[f(x)= \\frac{130,000}{1+\\exp(-\\frac{x-29}{7.4})}+\\frac{83,595}{1+\\exp(- \\frac{x-52}{6.1})}+\\frac{30,441}{1+\\exp(-\\frac{x-68}{5.6})}+\\frac{29,688}{1+ \\exp(-\\frac{x-86}{6.2})}+\\frac{3,405}{1+\\exp(-\\frac{x-102}{4.7})}\\] \\[+\\frac{74,547}{1+\\exp(-\\frac{x-152}{18.7})}+\\frac{194,489}{1+\\exp (-\\frac{x-200}{7.6})}. \\tag{24}\\] The approximation errors are: \\(\\text{MAD}=1,443.37\\), \\(\\text{SMAPE}_{1}=0.05955\\) (for the shorter period of 182 days, beginning from April 1, \\(\\text{SMAPE}_{1}=0.007123\\)), \\(\\text{SMAPE}_{2}=0.003047\\).

Figure 5: Approximation of the total number of infections in the UK, blue points - observed values, red line - approximating function \\(f(x)\\) (24)

## Conclusions and further work

In this paper, we have proved, by using the Eulerian numbers, properties of the logistic function related to the zeros of its successive derivatives. We also used logistic wavelets to estimate the parameters of a logistic curve that best fits a given time series with a potential logistic trend. Then we showed, based on data from the United States and the United Kingdom, that the total reported number of SARS-CoV-2 infections can be modeled, in a natural way, as a sum of several logistic functions. The theory and the procedure can be applied to model the number of infections in any country or region. In our further work, we intend to use, in a similar way, the logistic wavelets of higher order (see Sec. 4). Using some appropriate special numbers, we are going to define analogous wavelets for the Gompertz function (for some initial calculations, see [13], [14]) or for the fractional logistic functions (for some preliminary theorems, see [15]).

**Funding statement** This research was partially funded by the 'IDUB against COVID-19' project granted by the Warsaw University of Technology (Warsaw, Poland) under the program Excellence Initiative: Research University (IDUB).

## References

* [1] B. Cazelles, K. Cazelles and M. Chavez, Wavelet analysis in ecology and epidemiology: impact of statistical tests. _J. R. Soc. Interface_ **11** (2014): 20130585. [http://dx.doi.org/10.1098/rsif.2013.0585](http://dx.doi.org/10.1098/rsif.2013.0585) * [2] I. Daubechies, _Ten lectures on wavelets_, 2nd ed., Philadelphia: SIAM, 1992. CBMS-NSF regional conference series in applied mathematics 61 * [3] H. B. Dwight, _Tables of integrals and other mathematical data_, 4th ed., The Macmillan Company, New York, 1961. * [4] A. S. Fokas, N. Dikaios and G. A. Kastis, Mathematical models and deep learning for predicting the number of individuals reported to be infected with SARS-CoV-2, _J. R. Soc. Interface_ **17** (2020): 20200494. [http://dx.doi.org/10.1098/rsif.2020.0494](http://dx.doi.org/10.1098/rsif.2020.0494) * [5] G. R. Franssens, Functions with derivatives given by polynomials in the function itself or a related function, _Analysis Mathematica_ **33** (2007), 17-36. * [6] R. L. Graham, D. E. Knuth and O.
Patashnik, _Concrete Mathematics: A Foundation for Computer Science_, Reading, MA: Addison-Wesley, 1994.
* [7] B. T. Grenfell, O. N. Bjornstad and J. Kappey, Travelling waves and spatial hierarchies in measles epidemics, _Nature_ **414** (2001), 716-723. https://doi.org/10.1038/414716a
* [8] W. O. Kermack, A. G. McKendrick, A contribution to the mathematical theory of epidemics, _Proc. R. Soc. Lond. A_ **115** (1927), 700-721. https://doi.org/10.1098/rspa.1927.0118
* [9] S. G. Krantz, P. Polyakov and A. S. R. S. Rao, True epidemic growth construction through harmonic analysis, _J. Theor. Biol._ **494** (2020): 110243. https://doi.org/10.1016/j.jtbi.2020.110243
* [10] A. I. Lavrova, E. B. Postnikov, O. A. Manicheva and B. I. Vishnevsky, Bi-logistic model for disease dynamics caused by Mycobacterium tuberculosis in Russia, _R. Soc. Open Sci._ **4** (2017): 171033. https://doi.org/10.1098/rsos.171033
* [11] G. Rzadkowski, Eulerian numbers and Riccati's differential equation, in: T. E. Simos (Ed.), Proceedings of ICNAAM 2006, Wiley-VCH Verlag (2006), 291-294.
* [12] G. Rzadkowski, Derivatives and Eulerian numbers, _Amer. Math. Monthly_ **115** (2008), 458-460.
* [13] G. Rzadkowski, W. Rzadkowski, P. Wojcicki, On some connections between the Gompertz function and special numbers, _J. Nonlinear Math. Phys._ **3** (2015), 374-380. http://dx.doi.org/10.1080/14029251.2015.1079419
* [14] G. Rzadkowski, I. Glazewska, K. Sawinska, The Gompertz function and its applications in management, _Foundations of Management_ **7** (2015), 185-190. DOI: 10.1515/fman-2015-0035
* [15] G. Rzadkowski, M. Urlinska, Some applications of the generalized Eulerian numbers, _J. Comb. Theory Ser. A_ **163** (2019), 85-97. https://doi.org/10.1016/j.jcta.2018.11.012
* [16] E. Vanucci, L. Vanucci, Forecast Covid-19 end date in Italy by logistics waves, https://www.researchgate.net/publication/341104205 (accessed 21 August 2020).
* [17] P. Zhou, X. Yang, X. Wang, et al., A pneumonia outbreak associated with a new coronavirus of probable bat origin, _Nature_ **579** (2020), 270-273. https://doi.org/10.1038/s41586-020-2012-7
**Logistic wavelets and logistic function: An application to model the spread of SARS-CoV-2 virus infections**

**Grzegorz Rzadkowski**

Warsaw University of Technology, str. Narbutta 85, 02-524 Warsaw, Poland, e-mail: [email protected]

Keywords: Logistic wavelet, logistic equation, logistic function, SARS-CoV-2 infections, Eulerian number, Riccati's differential equation.

2020 Mathematics Subject Classification: 92D30, 65T60, 11B83

In the present paper, we model the cumulative number of persons reported to be infected by the SARS-CoV-2 virus, in a country or a region, by a sum of logistic functions. For a given logistic function, using Eulerian numbers, we find the zeros of its successive derivatives and their relationship with the saturation level of this function. In a given time series with a potential logistic trend, we use its second differences to determine the points corresponding to these zeros. To estimate the parameters of the approximating logistic function, we define and use logistic wavelets. We then apply the theory to the cases of SARS-CoV-2 infections in the United States and the United Kingdom.
# VIRAL SLAM: Tightly Coupled Camera-IMU-UWB-Lidar SLAM

Thien-Minh Nguyen, Shenghai Yuan, Muqing Cao, Thien Hoang Nguyen, and Lihua Xie

This work was supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, under the Grant Call 10013 - Wallenberg-NTU Presidential Postdoctoral Fellowship 2020. (Corresponding Author: Thien-Minh Nguyen.) The authors are with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, 50 Nanyang Avenue (e-mail: {thienminh.nguyen@, shyuan@, mqcao@, e180071@e., elhxie@}ntu.edu.sg).

## I Introduction

Localization is arguably one of the most important capabilities for mobile robots, especially for Unmanned Aerial Vehicles (UAVs). A common approach to ensure reliable and accurate localization is to combine multiple sensors for their complementary advantages as well as redundancy. For example, since lidar is not affected by lighting conditions or a lack of visual features, which can easily destabilize most visual-inertial odometry (VIO) systems, the robot can still rely on this type of sensor for localization in low-light or low-texture conditions. In addition, the camera can enable loop-closure capability, while the lidar pointcloud map can help augment the visual features' depth estimation process [1, 2]. This is one of the main advantages that motivates us to develop a tightly coupled camera-lidar-IMU-based Simultaneous Localization and Mapping (SLAM) system in this paper.

Besides the aforementioned benefits of a visual-lidar localization system, integrating Ultra-wideband (UWB) into the SLAM system can provide another layer of backup in case both lidar and camera lose track, and also allows the user to obtain global localization information relative to the inspected object [3, 4]. However, to successfully integrate UWB with SLAM, especially in the real-time localization process, one must first estimate the coordinates of the anchors in the SLAM coordinate frame L, which is the methodology used in previous works [5, 6]. In this paper, we propose a new approach. Specifically, using the distance measurements between the anchors, we can set up nominal coordinates for the anchors, which effectively define a preferred frame W that aligns with the mission to be conducted in the environment (more details in Sec. III-A); we then further refine the transform between L and W in the BA process. Subsequently, the anchors' coordinates in W can be converted to L and used for constructing the range-based factors in the optimization over the local sliding window. The separation of estimating the anchor coordinates and estimating the robot states is a deliberate choice to ensure convergence, especially when the movement in the sliding window is too short and lacks the excitation needed for the anchor position estimates to converge. It is also noted that we focus on a simple yet effective UWB network of two or three anchors, with multiple body-offset ranging nodes on the UAV.

Fig. 1: Hardware setup of the VIRAL SLAM system on a UAV: a hardware-synchronized stereo camera rig, a 400 Hz IMU, two 16-channel lidars, four body-offset UWB ranging nodes and a crystal prism that is tracked by a Leica total station for millimeter-accuracy groundtruth.

This simple network allows relatively accurate
initialization of the robot and anchor positions in W, which facilitates accurate and seamless integration of UWB into the SLAM system.

The contributions of our work can be stated as follows:

* We propose a comprehensive SLAM framework that tightly integrates multiple sensors of different sensing modalities, i.e. lidars, cameras, IMU and UWB ranging sensors, in a seamless manner.
* We propose a map-matching marginalization (MMM) scheme for visual features using the local map constructed from lidar pointclouds.
* We devise a loop closure scheme that is triggered by visual place recognition and further refined via a two-stage pointcloud alignment.
* We propose a novel scheme to fuse UWB, where estimation of the anchor positions and ranging bias is delegated to the bundle adjustment (BA) thread, and their values are fixed in the local sliding-window optimization for the fusion of UWB ranges.
* We conduct extensive experiments to validate VIRAL SLAM and compare it with other state-of-the-art methods in a variety of real-world scenarios and high-fidelity simulations.

## II Related Works

To the best of our knowledge, our work features a tightly coupled SLAM system that integrates one of the most comprehensive sensor suites. While localization methods based mainly on camera or lidar (with or without IMU) are abundant, only a handful of works have investigated tightly coupled visual and lidar information in the literature. In [1], Zhang et al. proposed a method where VIO and lidar data are employed in a cascaded manner: high-rate VIO data is used to help initialize the scan-matching process, while the pointcloud map can be used to help retrieve the visual features' depth. On the other hand, in [7, 8] Zuo et al. proposed an MSCKF framework to asynchronously update the robot states when lidar and visual features are obtained, with IMU used for propagation in between these states. In [9], stereo camera, IMU, and lidar were fused together in a unified framework, synchronized with the camera data. We note that loop closure and BA are not considered in the aforementioned works (as opposed to our proposed VIRAL SLAM); therefore drift remains an intrinsic problem in these approaches. To address the drift issue, in [10], Graeter et al. studied the problem of estimating the scale of a pose graph obtained from monocular visual SLAM with lidar information. In [11], Shao et al. considered an approach similar to [1], but also used the camera and ICP for loop closure; however, they require hardware-synchronized camera-lidar messages, which can only produce very low-rate data. In [2], Shan et al. proposed a loose integration of the VIO output from VINS-Mono [12] into LIO-SAM [13], which itself loosely integrates the LeGO-LOAM output [14] with IMU preintegration, in a GTSAM pose-graph optimization framework. This approach can be unreliable, as the whole chain depends on whether the core lidar-based process runs well: if there is a low-texture case in which lidar localization is unstable, its error can reverberate up the chain, which appears to be the case in some of our experiments. We also note that the aforementioned works [1, 2, 10, 11, 13, 14] do not consider the integration of multiple lidars as VIRAL SLAM does. On the other hand, while the MLOAM and BLS methods [15, 16] did address this issue, they focus purely on lidar, with no camera or IMU involved.
In [17], we proposed a multi-input lidar-inertia odometry and mapping scheme called MILIOM, which clearly demonstrated robustness, accuracy, and real-time performance; the VIRAL SLAM system is developed on top of this lidar-based system. Another trend in the literature is the use of UWB to aid the VIO or SLAM process. For example, UWB has been integrated with monocular VIO for drift correction [5, 6], and can be used as a variable baseline for cameras on different UAVs [18]. In recent years, VIO, UWB and lidar have also been used in a loosely coupled manner for relative localization [19, 20, 21, 22, 23, 24]. In our previous work [4], a tightly-coupled fusion of body-offset range measurements with IMU preintegration and pose displacements derived from LOAM/VINS subsystems was proposed; we showed that the system can achieve better accuracy than traditional onboard self-localization methods. Based on this work, tight coupling of UWB with lidar and IMU preintegration factors was investigated in [3]. However, this preliminary work is still restrictive, in that the lidar processing pipeline inherited from LIO-Mapping [25] was quite inefficient, and no visual information was considered.

The remainder of the paper is organized as follows: in Sec. III, we lay out some basic definitions and our general approach towards synchronization. Sec. IV then presents the main function blocks that process the sensor data for the local sliding-window optimization, while Sec. V goes into the details of the global map management, which includes the loop closure and BA processes. We demonstrate the capability of our method via several experiments on public and simulated datasets in Sec. VI. Finally, Sec. VII concludes our work.

## III Preliminaries

### _Coordinate frames_

In this paper, we define a so-called _local frame_ L whose origin coincides with the position of the body frame at the initial time, whose \(z\) axis points in the direction opposite to gravity, and in which the robot's initial yaw angle is zero. In addition to L, we fix the coordinates of the anchors, which defines another so-called _world frame_ W. Since the three anchors reside on a plane, the 3D coordinates of these anchors in W can be determined from the distances between the anchors, plus a nominal height \(z^{*}\). Fig. 2 describes our coordinate systems in more detail.

### _State estimates_

At each time step \(t_{k}\), we define a sliding window \(\hat{\mathcal{T}}_{k}\) consisting of the robot's state estimates over the last \(M\) time steps as follows:
\[\hat{\mathcal{T}}_{k}=\left(\hat{\mathcal{X}}_{w},\hat{\mathcal{X}}_{w+1},\ldots,\hat{\mathcal{X}}_{k}\right),\quad w\triangleq k-M+1, \tag{1}\]
\[\hat{\mathcal{X}}_{k}=\left(\hat{\mathbf{q}}_{k},\hat{\mathbf{p}}_{k},\hat{\mathbf{v}}_{k},\hat{\mathbf{b}}_{k}^{a},\hat{\mathbf{b}}_{k}^{\omega}\right)\in\mathrm{SO}(3)\times\mathbb{R}^{12}, \tag{2}\]
where \(\hat{\mathbf{q}}_{k}\), \(\hat{\mathbf{p}}_{k}\), \(\hat{\mathbf{v}}_{k}\) are respectively the orientation quaternion, position and velocity estimates w.r.t. the local frame L at time \(t_{k}\), and \(\hat{\mathbf{b}}_{k}^{a},\hat{\mathbf{b}}_{k}^{\omega}\) are respectively the IMU accelerometer and gyroscope biases.
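As an illustration (not the authors' implementation), the window (1) and state (2) can be mirrored by a small container like the following; the field names and the quaternion convention are our own choices.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class State:
    """One sliding-window state X_k as in (2)."""
    q: np.ndarray = field(default_factory=lambda: np.array([1.0, 0.0, 0.0, 0.0]))  # quaternion (w, x, y, z)
    p: np.ndarray = field(default_factory=lambda: np.zeros(3))    # position in frame L
    v: np.ndarray = field(default_factory=lambda: np.zeros(3))    # velocity in frame L
    b_a: np.ndarray = field(default_factory=lambda: np.zeros(3))  # accelerometer bias
    b_w: np.ndarray = field(default_factory=lambda: np.zeros(3))  # gyroscope bias

class SlidingWindow:
    """T_k = (X_w, ..., X_k) with w = k - M + 1, as in (1)."""
    def __init__(self, M: int):
        self.M, self.states = M, []

    def push(self, x: State):
        self.states.append(x)
        if len(self.states) > self.M:   # the oldest state leaves the window
            self.states.pop(0)          # (marginalized as a prior in the paper)
```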
Besides, we also define the states for the inverse depths of the \(N_{V}^{k}\) visual features being tracked on the sliding window as:
\[\hat{\Lambda}_{k}\triangleq(\hat{\lambda}^{1},\hat{\lambda}^{2},\ldots,\hat{\lambda}^{N_{V}^{k}})\in\mathbb{R}^{N_{V}^{k}}. \tag{3}\]

#### III-B1 Global pose graph and UWB parameters

A global pose graph is built from key frames marginalized out of the sliding-window estimation process. For each key frame \(i\) stored in memory, we define its pose estimate as \(\hat{\mathbf{T}}_{i}\). The pose estimates are updated in the BA process whenever a certain number of new key frames have been admitted or a loop factor is obtained. Besides the key frame poses, we also seek to estimate the following UWB-related parameters:
\[{}^{\mathtt{L}}_{\mathtt{W}}\mathbf{T}=\left({}^{\mathtt{L}}_{\mathtt{W}}\mathbf{R},{}^{\mathtt{L}}_{\mathtt{W}}\mathbf{t}\right),\ {}^{\mathtt{L}}_{\mathtt{W}}\mathbf{R}\in\mathrm{SO}(3),\ {}^{\mathtt{L}}_{\mathtt{W}}\mathbf{t}\in\mathbb{R}^{3};\quad\mathbf{b}^{r}\in\mathbb{R}, \tag{4}\]
where \({}^{\mathtt{L}}_{\mathtt{W}}\mathbf{T}\) is the coordinate transform between the local and world frames introduced in Sec. III-A, and \(\mathbf{b}^{r}\) is the ranging bias, which is present in our problem due to the use of extension cables to place the UWB ranging nodes at different points on the robot [26]. Taking VIO as an analogy, \({}^{\mathtt{L}}_{\mathtt{W}}\mathbf{T}\) and \(\mathbf{b}^{r}\) are similar to the extrinsic and intrinsic parameters of the UWB ranging and communication network.

### _Synchronization_

In this work, our synchronization scheme is a combination of our previous schemes for multiple lidars with IMU [17] and for lidar with IMU and UWB data [3], with stereo images being the new addition. Fig. 3 illustrates the scheme. Briefly, one lidar is arbitrarily chosen as the primary, whose timestamps determine the sliding window's time steps, and the other lidars' inputs are merged into the primary lidar's, yielding a combined feature cloud (CFC) treated as a single sensor input. IMU data are associated with the time steps for propagation and preintegration. UWB samples are grouped into "bundles" based on the intervals between the time steps: we denote the timestamp of a UWB sample as \(\tau_{k}^{i}\), which implies that \(\tau_{k}^{i}\in(t_{k-1},\ t_{k}]\). Knowing this allows us to associate the sample with the correct states when constructing the cost factors later. The cameras are triggered by an external hardware apparatus, so their images can be synchronized into pairs before being further synchronized with the lidar CFCs. For each time step, we admit the image pair that is closest in time to it and measure the time delay \(t_{k}^{d}\); this delay is used to compensate the visual features' pixel coordinates when they are tracked in the image plane. Fig. 3 is also a snapshot of the sliding window at time \(t\), where all of the sensor data needed for constructing the cost function in the local sliding-window optimization block are available. After the optimization process elapses, we obtain the optimized states \(\hat{\mathcal{X}}_{w},\hat{\mathcal{X}}_{w+1},\ldots,\hat{\mathcal{X}}_{k}\) and nominate one of them as a key frame candidate \(\mathcal{K}\) for the global map. This is the snapshot of the system shown in Fig. 4.
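The bundling rule \(\tau_{k}^{i}\in(t_{k-1},t_{k}]\) can be sketched as follows; the data layout and the use of Python's bisect are our own illustrative assumptions.

```python
from bisect import bisect_left

def bundle_uwb(sample_times, range_meas, step_times):
    """Group UWB samples into per-step bundles: tau in (t_{k-1}, t_k] goes to step k.

    step_times: sorted sliding-window time steps [t_0, t_1, ..., t_k]
    returns: dict mapping step index k -> list of (tau, d) pairs
    """
    bundles = {k: [] for k in range(1, len(step_times))}
    for tau, d in zip(sample_times, range_meas):
        k = bisect_left(step_times, tau)      # smallest k with t_k >= tau, so t_{k-1} < tau
        if 1 <= k < len(step_times):          # discard samples outside the window
            bundles[k].append((tau, d))
        # the factor for step k will later interpolate between states X_{k-1} and X_k
    return bundles
```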
### _System overview_

Fig. 4 presents the main function blocks of our VIRAL SLAM system. The most extensive of all is the real-time localization thread, where all sensor data are synchronized and processed to eventually create factors in a cost function that is optimized using the ceres solver [27]. Besides this time-critical thread, another thread runs in the background to manage the key frames, detect loop closures, and perform BA optimization. The details of these blocks are described in the next sections.

Fig. 3: Synchronization among the sensors. The light blue circles represent the interpolated IMU samples.

Fig. 2: A so-called world frame W can be defined by fixing the coordinates of the anchor nodes using the anchor-to-anchor distances. On the other hand, the SLAM system takes reference to a local coordinate frame L that coincides with the initial key frame's pose. To successfully combine UWB with SLAM, the transform \({}^{\mathtt{L}}_{\mathtt{W}}\mathbf{T}\) needs to be resolved.

## IV Real-Time Localization Function Blocks

### _Local sliding window optimization_

To estimate the states on the local sliding window, we construct and optimize the following cost function:
\[\begin{split} f(\hat{\mathcal{T}}_{k},\hat{\Lambda}_{k})\triangleq&\sum_{m=w+1}^{k}\left\|\mathbf{r}_{\mathcal{I}}(\hat{\mathcal{X}}_{m-1},\hat{\mathcal{X}}_{m},\mathcal{I}_{m})\right\|^{2}_{\mathbf{P}^{-1}_{\mathcal{I}_{m}}}\\ &+\sum_{m=w}^{k}\sum_{i=1}^{N_{C}^{m}}\rho_{H}\left(\left\|\mathbf{r}_{\mathcal{L}}(\hat{\mathcal{X}}_{m},\mathcal{L}^{i}_{m})\right\|^{2}_{\mathbf{P}^{-1}_{\mathcal{L}^{i}_{m}}}\right)\\ &+\sum_{m=w}^{k}\sum_{i=1}^{N_{U}^{m}}\left\|\mathbf{r}_{\mathcal{U}}(\hat{\mathcal{X}}_{m-1},\hat{\mathcal{X}}_{m},{}^{\mathtt{L}}_{\mathtt{W}}\hat{\mathbf{T}},\hat{\mathbf{b}}^{r},\mathcal{U}^{i}_{m})\right\|^{2}_{\mathbf{P}^{-1}_{\mathcal{U}^{i}_{m}}}\\ &+\sum_{i=1}^{N_{V}^{k}}\sum_{b\in\mathcal{C}^{i}}\rho_{A}\left(\left\|\mathbf{r}_{\mathcal{V}}(\hat{\mathcal{X}}_{m_{a}},\hat{\mathcal{X}}_{m_{b}},\bar{\lambda}^{i},\mathcal{V}^{i}_{ab})\right\|^{2}_{\mathbf{P}^{-1}_{\mathcal{V}^{i}_{ab}}}\right),\end{split} \tag{5}\]
where \(\rho_{H}(\cdot)\) and \(\rho_{A}(\cdot)\) are the Huber and arctan loss functions used to reduce the effects of outliers; \(\mathcal{I}_{m}\), \(\mathcal{L}^{i}_{m}\), \(\mathcal{U}^{i}_{m}\), \(\mathcal{V}^{i}_{ab}\) are the elementary observations from the IMU, lidar, UWB and visual features, respectively; \(N_{C}^{m}\in\mathbb{N}\) is the number of _feature-map matching_ (FMM) coefficients extracted from the CFC \(\mathcal{F}_{m}\); \(N_{U}^{m}\in\mathbb{N}\) is the number of UWB samples obtained in the interval \((t_{m-1},t_{m}]\); \(N_{V}^{k}\in\mathbb{N}\) is the number of visual features tracked on the sliding window from \(t_{w}\) to \(t_{k}\); and \(\mathcal{C}^{i}\) refers to the set of camera frames that observe the visual feature, excluding the first frame \(\mathbb{C}_{a}\). Here \(\bar{\lambda}^{i}\) can be either the state estimate \(\hat{\lambda}^{i}\) or the marginalized inverse depth \(\tilde{\lambda}^{i}\) of the MMM features. The cost function (5) summarizes the coupling of each sensor's factors with the state estimates. We elaborate on how to construct these factors in the next sections.
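Schematically, assembling (5) amounts to summing four robustified factor groups, as in the sketch below; the residual callbacks are placeholders for the factor constructions in the following sections, and in the actual system the robustifiers and minimization are delegated to the ceres solver [27].

```python
import numpy as np

def huber(s, delta=1.0):
    """Huber robustifier applied to a squared residual norm s (ceres-style rho(s))."""
    return s if s <= delta**2 else 2.0 * delta * np.sqrt(s) - delta**2

def arctan_rho(s, a=1.0):
    """Arctan robustifier: saturates for large s, bounding outlier influence."""
    return a * np.arctan(s / a)

def window_cost(states, lam, imu_factors, lidar_factors, uwb_factors, vis_factors,
                r_I, r_L, r_U, r_V):
    """Assemble (5); r_* return whitened residual vectors (P^{-1/2} already applied)."""
    f = 0.0
    for m, I in imu_factors:                    # IMU preintegration factors
        f += np.sum(r_I(states[m - 1], states[m], I) ** 2)
    for m, L in lidar_factors:                  # lidar FMM factors, Huber-robustified
        f += huber(np.sum(r_L(states[m], L) ** 2))
    for m, U in uwb_factors:                    # UWB range factors
        f += np.sum(r_U(states[m - 1], states[m], U) ** 2)
    for (ma, mb, i), V in vis_factors:          # visual factors, arctan-robustified
        f += arctan_rho(np.sum(r_V(states[ma], states[mb], lam[i], V) ** 2))
    return f
```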
### _Sensor data processing_

#### IV-B1 Lidar & IMU

We refer to our previous work [17] for the details on how to construct the lidar and IMU factors from the sensor data.

#### IV-B2 UWB

Similar to [3], we define each UWB sample as \(\mathcal{U}_{m}^{i}=\left(\bar{d}^{i},\,{}^{\mathtt{W}}\mathbf{x}^{i},\mathbf{y}^{i},\tau_{m}^{i},t_{m-1},t_{m}\right)\), where \(\bar{d}^{i}\) is the range measurement, \({}^{\mathtt{W}}\mathbf{x}^{i}\) is the coordinate of the anchor w.r.t. W, \(\mathbf{y}^{i}\) is the coordinate of the UAV ranging node in the body frame at the sample time, \(\tau_{m}^{i}\) is the message's timestamp, and \(t_{m-1}\), \(t_{m}\) are the preceding and succeeding time steps of \(\tau_{m}^{i}\). What is different now is that the distance measurement \(\bar{d}^{i}\) at time \(\tau_{m}^{i}\) is modeled as the norm of the vector \({}^{\mathtt{L}}\mathbf{d}^{i}\), corrupted by Gaussian noise and the ranging bias:
\[\bar{d}^{i}=\left\|{}^{\mathtt{L}}\mathbf{d}^{i}\right\|+\eta_{\mathcal{U}^{i}}+\mathbf{b}^{r},\quad\eta_{\mathcal{U}^{i}}\sim\mathcal{N}(0,\sigma_{\mathcal{U}^{i}}^{2}), \tag{6}\]
\[{}^{\mathtt{L}}\mathbf{d}^{i}\triangleq\mathbf{p}_{\tau}+\mathbf{R}_{\tau}\,\mathbf{y}^{i}-{}^{\mathtt{L}}_{\mathtt{W}}\mathbf{R}\,{}^{\mathtt{W}}\mathbf{x}^{i}-{}^{\mathtt{L}}_{\mathtt{W}}\mathbf{t}, \tag{7}\]
where \((\mathbf{R}_{\tau},\mathbf{p}_{\tau})\) denotes the body pose at the sample time, interpolated from the states at \(t_{m-1}\) and \(t_{m}\); this is why the factor \(\mathbf{r}_{\mathcal{U}}\) in (5) depends on both \(\hat{\mathcal{X}}_{m-1}\) and \(\hat{\mathcal{X}}_{m}\).

For key frame marginalization, we find a number of nearest neighbors of the state at time \(t_{k-M/2}\), and if the relative distance or relative rotation to all of these neighbours exceeds a certain threshold, the information associated with this time step is marginalized as a prior.

#### Key frame selection

The selection of key frames is needed for the construction of a local pointcloud map for the FMM process. This selection takes place before the optimization process and is based on the IMU-propagated pose \(\hat{\mathbf{T}}_{k}\). The set of these key frames is the union \(\{\mathcal{K}_{a}\}\cup\{\mathcal{K}_{b}\}\cup\{\mathcal{K}_{c}\}\), where \(\{\mathcal{K}_{a}\}\) is the set of the last \(M\) key frames, \(\{\mathcal{K}_{b}\}\) is the set of \(M\) nearest neighbors of \(\hat{\mathbf{T}}_{k}\), and \(\{\mathcal{K}_{c}\}\) is a set of key frames, each representing its \(2\,\mathrm{m}\times 2\,\mathrm{m}\times 2\,\mathrm{m}\) voxel cell, that lie within a radius of \(\hat{\mathbf{T}}_{k}\).
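A sketch of this key-frame gathering is given below; the KD-tree queries, the search radius, and the 2 m voxel hashing are illustrative choices on our part.

```python
import numpy as np
from scipy.spatial import cKDTree

def select_keyframes(kf_positions, p_hat, M, radius=20.0, voxel=2.0):
    """Union {K_a} U {K_b} U {K_c} of key-frame indices for the local FMM map."""
    n = len(kf_positions)
    last_M = set(range(max(0, n - M), n))               # {K_a}: last M key frames
    tree = cKDTree(kf_positions)
    _, nn = tree.query(p_hat, k=min(M, n))              # {K_b}: M nearest neighbors
    nearest = set(np.atleast_1d(nn).tolist())
    seen, reps = set(), set()
    for j in tree.query_ball_point(p_hat, r=radius):    # {K_c}: one representative
        key = tuple(np.floor(kf_positions[j] / voxel).astype(int))  # per voxel cell
        if key not in seen:
            seen.add(key)
            reps.add(j)
    return last_M | nearest | reps
```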
### _Bundle Adjustment_

For the BA process, our task is to construct and optimize the following cost function:
\[\begin{split} f(\hat{\mathcal{Y}},{}^{\mathtt{L}}_{\mathtt{W}}\hat{\mathbf{T}},\hat{\mathbf{b}}^{r})\triangleq&\sum_{n=1}^{N}\left\|\mathbf{r}_{1}(\hat{\mathbf{T}}_{n-1},\hat{\mathbf{T}}_{n},{}^{n-1}_{n}\breve{\mathbf{T}})\right\|^{2}_{\mathbf{P}_{1}^{-1}}\\ &+\sum_{(p,c)\in\mathcal{H}}\left\|\mathbf{r}_{2}(\hat{\mathbf{T}}_{p},\hat{\mathbf{T}}_{c},{}^{p}_{c}\breve{\mathbf{T}})\right\|^{2}_{\mathbf{P}_{2}^{-1}}\\ &+\sum_{n=1}^{N}\sum_{i=1}^{N_{U}^{n}}\left\|\mathbf{r}_{3}(\hat{\mathbf{T}}_{n},{}^{\mathtt{L}}_{\mathtt{W}}\hat{\mathbf{T}},\hat{\mathbf{b}}^{r},\breve{\mathcal{U}}^{i}_{n})\right\|^{2}_{\mathbf{P}_{3}^{-1}},\end{split} \tag{10}\]
where \(\hat{\mathcal{Y}}\triangleq(\hat{\mathbf{T}}_{0},\hat{\mathbf{T}}_{1},\ldots,\hat{\mathbf{T}}_{N})\) are the key frames' poses, \({}^{n-1}_{n}\breve{\mathbf{T}}\) and \({}^{p}_{c}\breve{\mathbf{T}}\) are respectively the relative-pose and loop-closure priors, \(\mathcal{H}\) is the set of loop closure pairs, and \(\breve{\mathcal{U}}^{i}_{n}\) is a marginalized UWB measurement whose timestamp \(\tau^{i}_{n}\) is within 0.2 s of the key frame at time \(t_{n}\), i.e.:
\[\breve{\mathcal{U}}^{i}_{n}=\left(\bar{d}^{i},{}^{\mathtt{W}}\mathbf{x}^{i},\mathbf{y}^{i},{}^{t_{n}}_{\tau^{i}_{n}}\breve{\mathbf{T}}\right), \tag{11}\]
where \(\bar{d}^{i}\), \({}^{\mathtt{W}}\mathbf{x}^{i}\), \(\mathbf{y}^{i}\) are defined as in Sec. IV-B2, and \({}^{t_{n}}_{\tau^{i}_{n}}\breve{\mathbf{T}}=({}^{t_{n}}_{\tau^{i}_{n}}\breve{\mathbf{R}},{}^{t_{n}}_{\tau^{i}_{n}}\breve{\mathbf{t}})\) is the relative transform between \(\mathtt{B}_{t_{n}}\) and \(\mathtt{B}_{\tau^{i}_{n}}\), which can be obtained from IMU propagation. The residuals \(\mathbf{r}_{1}(\cdot)\), \(\mathbf{r}_{2}(\cdot)\) over the relative poses are straightforward, while the residual \(\mathbf{r}_{3}\) can be stated as:
\[\mathbf{r}_{3}(\hat{\mathbf{T}}_{n},{}^{\mathtt{L}}_{\mathtt{W}}\hat{\mathbf{T}},\hat{\mathbf{b}}^{r},\breve{\mathcal{U}}^{i}_{n})=\left\|\breve{\mathbf{d}}^{i}\right\|+\hat{\mathbf{b}}^{r}-\bar{d}^{i},\]
\[\breve{\mathbf{d}}^{i}\triangleq\hat{\mathbf{p}}_{n}+\hat{\mathbf{R}}_{n}\left({}^{t_{n}}_{\tau^{i}_{n}}\breve{\mathbf{R}}\,\mathbf{y}^{i}+{}^{t_{n}}_{\tau^{i}_{n}}\breve{\mathbf{t}}\right)-{}^{\mathtt{L}}_{\mathtt{W}}\hat{\mathbf{R}}\,{}^{\mathtt{W}}\mathbf{x}^{i}-{}^{\mathtt{L}}_{\mathtt{W}}\hat{\mathbf{t}}. \tag{12}\]
To ensure the pose graph has enough excitation to help the anchor-related states converge, we do not add the \(\mathbf{r}_{3}\) factors into the BA cost function right from the beginning. Rather, they are only added after the spatial distribution of the key frame poses satisfies a certain condition. Specifically, we calculate the geometric dilution of the key frame positions via the quantity
\[\Gamma=\left(\sum_{n=1}^{N}(\bar{\mathbf{p}}_{n}-\mu)(\bar{\mathbf{p}}_{n}-\mu)^{\top}\right)^{-1},\quad\mu=\frac{1}{N}\sum_{n=1}^{N}\bar{\mathbf{p}}_{n}. \tag{13}\]
We then perform singular value decomposition on \(\Gamma\) to obtain the singular values \(\sigma_{1}\geq\sigma_{2}\geq\sigma_{3}>0\). If \(\sigma_{1}<c_{1}\) and \(\sigma_{1}/\sigma_{3}<c_{2}\), where \(c_{1}\) and \(c_{2}>1\) are user-defined parameters, we start adding the \(\mathbf{r}_{3}(\cdot)\) factors to (10).
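For illustration, the residual (12) and the excitation gate on (13) can be written as follows; the threshold defaults are placeholders, and we assume enough non-coplanar key frames for the scatter matrix to be invertible.

```python
import numpy as np

def r3(p_n, R_n, R_rel, t_rel, R_LW, t_LW, x_w, y_b, b_r, d_meas):
    """UWB range residual (12): predicted anchor-to-node distance plus bias minus measurement."""
    d_vec = p_n + R_n @ (R_rel @ y_b + t_rel) - R_LW @ x_w - t_LW
    return np.linalg.norm(d_vec) + b_r - d_meas

def enough_excitation(kf_positions, c1=0.1, c2=10.0):
    """Gate on the geometric dilution (13) before adding r3 factors to the BA cost."""
    P = np.asarray(kf_positions)
    mu = P.mean(axis=0)
    cov = (P - mu).T @ (P - mu)                  # sum of outer products in (13)
    Gamma = np.linalg.inv(cov)                   # dilution matrix: small when spread is large
    s = np.linalg.svd(Gamma, compute_uv=False)   # sigma_1 >= sigma_2 >= sigma_3
    return s[0] < c1 and s[0] / s[2] < c2
```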
Afterwards, we obtain the estimates of \({}^{\mathtt{L}}_{\mathtt{W}}\mathbf{T}\) and \(\mathbf{b}^{r}\), which can be used for the fusion of the UWB factors in (5).

### _Loop Closure_

To construct the loop priors \({}^{p}_{c}\breve{\mathbf{T}}\) in Sec. V-B, a three-stage process is conducted as follows. First, when a new key frame is admitted, we compare its visual features with the database using the DBoW library. If a match is flagged, we extract the transforms \(\mathbf{T}_{c}\) and \(\mathbf{T}_{p}\), referred to as the current and previous key poses, respectively. Then, we search for a number of key frames that were admitted before and after \(\mathbf{T}_{p}\) to build a local map from their corresponding marginalized CFCs, and proceed to the second stage. At the second stage, we use ICP to align the CFC \({}^{\mathtt{B}_{c}}\mathcal{F}_{c}\) with the local map \({}^{\mathtt{B}_{p}}\mathcal{M}_{p}\), obtaining a fitness score as well as an initial guess of \({}^{\mathtt{B}_{p}}_{\mathtt{B}_{c}}\hat{\mathbf{T}}\). If the fitness score is below a threshold, we proceed to the third stage, where we perform FMM between \({}^{\mathtt{B}_{c}}\mathcal{F}_{c}\) and \({}^{\mathtt{B}_{p}}\mathcal{M}_{p}\) to calculate the FMM coefficients, then construct and optimize the following cost function:
\[f\left({}^{\mathtt{B}_{p}}_{\mathtt{B}_{c}}\hat{\mathbf{T}}\right)=\sum_{i=1}^{N_{\mathcal{L}}^{c}}\rho\left(\left\|\mathbf{r}_{\mathcal{L}}({}^{\mathtt{B}_{p}}_{\mathtt{B}_{c}}\hat{\mathbf{T}},\mathcal{L}^{i}_{c})\right\|^{2}_{\mathbf{P}^{-1}_{\mathcal{L}^{i}_{c}}}\right). \tag{14}\]
After optimizing (14) and obtaining the optimal relative pose \({}^{\mathtt{B}_{p}}_{\mathtt{B}_{c}}\hat{\mathbf{T}}^{*}\), if the ratio \(f({}^{\mathtt{B}_{p}}_{\mathtt{B}_{c}}\hat{\mathbf{T}}^{*})/N_{\mathcal{L}}^{c}\) is below a threshold, \({}^{\mathtt{B}_{p}}_{\mathtt{B}_{c}}\hat{\mathbf{T}}^{*}\) will be registered as a loop closure prior \({}^{p}_{c}\breve{\mathbf{T}}\).

## VI Experiment

### _Datasets_

We first employ our recently published NTU VIRAL dataset1 [28], which features all sensor types covered by VIRAL SLAM. To further demonstrate the robustness of VIRAL SLAM in low-texture conditions, we conduct further experiments on several building-inspection datasets with significant challenges, collected near a building facade. Finally, since no ground truth on the anchor positions and the ranging bias is available, to clearly verify this capability of VIRAL SLAM we employ the AirSim simulator to construct a dataset with absolute ground truth for a more accurate evaluation.

Footnote 1: https://ntu-aris.github.io/ntu_viral_dataset/

### _Comparison_

To compare VIRAL SLAM with other state-of-the-art localization techniques, we run them on all of the aforementioned datasets. All algorithms are run on an NUC 10 computer with a Core i7 processor. Each method is slightly modified and configured for its best performance with the dataset; the details of these modified packages can be found on the NTU VIRAL dataset website1. Since several lidar-based methods are not designed for multiple lidars, we also include experiments of VIRAL SLAM using only the horizontal lidar for a fairer comparison. Tab. I summarizes the Absolute Trajectory Error (ATE) of these methods.
\\begin{table} \\begin{tabular}{c c c c c c c c} \\hline \\hline \\multirow{2}{*}{**Dataset**} & **VINS-Mono** & **VINS-Fusion** & \\multicolumn{2}{c}{**IAO-UM**} & \\begin{tabular}{c} **MILO** \\\\ **SAM** \\\\ **(all lidars)** \\\\ \\end{tabular} & \\begin{tabular}{c} **VIRAL-SLAM** \\\\ **(in lidar,** \\\\ **odom/BA)** \\\\ \\end{tabular} & \\begin{tabular}{c} **VIRAL-SLAM** \\\\ **(all lidars)** \\\\ \\end{tabular} & \\begin{tabular}{c} **VIRAL-SLAM** \\\\ **(in lidar,** \\\\ **odom/BA)** \\\\ \\end{tabular} & \\begin{tabular}{c} **VIRAL-SLAM** \\\\ **(all lidars)** \\\\ \\end{tabular} \\\\ \\hline eee\\_01 & 1.650 / 0.568 & 0.608 / 0.306 & 0.212 / 6.827 & 0.075 & 0.249 & 0.064 / 0.084 & **0.060** / 0.086 \\\\ eee\\_02 & 0.722 / 0.443 & 0.506 / 0.266 & 0.199 / 1.845 & 0.069 & 0.166 & **0.051** / 0.056 & 0.058 / 0.050 \\\\ eee\\_03 & 1.037 / 0.886 & 0.494 / 0.383 & 0.148 / 3.852 & 0.101 & 0.232 & 0.060 / 0.073 & **0.037** / 0.049 \\\\ nya\\_01 & 1.475 / 0.830 & 0.397 / 0.237 & 0.077 / 3.206 & 0.076 & 0.123 & 0.063 / 0.061 & **0.051** / 0.058 \\\\ nya\\_02 & 0.580 / 0.422 & 0.424 / 0.297 & 0.091 / 0.377 & 0.090 & 0.191 & **0.042** / 0.051 & 0.043 / 0.055 \\\\ nya\\_03 & 1.333 / 0.501 & 0.787 / 0.368 & 0.080 / 0.715 & 0.137 & 0.226 & 0.039 / 0.063 & **0.032** / 0.062 \\\\ shs\\_01 & 4.142 / 3.739 & 0.508 / 0.372 & 0.203 / 6.762 & 0.089 & 0.173 & 0.051 / 0.055 & **0.048** / 0.059 \\\\ shs\\_02 & 1.605 / 0.890 & 0.564 / 0.369 & 0.091 / 2.496 & 0.083 & 0.147 & **0.056** / 0.062 & 0.062 / 0.052 \\\\ shs\\_03 & 1.306 / 0.802 & 0.878 / 0.276 & 0.363 / 3.996 & 0.140 & 0.153 & 0.060 / 0.075 & **0.054** / 0.072 \\\\ \\hline bid\\_01 & 3.749 / 3.652 & 2.416 / 2.045 & 0.936 / 14.670 & - & 4.264 & 0.178 / 0.159 & **0.161** / 0.158 \\\\ bid\\_02 & 1.257 / 1.238 & 0.837 / 0.603 & 4.359 / 5.043 & - & 0.257 & 0.752 / 1.320 & **0.343** / 0.603 \\\\ bid\\_03 & 0.670 / 0.659 & 0.914 / 0.814 & 1.961 / 4.789 & - & 3.330 & 2.181 / 1.813 & **0.128** / 0.177 \\\\ \\hline nbh\\_01 & 4.709 / 4.474 & 1.413 / 1.388 & 86.399 / 53.680 / 53.757 & - & 0.321 & 0.149 / 0.200 & **0.146** / 0.194 \\\\ nbh\\_02 & 3.526 / 2.960 & 2.268 / 1.436 & 47.326 / 40.881 / 40.638 & - & 0.369 & **0.084** / 0.142 & 0.096 / 0.162 \\\\ nbh\\_03 & 3.560 / 2.759 & 1.837 / 0.643 & 6.764 / 50.710 / 50.578 & - & 0.282 & **0.091** / 0.114 & 0.098 / 0.141 \\\\ nbh\\_04 & 2.707 / 1. \\begin{table} \\begin{tabular}{c c c c c} \\hline \\hline & \\multicolumn{2}{c}{**Anchor 1**} & \\multicolumn{2}{c}{**Anchor 2**} & \\multicolumn{2}{c}{\\(\\mathbf{b^{\\tau}}\\)} \\\\ \\hline **True values** & 15.000, 0.000, 1.250 & 7.500, -5.000, 1.500 & 0.050 \\\\ \\hline **Initial values** & 15.020, 0.000, 1.000 & 7.450, -5.070, 1.000 & 0.000 \\\\ \\hline nbh\\_01 est. & 15.018, 0.019, 1.245 & 7.449, -5.046, 1.510 & 0.026 \\\\ nbh\\_02 est. & 15.018, 0.038, 1.243 & 7.456, -5.038, 1.484 & 0.021 \\\\ nbh\\_03 est. & 15.011, 0.109, 1.506 & 7.467, -4.995, 1.706 & 0.008 \\\\ nbh\\_04 est. & 15.018, 0.025, 1.253 & 7.451, -5.043, 1.502 & 0.031 \\\\ nbh\\_05 est. & 15.018, 0.055, 1.253 & 7.462, -5.032, 1.449 & 0.020 \\\\ \\hline \\hline \\end{tabular} \\end{table} TABLE III: Anchor coordinates and the final values estimated by the BA process using the AirSim-generated dataset. All values are in m. Note that the coordinates of anchor 0 is fixed at \\((0,0,1)\\). 
| Dataset | Lidars | Lidars+Cameras | Lidars+UWB | Lidars+UWB+Cameras |
|---|---|---|---|---|
| eee_01 | **0.0380** | 0.0390 | 0.0822 | 0.0861 |
| eee_02 | 0.0451 | **0.0347** | 0.0647 | 0.0505 |
| eee_03 | **0.0385** | 0.0438 | 0.0608 | 0.0494 |
| nya_01 | **0.0429** | 0.0436 | 0.0545 | 0.0584 |
| nya_02 | 0.0463 | **0.0416** | 0.0635 | 0.0551 |
| nya_03 | **0.0383** | 0.0392 | 0.0696 | 0.0621 |
| abs_01 | **0.0441** | 0.0483 | 0.0585 | 0.0587 |
| abs_02 | 0.0512 | **0.0476** | 0.0547 | 0.0518 |
| abs_03 | **0.0514** | 0.0524 | 0.0696 | 0.0716 |
| Average | 0.0440 | **0.0434** | 0.0642 | 0.0604 |
| bid_01 | 0.2104 | 0.2039 | 0.1585 | **0.1583** |
| bid_02 | 0.6046 | 0.6033 | 0.6111 | **0.6029** |
| bid_03 | 0.1904 | 0.1865 | 0.1822 | **0.1765** |
| Average | 0.3351 | 0.3312 | **0.3173** | **0.3126** |
| nbh_01 | **0.0393** | 0.0508 | 0.1910 | 0.1938 |
| nbh_02 | 0.0374 | **0.0364** | 0.1489 | 0.1618 |
| nbh_03 | 0.0453 | **0.0342** | 0.1653 | 0.1413 |
| nbh_04 | **0.0178** | 0.0201 | 0.1811 | 0.1963 |
| nbh_05 | **0.0225** | 0.0257 | 0.3224 | 0.2763 |
| Average | **0.0325** | 0.0335 | 0.2017 | 0.1939 |

Table: ATE (in m) of VIRAL SLAM under different sensor combinations.

Fig. 7: Error of the estimates on anchor position and ranging bias by the BA process over time.

* [4] ----, "Viral-fusion: A visual-inertial-ranging-lidar sensor fusion approach," _IEEE Transactions on Robotics_, 2021.
* [5] T. H. Nguyen, T.-M. Nguyen, and L. Xie, "Tightly-coupled ultra-wideband-aided monocular visual slam with degenerate anchor configurations," _Autonomous Robots_, vol. 44, no. 8, pp. 1519-1534, 2020.
* [6] -1685, 2021.
* [7] X. Zuo, P. Geneva, W. Lee, Y. Liu, and G. Huang, "Lic-fusion: Lidar-inertial-camera odometry," in _2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2019, pp. 5848-5854.
* [8] X. Zuo, Y. Yang, P. Geneva, J. Lv, Y. Liu, G. Huang, and M. Pollefeys, "Lic-fusion 2.0: Lidar-inertial-camera odometry with sliding-window plane-feature tracking," in _2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, 2020, pp. 5112-5119.
* [9] D. Wisth, M. Camurri, S. Das, and M. Fallon, "Unified multi-modal landmark tracking for tightly coupled lidar-visual-inertial odometry," _IEEE Robotics and Automation Letters_, vol. 6, no. 2, pp. 1004-1011, 2021.
* [10] J. Graeter, A. Wilczynski, and M. Lauer, "Limo: Lidar-monocular visual odometry," in _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2018, pp. 7872-7879.
* [11] W. Shao, S. Vijayarangan, C. Li, and G. Kantor, "Stereo visual inertial lidar simultaneous localization and mapping," in _2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, 2019.
* [12] T. Qin, P. Li, and S. Shen, "Vins-mono: A robust and versatile monocular visual-inertial state estimator," _IEEE Transactions on Robotics_, vol. 34, no. 4, pp. 1004-1020, 2018.
* [13] T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and
D. Rus, "Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping," in _IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2020, pp. 5135-5142.
* [14] T. Shan and B. Englot, "Lego-loam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain," in _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2018, pp. 4758-4765.
* [15] J. Jiao, H. Ye, Y. Zhu, and M. Liu, "Robust odometry and mapping for multi-lidar systems with online extrinsic calibration," _arXiv preprint arXiv:2010.14294_, 2020.
* [16] P. Chen, W. Shi, S. Bao, M. Wang, W. Fan, and H. Xiang, "Low-drift odometry, mapping and ground segmentation using a backpack lidar system," _IEEE Robotics and Automation Letters_, 2021.
* [17] T.-M. Nguyen, S. Yuan, M. Cao, Y. Lyu, T. H. Nguyen, and L. Xie, "Miliom: Tightly coupled multi-input lidar-inertia odometry and mapping," _IEEE Robotics and Automation Letters_, vol. 6, no. 3, pp. 5573-5580, May 2021.
* [18] M. Karrer and M. Chli, "Distributed variable-baseline stereo slam from two uavs." [Online]. Available: https://arxiv.org/pdf/2009.04801.pdf
* [19] T.-M. Nguyen, T. H. Nguyen, M. Cao, Z. Qiu, and L. Xie, "Integrated uwb-vision approach for autonomous docking of uavs in GPS-denied environments," in _2019 International Conference on Robotics and Automation (ICRA)_. IEEE, 2019, pp. 9603-9609.
* [20] T.-M. Nguyen, Z. Qiu, T. H. Nguyen, M. Cao, and L. Xie, "Distance-based cooperative relative localization for leader-following control of mavs," _IEEE Robotics and Automation Letters_, vol. 4, no. 4, pp. 3641-3648, 2019.
* [21] ----, "Persistently excited adaptive relative localization and time-varying formation of robot swarms," _IEEE Transactions on Robotics_, vol. 36, no. 2, pp. 553-560, 2019.
* [22] J. P. Queralta, L. Qingqing, F. Schiano, and T. Westerlund, "Vio-uwb-based collaborative localization and dense scene reconstruction within heterogeneous multi-robot systems." [Online]. Available: https://arxiv.org/pdf/2006.00420.pdf
* [23] J. Xu, J. Hu, L. Xie, and K.-Y. Lum, "Distributed coverage control under generalized locational optimization framework," in _Proceedings of the 31st Chinese Control Conference_. IEEE, 2012, pp. 6015-6020.
* [24] H. Xu, L. Wang, Y. Zhang, K. Qiu, and S. Shen, "Decentralized visual-inertial-uwb fusion for relative state estimation of aerial swarm," in _2020 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2020, pp. 8776-8782.
* [25] H. Ye, Y. Chen, and M. Liu, "Tightly coupled 3d lidar inertial odometry and mapping," in _2019 International Conference on Robotics and Automation (ICRA)_. IEEE, 2019, pp. 3144-3150.
* [26] T.-M. Nguyen, A. H. Zaini, C. Wang, K. Guo, and L. Xie, "Robust target-relative localization with ultra-wideband ranging and communication," in _2018 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2018, pp. 2312-2319.
* [27] S. Agarwal and K. Mierle, "Ceres solver: Tutorial & reference." [Online]. Available: http://ceres-solver.org/
* [28] T.-M. Nguyen, S. Yuan, M. Cao, Y. Lyu, T. H. Nguyen, and L. Xie, "Ntu viral: A visual-inertial-ranging-lidar dataset, from an aerial vehicle viewpoint," _The International Journal of Robotics Research_. [Online]. Available: https://ntu-aris.github.io/ntu_viral_dataset/
In this paper, we propose a tightly-coupled, multi-modal simultaneous localization and mapping (SLAM) framework, integrating an extensive set of sensors: IMU, cameras, multiple lidars, and Ultra-wideband (UWB) range measurements, hence referred to as VIRAL (visual-inertial-ranging-lidar) SLAM. To achieve such a comprehensive sensor fusion system, one has to tackle several challenges such as data synchronization, multi-threaded programming, bundle adjustment (BA), and conflicting coordinate frames between UWB and the onboard sensors, so as to ensure real-time localization and smooth updates of the state estimates. To this end, we propose a two-stage approach. In the first stage, lidar, camera, and IMU data on a local sliding window are processed in a core odometry thread. From this local graph, new key frames are evaluated for admission to a global map. Visual feature-based loop closure is also performed to supplement the global factor graph with loop constraints. When the global factor graph satisfies a condition on spatial diversity, the BA process is triggered to update the coordinate transform between the UWB and onboard SLAM frames. The system then seamlessly transitions to the second stage, where all sensors are tightly integrated in the odometry thread. The capability of our system is demonstrated via several experiments on high-fidelity graphical-physical simulations and public datasets.
# SAT-NGP: Unleashing Neural Graphics Primitives for Fast, Reliable, Transient-Free 3D Reconstruction from Satellite Imagery

## 1 Introduction

The way we perceive and understand our planet has been revolutionised by the advent of satellite imagery. In fields as diverse as meteorology, biodiversity conservation and climate change, satellite imagery has become an invaluable source of information. Accurate knowledge of 3D surface elements is critical in several domains, including flood risk mitigation, vegetation monitoring, urban planning, etc. State-of-the-art stereo-vision techniques [1] provide large-scale 2.5D Digital Surface Models (DSM) with sufficient accuracy for many applications. When applied to a single pair or triplet of satellite images, these methods suffer from occlusions, homogeneous/poorly structured areas, shadow zones, and reflective surfaces (like water or metal structures). Such errors can be mitigated by merging the 3D models from several pairs/triplets acquired at different dates. Nonetheless, the accuracy of these methods is negatively impacted by changes in the contents of the scene (mobile objects, vegetation) and variations in lighting conditions.

In the field of computer vision, a new concept has emerged called the Neural Radiance Field [2] (NeRF), which has already shown remarkable results in the synthesis of novel views and 3D reconstruction from multi-date satellite imagery [3, 4, 5, 6]. NeRF [2] represents the 3D scene as a continuous radiance field, encoded in a feed-forward Multi-Layer Perceptron (MLP). Novel views are synthesised by alpha-compositing radiance values queried at coordinates and viewing directions along camera rays.

Figure 1: Compared to the ground truth LiDAR and unseen WorldView-3 image (top), SAT-NGP (middle) yields more accurate surface extraction and similar novel views in 14 minutes, versus SAT-NeRF (bottom) in 12 hours.

Previous works [3, 4, 5, 6] applied to remote sensing have focused on unique challenges linked to the complexity of the scene, namely changes in lighting conditions, satellite sensor geometry, and transient objects. Nonetheless, retraining an entire neural network on each scene constrains the scalability to large-scale areas. Indeed, state-of-the-art stereo-vision pipelines such as CARS [1] produce relatively accurate DSMs from 40 000 \(\times\) 40 000 pixel images in less than an hour, using only a few parallel CPUs. To address these challenges, we combine previous works on satellite image models for NeRF [3, 4] with the acceleration brought by Instant Neural Graphics Primitives (I-NGP) [7]. Our method, Satellite Neural Graphics Primitives (SAT-NGP), reduces the time needed to extract a 3D model of a terrestrial scene from satellite images from 24 hours to less than 15 minutes, without compromising the quality of the reconstruction.

## 2 Related Works

### NeRF for Satellite imagery

An important concern when applying NeRF to multi-date satellite images is the changes in lighting conditions. Shadow NeRF (S-NeRF) [3] uses the solar angles to learn the amount of light reaching each point in the scene, and achieves more reliable modelling of shadow areas than a NeRF model alone. SAT-NeRF [4] learns the transient objects (cars, etc.) present in each view with an approach similar to NeRF in the Wild [8], which introduces an uncertainty coefficient. This coefficient predicts, for each point on the ray, whether that point corresponds to a transient object or not.
Earth Observation NeRF (EO-NeRF) [6] adds geometrically consistent shading, which provides realistic shadows and relighting capabilities. The adaptations from [3, 4] generate DSMs similar in altimetric accuracy to state-of-the-art stereo-vision pipelines (CARS [1]), as demonstrated in Table 1. However, training a NeRF for each scene is slow, due to the immense number of inferences required for the network to converge. This issue is made worse by the complexity of learning from multi-date satellite images: taking into account shading effects and transient objects increases the complexity of the rendering, and the use of multiple competing losses tends to slow down convergence.

### NeRF Training and Inference acceleration

The authors of [7] have accelerated NeRF using a multi-resolution hash table that stores features decoded by a smaller, faster neural network. In addition, the samples along each ray are concentrated near the surface, using a voxel occupancy grid that is updated and kept in cache during training. A concurrent work, RS-NeRF [9], also offers acceleration based on I-NGP; however, it does not provide relighting capabilities, and it handles transient objects with an inpainting method based on a pretrained network.

## 3 Methodology

Our work focuses on speeding up the previous versions of NeRF applied to satellite imagery [3, 4] using the principles of I-NGP [7]. Our goal is to achieve the same qualitative results in Novel View Synthesis (NVS) and DSM generation. We consider the same experimental setting as SAT-NeRF. First, we use the solar angles as a network input in order to model shading effects. Second, we learn an uncertainty image as a network output, based on a latent time vector, to constrain the loss to focus on areas without transient objects. Finally, the sensor models of the different acquisitions are refined using bundle adjustment to reduce inaccuracies between models. We use the Universal Transverse Mercator (UTM) representation of geographic 3D point coordinates, as in [6], instead of an Earth-Centered Earth-Fixed reference. This is particularly helpful for DSM generation, since the \(z\) axis corresponds to the altitude in UTM.

### Encoding

Previous works employ either sinusoidal networks [3] or a frequency-based positional encoding [4, 6]. Instead, we follow [7] and use a multi-resolution hash encoding, which allows us to work with smaller neural networks. The linear interpolation of learned features encoded in a hash table is more computationally efficient than querying a large neural network.

### Architecture

Our model is based on the architecture in [4] and consists of the same first small MLP, but with 2 hidden layers of 64 neurons instead of 8 layers of 512 neurons. We evaluate the MISH [10] activation function on the hidden layers against the SIREN activation used in SAT-NeRF. MISH is a smooth, continuous, self-regularized, non-monotonic activation function which, unlike ReLU, is continuously differentiable. We also explore the use of Spherical Harmonics (SH) to encode solar directions, a technique not employed in previous studies [3, 4].

Figure 2: Comparison between a ground truth unseen view (left column) and NVS (right column). We show transient-free (top two rows) and relighting capabilities on NVS.

### Loss

Modeling transient objects with a variable learned during training remains a proven approach [8], but comes at an additional computational cost.
The authors of [11] promote a loss function based on the principle of robust estimation for training a NeRF in a setting with transient perturbations. This is achieved by modeling the transient objects as outliers during the NeRF model training. This method assumes no a priori knowledge and focuses on the optimisation problem rather than on pre-processing or modeling transient objects. We define our loss (1) as
\[\mathcal{L}_{final}=\mathcal{L}_{robust}+\lambda\mathcal{L}_{solar}, \tag{1}\]
a linear sum of the robust loss (3):
\[\mathcal{L}_{rgb}=\sum_{r\in\mathcal{R}}\left|\left|\mathbf{C}(r)-\mathbf{C}_{GT}(r)\right|\right|_{2}^{2}, \tag{2}\]
\[\mathcal{L}_{robust}=\omega(\mathcal{L}_{rgb}^{t-1})\cdot\mathcal{L}_{rgb}^{t}, \tag{3}\]
with \(\omega(\cdot)\) the weight function detailed in equation (10) of [11], and the solar correction (4) of [3]:
\[\mathcal{L}_{solar}=\sum_{r\in\mathcal{R}_{SC}}\left(\sum_{i=1}^{N_{SC}}(T_{i}-s_{i})^{2}+1-\sum_{i=1}^{N_{SC}}T_{i}\alpha_{i}s_{i}\right), \tag{4}\]
weighted by a factor \(\lambda\) of 0.05. Through the first term, the illuminance factor \(s_{i}\) predicted at point \(i\) is driven towards the transmittance \(T_{i}\); the second term encourages all direct light to be absorbed by the scene.

### Implementation details

We use orthogonal initialization as described in [12], which in our experiments seems to enhance gradient stability during training. For each test, we used the RAdam optimizer with a learning rate of \(0.01\) and the LambdaLR scheduler. The batch size is 1024 rays, with between 4 and 256 samples per ray (see Section 2.2). The multi-resolution encoding is driven by five main parameters, as presented in [7]. After hyperparameter tuning we chose: a hash table of size \(2^{19}\), 8 levels, a grid size of 128, and a coarsest resolution of 16. The NeRF experiments were run on a 12 GB GPU, and the stereo-vision pipeline CARS on 4 parallel CPUs.

## 4 Experiments and Analysis

The main objective of the experiments is to measure whether the acceleration comes at the cost of a loss in the quality of novel view synthesis or in the geometric quality of the DSM. We want to know whether our model stands close to state-of-the-art stereo-vision pipelines, while retaining the high 3D rendering quality and novel view synthesis of NeRF methods.

### Datasets and ground truth details

The experiments are conducted using the Data Fusion Contest (DFC2019) dataset1, a 3D reconstruction benchmark featuring satellite images of Jacksonville (JAX) taken by WorldView-3 at 0.3 m/pixel resolution over a year, used in [3, 4, 6]. An airborne LiDAR DSM with a ground sampling distance of 0.5 m is used as ground truth for the 3D reconstruction.

Footnote 1: https://github.com/pubgeo/dfc2019

### Evaluation Metrics

In the following experiments, two metrics are used. The Peak Signal-to-Noise Ratio (PSNR) is measured on images generated from viewing angles that are absent from the training dataset. It should be noted that in the case of EO-NeRF, the PSNR may be higher due to the fact that their evaluation is based on the training images; the values were taken from their publication, as their code is private. The Mean Absolute Error (MAE), on the other hand, quantifies the average error of the surface elevation prediction with respect to the ground truth given by each rasterized LiDAR point.
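For reference, both metrics can be computed as in this small sketch; the peak value of 1.0 assumes images normalized to \([0,1]\), and the nodata masking is our own assumption.

```python
import numpy as np

def psnr(img, img_gt, peak=1.0):
    """Peak Signal-to-Noise Ratio between a rendered view and the held-out image."""
    mse = np.mean((img - img_gt) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

def mae(dsm, dsm_lidar):
    """Mean Absolute Error of surface elevation against the rasterized LiDAR DSM."""
    valid = ~np.isnan(dsm_lidar)            # ignore nodata cells in the ground truth
    return np.mean(np.abs(dsm[valid] - dsm_lidar[valid]))
```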
### Results

Table 1 shows that SAT-NGP is significantly faster than the other NeRF variants, while being slower than CARS; this is due to the higher number of images used (10-18 instead of 2-3).

| Method | JAX 004 | JAX 214 | JAX 260 | JAX 068 |
|---|---|---|---|---|
| S-NeRF [3] | 26.14 / 1.472 / 10h | 24.93 / 2.406 / 20h | 21.24 / 2.299 / 20h | 24.07 / 1.374 / 20h |
| SAT-NeRF [4] | 26.67 / 1.288 / 10h | 25.50 / 2.009 / 20h | 21.78 / 1.864 / 20h | 25.07 / 1.249 / 20h |
| EO-NeRF [6] | **28.56 / 1.25** / 10h | **26.59 / 1.52** / 20h | **26.09 / 1.43** / 20h | **27.25 / 0.91** / 20h |
| CARS [1] | - / - / - | - / 3.24 / **3 min** | - / 13.81 / **2 min** | - / 1.40 / **2 min** |
| SAT-NGP (ours) | 25.03 / 1.31 / **8 min** | 23.43 / 2.17 / 14 min | 23.02 / 1.68 / 12 min | 22.58 / 2.03 / 13 min |

Table 1: Evaluation metrics (PSNR ↑ / MAE ↓ / time ↓) of various methods on four JAX areas. SAT-NGP (with the robust loss and MISH) is the fastest among the NeRF methods and provides competitive PSNR and MAE values. Best values are in bold. The values for S-NeRF and SAT-NeRF are taken from [4], while those for EO-NeRF are taken from [6].

In terms of 3D reconstruction, Figure 3 demonstrates that our method converges to a lower MAE score already in the initial epochs.

Figure 3: MAE evolution, with training time, during the first 5 epochs of SAT-NeRF compared to our method.

The quality of the DSM is further shown in Figure 1, which compares the airborne LiDAR with the DSMs of SAT-NGP and SAT-NeRF after convergence. The DSM produced by our method is devoid of holes or bumps on the car park; we attribute this to the use of the robust loss, which provides a smoother inductive bias when combined with MISH and RAdam. Finally, we observe that the novel views generated at unseen viewing and solar angles by SAT-NGP are less accurate than those of the slower NeRF variants. The large difference with EO-NeRF can be due to the fact that they compute the test scores on images that have already been seen during training, unlike S-NeRF, SAT-NeRF and ours. Our PSNR values are similar to or slightly worse than S-NeRF and SAT-NeRF, which can be attributed to darker shadows and a smoother rendering. Moreover, our method not only recovers the 3D scene but also eliminates the transient objects from the NVS, as shown in Figure 2.

## 5 Conclusion

In this paper, we presented SAT-NGP, a fast, relightable, transient-free 3D reconstruction method from satellite images. This model requires minimal GPU capacity and surpasses the performance of the previously adapted NeRF variants for multi-date satellite imagery. SAT-NGP can accurately reconstruct a DSM in less than 15 minutes, a significant improvement over the 20 hours required by previous methods. We also highlight the benefits of using a robust loss function and the MISH activation function. Future improvements to the quality of DSM and NVS production, while maintaining rapid training and significantly reducing the number of input views, would be advantageous for real satellite reconstruction use cases. After making the necessary adjustments, our code will be made available at SAT-NGP.

## 6 Acknowledgements

This work was performed using HPC resources from the CNES Computing Center (DOI 10.24400/263303/CNES_C3). The authors would like to thank the Johns Hopkins University Applied Physics Laboratory and IARPA for providing the data used in this study, and the IEEE GRSS Image Analysis and Data Fusion Technical Committee for organizing the Data Fusion Contest.

## References

* [1] Julien Michel, Emmanuelle Sarrazin, David Youssefi, Myriam Cournet, Fabrice Buffe, Jean-Marc Delvit, Aurelie Emilien, Julien Bosman, Olivier Melet, and Celine L'Helguen, "A new satellite imagery stereo pipeline designed for scalability, robustness and performance," _ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, vol. 2, pp. 171-178, 2020.
* [2] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng, "Nerf: Representing scenes as neural radiance fields for view synthesis," _Communications of the ACM_, vol. 65, no. 1, pp. 99-106, 2021.
* [3] Dawa Derksen and Dario Izzo, "Shadow neural radiance fields for multi-view satellite photogrammetry," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021, pp. 1152-1161.
* [4] Roger Mari, Gabriele Facciolo, and Thibaud Ehret, "Sat-nerf: Learning multi-view satellite photogrammetry with transient objects and shadow modeling using rpc cameras," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2022, pp. 1311-1321.
* [5] Lulin Zhang and Ewelina Rupnik, "Sparsesat-nerf: Dense depth supervised neural radiance fields for sparse satellite images," _arXiv preprint arXiv:2309.00277_, 2023.
* [6] Roger Mari, Gabriele Facciolo, and Thibaud Ehret, "Multi-date earth observation nerf: The detail is in the shadows," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2023, pp. 2034-2044.
* [7] Thomas Muller, Alex Evans, Christoph Schied, and Alexander Keller, "Instant neural graphics primitives with a multiresolution hash encoding," _ACM Transactions on Graphics (TOG)_, vol. 41, no. 4, pp. 1-15, 2022.
* [8] Ricardo Martin-Brualla, Noha Radwan, Mehdi SM Sajjadi, Jonathan T Barron, Alexey Dosovitskiy, and Daniel Duckworth, "Nerf in the wild: Neural radiance fields for unconstrained photo collections," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021, pp. 7210-7219.
* [9] Songlin Xie, Lei Zhang, Gwanggil Jeon, and Xiaomin Yang, "Remote sensing neural radiance fields for multi-view satellite photogrammetry," _Remote Sensing_, vol. 15, no. 15, pp. 3808, 2023.
* [10] Diganta Misra, "Mish: A self regularized non-monotonic activation function," _arXiv preprint arXiv:1908.08681_, 2019.
* [11] Sara Sabour, Suhani Vora, Daniel Duckworth, Ivan Krasin, David J Fleet, and Andrea Tagliasacchi, "Robustnerf: Ignoring distractors with robust losses," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2023, pp. 20626-20636.
* [12] Andrew M Saxe, James L McClelland, and Surya Ganguli, "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks," _arXiv preprint arXiv:1312.6120_, 2013.
Current stereo-vision pipelines produce high-accuracy 3D reconstructions when using multiple pairs or triplets of satellite images. However, these pipelines are sensitive to the changes between images that can occur as a result of multi-date acquisitions. Such variations are mainly due to variable shadows, reflections and transient objects (cars, vegetation). To take such changes into account, Neural Radiance Fields (NeRF) have recently been applied to multi-date satellite imagery. However, neural methods are very compute-intensive, taking dozens of hours to learn, compared with minutes for standard stereo-vision pipelines. Following the ideas of Instant Neural Graphics Primitives, we propose to use an efficient sampling strategy and multi-resolution hash encoding to accelerate the learning. Our model, Satellite Neural Graphics Primitives (SAT-NGP), decreases the learning time to 15 minutes while maintaining the quality of the 3D reconstruction.

Camille Billouard\({}^{1,2}\), Dawa Derksen\({}^{1}\), Emmanuelle Sarrazin\({}^{1}\), Bruno Vallet\({}^{2}\)
\({}^{1}\) CNES ([email protected])
\({}^{2}\) Univ Gustave Eiffel, ENSG, IGN, LASTIG, F-94160 Saint-Mande, France ([email protected])

Neural Radiance Fields, Neural Graphics Primitives, 3D Reconstruction, Satellite imagery, Transient objects, Relighting
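Since the results above repeatedly credit the Robust loss and MISH combination, a minimal PyTorch sketch of both is given below. The Huber-style kernel is a generic stand-in for a robust photometric loss, not necessarily the exact kernel used by SAT-NGP or RobustNeRF [11]; the tensor shapes and the `delta` value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mish(x: torch.Tensor) -> torch.Tensor:
    # Mish(x) = x * tanh(softplus(x)): the smooth, non-monotonic activation of [10].
    return x * torch.tanh(F.softplus(x))

def robust_photometric_loss(pred: torch.Tensor, target: torch.Tensor,
                            delta: float = 0.1) -> torch.Tensor:
    # Huber-style kernel: quadratic for small residuals, linear in the tails,
    # so transient pixels (cars, moving shadows) contribute bounded gradients.
    res = pred - target
    abs_res = res.abs()
    loss = torch.where(abs_res <= delta,
                       0.5 * res ** 2,
                       delta * (abs_res - 0.5 * delta))
    return loss.mean()

# Toy step: rendered ray colours vs. ground-truth pixels, optimized with RAdam.
hidden = mish(torch.randn(8, 64))                   # e.g. between MLP layers
rendered = torch.rand(4096, 3, requires_grad=True)  # stand-in for NeRF output
gt_pixels = torch.rand(4096, 3)
optimizer = torch.optim.RAdam([rendered], lr=1e-3)
robust_photometric_loss(rendered, gt_pixels).backward()
optimizer.step()
```

In a NeRF-style pipeline, `mish` would replace ReLU between hidden layers of the radiance MLP, and the robust kernel would replace the plain mean-squared error over sampled rays.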
# Observation of Deep Occultation Signals in Tropical Cyclones With COSMIC-2 Measurements

Pawel Hordyniec, Yuriy Kuleshov, Suelynn Choy, and Robert Norman

Manuscript received January 9, 2021; revised April 27, 2021 and June 1, 2021; accepted June 15, 2021. Date of publication July 12, 2021; date of current version December 15, 2021. This work was supported by the Australian Antarctic Science Grant Program Project 4469. Robert Norman is supported by the Cooperative Research Centre for Space Environment Management (SERC Limited) through the Australian Government's Cooperative Research Centre Program. _(Corresponding author: Pawel Hordyniec.)_

Pawel Hordyniec is with the SPACE Research Centre, RMIT University, Melbourne, VIC 3000, Australia, and also with the Institute of Geodesy and Geoinformatics, Wroclaw University of Environmental and Life Sciences, 50-375 Wroclaw, Poland (e-mail: [email protected]).

Yuriy Kuleshov is with the Bureau of Meteorology, Melbourne, VIC 3008, Australia, also with the SPACE Research Centre, RMIT University, Melbourne, VIC 3000, Australia, and also with the School of Mathematics and Statistics, The University of Melbourne, Melbourne, VIC 3010, Australia (e-mail: [email protected]).

## I Introduction

Tropical cyclone (TC) monitoring and forecasting are important and challenging tasks, as they require promptly detecting intensification and issuing early warnings in order to reduce impacts on communities at risk. Some of the most detailed observations describing the complicated vertical structures associated with TCs are gathered by global positioning system (GPS) dropsondes in reconnaissance aircraft missions [1]. Because low-pressure systems originate from oceanic regions, where _in situ_ measurements mostly remain unavailable, their monitoring heavily relies on remote sensing satellite data [2, 3]. The recently launched constellation of six low Earth orbiting (LEO) satellites of the constellation observing system for meteorology, ionosphere & climate (COSMIC-2) mission [4] utilizes a very favorable configuration for collecting atmospheric profiles of the tropical troposphere owing to its equatorial orbital planes. Fundamental geophysical variables are derived from the inversion of Doppler frequency shifts, which in geometrical optics does not require amplitude data. The amplitude, expressed in terms of signal-to-noise ratio (SNR) and serving mostly as a data-quality measure, can also be a valuable indicator of atmospheric variability, such as deep amplitude fading in the presence of super-refraction [5] associated with sharp inversion layers [6], or ionospheric gradients and irregularities induced by sporadic E layers [7]. Ding _et al._ [8] present evidence of strong vertical gradients in the vicinity of TCs, suggesting a pronounced planetary boundary layer. Fig. 1 shows an example of refractivity structures modeled from a dropsonde released in the eyewall of tropical storm Nestor.

Fig. 1: Vertical gradient of refractivity with multiple super-refraction layers (dN/dz \(\leq-157\)) modeled from a high-resolution dropsonde profile collected on 19 October 2019 in the vicinity of tropical storm Nestor. The threshold value of \(-157\) is marked with a dashed line.

The so-called tropospheric ducts are recognized by inducing anomalous propagation of radio signals, resulting in negatively biased radio occultation (RO) retrievals of refractivity because of ill-conditioned inversions [9]. Tropospheric ducts can result in a diffraction mechanism producing signatures in the signal's amplitude observed when the occulting satellite is in the very deep Earth's shadow [10]. The signatures can often appear below the minimum height of straight line (HSL) connecting the global navigation satellite system (GNSS) and LEO satellites (\(-200\) km) typically tracked by LEO receivers, imposed by channel limitations. Improvements made in the development of the new-generation RO receivers placed on board COSMIC-2 satellites contributed to increased SNR, allowing more reliable tracking of deep signals.
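As a small illustration of the super-refraction criterion quoted in Fig. 1, the sketch below flags ducting layers in a refractivity profile by thresholding the vertical gradient at \(-157\) km\({}^{-1}\). The profile itself is a made-up toy example, assuming N is given in N-units and height in km.

```python
import numpy as np

DUCTING_THRESHOLD = -157.0  # critical dN/dz in N-units per km (see Fig. 1)

def find_ducting_layers(height_km: np.ndarray, refractivity: np.ndarray) -> np.ndarray:
    """Return heights where dN/dz <= -157 km^-1, i.e., super-refraction layers."""
    dn_dz = np.gradient(refractivity, height_km)  # vertical refractivity gradient
    return height_km[dn_dz <= DUCTING_THRESHOLD]

# Toy profile: exponential background with a sharp inversion near 2 km altitude.
z = np.linspace(0.0, 10.0, 1001)                        # height [km]
n_profile = 300.0 * np.exp(-z / 8.0)                    # smooth background
n_profile -= 20.0 / (1.0 + np.exp(-(z - 2.0) / 0.02))   # sharp step -> strong gradient

print(find_ducting_layers(z, n_profile))  # prints heights around the 2-km inversion
```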
In the following study, amplitudes observed with one LEO satellite of the COSMIC-2 constellation, set to track signals down to very low HSL, are analyzed to find potential correspondence to atmospheric conditions associated with TCs. We establish collocations of RO events with severe TCs and perform spectral analysis of the observed signals. The statistical significance of deep signals induced by super-refraction layers is evaluated relative to various RO tunable parameters.

## II Data and Methods

### _COSMIC-2 Occultations_

COSMIC-2 satellites in the period between 30 October and 9 December 2019 occasionally tracked the signal down to the lowest HSL \(<-350\) km. These measurements were recorded by the LEO spacecraft #E1 (RO products with C2E1 in occultation IDs) in setting occultations. All other flight modules typically tracked the signal down to a lowest HSL of \(-200\) km. In total, 9552 occultations to LEO satellite #E1 propagated deeper than \(-200\) km, while 8645 events reached HSL below \(-350\) km. The excess Doppler (phase) and SNR (amplitude) data are obtained from FORMOSAT-7/COSMIC-2 Neutral Atmospheric Provisional Release 1. The provided (original) SNR values are scaled by a factor of ten.¹

Footnote 1: For more details see https://cdaac-www.cosmic.ucar.edu/cdaac/doc/documents/snr.pdf

The independent variable of straight-line tangent altitude (HSL) describes the propagation depth of RO signals

\[\mathrm{HSL}=\frac{r_{L}\,r_{G}\sin\theta}{\sqrt{r_{L}^{2}+r_{G}^{2}-2\,r_{L}r_{G}\cos\theta}}-r_{e} \tag{1}\]

where \(r_{e}\) is the local radius of curvature and \(\theta\) is the central angle between the radii \(r_{L}\) and \(r_{G}\) for the LEO and GNSS positions at a given observation epoch. The following tunable RO parameters, together with the respective SNR values, are analyzed: 1) the lowest (bottom) HSL for which the phase and amplitude data are available; 2) the HSL at the SNR cut-off point; and 3) the HSL at the Doppler cut-off point. For the identification of deep signatures, the SNR is averaged from 100 to 1 Hz by applying a running-mean filter to reduce the impact of noise. The deep signals can be considered statistically significant if their magnitudes exceed the noise level, defined as the lowest voltage SNR in the 1-Hz running-mean data (a numerical sketch of the HSL computation and of this noise test is given after the references). In RO retrieval methodologies, signals observed down to the HSL of the SNR cut-off are inverted to neutral-atmosphere profiles. Failure to include deep tropospheric signals may result in inversion biases [11]. We search for deep signals based on daily statistics computed from one-sigma-filtered HSLs for SNR cut-off points. The mean SNR cut-off altitude for the study period is on the order of \(-120\)-km HSL, with the lowest minimum varying around \(-160\) km (the threshold for deep-signal detection). Fig. 2 shows that in total 14% of #E1 occultations (2530 observations) contain potential contributions from deep signals (HSL \(<-160\) km), with 2% spikes in data counts at \(-200\)-km HSL for both deep-tracking and typical occultations.

### _Collocations With Tropical Cyclones_

Data for TCs observed in the period from October 2019 to April 2020 are retrieved from the International Best Track Archive for Climate Stewardship (IBTrACS) [12]. The distribution of TC tracks in 3-h temporal resolution is presented in Fig.
3. Collocated COSMIC-2 occultations satisfy temporal and spatial criteria of \(\pm 2\) h difference and up to \(600\)-km distance mismatch, respectively (a toy example of this matching is given at the end of this paper). The analysis of amplitude signals is restricted to deep occultations in the vicinity of TCs of Category 1 or higher on the Saffir-Simpson hurricane wind scale. The identified cases are listed in Table I. Well-matched intersections of occultation planes with TC paths can be selected based on the azimuth computed between collocated pairs of TC location and COSMIC-2 tangent point. An azimuth difference of \(0^{\circ}\) (\(360^{\circ}\)) or \(180^{\circ}\) corresponds to the occultation plane being perpendicular to the TC track, with GNSS signals propagating through the central eye. This criterion is important for collocations having a mismatch distance larger than the diameter of the TC eye and its outer eyewall. According to [13], the mean eye radius is \(26\) km, whereas the size of TCs for the large-eye class can exceed \(100\) km. Eyewall sizes appear to be proportional to the eye size, with an average on the order of \(50\) km [14]. These values should be contrasted with a mean horizontal resolution of the RO technique of about \(300\) km in the troposphere, further extending the observation radius for nearest collocations.

Fig. 2: Distribution of HSL for SNR cut-off showing (orange) potential deep signatures from occultations to spacecraft #E1 and (blue) no signals below \(-200\) km from observations to other satellites.

## III COSMIC-2 Deep Signals in Tropical Cyclones

The spatial spectra in the HSL-frequency domain presented in Fig. 4 are computed according to [15] using phase observations from the conPhs format with applied GPS navigation modulation bits [16]; thus, their external removal is not required. The mismatch distance of occultation #3 to a Category 4 TC is comparable to the horizontal resolution of RO measurements, suggesting possible interaction of signals with the outer eyewall. However, the noisy spectrum shows only weak signals below \(-200\)-km HSL. Detailed analysis of spectrograms for event #11 revealed interference from another non-occulting satellite as another possible source of deep signals [10], visible as a cross-line pattern at \(-200\)-km HSL coinciding with a 400 V/V spike in averaged SNR. The most prominent deep signals were observed with occultation #16. The spectrogram clearly supports the evidence of tropospheric propagation between \(-200\)- and \(-250\)-km HSL. The signal observed in event #24 might not necessarily be regarded as a deep signature, because the SNR re-emerges shortly after reaching the noise level at around \(-150\)-km HSL. If, on the other hand, we define deep signals as amplitudes observed below the noise boundary in the spectrograms, the mean SNR above 500 V/V should be regarded as statistically significant. All cases discussed above were recorded from GPS occulting satellites. Generally, no deep signals were observed in Globalnaya Navigatsionnaya Sputnikovaya Sistema (GLONASS) occultations. One particular case discussed here is event #14, shown in Fig. 4 to demonstrate the structure of direct and strong signals tracked down to \(-150\)-km HSL.

Fig. 3: Distribution of (top) collocated COSMIC-2 measurements with (bottom) TCs observed between October 2019 and April 2020.

Fig. 5 shows voltage SNR with the corresponding HSL of deep occultations and the magnitude of refractivity gradients collocated with the TCs.
It can be seen that some signals extend below the SNR cut-off point of \(-200\)-km HSL (#11, #17), down to \(-240\) km (#16). With an SNR exceeding 568 V/V, occultation #16 shows the strongest signature amongst the identified collocations. The spikes in the 100-Hz SNR data visible in Fig. 4 exceed 1000 V/V at the HSL corresponding to the largest deep amplitudes computed in the 1-s interval of the running mean. However, refractivity gradients based on COSMIC-2 and European Centre for Medium-Range Weather Forecasts (ECMWF) data for occultation #16 do not show evidence of ducting layers. On the contrary, the most significant dN/dz values from ECMWF, exceeding \(-157\) km\({}^{-1}\) for #8 and #14, are not reflected in deep signals. For case #24, the magnitude of dN/dz is close to critical, which may explain the signals below \(-150\)-km HSL in Fig. 4. The gradients observed with COSMIC-2 typically correspond to standard refractive conditions, with an average of \(-80\) km\({}^{-1}\). The noise level for GPS occultations is on average \(\sim\)150 V/V, while it is \(\sim\)115 V/V for GLONASS. The larger noise in GPS data might be induced by cross-satellite interference, which does not affect GLONASS because it uses frequency division multiple access (FDMA). After exclusion of the occultations with strong interfering signals (#2, #11), the magnitude of GPS deep amplitudes is on average two times larger than the noise level. The weakest deep signal is found in occultation #1 (20% larger than the noise), whereas a nearly four times larger deep SNR relative to the noise is observed with occultation #16. As can be clearly seen in Fig. 5, GLONASS signals are very comparable to the noise level, suggesting no deep signals.

Fig. 4: Deep occultation signals from collocations with TCs. (Left panels) SNRs as a function of HSL. The strongest deep amplitudes are marked with purple circles, while other tunable RO parameters correspond to SNR cut-off (blue), Doppler cut-off (green), and lowermost straight-line tangent altitude (red). (Right panels) Sliding spectrograms showing the frequency offset computed between the observed Doppler and the phase model.

## IV Summary and Discussion

COSMIC-2 occultations collocated with TCs were analyzed in terms of the occurrence of deep signatures in the amplitude data. Although a few spectrograms showed clear signals at or below \(-200\)-km HSL, most deep signals were difficult to visually distinguish from the noise in the computed Doppler frequency offset. The existence of deep signals can be inferred from the analysis of refractivity. However, COSMIC-2 generally shows substantially underestimated refractivity gradients relative to ECMWF or dropsondes. Because deep occultations were identified in the presence of standard refraction in COSMIC-2, and no deep signals were found in occultations collocated with critical gradients in ECMWF, the connection between ducting and deep RO signals remains to be confirmed. Cross-satellite interference in GPS occultations is a major disturbing factor altering the structure of the observed signals. Detailed data analysis supports the evidence that the magnitude of deep signals can be overshadowed by interfering signals. However, discussion and demonstration of the possible mechanisms is not possible with the data used in this study, and additional scientific efforts are required.
Variations in the running-mean SNR of 367 and 567 V/V (with spikes reaching \(\sim\)1000 and \(\sim\)1500 V/V in the 100-Hz data) were amongst the most statistically significant deep signals induced by propagation through Category 4 TCs. Deep-tracking RO data can be used in TC analysis to obtain information about the interaction between a cyclone and the ambient environment, which is generally hostile to duct formation. However, strong ducts exist: 1) inside the TC circulation, associated with successive subsidence in the gaps among the spiral cloud bands, and 2) in the transition zone between TCs and the ambient environment, impacted by the inflow of dry and cold air or a subsidence inversion. The peak Atlantic hurricane season of 2020 constitutes the first complete year for studying TCs with new-generation RO signals to support the presented results.

## Acknowledgment

The authors would like to thank UCAR/CDAAC for providing access to the COSMIC-2 provisional dataset available at https://data.cosmic.ucar.edu/gnss-ro/cosmic2/provisional/. Tropical cyclone data were downloaded from https://www.ncdc.noaa.gov/ibtracs/.

## References

* [1] J. Wang _et al._, "A long-term, high-quality, high-vertical-resolution GPS dropsonde dataset for hurricane and other studies," _Bull. Amer. Meteorological Soc._, vol. 96, no. 6, pp. 961-973, Jun. 2015.
* [2] C. Velden _et al._, "The Dvorak tropical cyclone intensity estimation technique: A satellite-based method that has endured for over 30 years," _Bull. Amer. Meteorological Soc._, vol. 87, no. 9, pp. 1195-1210, Sep. 2006.
* [3] P. Vergados, Z. J. Luo, K. Emanuel, and A. J. Mannucci, "Observational tests of hurricane intensity estimations using GPS radio occultations," _J. Geophys. Res., Atmos._, vol. 119, no. 4, pp. 1936-1948, Feb. 2014.
* [4] W. S. Schreiner _et al._, "COSMIC-2 radio occultation constellation: First results," _Geophys. Res. Lett._, vol. 47, no. 4, 2020, Art. no. e2019GL086841.
* [5] S. Sokolovsky, "Effect of superrefraction on inversions of radio occultation signals in the lower troposphere," _Radio Sci._, vol. 38, no. 3, pp. 1-24, 2003.
* [6] F. Xie, D. L. Wu, C. O. Ao, A. J. Mannucci, and E. R. Kursinski, "Advances and limitations of atmospheric boundary layer observations with GPS occultation over southeast Pacific ocean," _Atmos. Chem. Phys._, vol. 12, no. 2, pp. 903-918, Jan. 2012.
* [7] W.-H. Yeh, C.-Y. Huang, T.-Y. Hsiao, T.-C. Chiu, C.-H. Lin, and Y.-A. Liou, "Amplitude morphology of GPS radio occultation data for sporadic-E layers," _J. Geophys. Res. Space Phys._, vol. 117, no. A11, pp. 1-8, 2012.
* [8] J. Ding, J. Fei, X. Huang, X. Cheng, and X. Hu, "Observational occurrence of tropical cyclone ducts from GPS dropsonde data," _J. Appl. Meteorol. Climatol._, vol. 52, no. 5, pp. 1221-1236, May 2013.
* [9] C. O. Ao, T. Meehan, G. Hajj, A. Mannucci, and G. Beyerle, "Lower troposphere refractivity bias in GPS occultation retrievals," _J. Geophys. Res._, vol. 108, no. D18, pp. 1-16, 2003.
* [10] S. Sokolovsky, W. Schreiner, Z. Zeng, D. Hunt, Y.-C. Lin, and Y.-H. Kuo, "Observation, analysis, and modeling of deep radio occultation signals: Effects of tropospheric ducts and interfering signals," _Radio Sci._, vol. 49, no. 10, pp. 954-970, Oct. 2014.
* [11] S. Sokolovsky, C. Rocken, W. Schreiner, and D. Hunt, "On the uncertainty of radio occultation inversions in the lower troposphere," _J. Geophys. Res._, vol. 115, no. D22, pp. 1-19, 2010.
* [12] K. R.
Knapp, M. C. Kruk, D. H. Levinson, H. J. Diamond, and C. J. Neumann, "The international best track archive for climate stewardship (IBTrACS): Unifying tropical cyclone data," _Bull. Amer. Meteorological Soc._, vol. 91, no. 3, pp. 363-376, Mar. 2010.
* [13] C. L. Weatherford and W. M. Gray, "Typhoon structure as revealed by aircraft reconnaissance. Part II: Structural variability," _Monthly Weather Rev._, vol. 116, no. 5, pp. 1044-1056, May 1988.
* [14] K. R. Knapp, C. S. Velden, and A. J. Wimmers, "A global climatology of tropical cyclone eyes," _Monthly Weather Rev._, vol. 146, no. 7, pp. 2089-2101, Jul. 2018.
* [15] M. E. Gorbunov, K. B. Lauritsen, A. Rhodin, M. Tomassini, and L. Kornblueh, "Radio holographic filtering, error estimation, and quality control of radio occultation data," _J. Geophys. Res.: Atmos._, vol. 111, no. D10, pp. 1-10, May 2006.
* [16] S. Sokolovsky, C. Rocken, W. Schreiner, D. Hunt, and J. Johnson, "Postprocessing of L1 GPS radio occultation signals recorded in open-loop mode," _Radio Sci._, vol. 44, no. 2, pp. 1-13, Apr. 2009.

Fig. 5: Distribution of (top) refractivity gradients and (middle) SNR values with (bottom) corresponding HSL for the strongest deep signals (purple) found in collocations with TCs. Other tunable RO parameters represent: SNR cut-off (blue), Doppler cut-off (green), and lowermost HSL (red). The noise level is marked with yellow dots. The purple circles indicate deep amplitudes due to clear interfering signals. The out-of-scale SNR values for occultations #7, #10, and #13 corresponding to Doppler cut-offs have magnitudes of 1989, 5254, and 1047 V/V, respectively.
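As promised in Sec. II-A, a minimal NumPy sketch of Eq. (1) and of the 1-Hz running-mean noise test follows. The satellite radii, angle, and SNR series are placeholder values, and the 100-to-1-Hz averaging is implemented as a plain 100-sample moving mean, which is an assumption about the exact form of the filter.

```python
import numpy as np

def straight_line_tangent_altitude(r_leo, r_gnss, theta, r_curv):
    """Eq. (1): HSL from LEO/GNSS radii [km], central angle theta [rad],
    and local radius of curvature r_curv [km]."""
    chord = np.sqrt(r_leo**2 + r_gnss**2 - 2.0 * r_leo * r_gnss * np.cos(theta))
    return r_leo * r_gnss * np.sin(theta) / chord - r_curv

def running_mean_snr(snr_100hz, window=100):
    """Average 100-Hz SNR down to ~1 Hz with a moving-mean filter."""
    kernel = np.ones(window) / window
    return np.convolve(snr_100hz, kernel, mode="valid")

# Placeholder geometry: LEO at ~550 km and GPS at ~20200 km altitude;
# this angle yields an HSL of roughly -180 km, i.e. a deep occultation.
r_e = 6371.0
hsl = straight_line_tangent_altitude(r_e + 550.0, r_e + 20200.0,
                                     np.deg2rad(103.0), r_e)
print(f"HSL = {hsl:.1f} km")

# Deep signals count as significant when they exceed the noise level,
# taken as the lowest value of the 1-Hz running-mean SNR (Sec. II-A).
snr = np.abs(np.random.default_rng(0).normal(150.0, 30.0, 10_000))
snr_1hz = running_mean_snr(snr)
noise_level = snr_1hz.min()
deep_mask = snr_1hz > 2.0 * noise_level  # factor-of-two criterion from Sec. III
print(f"noise level = {noise_level:.1f} V/V, deep samples: {int(deep_mask.sum())}")
```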
Global navigation satellite system (GNSS) signals in the radio occultation (RO) technique, using new measurements from the constellation observing system for meteorology, ionosphere & climate (COSMIC-2) mission, were observed very deep below the Earth's limb. Selected occultations collocated with severe tropical cyclones showed the existence of signal-to-noise ratio (SNR) variations at or below \(-200\) km in terms of the height of straight line (HSL) connecting a pair of occulting satellites. The presence of such signals is considered indicative of sharp inversion layers associated with the planetary boundary layer. We investigate the potential application of deep occultation signals for the detection of tropical cyclones, which often result in strong vertical gradients of refractivity. The most prominent deep signatures computed using a 1-s running-mean filter can reach 400 V/V, whereas the majority of deep signals exceed the noise level by a factor of two. Cross-satellite interference is an important mechanism affecting the structure of deep signals, especially for global positioning system (GPS) occultations.

Amplitude, deep signal, radio occultation (RO), tropical cyclone (TC).
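To make the collocation criteria of Sec. II-B concrete, here is a small sketch that matches RO tangent points to IBTrACS cyclone fixes within \(\pm 2\) h and 600 km, using a haversine great-circle distance. The record layout, field names, and sample values are invented for the example.

```python
import numpy as np
from datetime import datetime, timedelta

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlam = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def collocate(ro_events, tc_fixes, max_dt=timedelta(hours=2), max_dist_km=600.0):
    """Pair RO events with TC fixes meeting the +/-2 h and 600-km criteria."""
    pairs = []
    for ro in ro_events:
        for tc in tc_fixes:
            if abs(ro["time"] - tc["time"]) <= max_dt and \
               haversine_km(ro["lat"], ro["lon"], tc["lat"], tc["lon"]) <= max_dist_km:
                pairs.append((ro["id"], tc["name"]))
    return pairs

# Invented sample records for illustration only.
ro_events = [{"id": "C2E1.2019.324.01", "time": datetime(2019, 11, 20, 6, 30),
              "lat": 14.2, "lon": 130.5}]
tc_fixes = [{"name": "SAMPLE_TC", "time": datetime(2019, 11, 20, 6, 0),
             "lat": 12.9, "lon": 133.1}]
print(collocate(ro_events, tc_fixes))
```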
# LIDAR and Pictometry Images Integrated Use for 3D Model Generation

F. Prandi, C. Achille, R. Brumana, F. Fassi, L. Fregonese
DIIAR, Politecnico di Milano, P.zza Leonardo da Vinci 32, 20133 Milano - [email protected]

## 1 Introduction

### Motivation

In recent years, the growing demand for geographic information has driven rapid development of technologies and software for acquiring, storing, analysing and visualizing this information. On the sensor and platform side, navigation systems (GPS/IMU), aerial digital cameras, high-definition satellite images and high-density LIDAR sensors allow users to collect a great amount of information. At the same time, on the software side, global information visualization software, web platforms for data management and analysis, and 3D visualization hardware and software allow users to exploit the information easily. Digital terrain models (DTM) and 3D city models (3DCM) have become some of the most important and attractive products of photogrammetry and remote sensing. While the information acquisition techniques are in many respects consolidated, an ample sector of research is directed toward the practical use of this massive data structure. The aim of the research is to furnish usable information for planning: for the organization of large works, particularly environmental ones, for the management and analysis of urban contexts, etc. The scales of interest range from 1:500/1:1000 up to the medium scales of 1:2000/1:5000.

In this paper, we propose ways to combine information extracted from oblique images with a DSM generated by LiDAR or image matching. Specifically, we acquire multiple oblique aerial images from a Pictometry system using a standard digital camera; after orienting the images without initial knowledge of position and orientation, we extract lines and segments from the oriented images and use them to improve the DSM. To this purpose, the information contained in nadir or oblique aerial images, once these are adequately oriented in space, can be re-projected onto the laser point cloud to clearly define an edge or an element identifiable in the images but not detected by the LiDAR. The data needed for the development of such models includes, in a first analysis, LiDAR data and Pictometry images, to which cartographic data contained in spatial databases can be added where present.

### Technological state of the art

The technology used for information acquisition is generally airborne. In recent decades, alongside the traditional photogrammetric aerial camera, new technologies have established themselves, such as LIDAR, radar or, more simply, oblique digital cameras. In particular, aerial laser scanning (LIDAR) allows a great mass of data to be collected, from 1-4 points/m² for airborne systems up to 40-100 points/m² for helicopter systems. Airborne LIDAR has become a rather important information source for generating high-quality Digital Surface Models.
It offers one of the most accurate, expedient and cost-effective ways of capturing wide-area elevation information to produce highly detailed DSMs. LIDAR systems collect positional (x, y) and elevation (z) data at pre-defined intervals. The resulting LIDAR data is a very dense point cloud. The accuracy of the LIDAR data is a function of the flying height, the laser beam diameter (system dependent), the quality of the GPS/IMU data, and the post-processing procedures. Accuracies of ±15 cm (vertically) can be achieved. This wealth of three-dimensional data allows the construction of digital surface models with a high degree of detail. Through the use of appropriate filters, the technology also allows buildings to be separated from the ground, thus obtaining a digital model of the terrain. Likewise, classical photogrammetry, with the development of navigation systems and the use of high-resolution aerial digital cameras, allows very accurate surface models of the territory to be obtained with image matching techniques. Combining these two technologies notably increases the possibility of obtaining detailed three-dimensional models and ortho-photos. Both LIDAR and classical photogrammetry offer a nadir vision of the territory; however, in many cases the possibility of having an oblique vision of the scene can prove very interesting. The Pictometry image-acquisition technology meets precisely this demand for a different vision of the environment from above. It allows a series of pictures to be collected with a point of view different from the typical nadir sight.

### Related work

Thanks to the recent evolution of new sensors (digital cameras, LIDAR data, high-resolution satellites) and the development of efficient algorithms, the automatic production of urban DSMs including buildings and trees is now possible (Maas, 2001; Fraser _et al._, 2002; Zinger _et al._, 2002). However, the automatic computation of 3D vector data is much more difficult because of low-contrast building contours, hidden areas and complex-shaped buildings (Jung and Paparoditis, 2003). For most applications requiring high-quality vector data, a manual or semi-automatic intervention is necessary (Baillard, 2004). The combined use of aerial oblique images and 3D data has been investigated in various ways. Some approaches texture-map a 3D model obtained from aerial and ground-based laser scans with oblique aerial imagery (Frueh _et al._, 2004); others are concerned with dense height-map reconstruction from aerial oblique image sequences (Le Besnerais, 2007). In this work, we want to investigate the use of information, such as lines and shapes, extracted from oblique images in order to improve the 3D model (generated by LiDAR or image matching techniques). A problem related to this approach is the registration between the two datasets. The most common methods for solving the registration problem are based on the identification of common points. Such methods are not applicable when dealing with LiDAR surfaces, since these correspond to laser footprints rather than distinct points that could be identified in the imagery (Baltsavias, 1999). Alternative methodologies exist for the registration of photogrammetric and LIDAR data using three-dimensional, straight-line features (Habib _et al._, 2005). In this work we first investigate the accuracy of these aerial oblique images and whether it is possible to use them combined with LiDAR information.
Afterwards, the registered images can be used to extract information, such as breaklines or facade planes of buildings. These pieces of information can then be integrated with the LiDAR points in order to improve the 3D model.

## 2 Analysis of Requirements

### Pictometry system

The Pictometry technology consists of a system composed of 5 cameras: one takes pictures of the territory in the nadir direction, while the other four simultaneously take oblique pictures of the territory in directions staggered by 90 degrees. Oblique imagery is angled-view imagery in which four directions are captured, so that feature faces and the represented area can be seen from North, South, East and West (Fig. 1). Pictometry comprises two types and two levels of image. Orthogonal images are traditional straight-down images, whilst oblique photographs are taken at an angle typically between 45 and 60 degrees. "Community" images are flown at approximately 1500-1800 metres and have an average resolution of 60 cm/pixel. "Neighbourhood" images are flown at approximately 600-750 metres with an average resolution of 15 cm/pixel. This allows a series of pictures to be collected with a point of view different from the typical nadir sight. The typical characteristics of the system are:

* Average flying altitude 900 metres.
* Orthogonal camera: focal length 65 mm, average footprint 15 cm.
* Oblique cameras: focal length 85 mm, average footprint 13-18 cm.
* Sensor size 4008 x 2672, pixel size 0.009 mm.
* Overlap approximately 30%-50%.

The system is also provided with software for managing the image library. This software combines the aerial images and has the ability to visualize data and to perform some analyses.

Figure 1: The straight-down and the four oblique images taken by the Pictometry system.

The interpretive software programs are not geographical information systems (GIS) as they are understood today. Instead, they are information systems that allow navigation and prolific use of both orthophotos and oblique images. The geo-referenced imagery allows measurements to be made, including the physical height of objects, the elevation of the ground surface, distance (taking into consideration the terrain traversed), bearing and height. A first step of this work is to investigate the accuracy of the information extracted from the Pictometry images. In fact, one characteristic of the system is the possibility of directly measuring point coordinates on the ground plane. This should allow some information to be extracted directly from the geo-referenced oblique images with considerable time savings. The topic is therefore to evaluate the accuracy of these measurements and the quality of the extracted coordinates, and whether it is comparable with the precision of LiDAR data.

### LiDAR technology

LiDAR is a consolidated technology which utilizes the Global Positioning System (GPS), precision inertial navigation systems, laser range finders, and high-speed computing for data collection. LiDAR systems on airborne platforms (e.g., an airplane or helicopter) usually measure the distance between an object the laser beam hits and the airborne platform carrying the system. Airborne laser mapping instruments are active sensor systems, as opposed to passive imagery such as cameras. With LIDAR, it is possible to obtain elevation information on large tracts in a relatively short time; elevation data obtained with LiDAR can be accurate to up to 15 cm.
A LiDAR system uses the speed of light to determine distance by measuring the time it takes for a light pulse to reflect back from a target to a detector. A laser emitter can send about 5000 pulses per second, but due to the high speed of light a detector can sense the reflected pulse before the next one is sent. LiDAR systems produce data that can be used in digital surface models (DSM). The high density of elevation points provides the possibility of creating high-resolution DSMs. LiDAR has been used effectively in several applications. This work will investigate the possibility of integrating the data acquired by LiDAR with data extracted from oblique images. It is important to underline that the oblique images should allow the identification of all those features and entities which are not visible in airborne laser data or traditional nadir images.

### Used Data Set¹

Footnote 1: The Pictometry images, EFS software and LiDAR data have been granted by CGR Parma (Blom group) within a research project with DIIAR. The aerial images and the digital map of Milan have been kindly granted for use by the Municipality.

The test area consists of two zones with different characteristics. The first area is a portion of the city of Milan; this region is very densely built up, with many infrastructures such as railways, bridges, roads and canals. The second is a portion of the city of Lecco (a lakeside town in Lombardy), which is characterized by a mixture of anthropic and environmental elements. The Milan test field is composed of a series of Pictometry images collected at two different times with 15-cm footprint resolution, a photogrammetric aerial strip acquired for cartographic production with 6-cm footprint resolution, and a vector digital map (scale 1:1000) with an accuracy of about 20 cm. The Lecco test field is composed of a series of Pictometry images collected with 15-cm footprint resolution and LiDAR data with a resolution of 1 point/m² (Fig. 2, Fig. 3).

| Data set | Oblique images (Pictometry) | Aerial images (Wild RC30) | LiDAR data |
|----------|-----------------------------|---------------------------|------------|
| Milan    | X                           | X                         |            |
| Lecco    | X                           |                           | X          |

The first test field will be used first to test the accuracy of the directly georeferenced oblique images, and then to improve the DSM generated by image matching from aerial images with the data collected in the oblique images. Some of these data were already used in other research on 3D city models and the generation of 3D objects for GIS; previous studies (Brumana et al., 2006) showed that the 3D digital map is not enough to completely define the 3D model of complex scenes such as urban areas. It is necessary to use additional data, which can be aerial images, high-resolution DSMs and also oblique images. The second test field will be used to experiment with combining the information in the oblique images and in the laser data, especially in a region with great vertical development. In fact, the Lecco lake region is characterized by the presence of high mountains near the coastline; in this case a different view of the territory, such as oblique images, can be an important source of information.

### Evaluation of Data Accuracy

The opportunity to obtain in a short time the information contained in an aerial survey is an important topic in the photogrammetry field. The Pictometry system allows many geo-referenced images to be obtained quickly after the data acquisition.
In the case of public safety applications, the speed at which imagery of a given area (typically a county) can be captured, the ability to integrate the imagery with third-party systems, and the cost savings associated with the metric use of oblique imagery provide many advantages. Other benefits include image content which is far more intuitive and richer in information than traditional ortho-photography. For these emergency applications a lower accuracy is sufficient but, if we want to use these data to extract information and combine them with other data, we must be sure that the accuracy is comparable. In order to test the accuracy of Pictometry images we used the Milan test field as source. Georeferenced ground positions of higher accuracy were provided from multiple sources. Most points were checkpoints used for aerial triangulation in a previous photogrammetric survey. These points are very accurate, well-defined and photo-identifiable in the airborne oblique imagery. Other points were obtained directly from the digital map and were used only as X-Y checkpoints. We measured the x, y and z coordinates of these checkpoints (Fig. 4) on each of the 4-view Pictometry images, where visible, to compute errors in Eastings, Northings, and Elevations. For each checkpoint, we also averaged the Eastings, Northings and Elevations for all views in which it was visible; for many, the average resulted from four views, but some points were obscured by buildings, trees, cars, etc., so the average resulted from the mean of three, two, and (in a few cases) only one view. All errors were squared and averaged to compute the mean square errors; then the square root was taken of the mean square errors to compute the root-mean-square errors (RMSEx, RMSEy, RMSEr, and RMSEz). RMSEr is the radial statistic, which equals the square root of [RMSEx² + RMSEy²] (a numerical sketch of this computation is given after the references of this paper). The results are shown in Table 2.

Table 2: Accuracy statistics

| Data set | RMSx [m] | RMSy [m] | RMSr [m] | RMSz [m] |
|----------|----------|----------|----------|----------|
| Milan    | 2.320    | 5.654    | 6.142    | 0.378    |

The average accuracy of the coordinates is 6.142 m; this value is acceptable in emergencies when a quick response and an approximate knowledge of the area affected by the phenomenon are needed. Our purpose is to use the 3D data extracted from the oblique images in the 3D model of the landscape, both urban and environmental. It is obvious that the information directly extracted from the Pictometry system is not enough for this aim. It is significant to highlight the difference between the X-Y and Z accuracy; this is probably due to the dataset. The morphology of the city of Milan is flat, and the elevation difference between two points is small if the points are near each other, so a difference in the coordinate values gives only a small difference in the elevation value.

Figure 4: Direct measurement of point elevation in oblique images realized in the Pictometry EFS system.

Figure 3: Oblique images of Lecco Lake.

We must give up the idea of a direct and rapid use of the images; an intermediate step is necessary, consisting in the orientation and registration of the oblique images.

### Images Orientation

The image orientation process consists in finding the camera's 6 parameters: X, Y, Z, Omega, Phi, Kappa.
Camera orientation requires some scene knowledge, usually some control points, for relating the camera coordinate systems to the object coordinate system. The identification of control information cannot easily be automated, especially if the position of the images in space is totally unknown, if the overlap between the images is small, and if the orientation angle between two pictures is large. In recent years much work has been carried out on the automatic registration of images (Forstner and Gulch, 1999). In the case of Pictometry images the boundary conditions are not good: the position is (in our case) unknown, the overlap is about 30%, and the orientation angle between two directions is about 90 degrees. In this situation it is difficult to obtain good automation in the orientation process, and the manual procedure also requires some care. To obtain good results, knowledge of the interior orientation of the cameras is essential. The Pictometry system uses two different cameras, one for the straight-down view and one for the oblique views. The cameras were calibrated and the parameters are reported in Table 3.

Table 3: Interior orientation parameters of the cameras

| Parameter | Nadir camera | Oblique camera |
|-----------|--------------|----------------|
| Pixel size [mm] | 0.009 | 0.009 |
| c [mm] | 64.8258 | 84.4937 |
| X0 [mm] | 0.3843 | 0.5195 |
| Y0 [mm] | 0.3391 | -0.3600 |
| K1 | 4.3853e-06 | 8.3243e-06 |
| K2 | -9.5613e-09 | -9.76607e-09 |
| K3 | -3.047e-12 | -5.3268e-12 |
| P1 | -1.4691e-05 | 5.46611e-06 |
| P2 | -2.032e-05 | 1.98325e-05 |

For the registration process we used PhotoModeler software, which allows images with any orientation to be oriented. The process is time-consuming and not simple, owing to the particular geometry of the pictures. The geometry of the camera positions is not excellent. The problem is the placement of the cameras: there are four main directions (N-S-E-W) in which the system acquires several overlapping photographs taken while moving the camera along one of the four cardinal directions. The output is a series of images concentrated around one direction; for example, all the photos taken in the W direction are close to each other, but at an angle of approximately 90 degrees with respect to the two nearest different directions (N-S). This geometry does not allow a simple solution of the photogrammetric block, and sometimes the solution is unstable: adding a single photo can be enough to make the system fail to converge. Another problem, especially in the lake images, is that a large part of each image is taken up by water, so it is difficult to obtain a good placement of tie points and the overlap is very short. All these factors make the orientation process time-consuming and complicated. Under these conditions, if the camera exterior parameters are unknown, it is difficult to achieve good automation in the automatic orientation of the images. The images have very different perspectives and it is not simple to recognize the homologous points. After the orientation process we can obtain a series of georeferenced images with a sub-metre RMS in point re-projection.

Figure 5: Oblique images with collected tie points; the images are staggered by 90 degrees in direction (S-E). The interpretation of the images and the recognition of homologous points is not simple.

Figure 6: Result of the camera orientation; the 10 photos are taken in two directions (S-E) 90° apart. It can be seen that the cameras are closest for one specific direction.

### Feature extraction

After the orientation process, the subsequent step is the extraction of information from the oriented images. The oblique images must serve as a source of different or additional information with respect to the straight-down ortho-images. The most important characteristic of the oblique images is the different point of view with respect to traditional nadir photos. There are two general research lines to develop: one is related to the identification of the important objects to extract, the other is related to the possibility of automating the extraction process. It is evident that the feature extraction must be addressed to the elements and shapes which are not visible from the orthogonal view. The most significant object classes are the building facades,
The object class most significant are the building facades, \\begin{table} \\begin{tabular}{|l|c|c|} \\hline Parameters & \\multicolumn{2}{|c|}{Camera} \\\\ \\hline & nadir & oblique \\\\ \\hline Pixel Size [mm] & 0.009 & 0.009 \\\\ \\hline C [mm] & 64.8258 & 84.4937 \\\\ \\hline X\\({}_{a}\\) [mm] & 0.3843 & 0.5195 \\\\ \\hline Y\\({}_{a}\\)[mm] & 0.3391 & -0.3600 \\\\ \\hline K1 & 4.3853e\\({}^{6}\\) & 8.3243e\\({}^{6}\\) \\\\ \\hline K2 & -9.5613e\\({}^{9}\\) & -9.76607e\\({}^{9}\\) \\\\ \\hline K3 & -3.047e\\({}^{621}\\) & -5.3268e\\({}^{12}\\) \\\\ \\hline P1 & -1.4691e\\({}^{-25}\\) & 5.46611e\\({}^{-26}\\) \\\\ \\hline P2 & -2.032e\\({}^{405}\\) & 1.98325e\\({}^{-25}\\) \\\\ \\hline \\end{tabular} \\end{table} Table 3: Interior orientation parameters of cameras Figure 5: Oblique Image collected tie point, image are 90 degree direction (S-E) staggered. The interpretation of the images and the recognition of homologues point is not simple Figure 6: Result of camera orientation, the 10 photos are taken in two 90° direction (S-E). It is possible to see as the camera are nearest for one specific direction. the walls all that elements which have a big vertical development. Due to the complexity of aerial images, different view angles and occlusion, straight edge matching for 3-D generation is a difficult task in photogrammetry and computer vision (Zhang 2003). These difficulties are amplified with the use of oblique image, for this reason a manual extraction of the elements will be investigated in order to select some elements and to verify the possibility to insert them in the general 3D Model. Anyway the automation in the extraction process is a important topic in photogrammetry. The first step, after the registration process, is the recognition of the features. A methodology can be the automatic individuation of the edge with edge detector like Canny filter. The second step is to use this edge to generate the features, the main problem, if the image are already oriented, is the individuation of the homologous line in different photos that is a complicated image matching task. Due the complexity of this task at this step of work we are investigated only the possibility to manually generate the objects and in future, to insert, them in the model. ## 3 Conclusion In this paper we have presented the first result of a work which has the aim to find a way to use the oblique aerial images taken from pictometry system in order to improve 3D LiDAR model for a relatively small areas. In first we investigated the accuracy of Pictometry direct georeferenced images and we compared it with the LiDAR accuracy. A direct use of the information extracted from the images is not possible because the accuracy is about 6 meter in the X-Y direction, for this reason a second part of the research is dedicated to the orientation of the images which is necessary to improve the quality to the 3D coordinates extracted by the images. The orientation process have some problems due to the particular geometry of the photo taken. The pictometry system have a short overlap between the images (30-50%) and the photo are taken in four direction (N-S-E-W). For these reason we obtain a series of images which are relatively nearest, if taken in the same view direction, and with a 90degdegrees angle respect the images collected from other directions. In these conditions there are a small angle between the point and the solution can have a not good accuracy. Once registered the images the extraction of features has been done. 
The particularity of Pictometry images is the different point of view; for this reason it is important to dedicate meticulous attention to the typologies of the extracted objects. We are interested in all the elements which develop mainly in the vertical direction and are difficult to identify both in ortho-images and in LiDAR data. The object classes can include building facades, walls, and seaside docks. Due to the particular geometry of the images, it is very hard to automate the recognition of features in the image; indeed, even the semantic interpretation of the scene is complex, since the perspective changes greatly between two different images. Therefore the feature extraction is done manually, with the aim of evaluating the possibility of using the extracted data combined with the LiDAR data to generate a 3D model for relatively small areas. The first results are good in terms of the accuracy of the extracted data and of the information content and resolution of the oblique images. The work still has to examine the integration and registration of the data extracted from the images with the laser data, especially in the case of dense point clouds. A future topic will be improving the automation of the examination of the image content, with the aim of supporting the feature extraction. In conclusion, these first analyses of the photogrammetric use of Pictometry, and of its subsequent use combined with LiDAR data, have shown some interesting opportunities, but the challenge must be the improvement and reduction of manual work. Another topic which can bring an advance towards a more accurate use of the oblique images is a change in the flight plan and in the data acquisition. A larger overlap and more attention to the geometry of the photogrammetric acquisition can drastically reduce the time spent in the orientation process. Only by handling these problems and the challenge of automating the photogrammetric use of oblique aerial multiple images can this approach become a cost-effective alternative to vertical stereo aerial photography and optical satellite imagery for the analysis of relatively small areas.

## References

* Baltsavias, E., 1999. A comparison between photogrammetry and laser scanning. _ISPRS Journal of Photogrammetry and Remote Sensing_, 54, pp. 83-94.
* Baillard, C., 2004. Production of DSM/DTM in urban areas: role and influence of 3-D vectors. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, XXth ISPRS Congress, Istanbul, Turkey, Vol. XXXV, part B.
* Brumana, R., Prandi, F., Fassi, F., 2006. Definition of the 3D content and geometric level of congruence of numeric cartography. _3DGeoInfo'06 International Workshop on 3D Geoinformation_, Kuala Lumpur, pp. 185-194.
* Forstner, W., Gulch, E., 1999. Automatic orientation and recognition in highly structured scenes. _ISPRS Journal of Photogrammetry & Remote Sensing_, 54, pp. 23-34.
* Fraser, C.S., Baltsavias, E., Gruen, A., 2002. Processing of Ikonos imagery for submetre 3D positioning and building extraction. _ISPRS Journal of Photogrammetry & Remote Sensing_, 56(3), pp. 177-194.
* Frueh, C., Sammon, R., Zakhor, A., 2004. Automated texture mapping of 3D city models with oblique aerial imagery. _Proceedings of the 2nd International Symposium on 3D Data Processing, Visualization and Transmission_.
* Jung, F., Paparoditis, N., 2003. Extracting 3D free-form surface boundaries of man-made objects from multiple calibrated images: a robust, accurate and high resolving power edge matching and chaining approach. _Photogrammetric Image Analysis, IAPRS_, Vol. 34, Munich, pp. 39-44.
* Habib, A., Ghanma, M., Morgan, M., Al-Ruzouq, R., 2005. Photogrammetric and LiDAR data registration using linear features. _Photogrammetric Engineering & Remote Sensing_, Vol. 71, No. 6, pp. 699-707.
* Le Besnerais, G., Sanfourche, M., Champagnat, F., 2007. Dense height map estimation from oblique aerial image sequences. _Computer Vision and Image Understanding_, 109 (2008), pp. 204-225.
* Maas, H.G., 2001. The suitability of airborne laser scanner data for automatic 3D object reconstruction. _In Automatic Extraction of Man-Made Objects from Aerial and Space Images (III)_, pp. 291-296.
* Prandi, F., Brumana, R., Monti, C., 2007. Interoperable 3D city model as support to the planning and to the project. _In Optical 3D Measurement Techniques_, VIII Conference on Optical 3D Measurement Techniques, Zurich, Vol. 1, pp. 362-370.
* Zhang, C., 2003. Towards an operational system for automated updating of road databases by integration of imagery and geodata. _ISPRS Journal of Photogrammetry & Remote Sensing_, 58 (2004), pp. 166-186.
* Zinger, S., Nikolova, M., Roux, M., Maitre, H., 2002. 3D resampling for airborne laser data of urban areas. _In IAPRS_, Vol. 34, n° 3A, pp. 418-423.
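As referenced in Section 2.4, the sketch below reproduces the accuracy statistics of Table 2 from per-checkpoint errors; the error arrays are invented placeholders, while the radial statistic follows the definition RMSEr = sqrt(RMSEx² + RMSEy²) given in the text.

```python
import numpy as np

def rmse(errors: np.ndarray) -> float:
    """Root-mean-square error: square, average, then take the square root."""
    return float(np.sqrt(np.mean(np.square(errors))))

# Placeholder per-checkpoint errors [m], each averaged over the visible views
# (up to four oblique images per checkpoint, as described in Section 2.4).
err_x = np.array([1.9, -2.5, 2.8, -1.7, 2.4])
err_y = np.array([5.1, -6.2, 5.8, -5.3, 5.9])
err_z = np.array([0.3, -0.4, 0.5, -0.2, 0.4])

rmse_x, rmse_y, rmse_z = rmse(err_x), rmse(err_y), rmse(err_z)
rmse_r = np.hypot(rmse_x, rmse_y)  # radial statistic of Table 2

print(f"RMSEx={rmse_x:.3f}  RMSEy={rmse_y:.3f}  RMSEr={rmse_r:.3f}  RMSEz={rmse_z:.3f}")
```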
The territorial data acquisition technology varies with the scale of the final product: from classical topographic surveys or terrestrial laser scanner surveys for engineering works, up to multispectral analysis of satellite images for regional and larger scales. Classical aerial photogrammetry and LiDAR offer a nadir (straight-down) view of the territory; in many cases, however, an oblique view of the scene can prove very valuable. The Pictometry image acquisition technology directly answers this demand for a different view of the environment. The system is composed of 5 cameras: one images the territory at nadir, while the other four simultaneously acquire oblique frames staggered in 90° directions. This allows the collection of a series of pictures with a point of view different from the typical nadir sight. By widening the photogrammetric use of this data type, it is possible to exploit the information contained in the oblique images for three-dimensional object extraction and the construction of 3D models, including in urban zones where purely nadir images often fail to furnish all the information necessary for that purpose. In this paper we present the first results of a work whose aim is to find a way to use the oblique aerial images taken by the Pictometry system in order to improve 3D LiDAR models for relatively small areas.

Keywords: Photogrammetry, LiDAR, DTM, 3D Model
# Statistical Physics in Meteorology

M. Ausloos, SUPRATECS 1 and GRASP 2, Institute of Physics, B5, University of Liège, B-4000 Liège, Belgium

Footnote 1: SUPRATECS = Services Universitaires Pour la Recherche et les Applications Technologiques de matériaux Électrocéramiques, Composites et Supraconducteurs

Footnote 2: GRASP = Group for Research in Applied Statistical Physics

November 4, 2021

## I Introduction and Foreword

This contribution to the 18th Max Born Symposium Proceedings cannot be seen as an extensive review of the connection between meteorology and various aspects of modern statistical physics. Space and time (and weather) limit its content. Much of what is found here can rather be considered to result from the biased viewpoint or limited understanding of a frustrated new researcher unsatisfied by the present status of the field. What follows is only a set of basic considerations and reflections intended to suggest lines for various investigations, in the spirit of modern statistical physics ideas. The author came into this subject starting from previous work in econophysics, when he observed that some "weather derivatives" were in use, and that a sort of game had been initiated by the Frankfurt Deutsche Börse[1] in order to attract customers who could predict the temperature in various cities within a certain lapse of time and win some prize thereafter. This exercise was similar to predicting the S&P 500 or other financial index values at a certain future time. Hence various techniques used in econophysics, like detrended fluctuation analysis, multifractals, moving-average crossing techniques, etc., could be attempted from scratch. Besides the weather (temperature) derivatives, other effects are of interest. Much is said and written about, e.g., the ozone layer and the Kyoto "agreement". The El Niño system is a great challenge to scientists. Since some data are available in the form of time series, like the Southern Oscillation Index, it is of interest to look for trends, coherent structures, periods, correlations in noise, etc., in order to bring some knowledge, if possible basic parameters, to this meteorological field, and to import some modern statistical physics ideas into such climatological phenomena. It appeared that other data are also available, like those obtained under various experiments put into force by various agencies, like the Atlantic Stratocumulus Transition Experiment (ASTEX) for ocean surfaces or those of the Atmospheric Radiation Measurement Program[2, 3] (ARM), among others. However, the data are sometimes of rather limited value because of a lack of precision, or are biased because the raw data have already been transformed through models and arbitrarily averaged ("filtered"), whence sometimes even lacking the meaning they should contain. Therefore a great challenge is to sort the wheat from the chaff in order to develop meaningful studies. I will mention most of the work to which I have contributed, being aware that I am failing to acknowledge many reports more important than those, for which I truly apologize. There are very interesting lecture notes on the web for basic modules on meteorological training courses, e.g. one available through the ECMWF website[4]. In Sect. 2, I will briefly comment on the history of meteorology. The notion of clouds, in Sect.
3, allows for bringing up the geometrical notion of fractals for meteorology work, thus scaling laws and modern data analysis techniques. Simple technical and useful approaches, based on standard statistical physics techniques and ideas, in particular on the scaling hypothesis for phase transitions and on percolation theory features, will be found in Sect. 4.

## II History of Meteorology

From the beginning of time, the earth, the sky and the weather have been of great concern. As soon as agriculture, commerce, and travel on land and sea prevailed, men wished to predict the weather. Later on, airborne machines needed knowledge of the atmosphere and weather predictions for best flying. Nowadays much money is spent on weather predictions for sport activities. It is known how relevant (even _fundamental_!) the knowledge of weather (temperature, wind, humidity, ...) is, e.g., in sailing races or in Formula 1 and car rally races. Recall the importance of knowing and predicting the wind (strength and direction), pressure and temperature at high altitude for the (recent) non-stop balloon round-the-world trip. The first to draw sea wind maps was Halley[5], an admirer of the Breslau administration. That followed the "classical" isobaths and isoheights (these are geometrical measures!) for sailors needing to go through channels. I am very pleased to point out that Heinrich Wilhelm Brandes (1777-1834), Professor of Mathematics and Physics at the University of Breslau, was the first[5] who had the idea of displaying weather data (temperature, air pressure, and so on) on geographical maps. Later von Humboldt (1769-1859) had the idea to connect points in order to draw isotherms[5]. It is well known nowadays that various algorithms will give various isotherms, starting from the same temperature data and coordinate table. In fact the maximum or minimum temperature as defined in meteorology[6, 7] is far from the one acceptable in physics laboratories. Note that displayed isotherms connect data points whose values are obtained at different times! No need to say that it seems essential to concentrate on predicting the uncertainty in forecast models of weather and climate, as emphasized elsewhere[8].

## III Climate and Weather. The Role of Clouds

Earth's climate is clearly determined by complex interactions between sun, oceans, atmosphere, land and biosphere[9, 10]. The composition of the atmosphere is particularly important because certain gases, including water vapor, carbon dioxide, etc., absorb heat radiated from Earth's surface. As the atmosphere warms up, it in turn radiates heat back to the surface and increases the earth's "mean surface temperature". Much attention has been paid recently[11, 12] to the importance of the main components of the atmosphere, in particular clouds[13], in water's three forms (vapor, liquid and solid), for buffering the global temperature against reduced or increased solar heating [14]. This leads to efforts to improve not only models of the earth's climate but also predictions of climate change [15], as understood over long time intervals, in contrast to the shorter time scales of weather forecasting. In fact, with respect to climatology the situation is very complicated because one does not even know what the evolution equations are. Since controlled experiments cannot be performed on the climate system, one relies on using ad hoc models to identify cause-and-effect relationships.
Nowadays there are several climate models belonging to many different centers [16]. Their web sites not only sometimes carry the model output used to make images but also provide the source code. It seems relevant to point out here that the stochastic resonance idea was proposed to describe climatological evolution [17]. It should be remembered that solutions of the Navier-Stokes equations depend strongly on the initial conditions and on the integration steps. Therefore great precision in the temperature, wind velocity, etc. cannot be expected, and the solution(s) look like a mess after a few numerical steps [18]. The Monte Carlo technique suggests successively introducing a set of initial conditions, performing the integration of the differential equations, and averaging thereafter [18]. It is hereby time to mention Lorenz's work [19], which simplified the Navier-Stokes equations in a search for some predictability. However, predicting the outcome of such a set of equations with complex nonlinear interactions taking place in an open system is a difficult task [20]. The turbulent character of the atmospheric boundary layer (ABL) is one of its most important features. Turbulence can be caused by a variety of processes, like thermal convection, or mechanically generated by wind shear, or following interactions influenced by the rotation of the Earth [21, 22]. This complexity of physical processes and interactions between them creates a variety of atmospheric formations. In particular, in a cloudy ABL the radiative fluxes produce local sources of heating or cooling within the mixed layer and therefore can greatly influence its turbulent structure and dynamics, especially at the cloud base. Two practical cases, the marine ABL and the continental ABL, have been investigated for their scaling properties[23, 24, 25]. Yet, let it be emphasized that the first modern ideas of statistical physics implemented in cloud studies through fractal _geometry_ are due to Lovejoy, who looked at the perimeter-area relationship of rain and cloud areas[26], i.e. the fractal dimension of their shape or ground projection. He discovered the statistical self-similarity of cloud boundaries through area-perimeter analyses of satellite imagery, i.e. the fractal scaling of the cloud perimeter in the horizontal plane. He found the fractal dimension \(D_{p}\simeq 4/3\) over a spectrum of 4 orders of magnitude in size, from small fair-weather cumuli (\(\sim 10^{-1}\) km) up to huge stratus fields (\(\sim 10^{3}\) km). Cloud size distributions have also been studied from a scaling point of view[27, 28, 29, 30]. Rain has also received much attention[31, 32, 33, 34, 35, 36, 37].

## IV Modern statistical physics approaches

Due to the nonlinear physics laws governing the phenomena in the atmosphere, the time series of atmospheric quantities are usually non-stationary[38, 39], as revealed by Fourier spectral analysis, which is usually the first technique to use. Recently, new techniques have been developed that can systematically eliminate trends and cycles in the data and thus reveal intrinsic dynamical properties, such as correlations, that are very often masked by nonstationarities [40, 41]. Hence many studies reveal long-range power-law correlations in geophysical time series[39, 42], in particular in meteorology[43, 44, 45, 46, 47, 48, 49, 50]. Multi-affine properties[51, 52, 53, 54, 55, 56, 57, 58, 59] can also be identified, using singular spectrum and/or wavelet analysis.
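To make the first of these steps concrete, the following minimal Python sketch (not from the original paper; the test signal and the whole-range log-log fit are illustrative assumptions) estimates the spectral exponent \(\beta\) of \(S(f)\sim f^{-\beta}\) from a periodogram; a Brownian-like walk should give \(\beta\approx 2\).

```python
import numpy as np

def spectral_exponent(x, dt=1.0):
    """Estimate beta in S(f) ~ f**(-beta) from a periodogram,
    via a least-squares fit in log-log coordinates."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2       # raw periodogram
    f = np.fft.rfftfreq(len(x), d=dt)
    mask = f > 0                            # drop the zero-frequency bin
    slope = np.polyfit(np.log(f[mask]), np.log(psd[mask] + 1e-30), 1)[0]
    return -slope

rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=8192))     # Brownian walk: beta ~ 2
print(spectral_exponent(walk))
```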
There are different levels of essential interest for sorting out correlations from data, in order to increase the confidence in predictability[60]. There are investigations based on long-, medium-, and short-range horizons. The \(i\)-diagram variability (\(iVD\)) method allows one to sort out some short-range correlations. The technique has been used on a liquid water cloud content data set taken from the Atlantic Stratocumulus Transition Experiment (ASTEX) 92 field program[61]. It has also been shown that the random matrix approach can be applied to the empirical correlation matrices obtained from the analysis of the basic atmospheric parameters that characterize the state of the atmosphere[62]. The principal component analysis technique is a standard technique[63] in meteorology and climate studies. The Fokker-Planck equation for describing the liquid water path[64] is also of interest. See also some tentative search for power-law correlations in the Southern Oscillation Index fluctuations characterizing El Niño[65]. But there are many other works of interest[66].

### Ice in cirrus clouds

In clouds, ice appears in a variety of forms and shapes, depending on the formation mechanism and the atmospheric conditions[22, 51, 67, 68]. The cloud inner structure, content, temperature, lifetime, etc. can be studied. In cirrus clouds, at temperatures colder than about \(-40^{\circ}\)C, ice crystals form. Because of the vertical extent, ca. from about 4 to 14 km and higher, and the layered structure of such clouds, one way of obtaining some information about their properties is mainly by using ground-based remote sensing instruments[69, 70, 71, 72]. Attention can be focussed[50] on correlations in the fluctuations of radar signals obtained at isodepths of _winter_ and _fall_ cirrus clouds, giving (i) the backscattering cross-section, (ii) the Doppler velocity and (iii) the Doppler spectral width of the ice crystals. They correspond to the physical coefficients used in the Navier-Stokes equations to describe flows, i.e. bulk modulus, viscosity, and thermal conductivity. It was found that power-law time correlations exist, with a crossover between regimes at about 3 to 5 min, but also \(1/f\) behavior, characterizing the top and bottom layers and the bulk of the clouds. The underlying mechanisms for such correlations likely originate in ice nucleation and crystal growth processes.

### Stratus clouds

In stratus clouds, long-range power-law correlations[45, 49] and multi-affine properties[24, 25, 57] have been reported for the liquid water fluctuations, besides the spectral density[73]. Interestingly, stratus cloud data retrieved from the radiance, recorded as brightness temperature at the Southern Great Plains central facility and operated in the vertically pointing mode[74], indicated a Fourier spectrum \(S(f)\sim f^{-\beta}\) with an exponent \(\beta\) equal to \(1.56\pm 0.03\), pointing to a nonstationary time series. The detrended fluctuation analysis (DFA) method applied to the stratus cloud brightness microwave recording[45, 75] indicates the existence of long-range power-law correlations over a two-hour span. Contrasts in behavior, depending on the season, can be pointed out. The DFA analysis of liquid water path data measured in April 1998 gives a scaling exponent \(\alpha=0.34\pm 0.01\) holding from 3 to 60 minutes.
This scaling range is shorter than the 150 min scaling range [45] for a stratus cloud in January 1998 at the same site. For longer correlation times a crossover to \(\alpha=0.50\pm 0.01\) is seen up to about 2 h, after which the statistics of the DFA function are not reliable. However, a change of regime from Gaussian to non-Gaussian fluctuations has been clearly identified for the cloud structure changes, using a finite-size (time) interval window. It has been shown that the DFA exponent turns from a low value (about 0.3) to 0.5 before the cloud breaks. This indicates that the stability of the cloud, represented by antipersistent fluctuations, is (for some unknown reason at this level) turning into a system for which the fluctuations are similar to a pure random walk. The same type of finding was observed for the so-called liquid water path.

Footnote 3: The liquid water path (LWP) is the amount of liquid water in a vertical column of the atmosphere; it is measured in g cm\({}^{-2}\), sometimes in cm!!

The value of \(\alpha\approx 0.3\) can be interpreted as the \(H_{1}\) parameter of the multifractal analysis of liquid water content[24, 25, 52] and of liquid water path[57]. Hence, the appearance of broken clouds and clear sky following a period of thick stratus can be interpreted as a non-equilibrium transition or a sort of fracture process in more conventional physics. The existence of a crossover suggests two types of correlated events, as in classical fracture processes: nucleation and growth of diluted droplets. Such a marked change in persistence implies that specific fluctuation correlation dynamics should usefully be inserted as ingredients in _ad hoc_ models.

### Cloud base height

The variations in the local \(\alpha\)-exponent ("multi-affinity") suggest that the nature of the correlations changes with time, a so-called intermittency phenomenon. The evolution of the time series can be decomposed into successive persistent and anti-persistent sequences. It should be noted that the intermittency of a signal is related to the existence of extreme events, thus a distribution of events away from a Gaussian distribution, in the evolution of the process that has generated the data. If the tails of the distribution function follow a power law, then the scaling exponent defines the critical order value after which the statistical moments of the signal diverge. Therefore it is of interest to probe the distribution of the fluctuations of a time-dependent signal \(y(t)\) prior to investigating its intermittency. Much work has been devoted to the cloud base height [54, 55, 56], under various ABL conditions, and to the LWP [57, 64]. Neither the distribution of the fluctuations of liquid water path signals nor that of the cloud base height appears to be Gaussian. The tails of the distributions follow a power law, pointing to "large events" also occurring in the meteorological (space and time) framework. This may suggest routes for other models.

### Sea Surface Temperature

Other time series analyses have been performed searching for power-law exponents, e.g. in atmospheric [76] or sea surface temperature (SST) fluctuations [77]. These are of importance for weighing their impact on regional climate, whence finally for greatly increasing the predictability of precipitation during all seasons. Currently, climate patterns derived from global SST are used to forecast precipitation.
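As a concrete illustration of the DFA procedure invoked throughout this section, the following minimal Python sketch (not from the original paper; the window sizes and the white-noise test signal are illustrative assumptions) integrates a series, detrends it linearly in non-overlapping windows, and fits the fluctuation function \(F(s)\sim s^{\alpha}\); white noise should give \(\alpha\approx 0.5\), while antipersistent signals give \(\alpha<0.5\).

```python
import numpy as np

def dfa_exponent(x, scales):
    """Estimate the DFA-1 scaling exponent alpha of a 1-D signal x."""
    y = np.cumsum(x - np.mean(x))          # profile of the series
    fluct = []
    for s in scales:
        n = len(y) // s                    # number of whole windows
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)   # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    # alpha is the slope of log F(s) versus log s
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

rng = np.random.default_rng(0)
white = rng.normal(size=4096)
print(dfa_exponent(white, [8, 16, 32, 64, 128, 256]))  # ~0.5 expected
```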
Recently we have attempted to observe whether the fluctuations in the Southern Oscillation Index (\(SOI\)) characterizing El Niño were also prone to a power-law analysis. For the monthly averaged \(SOI\) data over the 1866-2000 time interval, it is found from the tails of the cumulative distribution of the fluctuations of the \(SOI\) signal that large fluctuations are more likely to occur than a Gaussian distribution would predict. An antipersistent type of correlation exists for a time interval ranging from about 4 months to about 6 years. This favors specific physical models for the El Niño description [65].

## V Conclusions

Modern statistical physics techniques for analyzing atmospheric time series signals indicate scaling laws (exponents and ranges) for correlations. A few examples have been given briefly here above, mainly from contributed papers in which the author has been involved. Work by many other authors has not been included for lack of space. This brief set of comments is only intended to indicate how meteorology and climate problems can be tied to scaling laws and the inherent time series data analysis techniques. Those ideas/theories have allowed me to reduce the list of quoted references, though even so I might have been unfair. One example can be recalled in this conclusion to make the point: the stratus clouds break when the molecule density fluctuations become Gaussian, i.e. when the molecular motion becomes Brownian-like. This should lead to better predictability of the cloud evolution and enormously extend the predictability range in weather forecasting, along the lines of nonlinear dynamics [78].

**Acknowledgments**

Part of these studies has been supported through an Action Concertée Program of the University of Liège (Convention 02/07-293). Comments by A. Pekalski, N. Kitova, K. Ivanova and C. Collette are greatly appreciated.

## References

* [1] http://deutsche-boerse.com/app/open/xelsius. * [2] http://www.arm.gov. * [3] G.M. Stokes, S.E. Schwartz, Bull. Am. Meteorol. Soc. 75 (1994) 1201. * [4] http://www.ecmwf.int/newsevents/training/rcourse_notes/index.html. * [5] M. Monmonier, _Air Apparent. How meteorologists learned to map, predict, and dramatize weather_ (U. Chicago Press, Chicago, 1999). * [6] http://www.maa.org/features/mathchat/mathchat_4_20_00.html. * [7] R.E. Huschke (Ed.), Glossary of Meteorology (Am. Meteorol. Soc., Boston, 1959). * [8] T.N. Palmer, Rep. Prog. Phys. 63 (2000) 71. * [9] R.A. Anthes, H.A. Panofsky, J.J. Cahir, A. Rango, _The Atmosphere_ (Bell & Howell Company, Columbus, OH, 1975). * [10] D. G. Andrews, _An Introduction to Atmospheric Physics_ (Cambridge University Press, Cambridge, 2000). * [11] A. Maurellis, Phys. World 14 (2001) 22. * [12] D. Rosenfeld, W. Woodley, Phys. World 14 (2001) 33. * [13] R.R. Rogers, _Short Course in Cloud Physics_ (Pergamon Press, New York, 1976). * [14] H.-W. Ou, J. Climate 14 (2001) 2976. * [15] K. Hasselmann, in _The Science of Disasters_, A. Bunde, J. Kropp, H.J. Schellnhuber (Springer, Berlin, 2002) 141. * [16] http://stommel.tamu.edu/baum/climate_modeling.html. * [17] R. Benzi, A. Sutera, A. Vulpiani, J. Phys. A 14 (1981) L453. * [18] A. Pasini, V. Pelino, Phys. Lett. A 275 (2000) 435. * [19] E. N.
Lorenz, J. Atmos. Sci. 20 (1963) 130. * [20] J.B. Ramsey and Z. Zhang, in _Predictability of Complex Dynamical Systems_ (Springer, Berlin, 1996) 189. * [21] J. R. Garratt, _The Atmospheric Boundary Layer_ (Cambridge University Press, Cambridge, 1992). * [22] A. G. Driedonks and P.G. Duynkerke, Bound. Layer Meteor. 46 (1989) 257. * [23] N. Kitova, Ph.D. thesis, University of Liège, unpublished. * [24] A. Davis, A. Marshak, W. Wiscombe, R. Cahalan, J. Atmos. Sci. 53 (1996) 1538. * [25] A. Marshak, A. Davis, W. Wiscombe, R. Cahalan, J. Atmos. Sci. 54 (1997) 1423. * [26] S. Lovejoy, Science 216 (1982) 185. * [27] R.F. Cahalan, D. A. Short, G. R. North, Mon. Weather Rev. 110 (1982) 26. * [28] R. F. Cahalan and J. H. Joseph, Mon. Weather Rev. 117 (1989) 261. * [29] R.A.J. Neggers, H.J.J. Jonker, A.P. Siebesma, J. Atmosph. Sci. 60 (2002) 1060. * [30] S.M.A. Rodts, P. G. Duynkerke, H.J.J. Jonker, J. Atmosph. Sci. 60 (2002) 1895. * [31] S.T.R. Pinho, R.F.S. Andrade, Physica A 255 (1998) 483. * [32] R.F.S. Andrade, Braz. J. Phys. 33 (2003) 437. * [33] J.G.V. Miranda, R.F.S. Andrade, Physica A 295 (2001) 38; Theor. Appl. Climatol. 63 (1999) 79. * [34] Y. Tessier, S. Lovejoy, D. Schertzer, J. Appl. Meteorol. 32 (1993) 223. * [35] D. Schertzer, S. Lovejoy, J. Appl. Meteorol. 36 (1997) 1296. * [36] S. Lovejoy, D. Schertzer, J. Appl. Meteorol. 29 (1990) 1167. * [37] C. S. Bretherton, E. Klinker, J. Coakley, A. K. Betts, J. Atmos. Sci. 52 (1995) 2736. * [38] O. Karner, J. Geophys. Res. 107 (2002) 4415. * [39] A. Davis, A. Marshak, W. J. Wiscombe, and R. F. Cahalan, in _Current Topics in Nonstationary Analysis_, Eds. G. Trevino, J. Hardin, B. Douglas, and E. Andreas (World Scientific, Singapore, 1996) 97-158. * [40] Th. Schreiber, Phys. Rep. 308 (1999) 1. * [41] P.J. Brockwell and R.A. Davis, _Time Series: Theory and Methods_ (Springer-Verlag, Berlin, 1991). * [42] K. Fraedrich, R. Blender, Phys. Rev. Lett. 90 (2003) 108501. * [43] E. Koscielny-Bunde, A. Bunde, S. Havlin, H. E. Roman, Y. Goldreich, H.-J. Schellnhuber, Phys. Rev. Lett. 81 (1998) 729. * [44] E. Koscielny-Bunde, A. Bunde, S. Havlin, Y. Goldreich, Physica A 231 (1993) 393. * [45] K. Ivanova, M. Ausloos, E. E. Clothiaux, and T. P. Ackerman, Europhys. Lett. 52 (2000) 40. * [46] A.A. Tsonis, P.J. Roeber and J.B. Elsner, Geophys. Res. Lett. 25 (1998) 2821. * [47] A.A. Tsonis, P.J. Roeber and J.B. Elsner, J. Climate 12 (1999) 1534. * [48] P. Talkner and R.O. Weber, Phys. Rev. E 62 (2000) 150. * [49] K. Ivanova, M. Ausloos, Physica A 274 (1999) 349. * [50] K. Ivanova, T.P. Ackerman, E.E. Clothiaux, P.Ch. Ivanov, H.E. Stanley, and M. Ausloos, J. Geophys. Res. 108 (2003) 4268. * [51] S.G. Roux, A. Arneodo, N. Decoster, Eur. Phys. J. B 15 (2000) 765. * [52] A. Davis, A. Marshak, W. Wiscombe, R. Cahalan, J. Geophys. Res. 99 (1994) 8055. * [53] A. Marshak, A. Davis, W. J. Wiscombe, R. F. Cahalan, J. Atmos. Sci. 54 (1997) 1423. * [54] N. Kitova, K. Ivanova, M. Ausloos, T.P. Ackerman, M. A. Mikhalev, Int. J. Modern Phys. C 13 (2002) 217. * [55] K. Ivanova, H.N. Shirer, E.E. Clothiaux, N. Kitova, M.A. Mikhalev, T.P. Ackerman, and M. Ausloos, Physica A 308 (2002) 518. * [56] N. Kitova, K. Ivanova, M.A. Mikhalev and M. Ausloos, in _From Quanta to Societies_, W. Klonowski, Ed. (Pabst, Lengerich, 2002) 263. * [57] K. Ivanova, T. Ackerman, Phys. Rev. E 59 (1999) 2778. * [58] C.R. Neto, A. Zanandrea, F.M. Ramos, R.R. Rosa, M.J.A. Bolzan, L.D.A. Sa, Physica A 295 (2001) 215. * [59] H.F.C. Velho, R.R. Rosa, F.M. Ramos, R.A. Pielke, C.A.
Degrazia, C.R. Neto, A. Zanandrea, Physica A 295 (2001) 219. * [60] B.D. Malamud, D.L. Turcotte, J. Stat. Plann. Infer. 80 (1999) 173. * [61] K. Ivanova, M. Ausloos, A.B. Davis, T.P. Ackerman, Physica A 272 (1999) 269. * [62] M. S. Santhanam, P. K. Patra, Phys. Rev. E 64 (2001) 16102. * [63] M.J. O'Connel, Comp. Phys. Comm. 8 (1974) 49. * [64] J. Geophys. Res. - Atmosph. 107 (2002) 4708. * [65] M. Ausloos and K. Ivanova, Phys. Rev. E 63 (2001) 047201. * [66] J.I. Salisbury, M. Winbush, Nonlin. Process. Geophys. 9 (2002) 341. * [67] K. R. Sreenivasan, Ann. Rev. Fluid Mech. 23 (1991) 539. * [68] C. S. Kiang, D. Stauffer, G. H. Walker, O. P. Puri, J. D. Wise, Jr. and E. M. Patterson, J. Atmos. Sci. 28 (1971) 1222. * [69] E.R. Westwater, in _Atmospheric Remote Sensing by Microwave Radiometry_, ed. M.A. Janssen (John Wiley and Sons, New York, 1993) pp. 145-213. * [70] E.R. Westwater, Radio Science 13 (1978) 677. * [71] W.G. Rees, _Physical Principles of Remote Sensing_ (Cambridge University Press, Cambridge, 1990). * [72] http://www.arm.gov/docs/instruments/static/blc.html. * [73] H. Gerber, J.B. Jensen, A. Davis, A. Marshak, W. J. Wiscombe, J. Atmos. Sci. 58 (2001) 497. * [74] J.C. Liljegren, B.M. Lesht, IEEE Int. Geosci. and Remote Sensing Symp. 3 (1996) 1675. * [75] K. Ivanova, E.E. Clothiaux, H.N. Shirer, T.P. Ackerman, J. Liljegren and M. Ausloos, J. Appl. Meteor. 41 (2002) 56. * [76] J.D. Pelletier, Earth Planet. Sci. Lett. 158 (1998) 157. * [77] R.A. Monetti, S. Havlin, A. Bunde, Physica A 320 (2003) 581. * [78] F. Molteni, R. Buizza, T.N. Palmer, T. Petroliagis, Q. J. R. Meteorol. Soc. 122 (1996) 73.
Various aspects of modern statistical physics and meteorology can be tied together. The historical importance of the University of Wroclaw in the field of meteorology is first pointed out. Next, some basic differences in time and space scales between meteorology and climatology are outlined. The nature and role of clouds, both from a geometric and a thermal point of view, are recalled. Recent studies of scaling laws for atmospheric variables are mentioned, such as studies on cirrus ice content, brightness temperature, liquid water path fluctuations, and cloud base height fluctuations. Technical time series analysis approaches based on modern statistical physics considerations are outlined.
# A Comprehensive Survey on Synthetic Infrared Image Synthesis

Avinash Upadhyay [email protected] Manoj Sharma [email protected] Prerana Mukherjee Amit Singhal Brejesh Lall ORCID(s): [email protected]

This survey paper delves into the realm of synthetic infrared image and video synthesis, covering its principles, applications, methodologies, and challenges. We initiate our exploration with the 'Basics of Infrared Imagery', shedding light on the fundamental physics and mathematics underpinning this technology in sec. 2. The paper then navigates through the diverse 'Applications of Synthetic Infrared Images' in sec. 3 and provides insights into the available 'Datasets' in sec. 4. We subsequently elucidate the 'Methodologies' for generating synthetic IR images and videos in sec. 6, and address the associated 'Challenges' in sec. 7. In the final segment, we tie together our discussions in the 'Conclusion', reflecting on the current state and future trajectory of synthetic infrared image/video synthesis in sec. 8. As we traverse this landscape, we aim to provide a comprehensive resource for researchers, practitioners, and enthusiasts in this field.

## 2 Basics of Infrared Imagery

This section provides a brief overview and mathematical preliminaries of infrared (IR) imagery. It discusses blackbody and graybody radiation, methods of detecting IR radiation, the different types of IR, and tools and algorithms for modeling the atmospheric transfer function.

### Types of IR

Infrared (IR) radiation is a diverse range of electromagnetic waves, each with unique characteristics and applications. The near-IR (NIR) spectrum, spanning 0.7-0.9 µm, is widely used in telecommunications and fiber optics, enabling fast and reliable data transmission over long distances. Short-wavelength IR (SWIR), operating within the 0.9-2.5 µm range, is employed in night vision devices, allowing users to navigate in low-light conditions, and in remote sensing, where it helps gather data on the Earth's surface. Mid-wavelength IR (MWIR) and long-wavelength IR (LWIR) are both utilized in thermal imaging, with MWIR (3-5 µm) also used in missile guidance systems due to its ability to penetrate atmospheric obstacles. LWIR (8-14 µm), on the other hand, is used in surveillance and night vision applications, where its longer wavelengths can detect temperature differences in objects. Finally, far-IR (FIR), covering 15-1000 µm, has applications in astronomy, where it helps study the formation of stars and galaxies, as well as in environmental monitoring and thermal efficiency analysis, where it can detect subtle changes in temperature and energy patterns. The different types of IR spectra are tabulated in Table 1 along with their popular use cases.

| Type of IR | Wavelength Band | Usage |
| --- | --- | --- |
| Near-IR (NIR) | 0.7 - 0.9 µm | Telecommunications, fiber optics |
| Short-Wavelength IR (SWIR) | 0.9 - 2.5 µm | Night vision, remote sensing |
| Mid-Wavelength IR (MWIR) | 3 - 5 µm | Thermal imaging, missile guidance |
| Long-Wavelength IR (LWIR) | 8 - 14 µm | Thermal imaging, night vision, surveillance |
| Far-IR (FIR) | 15 - 1000 µm | Astronomy, environmental monitoring, thermal efficiency analysis |

Table 1: Types of IR, their Wavelength Bands, and Usage

### Infrared Radiometry and Detection

The operation of infrared imaging systems is predicated on the detection of thermal radiation emitted by objects, a phenomenon governed by the principles of radiometry. To accurately interpret the infrared radiation received by a sensor, a comprehensive understanding of the underlying radiometric physics is essential. This includes the calculation of surface temperature through the solution of heat equilibrium equations, the determination of spectral radiance based on the surface temperature, and the subsequent calculation of radiance intensity.
Furthermore, the atmosphere plays a significant role in attenuating the radiation, necessitating the modeling of atmospheric transmittance to accurately predict the radiation received by the sensor. This subsection provides an in-depth examination of the radiometry and detection aspects of infrared imaging systems, encompassing the calculation of surface temperature, spectral radiance, and radiance intensity, as well as the modeling of atmospheric transmittance and the conversion of radiance intensity to voltage. A thorough grasp of these fundamental concepts is crucial for the design and optimization of infrared imaging systems for a diverse range of applications, including thermal imaging, surveillance, and environmental monitoring. A diagram representing the infrared system is presented in Figure 1.

Figure 1: Pictorial representation of an Infrared System.

#### 2.2.1 Heat Equilibrium equation and surface temperature calculations

The ability of a body to radiate is closely related to its ability to absorb radiation: since the body is in thermal equilibrium with its surroundings at a constant temperature, it must absorb and radiate energy at the same rate. The thermal equilibrium equation for an object states that the heat conducted into the object balances the internal heat source, the absorbed incident radiation from the sun and the environment, and the heat convection and other effects [10, 64, 59]:

\[Q_{d}=Q_{i}+Q_{sun}+Q_{env}+Q_{conv} \tag{1}\]

where \(Q_{d}\) is the heat conduction into the object, \(Q_{i}\) is the internal heat source, \(Q_{sun}\) is the absorbed solar energy, which comprises both direct and diffuse components, and \(Q_{env}\) is the energy absorbed from environmental factors. The heat conduction component within the object can be written in terms of the specific heat capacity \(C\), the density of the object \(\rho\), and the depth of heat penetration \(h\); specific heat capacity and density are material properties:

\[C\rho h\frac{dT}{dt}=Q_{i}+Q_{sun}+Q_{env}+Q_{conv} \tag{2}\]

Solving the above equation gives the surface temperature \(T\) of the object.
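For illustration, Eq. (2) can be integrated numerically. The following minimal forward-Euler sketch is not taken from this survey: all forcing terms, material constants, and the explicitly separated emission term are placeholder assumptions standing in for the solar, environment, and convection models a real simulator would use.

```python
import numpy as np

SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def step_temperature(T, dt, C, rho, h, eps, Q_i, Q_sun, T_air, h_conv):
    """Advance surface temperature T (K) by one Euler step of Eq. (2).
    All heat-flux terms are in W/m^2; emission is modeled explicitly."""
    Q_env = eps * SIGMA * T_air**4          # crude sky/environment load
    Q_emit = eps * SIGMA * T**4             # radiation emitted by surface
    Q_conv = h_conv * (T_air - T)           # Newtonian convection
    net = Q_i + Q_sun + Q_env + Q_conv - Q_emit
    return T + dt * net / (C * rho * h)     # C*rho*h in J/(m^2 K)

T = 290.0                                   # initial temperature, K
for _ in range(3600):                       # one hour at 1 s steps
    T = step_temperature(T, 1.0, C=900.0, rho=2700.0, h=0.01,
                         eps=0.9, Q_i=0.0, Q_sun=400.0,
                         T_air=294.0, h_conv=10.0)
print(f"surface temperature after 1 h: {T:.2f} K")
```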
#### 2.2.2 Spectral radiance of body

The spectral radiance of a blackbody at a specific wavelength \(\lambda\) and temperature \(T\) is given as follows,

\[S_{r}=\frac{c_{1}}{\lambda^{5}}\frac{1}{\exp\{\frac{c_{2}}{\lambda T}\}-1}d\lambda\quad W/cm^{2}\mu sr \tag{3}\]

where \(c_{1}=1.191\times 10^{4}\,[\frac{W\,\mu m^{4}}{cm^{2}\,sr}]\) and \(c_{2}=1.428\times 10^{4}\,[\mu m\,K]\) are the radiation constants. The radiance intensity \(L(T)\) over an IR wavelength band for a blackbody is then given as,

\[L(T)=\int_{\lambda_{2}}^{\lambda_{1}}\frac{c_{1}}{\lambda^{5}}\frac{1}{\exp\{\frac{c_{2}}{\lambda T}\}-1}d\lambda\quad W\mu sr/cm^{2} \tag{4}\]

and for a graybody, whose spectral emissivity is less than 1, the equation becomes

\[L(T)=\int_{\lambda_{2}}^{\lambda_{1}}\frac{\epsilon(\lambda)c_{1}}{\lambda^{5}}\frac{1}{\exp\{\frac{c_{2}}{\lambda T}\}-1}d\lambda\quad W\mu sr/cm^{2} \tag{5}\]

where \(\epsilon(\lambda)\) is the spectral emissivity of the object.

#### 2.2.3 Radiance to Detector Voltage

If such radiation falls on an IR detector with an area of \(A_{p}\), and the solid angle subtended by the detector at the target is \(\Omega\), the output voltage \(V_{det}\) generated by the detector will be

\[V_{det}=A_{p}\Omega L(T)\int_{\lambda_{2}}^{\lambda_{1}}\tau_{amb}(\lambda)\tau_{opt}(\lambda)\Re(\lambda)d\lambda\quad V \tag{6}\]

where \(\tau_{amb}(\lambda)\) represents the atmospheric transfer function and \(\tau_{opt}(\lambda)\) represents the transmittance of the detector optics. \(\Re\) is the spectral responsivity of the detector, with units of \([V/W]\). The responsivity function, denoted as \(R(\lambda,x,y,t)\), describes the system's response to an input signal, where \(\lambda\) represents the spectral distribution, \(x\) and \(y\) represent the spatial distribution, and \(t\) represents the temporal distribution. In an ideal system, the responsivity function would be a linear and shift-invariant function of the input signal, allowing for straightforward prediction of the system's response. However, in practice, the responsivity function is often nonlinear and shift-variant, due to factors such as detector nonlinearity, optical aberrations, and electronic noise. If we assume a shift-invariant system, the integral over the wavelength band has a constant value. Since the detector area and the solid angle are also constant, we can safely say

\[V_{det}\propto L(T) \tag{7}\]

The spectral radiance intensity is a function of the spectral emissivity \(\epsilon(\lambda)\) and the surface temperature. The emissivity is constant for a given surface. Hence, it can be derived that the voltage generated by the detector is directly proportional to the temperature of the surface.
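Eqs. (4)-(5) can be evaluated by direct numerical quadrature. The sketch below does exactly that for the graybody case; the band limits, temperature, and constant emissivity are illustrative choices, not values from this survey.

```python
import numpy as np

C1 = 1.191e4   # W um^4 / (cm^2 sr), first radiation constant
C2 = 1.428e4   # um K, second radiation constant

def band_radiance(T, lam1, lam2, emissivity=1.0, n=2000):
    """Radiance (W/(cm^2 sr)) of a body at temperature T (K) over the
    wavelength band [lam1, lam2] (um), assuming constant emissivity,
    via trapezoidal integration of the Planck spectral radiance."""
    lam = np.linspace(lam1, lam2, n)
    spectral = emissivity * C1 / lam**5 / (np.exp(C2 / (lam * T)) - 1.0)
    return np.trapz(spectral, lam)

# MWIR (3-5 um) radiance of a 300 K graybody with emissivity 0.9:
print(band_radiance(300.0, 3.0, 5.0, emissivity=0.9))
```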
The equation relating radiance to pixel brightness in IR imaging depends on several factors, including the specific MWIR imaging system used, the target's characteristics, and the atmospheric conditions at the imaging time. In general, the process of converting radiance (measured in watts per square meter per steradian) to pixel brightness (measured in digital counts or units) involves several steps, including:

1. **Calibration:** The IR imaging system must be calibrated using a known radiance source to establish a relationship between radiance and pixel brightness.
2. **Correction for atmospheric effects:** The radiance measured by the imaging system may be affected by scattering, absorption, and other atmospheric effects that can reduce image quality. These effects can be corrected using atmospheric correction models and algorithms.
3. **Temperature conversion:** The radiance can be converted to temperature using a radiometric conversion equation that considers factors such as the target's emissivity and the imaging system's spectral response.
4. **Display:** Finally, the temperature values can be mapped to pixel brightness values using a lookup table or color scale to represent the IR image visually.

Each step's specific equations and algorithms will vary depending on the IR imaging system and application; a toy version of the calibration step is sketched below. IR wavelength band conversion is an extremely difficult and challenging area of research because materials have emissivity, reflectivity and transmissivity that vary with the IR wavelength band. The variation is strongly affected by water vapour and carbon dioxide among the atmospheric gases, because these components have specific absorption/emission spectra that depend on the wavelength band.
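The following sketch illustrates the calibration step with a hypothetical two-point (cold/hot blackbody) fit; the reference radiances, count levels, and bit depth are invented for illustration and do not describe any particular sensor.

```python
import numpy as np

def make_radiance_to_counts(L_cold, L_hot, dn_cold=1000, dn_hot=60000,
                            n_bits=16):
    """Return a function mapping band radiance (W/(cm^2 sr)) to integer
    digital numbers, linearly fitted through two reference
    (radiance, counts) pairs, then clipped to the sensor's range."""
    gain = (dn_hot - dn_cold) / (L_hot - L_cold)
    offset = dn_cold - gain * L_cold
    dn_max = 2**n_bits - 1
    def to_counts(L):
        dn = gain * np.asarray(L, dtype=float) + offset
        return np.clip(np.round(dn), 0, dn_max).astype(np.uint16)
    return to_counts

# Example with made-up reference radiances for a 16-bit MWIR sensor:
to_counts = make_radiance_to_counts(L_cold=2e-4, L_hot=2e-3)
print(to_counts([2e-4, 1e-3, 2e-3]))   # -> [ 1000 27222 60000 ]
```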
## 3 Tools and Algorithms for modelling atmospheric transfer function

Atmospheric transmission, or the atmospheric transfer function, refers to the process by which electromagnetic waves, including light and other forms of radiation, pass through the Earth's atmosphere. The atmosphere is made up of a mix of different gases, particles, and water vapour, all of which can interact with electromagnetic waves in various ways. These interactions include absorption, where the energy of the wave is taken up by the atmospheric components, and scattering, where the direction of the wave is changed. In the context of IR imaging, atmospheric transmission is particularly relevant because the atmosphere absorbs specific wavelengths of IR radiation more than others. This is due to the presence of gases like water vapour and carbon dioxide, which absorb specific frequencies of IR radiation. This effect is known as atmospheric absorption or atmospheric attenuation. Accurate modelling of the atmospheric transfer function is crucial for realism and precision in synthetic IR imaging. Such modelling enhances the quality of synthetic images, creating a more representative training dataset for machine learning models. Furthermore, it improves the generalizability of these models to real-world conditions and allows for effective performance evaluation of IR imaging systems, ensuring they can effectively interpret real-world IR data.

#### 3.0.1 DISORT (Discrete Ordinate Radiative Transfer)

This method is a widely used radiative transfer model that can simulate the transmission of radiation through a horizontally stratified atmosphere. DISORT uses a discrete ordinate method to solve the radiative transfer equation and can account for scattering and absorption by clouds and aerosols.

#### 3.0.2 RTTOV (Radiative Transfer for TOVS)

This method was developed by the European Centre for Medium-Range Weather Forecasts (ECMWF) and is used to simulate the transmission of MWIR radiation through the Earth's atmosphere for weather forecasting applications. RTTOV uses a fast radiative transfer algorithm to calculate the radiative transfer coefficients based on atmospheric profiles.

#### 3.0.3 ARTS (Atmospheric Radiative Transfer Simulator)

This method is a flexible radiative transfer model that can simulate the transmission of MWIR radiation through complex atmospheric conditions, such as clouds and aerosols, and can be used to simulate the performance of remote sensing instruments.

#### 3.0.4 TES (Transmittance Estimation from the Surface)

This method is a simplified model that estimates the transmittance of MWIR radiation through the Earth's atmosphere based on the surface temperature and atmospheric water vapour content. TES is often used for quick estimates of radiation transmittance for remote sensing applications.

#### 3.0.5 LOWTRAN and MODTRAN

LOWTRAN (Low-Resolution Transmission) and MODTRAN (MODerate resolution atmospheric TRANsmission) are widely used methods for simulating the transmission of MWIR radiation through the Earth's atmosphere. LOWTRAN was developed by the US Air Force and is a simplified model that calculates radiation transmission based on the atmospheric profile and the concentration of atmospheric constituents, such as water vapour, carbon dioxide, and ozone. LOWTRAN assumes a horizontally homogeneous atmosphere and provides low spectral resolution. MODTRAN, on the other hand, is a more sophisticated model developed by the US Air Force and Spectral Sciences, Inc. It provides moderate spectral resolution and can account for more complex atmospheric conditions, such as horizontal and vertical variations in atmospheric constituents and the effects of clouds and aerosols on radiation transmission. MODTRAN can also simulate the reflection and emission of radiation from the Earth's surface and can be used to predict the performance of remote sensing instruments, such as satellites or airborne sensors. Both LOWTRAN and MODTRAN are widely used by researchers and practitioners in atmospheric science, remote sensing, and defense applications to simulate the transmission of MWIR radiation through the Earth's atmosphere and to evaluate the performance of MWIR remote sensing instruments.

#### 3.0.6 ATRAN [37]

ATRAN (Atmospheric TRANsmission) is a software module developed for accurately modeling the transmission of electromagnetic radiation through Earth's atmosphere. It considers factors such as altitude, temperature, pressure, and gas concentrations to compute synthetic spectra of atmospheric transmission. Used in fields like meteorology, remote sensing, and telecommunications, ATRAN aids in interpreting sensor data, designing communication systems, and generating realistic synthetic IR images.
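None of the models above reduces to a few lines of code, but the attenuation idea they all build on can be illustrated with a toy band-averaged Beer-Lambert transmittance, \(\tau(\lambda) = e^{-k(\lambda)L}\). The absorption-coefficient profile below is invented purely for illustration; real work would use MODTRAN, ARTS, or similar codes with measured molecular line data.

```python
import numpy as np

def band_transmittance(path_km, lam1=3.0, lam2=5.0, n=400):
    """Toy band-averaged MWIR transmittance over a horizontal path,
    using Beer-Lambert attenuation with a fabricated absorption bump
    (loosely mimicking the CO2 feature near 4.3 um)."""
    lam = np.linspace(lam1, lam2, n)
    k = 0.05 + 0.8 * np.exp(-((lam - 4.3) / 0.15) ** 2)   # per km, invented
    tau = np.exp(-k * path_km)
    return np.trapz(tau, lam) / (lam2 - lam1)             # band average

for d in (1.0, 5.0, 10.0):
    print(f"{d:5.1f} km path -> mean MWIR transmittance "
          f"{band_transmittance(d):.3f}")
```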
## 4 Datasets

In this section, we explore a diverse range of infrared datasets that can be used to generate synthetic IR scenes and targets. Infrared datasets can be categorized into three types: i) standalone infrared datasets, ii) RGB-infrared unpaired datasets, and iii) RGB-infrared paired datasets. Standalone infrared datasets consist exclusively of infrared images captured using IR sensors. They are particularly useful for applications focusing solely on infrared imaging or for developing algorithms that leverage the unique characteristics of IR data. RGB-infrared unpaired datasets comprise both RGB and infrared images; the images in these datasets are not paired, which means that corresponding RGB and infrared images may not share the same scene or capture time. These datasets are valuable for exploring the strengths of RGB and infrared imaging and for developing algorithms that can process and analyze both data types independently. In RGB-infrared paired datasets, the RGB and infrared images are strictly paired, meaning they share the same scene and capture time. This pairing allows researchers to develop and evaluate algorithms that fuse RGB and infrared data, capitalizing on the complementary information provided by both imaging modalities.

Further, the datasets discussed in this section cover a broad spectrum of imaging scenarios, encompassing aerial, outdoor non-aerial, and indoor images. Aerial infrared datasets consist of images captured from elevated platforms such as satellites, aeroplanes, or unmanned aerial vehicles (UAVs) like drones. These datasets provide valuable information for various applications, including environmental monitoring, disaster management, agriculture, urban planning, and land use analysis. Aerial infrared images reveal insights not readily discernible in visible light images, such as temperature variations, moisture content, and vegetation health. The outdoor non-aerial category includes infrared datasets captured at ground level in various outdoor environments. These datasets can cover diverse scenarios, such as pedestrian and vehicle detection, road and traffic monitoring, infrastructure assessment, and wildlife observation. Outdoor non-aerial infrared images can be particularly beneficial in low-light or obscured conditions, where traditional RGB imaging may struggle to provide sufficient information for computer vision tasks. Indoor infrared datasets comprise images taken within enclosed spaces, such as buildings, homes, factories, or other structures. These datasets can be used for applications like thermal anomaly detection, energy efficiency analysis, surveillance and security, and human activity recognition. Indoor infrared imaging can reveal hidden details, such as heat signatures, moisture levels, or insulation issues, which can be critical for assessing building performance or detecting potential safety hazards.

Figure 2: Atmospheric transmission pattern generated by the MODTRAN module for the MWIR wavelength range of 3 µm to 5 µm. The atmosphere model used is US Standard 1976, the ground temperature is 294.2 K, the aerosol model is Urban with a visibility of 23 km, the sensor altitude is 5 km, and the zenith angle is 90 degrees.

Figure 3: Image snippets from different datasets.

The following datasets are available in the public domain:

### OSU Databases

The datasets under discussion were prepared and shared by researchers at Ohio State University.

#### 4.1.1 OSU Thermal Pedestrian Database [12]

Comprising thermal images in the 7-14 µm wavelength range, this standalone thermal dataset was proposed for developing person detection algorithms. The images were captured using a Raytheon 300D thermal sensor equipped with a 75 mm lens, ensuring accurate and detailed infrared data. The dataset contains a total of 10 video sequences, which are further divided into 284 individual images. Each image is presented in 8-bit grayscale bitmap format, with a resolution of 360×240 pixels. Specifically designed for person detection applications, this dataset enables the development and evaluation of algorithms that can accurately identify and track individuals in a variety of environments and conditions.

#### 4.1.2 OSU Color-Thermal Database [13]

This dataset focuses on the fusion of color and thermal imagery, with specific applications in fusion-based object detection using both color and thermal data. The dataset was collected using two sensors: a Raytheon PalmIR 250D thermal sensor with a 25 mm lens and a Sony TRV87 Handycam color sensor. These cameras were mounted adjacent to each other on a tripod at two separate locations, approximately three stories above the ground, with manual control over gain and focus settings. The dataset consists of six color and thermal sequences, with three captured at each location. In total, there are 17,089 images in the dataset. The thermal images are in 8-bit grayscale bitmap format, while the color images are in 24-bit color bitmap format. Each image has a resolution of 320×240 pixels, and the sampling rate is approximately 30 Hz.
The color and thermal images were registered using homography, a technique that relies on manually selected points to align the images accurately. This registration process ensures that the corresponding color and thermal images share the same scene and capture time, making it possible to develop and evaluate fusion-based object detection algorithms. The dataset contains thermal images in the 7-14 µm wavelength range.

### ATR Algorithm Development Image Database [9]

The US Army NVESD has created the ATR (Automatic Target Recognition) Database for ATR algorithm developers. It contains 600,000 visible and MWIR (3-5 µm) images of various targets and backgrounds, ground truth data, meteorological data and related information. The database covers different target types at different distances and angles, such as people, military vehicles, and civilian vehicles. The MWIR and RGB images are both captured in 16-bit format and have a resolution of 640×480. The database also provides target temperature differentials, meteorological data, target photographs, and a user's guide. The images were captured using commercial cameras in the MWIR and visible bands. It is one of the largest, and one of the few publicly available, datasets in the MWIR band. This dataset is exclusive and available only to researchers in NATO member countries.

### Terravic Databases [38]

This dataset focuses on detection and tracking using thermal imagery. The images were captured using a Raytheon L-3 Thermal-Eye 2000AS sensor, with a wavelength sensitivity of 7-14 µm, which specializes in acquiring detailed thermal data. The dataset comprises 18 thermal sequences in total, covering a diverse range of scenarios to accommodate various applications and challenges. These scenarios include 11 outdoor motion and tracking scenes, 1 outdoor house surveillance scene, 1 indoor hallway motion scene, 1 plane motion and tracking scene, 2 underwater and near-surface motion scenes, and 2 uneventful background motion scenes. This wide variety of scenarios ensures that the dataset is suitable for exploring different detection and tracking tasks using thermal imagery. The images in the dataset are in 8-bit grayscale JPEG format. Each image has a resolution of 320×240 pixels, providing sufficient detail for computer vision tasks. By offering a comprehensive collection of thermal images across various scenarios, this dataset enables the development and evaluation of advanced detection and tracking algorithms that leverage the unique advantages of thermal imaging.

### CSIR-CSIO Moving Object Thermal Infrared Imagery Dataset (MOTIID) [4, 2, 3]

This dataset focuses on moving object detection in thermal infrared imagery, targeting various subjects such as pedestrians, vehicles, and animals. The images were captured using a thermal infrared camera mounted on a tripod at a height of approximately 4 feet. Additional sensor details can be found in the referenced materials. The dataset contains 18 thermal sequences, encompassing a diverse range of moving targets, including two different models of 4-wheelers (Ambassador and Innova), a 3-wheeler (auto-rickshaw), a 2-wheeler (motorcycle), humans walking at varying distances, a strolling dog, and a flying bird. These diverse scenarios enable researchers and developers to explore and develop robust detection algorithms for various moving objects in thermal infrared imagery. The images have a resolution of 640×480 pixels, and the sampling rate is 10 Hz.
The duration of each thermal video sequence varies between 4 and 22 seconds, with each sequence featuring one or more moving targets entering and exiting the camera's field of view. This comprehensive dataset provides a valuable resource for the development and evaluation of advanced moving object detection algorithms using thermal infrared imaging.

### Maritime Imagery in the Visible and Infrared Spectrums (VAIS) [67]

The VAIS dataset is a collection of simultaneously acquired, unregistered thermal and visible images of ships acquired from piers. It is suitable for object classification research. The dataset includes 2,865 images, of which 1,242 are infrared images and 1,623 are visible images. There are 1,088 pairs of images in the dataset, each of which contains an infrared image and a visible image of the same ship. The dataset includes 264 unique ships; 154 of the IR images were acquired at night. The dataset is divided into 6 basic categories: merchant, sailing, passenger, medium, tug, and small. There are also 15 fine-grained categories within these basic categories. The VAIS dataset was created by researchers at the University of Washington and the University of California, Berkeley, and was collected from piers in the San Francisco Bay Area. The images in the dataset were captured using two different sensors: a visible-light sensor and an infrared sensor. The visible-light sensor is an ISVI IC-C25, which captures 5,056×5,056 layered color pixel images. The infrared sensor is a Sofradir-EC Atom 1024, which captures 1,024×768 pixel images. The VAIS dataset is a valuable resource for object classification research: it is large and diverse, includes a variety of ship types and conditions, and is well organized and easy to use.

### Thermal Infrared Video Benchmark for Visual Analysis (BU-TIV) [62]

The Thermal Infrared Video Benchmark for Visual Analysis (BU-TIV) is a comprehensive dataset designed to facilitate research on object detection, counting, and tracking in single- and multiple-view infrared videos. Captured using FLIR SC8000 sensors, the dataset includes over 60,000 frames, hundreds of annotations, and camera calibration files for multi-view geometry. The sequences within the dataset are specifically crafted to test various vision tasks, such as tracking single pedestrians or flying bats at low resolution, monitoring multiple moving objects like pedestrians, cars, bicycles, and motorcycles, as well as tracking multiple flying bats and people with planar motion from multiple views. The dataset also covers 3D tracking of multiple flying bats from three distinct views and counting flying bats in high-density environments. With a diverse range of scenarios and frame sizes, this benchmark dataset is an invaluable resource for researchers and practitioners in the field of visual analysis using thermal infrared videos.

### Teledyne FLIR Thermal Sensing for ADAS

The FLIR Thermal Dataset is a free dataset of thermal and visible spectrum images for the development of object detection systems using convolutional neural networks (CNNs). The dataset contains over 26,000 annotated images with 520,000 bounding box annotations captured by day and night, and includes classification into fifteen groups: bike, car, motorcycle, bus, train, truck, traffic light, fire hydrant, street sign, dog, skateboard, stroller, scooter, and other vehicle.
The frames were captured using a Teledyne FLIR Tau 2 640×512, 13 mm f/1.0 (HFOV 45°, VFOV 37°) thermal sensor and a Teledyne FLIR Blackfly S BFS-U3-51S5C (IMX250) camera with a 52.8° HFOV Edmund Optics RGB lens. The dataset can be used to train and evaluate object detection algorithms for ADAS and autonomous vehicles.

### LLVIP: A Visible-Infrared Paired Dataset for Low-light Vision [23]

The dataset provides visible-infrared paired images for very low-light vision. The dataset has 30,976 images, including RGB and IR, translating to 15,488 RGB-IR pairs. The wavelengths of the dataset are in the thermal range, i.e. 8-14 µm. The dataset was captured using a HIKVISION DS-2TD8166BJZFY-75H2F/V2, a binocular camera with both a visible-light and an infrared sensor.

### The TNO Multiband Image Data Collection [52]

The dataset provides intensified visual (390-700 nm), near-infrared (700-1000 nm), and longwave infrared (8-12 µm) nighttime imagery. The dataset contains 16 motion sequences depicting various military and surveillance scenarios, featuring different objects and targets such as people and vehicles set against diverse backgrounds that include both rural and urban settings. The dataset was captured using TNO's own TRICLOBS (TRI-band Color Low-light OBServation) all-day, all-weather surveillance system.

### The Linkoping Thermal InfraRed (LTIR) dataset [8]

The Linkoping Thermal InfraRed (LTIR) dataset is a thermal infrared dataset for the evaluation of Short-Term Single-Object (STSO) tracking. It encompasses 20 thermal infrared sequences, each consisting of 536 frames, yielding a substantial volume of data for analysis. With a high-resolution frame size of 1920×480, the dataset offers detailed thermal infrared imagery, enhancing the precision of tracking applications. The dataset was captured using a variety of sensors, including the FLIR A35, FLIR Tau320, and FLIR A655SC. The use of multiple sensors demonstrates the dataset's versatility and suitability for diverse tracking algorithms, making it an invaluable resource for researchers and developers in the field of object tracking using thermal infrared data.

### ICRA Thermal Infrared Dataset [45]

This dataset consists of 4,381 aerial thermal infrared images featuring humans, a cat, and a horse, plus 2,418 background images, all of which come with manually annotated ground truths. The image size is 324×256 pixels. The images are split into eight sequences and are available in both 16-bit and downsampled 8-bit formats. The dataset, recorded with a handheld FLIR Tau 320 thermal infrared camera, is suitable for tracking algorithms due to its uniform sampling rate. It also includes a training set comprising cropped images of humans exclusively. Ground truths have been annotated using Matlab evaluation/labeling code (3.2.0).

### Multispectral Pedestrian Detection Dataset [20]

The KAIST dataset is a specially curated collection for pedestrian detection tasks, featuring meticulously aligned color-thermal image pairs. The uniqueness of the dataset lies in its use of beam-splitter-based specialized hardware, which enables the creation of registered RGB-thermal images, ensuring a high level of precision and consistency across the dataset. The collection comprises 95,000 images, each having a resolution of 320×256, representing a substantial source of data for analysis.
The thermal images in the dataset were captured using the FLIR A35 sensor, while the RGB images were obtained through the PointGrey Flea3. The blend of these technologies underscores the robustness and reliability of the KAIST dataset in pedestrian detection and related research applications.

### Other Datasets

#### 4.1.3.1 MODIS (Moderate Resolution Imaging Spectroradiometer)
MODIS is a key instrument aboard the Terra and Aqua satellites, providing data in the NIR, MWIR, and LWIR regions. It covers a wide range of Earth observation data, including the atmosphere, land, and ocean.

#### 4.1.3.2 Landsat
The Landsat program is a series of Earth-observing satellite missions jointly managed by NASA and the U.S. Geological Survey. Landsat provides multispectral data, including NIR, SWIR (short-wave infrared), and LWIR bands.

#### 4.1.3.3 Sentinel-2
A part of the Copernicus program, Sentinel-2 is a series of satellites providing high-resolution optical imagery, including NIR and SWIR bands. The data is freely available and frequently used for land monitoring, vegetation, and disaster management applications.

#### 4.1.3.4 VIIRS (Visible Infrared Imaging Radiometer Suite)
VIIRS is an instrument aboard the Suomi NPP and NOAA-20 satellites, providing data in the NIR, MWIR, and LWIR regions. It is primarily used for monitoring the Earth's environment, weather, and climate.

\\begin{table}
\\begin{tabular}{|c|l|l|l|l|l|l|}
\\hline
**SNO** & **Dataset** & **No of Images** & **IR Band** & **Resolution** & **No of bits** & **Sensor** \\\\
\\hline
1 & OSU Thermal Pedestrian & 284 & LWIR & 360\\(\\times\\)240 & 8 & Raytheon 300D thermal sensor \\\\
2 & OSU Color-Thermal & 17089 & RGB and LWIR & 320\\(\\times\\)240 & 24-bit RGB & Sony TRV87 and Raytheon PalmIR 250D \\\\
3 & Terravic Databases & 18 video sequences & LWIR & 320\\(\\times\\)240 & 8 & thermal sensor \\\\
4 & CSIR-CSIO & 18 video sequences & LWIR & 640\\(\\times\\)480 & & L-3 Thermal-Eye 2000AS sensor \\\\
5 & ICRA ASL-TIR & 4381 & LWIR & 324\\(\\times\\)256 & 8 & FLIR Tau 320 thermal infrared camera \\\\
6 & VAIS & 2865 & RGB and LWIR & 5056\\(\\times\\)5056 (RGB) & & ISVI IC-C25 and Sofradir-EC Atom 1024 \\\\
7 & BU-TIV & 65590 & LWIR & up to 1024\\(\\times\\)1024 & 16 & FLIR SC8000 \\\\
8 & CVC-14 & 8385 & RGB and LWIR & 476\\(\\times\\)640 & & \\\\
9 & CVC-09 & 11071 & LWIR & 640\\(\\times\\)480 & & \\\\
10 & FLIR-ADAS & 26000 & RGB and LWIR & 640\\(\\times\\)512 & 14-bit TIFF, 8-bit JPEG (RGB) & Teledyne FLIR Blackfly S BFS-U3-51S5C (IMX250) camera and Teledyne FLIR Tau 2 thermal sensor \\\\
11 & LLVIP & 30976 & RGB and LWIR & 1920\\(\\times\\)1080 & 8 and 16 & HIKVISION DS-2TD8166BJZFY-75H2F/V2 \\\\
12 & TNO & 16 video sequences & RGB, NIR and LWIR & 256\\(\\times\\)256 & 8 & TRILOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system \\\\
13 & LTIR & 11260 & LWIR & 1920\\(\\times\\)480 & 8 and 16 & FLIR A35, FLIR Tau 320, FLIR A655SC \\\\
14 & KAIST & 95000 & RGB and LWIR & 320\\(\\times\\)256 & 8 & FLIR A35 and PointGrey Flea3 \\\\
15 & ATR & 600000+ & RGB and MWIR & 640\\(\\times\\)480 & 16 & Illus Camera with Nikon Zoom Lens and Night Conquer MWIR imager \\\\
\\hline
\\end{tabular}
\\end{table}
Table 2: Tabular representation of Infrared Datasets present in Public Domain

#### 4.1.3.5 ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer)
A part of NASA's Terra satellite, ASTER provides high-resolution multispectral data, including NIR, SWIR, and TIR (thermal infrared) bands.
It is used for studying land surface processes, including vegetation, hydrology, and geology.

## 5 Synthetic Infrared Scene Simulation Tools

The data needed for developing infrared systems is often nearly impossible to obtain through real measurements, so infrared scene simulation is an essential and integral part of any optical system development. Considering its importance, substantial research has gone into developing GUI-based tools that can simulate infrared scenes across a wide wavelength range. These tools generate simulation videos in several steps. Most of them build the 3D scene using graphics tools and then use physics-based libraries to model the properties of the various elements between the source and the observer/sensor, and from the sensor optics to the detector, for a complete simulation. The source-to-sensor elements include atmospheric properties, spectral irradiance at the source, spectral reflectance at the source, weather conditions, special effects such as fires, smoke and plumes, natural and man-made light sources, dynamic heating and cooling of the source, shadows, etc. The sensor-to-detector elements include optical aberrations, detector spectral response, aero-optical effects, the detector array, IFOV sampling, diffraction, amplification, gain, dead pixels, etc. Along with this, they use physics-based material modelling to incorporate the properties of the materials present in the scene. Materials behave differently in different wavelength bands; for example, glass, which is transparent in the visible band, appears opaque (black) in much of the infrared region.

Steps involved in the generation of simulated IR videos:

1. Generating the 3D scenes using graphics tools such as Blender, Unity or Unreal Engine.
2. Finding the radiance of each object based on physics-based principles and material properties. These principles include, but are not limited to, thermophysical properties, spectral BRDF, physical material properties, spectral signature calculations for various wavelength ranges, and surface temperature properties and their change due to atmospheric conditions.
3. Modelling the atmospheric physical properties using tools such as MODTRAN.
4. Scene rendering based on ray tracing and photon counting algorithms.

A minimal numerical sketch of the radiance computation in step 2 is given below.
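To make step 2 concrete, the following sketch computes the band-integrated radiance of a grey-body surface from Planck's law. It is a simplified illustration rather than the computation any particular tool performs: the 8-14 \\(\\mu m\\) band, the emissivity value, and the constant-emissivity (grey-body) assumption are all illustrative, and real tools add reflected and atmospheric terms on top of this emission term.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m / s]
KB = 1.380649e-23    # Boltzmann constant [J / K]

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W / (m^2 sr m)."""
    return (2.0 * H * C**2 / wavelength_m**5
            / np.expm1(H * C / (wavelength_m * KB * temp_k)))

def band_radiance(temp_k, band=(8e-6, 14e-6), emissivity=0.95, n=500):
    """Grey-body radiance integrated over a sensor band, in W / (m^2 sr)."""
    lam = np.linspace(band[0], band[1], n)
    return emissivity * np.trapz(planck_radiance(lam, temp_k), lam)

# A surface at 300 K radiates roughly 50 W / (m^2 sr) in the 8-14 um band.
print(band_radiance(300.0))
```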
There have been many such tools developed, such as OSV [50], OKTAL [43], MuSES and CoTherm [51], Ondulus [46], DIRSIG [21], OSSIM [11] and others. A brief description of these tools is given below.

1. JRM Technologies' OSV [50] is a real-time spectral EO/IR sensor scene simulator. OSV can generate real-time scene simulations in the 0.2 to 25.0 \\(\\mu m\\) wavelength band using the in-house developed libraries SigSim and SenSim. The software uses the OpenSceneGraph (OSG) toolkit, with materially-encoded targets and terrain, together with the SigSim and SenSim libraries, to predict correlated, radiometrically correct 2D sensor imagery for arbitrary sensor bands under arbitrary weather conditions and spatio-temporal viewing locations. SigSim is JRM's signature physics library for predicting accurate band-specific signatures; it includes diurnal and ephemeris effects, scattering parameters, natural and synthetic irradiance sources, and surface temperatures to model wavelength-dependent phenomena accurately, and it uses MODTRAN to model atmospheric phenomena. The SenSim library provides physics-based sensor properties such as post-aperture sensor noise, blur, gain and other effects; using SenSim, the user can define the optical properties, detector array, signal processing and display parameters.

2. OKTAL-SE's products [43, 17] provide physics-based rendering engines and 3D geometry-based scene editing for EO/IR applications. They use Blender as their renderer and advanced thermal equations for IR characterization, with MODTRAN for atmospheric modelling. With a dedicated add-on, the tool can simulate complex phenomena such as sea surfaces, wakes, foam, cloud layers, 3D clouds, rain, snow, flares, blasts, vapor trails, lights, exhaust plumes and lightning strikes. Based on the required scenario, the user can define the geometry, the physical properties of materials, atmospheric and thermal conditions, mobile and instanced objects, special events during simulation (flares, explosions, etc.), trajectories, and scripted animation of the scene and the target. The user can also define the sensor characteristics for sensor-specific simulation and validation.

\\begin{table}
\\begin{tabular}{|l|l|l|}
\\hline
**Target** & **Atmosphere** & **Sensor/Device** \\\\
\\hline
Type & Absorption & Spectral Band \\\\
Size & Scattering & Field of View \\\\
Emittance & & IFOV: Spatial Resolution \\\\
Reflectance & & System Parameters: MRTD, NETD and MTF \\\\
Thermal Contrast & & Aperture \\\\
Clutter & & Detector D* \\\\
Motion & & Pixel size and pitch \\\\
Time & & Cold Shield Efficiency \\\\
 & & Signal Processing \\\\
\\hline
\\end{tabular}
\\end{table}
Table 3: Different elements of Infrared Imaging

3. ThermoAnalytics' MuSES and CoTherm [51] form a similar EO/IR simulation suite; the two packages are used together for sensor modelling and scene generation. MuSES uses comprehensive heat transfer equations to estimate realistic temperatures and EO/IR sensor radiance; it handles high-resolution 3D geometries and calculates component heat sources from material properties while accounting for environmental boundary conditions. CoTherm adds automation to the scene simulation process, helping generate MuSES imagery from user-supplied input parameters.

4. Digital Imaging and Remote Sensing Image Generation (DIRSIG) [21] is a synthetic image generation tool developed by the Digital Imaging and Remote Sensing Laboratory at the Rochester Institute of Technology. The tool was initially developed for remote sensing applications, but a flexible development approach later expanded its applicability to LIDAR, RADAR, cloud modelling, and more. It can also be used for low-light-level photon mapping, polarimetric imaging, etc., and produces radiometrically correct broad-band, multi-spectral and hyper-spectral images.

5. OSSIM (Optronic Scene Simulator) [11, 59] was developed by the Council for Scientific and Industrial Research, South Africa, and Denel Dynamics Ltd. It is designed explicitly for infrared scene simulation and covers the 0.4-20 \\(\\mu m\\) spectral region. OSSIM is intended for the development of thermal imaging systems, missile seeker sensors, and sensor and image processing algorithms, along with their optimization and performance simulation. The tool creates radiometrically accurate images for all wavelength bands and uses MODTRAN's Joint Modelling and Simulation Systems (JMASS) interface to model atmospheric parameters. The tool's thermal and heat balance equations are discussed in the next section.
6. CAMEO-SIM [41] was developed by the UK Defence Evaluation and Research Agency and Hunting Engineering Ltd for the assessment of camouflage, concealment, and deception (CCD). This physics-based broadband scene simulation tool is employed for infrared scene simulation within the 0.4-14 \\(\\mu m\\) range. CAMEO-SIM is capable of generating high-fidelity imagery, using an image generator that integrates both radiosity and ray tracing processes. Additionally, it offers the flexibility to produce various levels of fidelity, balancing accuracy and rendering time according to specific requirements.

Due to their ability to generate synthetic data over a wide spectral band, these tools can be used to create data for training deep learning models for applications such as security and surveillance; target detection, tracking, recognition and identification; missile guidance systems; advanced driver assistance systems (ADAS); and more. Deep learning models have changed the course of computer vision with their tremendous efficiency in handling non-linear tasks. These methods learn directly from data and hence require large amounts of data for training and evaluation, and such data is far harder to find in the infrared domain than in the RGB domain, so these tools can be employed for data creation when developing deep learning models. Conversely, deep learning is also making these tools better. Since most of the tools rely on graphics engines such as Blender, the rendered scenes can look animated and lack realism; deep learning methods have lately shown promise in generating realistic images, and such models can be utilized for realistic scene creation within these tools. OKTAL uses such technology to improve its overall simulation experience. Most of these tools are proprietary and expensive to purchase. They are designed for a vast range of use cases, and they employ robust verification methodologies to ensure the accuracy and integrity of their data and to maintain high standards of quality in their processes.

## 6 Methods of synthetic IR image/video generation

In this section, various methods used for the generation of synthetic IR are discussed. These methods can be categorized into two major parts: first, computational methods, in which IR physics is used prominently for IR scene simulation; and second, the latest generative methods from computer vision and their application to synthetic IR scene simulation. A pictorial depiction of the methods discussed in this paper is shown in figure 4.

### Computational methods

Computational IR image/video generation methods are among the most reliable, as they utilize a physics-based approach to generate realistic and accurate IR imagery. The IR signature of an object depends on multiple factors, including material properties, atmospheric propagation, sensor optics, temperature, reflections, solar influence, viewing angle, shadow effects, and wind speed. The material property that plays the most prominent role in an object's IR signature is emissivity, the object's ability to emit thermal radiation. Emissivity varies across materials and alloys, as well as across wavelength ranges, temperatures, and observation directions; it also depends on the surface structure and geometry. The IR photons emitted by the object then travel through a medium, the atmosphere; a toy illustration of this path attenuation is sketched below.
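As a toy illustration of path attenuation, the sketch below applies a single Beer-Lambert extinction coefficient along a homogeneous horizontal path. This is a deliberate simplification: real simulators use MODTRAN or similar radiative-transfer codes, in which extinction varies with wavelength, altitude, and atmospheric composition, and the extinction coefficient and path length used here are illustrative assumptions.

```python
import numpy as np

def transmittance(extinction_per_km, path_km):
    """Beer-Lambert transmittance of a homogeneous horizontal path."""
    return np.exp(-extinction_per_km * path_km)

# Illustrative numbers only: a 0.3 /km extinction coefficient over a
# 2 km path lets roughly 55% of the emitted radiance through.
tau = transmittance(0.3, 2.0)
```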
The attenuation caused by the atmosphere changes the behaviour of the IR signature. This attenuation is modeled as atmospheric propagation, which accounts for ambient temperature, relative humidity, atmospheric temperature, and the optical properties of matter in the atmosphere; MODTRAN and related tools are used to model it. Similarly, once the photon flux reaches the sensor, the sensor optics also modify the IR signature: properties such as lens material, aperture, field of view, transmission, and sensor temperature play a crucial role in IR imagery. Solar influence is another critical factor affecting the IR signature. The sun's rays falling on the object carry heat, which is absorbed, reflected, and re-radiated by the object; solar rays refracted by the atmosphere also fall on the detector, significantly changing the IR imagery. The shadow effect, which blocks solar rays from reaching part of the object, changes the heat equilibrium of that part and thus its IR signature.

For radiometrically accurate, computation-based IR scene generation, researchers have explored multiple methods. A seminal paper by Garnier et al. [16] introduced a sensor modeling methodology that establishes the geometric and radiometric relationships between points in a 3D observed scene and the corresponding pixels in the IR sensor output image. This work provided the mathematical foundation for physics-based sensor modeling techniques. [24, 69, 15, 47, 35, 25] have used 3D modelling software to create a 3D scene and then generated the IR scenes using physics-based modelling. [28, 5, 6] have taken a known IR signature in a specific band and, using a temperature calculation method, produced the IR signature in different spectral bands. [42, 32, 10, 64] have started from images and developed physics-based models to describe the IR signature emitted by the objects in the image.

Yu et al. [64] proposed a method for realistic IR image generation by establishing a heat equilibrium equation for the object surface and combining it with the physics of heat transfer inside the object and on its boundary to compute temperature and radiometric details. They applied the method patch-wise on the object and used Gouraud shading, together with the corresponding radiometric information, to draw each patch in IR. They calculated the radiance of the object's surface within an atmospheric window, where atmospheric attenuation is minimal for certain wavelengths, and did not use MODTRAN for atmospheric attenuation. Even though the paper thoroughly explores the physics of the various heat contributions due to solar irradiance and the object itself, its IR signatures depend on the assumption of a transparent atmospheric window, which does not hold in real life.

Choi et al. [10] proposed a composite heat transfer model for objects present in the scene to calculate the surface temperature distribution. The heat transfer model includes conduction, convection and solar irradiance in the surface temperature calculation; the governing equation is similar to equation 2. They used MODTRAN 4 to calculate the atmospheric transmittance and various radiance terms, such as atmospheric background radiance, solar and lunar radiance, and thermal radiance. To calculate the radiance received by the sensor, they considered the emission from the object's surface, the reflection of solar irradiance by the object's surface, and atmospheric scattering; the resulting equation is similar to the full at-sensor radiance equation discussed in earlier sections. They demonstrated the efficiency of their computational model by generating the diurnal infrared signatures of objects made of asphalt and aluminium; density, specific heat, thermal conductivity, and total absorptivity were the essential material properties used in the radiance calculations. The generic structure of such an at-sensor model is sketched below.
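The following sketch shows the generic shape shared by such at-sensor radiance models: surface emission plus reflected environment radiance, attenuated along the path, plus path radiance. It is a hedged simplification of the equation referenced above, with every term assumed to be an already band-integrated radiance (e.g., taken from a MODTRAN run), and it omits scattering geometry and directional effects.

```python
def at_sensor_radiance(emissivity, l_blackbody, l_environment, tau_path, l_path):
    """Generic at-sensor radiance from band-integrated terms:
    grey-body emission plus reflected environment radiance, attenuated
    along the sensor path, plus atmospheric path radiance."""
    l_surface = emissivity * l_blackbody + (1.0 - emissivity) * l_environment
    return tau_path * l_surface + l_path
```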
Leja et al. [32] proposed a mathematical model and IR data generation for uncooled IR sensors, with the aim of aiding the calibration of IR sensors. IR sensors are prone to fixed pattern noise (FPN) caused by the non-uniform current-voltage characteristics of the amplifier and the bolometer's non-uniform responsivity, which results in undesirable artefacts in the imagery, including dead and defective pixels. They mathematically modelled the various elements of the sensor, including the bolometer, focal plane array, optics and environment, to generate synthetic infrared images for an uncooled sensor; with this modelling, they were able to simulate various sensor issues such as defective pixels and non-uniformity.

More et al. [42] proposed synthetic IR cloud generation, radiance calculation for aircraft, and scene rendering for airborne targets. They used MODTRAN for the radiometric calculations and the Virtual Reality Modeling Language (VRML) for scene rendering. They proposed a method to generate clouds with rich spectral information and texture by improving on Gardner's method and self-similarity algorithms. They used 3D geometric models of aircraft with planar triangular facets, assumed temperatures for the various elements of the aircraft, and then used MODTRAN to calculate the radiance of the aircraft as a whole.

Another approach taken by researchers is to generate IR images of different wavelength bands from an IR image of a known wavelength band. These methods can also be called conversion methods, as they convert IR images from one wavelength band to another. Kim et al. [28] and Bae et al. [5, 6] proposed the generation of IR images of arbitrary spectral bands from IR images of known spectral bands. They first estimate the radiance of the target and background from the known-band IR images and, from the radiance, estimate the temperature of the objects to form a temperature image. Then, using temperature-to-radiance models and radiance-to-grey-level transfer functions, they generate the image in three different bands, namely LWIR, MWIR and SWIR. The result obtained by the method is shown in figure 5. A minimal numerical sketch of this temperature-mediated conversion is given below.

Figure 4: IR image/video synthesis methods classification.

Figure 5: Conversion of one IR band to another IR band by first estimating the temperature of objects in the image, then compensating the temperature and predicting the IR signature in different bands using the method proposed by Bae et al. [6]. Image courtesy [6].
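The conversion step can be sketched numerically as follows, reusing the planck_radiance/band_radiance helpers from the earlier sketch. This illustrates the general idea (invert the measured band radiance to a brightness temperature, then re-emit in the target band) rather than the exact models of [28, 5, 6]; the emissivity value and the temperature bracket are illustrative assumptions.

```python
from scipy.optimize import brentq

# Assumes planck_radiance / band_radiance from the earlier sketch.
def convert_band(l_known, band_known, band_target, emissivity=0.95):
    """Invert the measured band radiance to a brightness temperature,
    then predict the grey-body radiance in the target band."""
    temp_k = brentq(
        lambda t: band_radiance(t, band_known, emissivity) - l_known,
        150.0, 1500.0)  # temperature bracket [K], assumed plausible
    return band_radiance(temp_k, band_target, emissivity)

# e.g. predict MWIR (3-5 um) radiance from an LWIR (8-14 um) measurement
l_mwir = convert_band(50.0, (8e-6, 14e-6), (3e-6, 5e-6))
```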
Computer-graphics-aided tools are also used by researchers to generate accurate synthetic scenes; these methods serve as the foundation for software-based IR generation. He et al. [19] used the Object-oriented Graphics Rendering Engine (OGRE), an open-source engine, to generate the 3D geometric models, then calculated the infrared radiation and texture of the models using thermal models. Atmospheric effects and radiation transfer model effects were then added to these textures to account for atmospheric transmission of the radiation in a specific wavelength band, and the 3D infrared scene was rendered using the final textures of the 3D objects. Results of the method are shown in figure 6.

Figure 6: IR scene simulation results using the OGRE rendering engine and the physics-based model proposed by [19]. Image courtesy [19].

Similarly, Jiang et al. [24] used heat balance equation models to predict the surface temperature and radiance of various materials, and the OSG engine to generate large-scale IR scenes; the IR image was generated using the ray tracing method, and a material partitioning method was used to improve the overall temperature prediction.

\\begin{table}
\\begin{tabular}{|l|l|}
\\hline
**Method** & **Papers** \\\\
\\hline
3D modelling software and physics-based modelling & Zhaoyi et al. [25] \\\\
 & Jiang et al. [24] \\\\
 & Zhijian et al. [69] \\\\
 & Dulski et al. [15] \\\\
 & Qi et al. [47] \\\\
 & Li et al. [35] \\\\
\\hline
Band conversion from a known spectral band & Kim et al. [28] \\\\
 & Bae et al. [5] \\\\
 & Bae et al. [6] \\\\
\\hline
Physics-based modelling from images & Choi et al. [10] \\\\
 & Yu et al. [64] \\\\
 & More et al. [42] \\\\
 & Leja et al. [32] \\\\
\\hline
\\end{tabular}
\\end{table}
Table 4: List of Computational Methods

Zhijian et al. [69] used the Unity3D graphics rendering engine to generate IR imagery for simulation. They used panoramic IR images, generated by stitching small real IR images, as the background, mathematically modelled the 3D aircraft trajectory and altitude, and used IR physical modelling to generate the target, modelling the 3D aircraft's tail nozzle, tail flame and skin. All this information is then fused in the Unity3D engine to generate infrared data with target tracking. The proposed pipeline is shown in figure 7 and the resulting simulated scenes are depicted in figure 8.

Figure 7: Zhijian et al. [69] pipeline for generating synthetic IR scenes with embedded targets: the pipeline leverages physics-based modelling to accurately simulate target thermal signatures and utilizes the Unity rendering engine to define target trajectories and other scene elements. The resulting output is a collection of synthetic IR scenes with seamlessly integrated targets.

Figure 8: Real and simulated infrared scenes with targets obtained using the graphics rendering tool and physical simulation. Image courtesy [69].

Dulski et al. [15] proposed a numerical modelling method for simulating cloud radiation in the infrared spectral range for the development of automatic target recognition algorithms. The method incorporates experimental data based on characteristic temperature profiles of clouds and sky, considering different seasons and meteorological conditions in Poland. The researchers developed the IR Sky software to generate virtual thermal images of sky and clouds, which were then analyzed to understand the impact of radiation features on the thermo-detection process.

Computer-graphics-based IR generation systems provide accurate and efficient ways to model realistic scenes but are not specifically designed to handle radiative transfer simulations. Qi et al. [47] proposed a photon tracing method to simulate BRDF and photon flux information for multiple spectra, and a backward path tracing method to generate sensor and large-scale spectral images; the backward path tracing is also used to simulate thermal infrared radiation by computing the sunlit and shaded scene components. Most of these computational pipelines rest on a surface heat-balance step; a lumped sketch of such an update is given below.
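Many of the models above (e.g., [64, 10, 24]) rest on a surface heat-balance step. The sketch below is a lumped, single-node forward-Euler update of that balance: absorbed solar flux minus convective and net radiative losses. It only illustrates the structure of the balance; real tools such as MuSES solve full multi-node conduction with detailed boundary conditions, and all parameter values here are assumptions.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W / (m^2 K^4)]

def step_surface_temperature(t_surf, t_air, t_sky, solar_wm2,
                             absorptivity, emissivity, h_conv,
                             areal_heat_capacity, dt):
    """One explicit Euler step of a lumped surface heat balance:
    absorbed solar flux - convective loss - net radiative loss."""
    q_net = (absorptivity * solar_wm2
             - h_conv * (t_surf - t_air)
             - emissivity * SIGMA * (t_surf**4 - t_sky**4))
    # areal_heat_capacity = density * specific heat * thickness [J / (m^2 K)]
    return t_surf + dt * q_net / areal_heat_capacity
```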
### Learning based methods

Deep learning-based image generation methods have recently shown very promising results in RGB image generation. These methods are fast compared to the computational methods and produce more diversified images. Multiple families of methods exist for generating RGB images, including Deep Boltzmann Machines, which are energy-based models; variational autoencoders [58, 29], which are directed probabilistic graphical models; deep autoregressive models; normalizing flow models; and generative adversarial models [18, 39, 70, 22].

Deep learning-based IR generation models can be divided into two groups. The first group consists of models that translate RGB images into IR images [30, 31, 66, 40, 34, 48, 53, 33, 56, 26, 55, 27, 44, 54, 57, 65, 63, 36, 1]; these models require paired RGB-IR data for training and work on the principle of mapping RGB images to IR. The second group consists of direct IR generation [68]; these models are trained directly on IR images and generate similar IR images from the learned distributions.

Generating thermal images from RGB images is an inverse, ill-posed task: an RGB image contains radiative information in the 400-700 nm range, whereas thermal images belong to a band of 3-15 \\(\\mu\\)m, so most models have treated this as a domain translation problem instead of developing physics-based models. Mizginov et al. [40] explored a multimodal network for synthetic IR generation from the color band to the thermal band. The researchers argue that GAN networks average the thermal contrast over the entire object, thereby losing the important thermal characteristics of particular regions. To address this issue, they provide segmented thermal zones of objects along with a depth map to improve the network's ability to localise the object's thermal characteristics in the generated image, thereby improving the overall synthetic IR quality. They used 3D modelling tools to create realistic 3D models and generated thermal contrast maps, depth maps and object masks, which were given as input to a GAN with U-net [49] as the generator and PatchGAN [14] as the discriminator.

Ozkanoglu et al. [44] proposed the InfraGAN network, which uses a U-net-based generator to learn the transfer mapping between IR and RGB. The generator loss is optimized using a structural similarity (SSIM) loss, an L1 loss, and the discriminator loss; the proposed discriminator classifies the entire image as real or fake and then classifies each pixel as real or fake. Kniaz et al. [30] used the SqueezeNet CNN to translate RGB data into thermal images; the network was inspired by the colorization problem, and they developed a training dataset consisting of 1000 geometrically aligned RGB and thermal images of various objects. Yuan et al. [65] used a conditional GAN [39] for synthesizing NIR images from RGB images.
They used the robust adaptive loss function proposed in [7] along with the SSIM loss, training on paired images from the Sentinel-2 dataset. Pix2Pix- and CycleGAN-based networks have been the foremost choice of researchers due to their direct applicability to domain adaptation and image translation. Zhang et al. [66], shown in figure 9, explored image-to-image translation models such as CycleGAN [70] and Pix2Pix GAN [22] for unpaired and paired image translation to create large IR tracking datasets from RGB videos; the intermediate features of these networks were further used to enhance the tracking of IR images. The results obtained by these models, as presented in their paper, are shown in figure 10. With a similar architecture, Qian et al. [48] proposed a sparse U-net generator [49] and PatchGAN-based Pix2Pix network for translating RGB to thermal IR images; they select only partial low-level and high-level features and use intensity and gradient losses to optimise the network. Li et al. [33] used the D-LinkNet architecture instead of U-net as the generator in the Pix2Pix network to learn the image textures and interdependencies within the image and improve the overall quality of the generated synthetic IR.

\\begin{table}
\\begin{tabular}{|l|l|l|}
\\hline
**Method** & **Papers** & **Deep Learning Technique** \\\\
\\hline
RGB-to-IR Translation & Kniaz et al. [30] & SqueezeNet CNN \\\\
 & Kniaz et al. [31] & ThermalGAN \\\\
 & Zhang et al. [66] & CycleGAN and Pix2Pix GAN \\\\
 & Mizginov et al. [40] & Vanilla GAN with U-net as generator \\\\
 & Li et al. [34] & Multi-generator network \\\\
 & Qian et al. [48] & Pix2Pix GAN with sparse U-net as generator \\\\
 & Abbott et al. [1] & CycleGAN with object-specific loss \\\\
 & Yuan et al. [65] & Conditional GAN [39] with robust adaptive loss function [7] \\\\
 & Uddin et al. [53] & Attention GAN \\\\
 & Li et al. [33] & Pix2Pix GAN with D-LinkNet as generator \\\\
 & Wang et al. [56] & Cross-modality perceptual style transfer network \\\\
 & Kim et al. [26] & BicycleGAN (intensity modulation network) \\\\
 & Ulusoy et al. [55] & Vanilla GAN with U-net as generator \\\\
 & Kim et al. [27] & CNN-based neural style transfer network \\\\
 & Ozkanoglu et al. [44] & InfraGAN \\\\
 & Li et al. [36] & DAGAN \\\\
 & Yi et al. [63] & CycleGAN with channel and spatial attention [60] and gradient normalization [61] \\\\
 & Uddin et al. [54] & MWIR-GAN \\\\
 & Wang et al. [57] & V2IR-GAN \\\\
\\hline
Direct IR Generation & Zhang et al. [68] & SIR-GAN \\\\
\\hline
\\end{tabular}
\\end{table}
Table 5: List of Learning based Methods

Kim et al. [26] proposed generating synthetic IR with an embedded target for target tracking applications. They used a BicycleGAN [71]-based network to synthesize IR from RGB images, treating these as background images, and then used an intensity modulation network together with target-masked background images to render realistic IR targets on them. The proposed intensity modulation network is a variant of a GAN that adapts the target mask region to the background region. They trained the BicycleGAN on the KAIST dataset [20], which features aligned color-thermal image pairs. A sketch of the kind of composite generator objective used by such translation networks is given below.
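The composite objectives used by these translators (e.g., InfraGAN's adversarial plus L1 plus SSIM combination) share a common shape, sketched below in PyTorch. This is an illustrative sketch, not any paper's exact objective: the weighting factors are hypothetical, and ssim_fn stands for an SSIM implementation assumed to be supplied by a library such as pytorch-msssim.

```python
import torch
import torch.nn.functional as F

def generator_loss(d_fake_logits, fake_ir, real_ir, ssim_fn,
                   lambda_l1=100.0, lambda_ssim=5.0):
    """Composite translation objective: adversarial + pixel + structural."""
    # Adversarial term: push the discriminator to call the fake "real".
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # Pixel-wise fidelity term against the paired real IR image.
    l1 = F.l1_loss(fake_ir, real_ir)
    # Structural fidelity term; ssim_fn returns similarity in [0, 1].
    structural = 1.0 - ssim_fn(fake_ir, real_ir)
    return adv + lambda_l1 * l1 + lambda_ssim * structural
```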
Wang et al. [56] proposed a cross-modality perceptual style transfer network to generate pseudo-IR images from RGB images that preserve the sharp structure of the actual IR image. Building on the fact that the pseudo-IR and RGB images are paired, they used these pseudo-IR images to calculate the displacement between the RGB images and the real IR images for registration and fusion purposes; a perceptual style transfer constraint steers the network towards generating structure-enhanced pseudo-infrared images.

IR scenes have unique, distinctive characteristics that depend on many factors, which complicates learning: a single generator cannot model all the unrelated characteristics of an IR scene, and learning all of them from the RGB image to generate a realistic IR image is very difficult. To address this issue, Li et al. [34] proposed an ensemble-learning-based multi-generator network in which different generators learn different semantic information for IR synthesis in remote sensing applications. They placed a ResNet-50-based scene classifier ahead of the multi-generator network to ensure that each generator learns to generate scenes with a specific characteristic. To avoid the network complexity that a multi-generator design can cause, they used cluster-based methods to identify and group characteristics that are similar in nature and can be handled by a single generator, significantly limiting the number of generators required. They used the Pix2Pix GAN network for image translation.

Most deep learning-based IR generation models do not consider physical principles when generating IR images. To address this issue, Wang et al. [57] proposed V2IR-GAN, which models the physical process of IR imaging while generating novel images. They developed three modules: a spontaneous emission module, a reflected radiation module and a transmission coefficient module, which model, respectively, the spontaneous emission generated by the object, the inter-reflection radiation from surrounding objects or the environment, and atmospheric transmission. This gives a more physics-grounded approach to generating thermal images from RGB images.

Zhang et al. [68] proposed the IR image refinement network SIR-GAN, which learns bidirectional mappings between two domains, real IR and computationally generated synthetic IR, to enhance the realism of simulated IR images. They used a cycle consistency loss, a SIR refinement loss and an adversarial loss to optimize their network. They used a FlexCam expert IR thermal imager to capture 800 samples of real IR images. To generate the synthetic IR images, they first built a 3D geometric model of the target, used IR physical modelling to obtain IR textures of the target, and then used OGRE rendering together with an atmospheric model to simulate the IR images; once these synthetic images were obtained, SIR-GAN was used to bring realism into them.

The constraint of obtaining a sufficient supply of accurately labelled images can often hinder the training of deep learning systems, leading to insufficient diversity in aspects like target angles, time of day, and seasonal variation. This challenge becomes particularly pronounced when working beyond the visible spectrum. Acquiring an abundant collection of real remote-sensing images can be costly, demanding extensive fieldwork and post-processing efforts.
Thankfully, synthetic imagery emerges as a valuable and cost-effective substitute.

## 7 Challenges in Synthetic Infrared Image synthesis

Despite the remarkable potential of synthetic infrared (IR) image synthesis, the technology is not without its challenges. These hurdles revolve around the accuracy and realism of the synthetic images, the complexity of the modeling process, and ethical considerations.

**Accuracy and Realism:** One of the main challenges in synthetic IR image synthesis is ensuring the images accurately represent real-world thermal scenarios. This involves not only replicating the appearance of different objects under IR imaging but also simulating the various factors that can affect an object's thermal signature, such as its material properties, the environmental conditions, or the angle of observation.

Figure 13: Visual results of RGB-to-IR translation from IGAN [33]. (a) represents the RGB image, (b) is the real IR and (c) is the generated IR of the cooling tower from the IGAN model. Image courtesy [33].

**Data Diversity:** To train robust and reliable models, a wide range of scenarios needs to be represented in the synthetic IR images, including different weather conditions, times of day, seasons, and types of objects. Ensuring such diversity in the synthetic dataset can be a challenging task.

**Computational Complexity:** The process of generating synthetic IR images can be computationally intensive, requiring significant processing power and time. This can be a barrier, especially when large datasets of synthetic images are needed for training robust machine learning models.

**Validation:** Validating the effectiveness of synthetic IR images can be a complex process. It usually involves comparing the performance of models trained on synthetic images with that of models trained on real-world IR images. However, obtaining a large and diverse dataset of real-world IR images for this comparison can be difficult due to privacy, security, and logistical issues.

**Ethical Considerations:** There are also ethical considerations involved in the use of synthetic IR images. For instance, if used in surveillance or facial recognition systems, there are concerns about privacy and consent. Additionally, there is the potential risk of misuse if the technology falls into the wrong hands.

**Generalizability:** Models trained on synthetic IR images need to generalize well to real-world situations. However, due to the inherent differences between synthetic and real-world images, there can be a domain gap that hampers the model's performance in real-world scenarios.

## 8 Conclusion

Synthetic infrared image and video generation is a crucial research area with significant implications across myriad fields, including defense, gaming, healthcare, and environmental monitoring. This comprehensive survey offers an in-depth overview of the existing methods for creating synthetic IR imagery, encompassing the physics of IR radiation emission, sensor modeling, atmospheric attenuation, simulation tools, and computational techniques for IR scene simulation. Additionally, it addresses the various datasets available in the infrared domain that are used for IR image generation through deep learning methods, as well as the deep learning-based approaches for IR scene creation.
The survey identifies the challenges in IR scene generation, such as the need for more realistic and diverse datasets, the demand for more accurate and efficient simulation tools, and the necessity of addressing the domain shift problem between synthetic and real-world IR data. Moreover, the development of synthetic IR imagery has the potential to enable new applications in autonomous systems, surveillance, and predictive maintenance, significantly impacting various industries. Future research directions may include the advancement of more sophisticated simulation tools, the creation of larger and more varied datasets, and the exploration of innovative deep learning architectures and techniques for IR scene generation. Additionally, integrating synthetic IR imagery with other modalities, such as visible and hyperspectral imaging, can lead to more robust and accurate scene understanding and object recognition. Overall, this survey aims to provide a foundation for further research and development in the field of synthetic IR image and video generation, which has the potential to transform numerous industries and applications.

## References

* [1] Abbott, R., Robertson, N.M., Martinez del Rincon, J., Connor, B., 2020. Unsupervised object detection via LWIR/RGB translation, in: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), IEEE. doi:10.1109/cvprw50498.2020.00053.
* [2] Akula, A., Ghosh, R., Kumar, S., Sardana, H.K., 2013. Moving target detection in thermal infrared imagery using spatiotemporal information. J. Opt. Soc. Am. A 30, 1492-1501. doi:10.1364/JOSAA.30.001492.
* [3] Akula, A., Khanna, N., Ghosh, R., Kumar, S., Das, A., Sardana, H., 2014. Adaptive contour-based statistical background subtraction method for moving target detection in infrared video sequences. Infrared Physics and Technology 63, 103-109. doi:10.1016/j.infrared.2013.12.012.
* [4] Akula, A., Khanna, N., Ghosh, R., Kumar, S., Das, A., Sardana, H., 2024. Moving object thermal infrared imagery dataset (MOTIID). https://vcipl-okstate.org/pbvs/bench/Data/08/Benchmark.zip. Accessed: 2024-08-13.
* [5] Bae, T.W., Kim, Y.C., Ahn, S.H., 2018. Infrared image synthesis of background and target using temperature estimation, in: 2018 International Conference on Electronics, Information, and Communication (ICEIC), pp. 1-2. doi:10.23919/ELINFOCOM.2018.833621.
* [6] Bae, T.W., Kim, Y.C., Ahn, S.H., 2019. IR composite image generation by wavelength band based on temperature synthesis estimated from IR target signature and background scene. Journal of Sensors 2019, 1-17. doi:10.1155/2019/9423976.
* [7] Barron, J.T., 2019. A general and adaptive robust loss function. arXiv:1701.03077.
* [8] Berg, A., Ahlberg, J., Felsberg, M., 2015.
A thermal object tracking benchmark, in: Advanced Video and Signal Based Surveillance (AVSS), 2015 12th IEEE International Conference on.
* [9] DSIAC. ATR algorithm development image database. https://dsiac.org/databases/atr-algorithm-development-image-database/. Accessed: 2024-08-13.
* [10] Choi, J.H., Kim, T.K., 2009. Study on spectral transmission characteristics of the reflected and self-emitted radiations through the atmosphere, in: Lecture Notes in Electrical Engineering. Springer Berlin Heidelberg, pp. 393-406. doi:10.1007/978-3-540-98595-7_27.
* [11] CSIR, 2024. Optronic system simulator | defsec. https://defsec.csir.co.za/optronic-sensor-systems-os/infrastructure/optronic-sensor-systems-infrastructure. Accessed: 2024-08-13.
* [12] Davis, J.W., Keck, M.A., 2005. A two-stage template approach to person detection in thermal imagery, in: 2005 Seventh IEEE Workshops on Applications of Computer Vision (WACV/MOTION'05) - Volume 1, pp. 364-369. doi:10.1109/ACVMOT.2005.14.
* [13] Davis, J.W., Sharma, V., 2007. Background-subtraction using contour-based fusion of thermal and visible imagery. Computer Vision and Image Understanding 106, 162-182. doi:10.1016/j.cviu.2006.06.010. Special issue on Advances in Vision Algorithms and Systems beyond the Visible Spectrum.
* [14] Demir, U., Unal, G., 2018. Patch-based image inpainting with generative adversarial networks. arXiv:1803.07422.
* [15] Dulski, R., Sosnowski, T., Polakowski, H., 2011. A method for modelling IR images of sky and clouds. Infrared Physics & Technology 54, 53-60. doi:10.1016/j.infrared.2010.12.011.
* [16] Garnier, C., Collorec, R., Flifla, J., Mouclier, C., Rousee, F., 1999. Infrared sensor modeling for realistic thermal image synthesis, in: 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP99), IEEE. doi:10.1109/ICASSP.1999.757600.
* [17] Goff, A.L., Kersaudy, P., Latger, J., Cathala, T., Stolte, N., Barillot, P., 2000. Automatic temperature computation for realistic IR simulation, in: Watkins, W.R., Clement, D., Reynolds, W.R. (Eds.), Targets and Backgrounds VI: Characterization, Visualization, and the Detection Process, SPIE. doi:10.1117/12.392526.
* [18] Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y., 2014. Generative adversarial networks. arXiv:1406.2661.
* [19] He, Y.J., Li, M., 2013. Research on the key technologies of infrared scene simulation based on 3D rendering engine, in: 2013 Seventh International Conference on Image and Graphics, pp. 737-741. doi:10.1109/ICIG.2013.150.
* [20] Hwang, S., Park, J., Kim, N., Choi, Y., Kweon, I.S., 2015.
Multispectral pedestrian detection: Benchmark dataset and baseline, in: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1037-1045. doi:10.1109/CVPR.2015.7298706.
* [21] Digital Imaging and Remote Sensing Image Generation (DIRSIG), 2024. Documentation. https://dirsig.cis.rit.edu/docs/. Accessed: 2024-08-13.
* [22] Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A., 2017. Image-to-image translation with conditional adversarial networks, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE. doi:10.1109/cvpr.2017.632.
* [23] Jia, X., Zhu, C., Li, M., Tang, W., Zhou, W., 2021. LLVIP: A visible-infrared paired dataset for low-light vision, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3496-3504.
* [24] Jiang, Y., Liu, Y., Peng, Q., Jie, F., Ming, D., 2019. Infrared image generation method based on visible light remote sensing image, in: 2019 IEEE 3rd Advanced Information Management, Communications, Electronic and Automation Control Conference (IMCEC), IEEE. doi:10.1109/imecec64724.2019.9894157.
* [25] Jiang, Z., Wang, Z., Peng, Q., 2003. Real-time generation of dynamic infrared scene. International Journal of Infrared and Millimeter Waves 24, 1737-1748. doi:10.1023/a:1026099721060.
* [26] Kim, J.H., Hwang, Y., 2022. GAN-based synthetic data augmentation for infrared small target detection. IEEE Transactions on Geoscience and Remote Sensing 60, 1-12. doi:10.1109/tgrs.2022.3179891.
* [27] Kim, T., Bang, H., 2022. Fractal texture enhancement of simulated infrared images using a CNN-based neural style transfer algorithm with a histogram matching technique. Sensors 23, 422. doi:10.3390/s23010422.
* [28] Kim, Y.C., Bae, T.W., Kwon, H.J., Kim, B.I., Ahn, S.H., 2014. Infrared (IR) image synthesis method of IR real background and modeled IR target. Infrared Physics & Technology 63, 54-61. doi:10.1016/j.infrared.2013.12.006.
* [29] Kingma, D.P., Welling, M., 2013. Auto-encoding variational bayes. arXiv:1312.6114.
* [30] Kniaz, V.V., et al. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 41-45. URL: https://api.semanticscholar.org/CorpusID:5564833.
* [31] Kniaz, V.V., Knyaz, V.A., Hladůvka, J., Kropatsch, W.G., Mizginov, V., 2019. ThermalGAN: Multimodal color-to-thermal image translation for person re-identification in multispectral dataset, in: Lecture Notes in Computer Science. Springer International Publishing, pp. 606-624. doi:10.1007/978-3-030-11024-6_46.
* [32] Leja, L., Purlan, V., Novickis, R., Cvetkovs, A., Ozols, K., 2022.
Mathematical model and synthetic data generation for infra-red sensors. Sensors 22, 9458. doi:10.3390/s22239458.
* [33] Li, B., Xian, Y., Su, J., Zhang, D.Q., Guo, W.L., 2021. I-GANs for infrared image generation. Complexity 2021, 1-11. doi:10.1155/2021/6635242.
* [34] Li, L., Li, P., Yang, M., Gao, S., 2019. Multi-branch semantic GAN for infrared image generation from optical image, in: Intelligence Science and Big Data Engineering. Visual Data Engineering. Springer International Publishing, pp. 484-494. doi:10.1007/978-3-030-36189-1_40.
* [35] Li, N., Lv, Z., Wang, S., Gong, G., Ren, L., 2015. A real-time infrared radiation imaging simulation method of aircraft skin with aerodynamic heating effect. Infrared Physics & Technology 71, 533-541. doi:10.1016/j.infrared.2015.06.014.
* [36] Li, Y., Ko, Y., Lee, W., 2023. A feasibility study on translation of RGB images to thermal images: Development of a machine learning algorithm. SN Computer Science 4. doi:10.1007/s42979-022-01404-4.
* [37] Lord, S.D., 1992. A new software tool for computing Earth's atmospheric transmission of near- and far-infrared radiation. https://atran.arc.nasa.gov/cgi-bin/atran/atran.cgi. NASA Technical Memorandum 103957.
* [38] Miezianko, R. Terravic research infrared database. https://vcipl-okstate.org/pbvs/bench/Data/05/download.html.
* [39] Mirza, M., Osindero, S., 2014. Conditional generative adversarial nets. arXiv:1411.1784.
* [40] Mizginov, V.A., Danilov, S.Y., 2019. Synthetic thermal background and object texture generation using geometric information and GAN. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W12, 149-154. doi:10.5194/isprs-archives-XLII-2-W12-149-2019.
* [41] Moorhead, I.R., Gilmore, M.A., Houlbrook, A.W., Oxford, D.E., Filbee, D.R., Stroud, C.A., Hutchings, G., Kirk, A., 2001. CAMEO-SIM: a physics-based broadband scene simulation tool for assessment of camouflage, concealment, and deception methodologies. Optical Engineering 40, 1896-1905. doi:10.1117/1.1390289.
* [42] More, S.T., Pandit, A.A., Merchant, S.N., Desai, U.B., 2002. Synthetic IR scene simulation of air-borne targets, in: Chaudhuri, S., Zisserman, A., Jain, A.K., Majumder, K.L. (Eds.), ICVGIP 2002, Proceedings of the Third Indian Conference on Computer Vision, Graphics & Image Processing, Ahmedabad, India, December 16-18, 2002, Allied Publishers Private Limited.
* [43] OKTAL-SE, 2024. SE-Workbench: EO/IR scene simulation. https://www.oktal-se.fr/. Accessed: 2024-08-13.
* [44] Ozkanoglu, M.A., Ozer, S., 2022. InfraGAN: A GAN architecture to transfer visible images to infrared domain. Pattern Recognition Letters 155, 69-76.
* [45] Portmann, J., Lynen, S., Chli, M., Siegwart, R., 2014. People detection and tracking from aerial thermal views, in: 2014 IEEE International Conference on Robotics and Automation (ICRA).
* [46] Presagis, 2024. Ondulus IR sensor simulation software.
https://www.presagis.com/. Accessed: 2024-08-13.
* [47] Qi, J., Xie, D., Yin, T., Yan, G., Gastellu-Etchegorry, J.P., Li, L., Zhang, W., Mu, X., Norford, L.K., 2019. LESS: Large-scale remote sensing data and image simulation framework over heterogeneous 3D scenes. Remote Sensing of Environment 221, 695-706. doi:10.1016/j.rse.2018.11.036.
* [48] Qian, X., Zhang, M., Zhang, F., 2020. Sparse GANs for thermal infrared image generation from optical image. IEEE Access 8, 180124-180132. doi:10.1109/access.2020.3024576.
* [49] Ronneberger, O., Fischer, P., Brox, T., 2015. U-net: Convolutional networks for biomedical image segmentation, in: Lecture Notes in Computer Science. Springer International Publishing, pp. 234-241. doi:10.1007/978-3-319-24574-4_28.
* [50] JRM Technologies, 2024. OSV | JRM Technologies. http://www.jrtech.com/products/OSV. Accessed: 2024-08-13.
* [51] ThermoAnalytics, 2024. Scene simulation for artificial intelligence applications. https://blog.thermoanalytics.com/blog/scene-simulation-for-artificial-intelligence-applications. Accessed: 2024-08-13.
* [52] Toet, A., 2017. The TNO multiband image data collection. Data in Brief 15, 249-251. doi:10.1016/j.dib.2017.09.038.
* [53] Uddin, M.S., Hoque, R., Islam, K.A., Kwan, C., Gribben, D., Li, J., 2021. Converting optical videos to infrared videos using attention GAN and its impact on target detection and classification performance. Remote Sensing 13, 3257. doi:10.3390/rs13163257.
* [54] Uddin, M.S., Kwan, C., Li, J., 2023. MWIRGAN: Unsupervised visible-to-MWIR image translation with generative adversarial network. Electronics 12, 1039. doi:10.3390/electronics12041039.
* [55] Ulusoy, U., Yilmaz, K., Ozsahin, G., 2022. Generative adversarial network for generating synthetic infrared image from visible image. Gazi Üniversitesi Fen Bilimleri Dergisi Part C: Tasarım ve Teknoloji 10, 286-299. doi:10.29109/gujsc.1011486.
* [56] Wang, D., Liu, J., Fan, X., Liu, R., 2022. Unsupervised misaligned infrared and visible image fusion via cross-modality image generation and registration. arXiv:2205.11876.
* [57] Wang, L., Cheng, J., Song, J., Pan, X., Zhang, C., 2023. Learning to measure infrared properties of street views from visible images. Measurement 207, 112320. doi:10.1016/j.measurement.2022.112320.
* [58] Wei, R., Garcia, C., El-Sayed, A., Peterson, V., Mahmood, A., 2020. Variations in variational autoencoders - a comparative evaluation. IEEE Access 8, 153651-153670.
doi:10.1109/access.2020.3018151.
* [59] Willers, C.J., Willers, M.S., Lapierre, F., 2011. Signature modelling and radiometric rendering equations in infrared scene simulation systems, in: Titterton, D.H., Richardson, M.A. (Eds.), Technologies for Optical Countermeasures VIII, International Society for Optics and Photonics, SPIE. p. 81870R. doi:10.1117/12.98352.
* [60] Woo, S., Park, J., Lee, J.Y., Kweon, I.S., 2018. CBAM: Convolutional block attention module, in: Proceedings of the European Conference on Computer Vision (ECCV), pp. 3-19.
* [61] Wu, Y.L., Shuai, H.H., Tam, Z.R., Chiu, H.Y., 2021. Gradient normalization for generative adversarial networks, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6373-6382.
* [62] Wu, Z., Fuller, N., Theriault, D., Betke, M., 2014. A thermal infrared video benchmark for visual analysis, in: 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 201-208. doi:10.1109/CVPRW.2014.39.
* [63] Yi, X., Pan, H., Zhao, H., Liu, P., Zhang, C., Wang, J., Wang, H., 2023. Cycle generative adversarial network based on gradient normalization for infrared image generation. Applied Sciences 13, 635. doi:10.3390/app13010635.
* [64] Yu, W., Peng, Q., Tu, H., Wang, Z., 1998. An infrared image synthesis model based on infrared physics and heat transfer. International Journal of Infrared and Millimeter Waves 19, 1661-1669. doi:10.1023/a:102175120244.
* [65] Yuan, X., Tian, J., Reinartz, P., 2020. Generating artificial near infrared spectral band from RGB image using conditional generative adversarial network. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020, 279-285. doi:10.5194/isprs-annals-v-3-2020-279-2020.
* [66] Zhang, L., Gonzalez-Garcia, A., van de Weijer, J., Danelljan, M., Khan, F.S., 2019. Synthetic data generation for end-to-end thermal infrared tracking. IEEE Transactions on Image Processing 28, 1837-1850. doi:10.1109/TIP.2018.2879249.
* [67] Zhang, M.M., Choi, J., Daniilidis, K., Wolf, M.T., Kanan, C., 2015. VAIS: A dataset for recognizing maritime imagery in the visible and infrared spectrums, in: 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 10-16.
* [68] Zhang, R., Mu, C., Xu, M., Xu, L., Shi, Q., Wang, J., 2019. Synthetic IR image refinement using adversarial learning with bidirectional mappings. IEEE Access 7, 153734-153750. doi:10.1109/access.2019.2947657.
* [69] Zhijian, H., Bingwei, H., Shuji, S., 2022. An infrared sequence image generating method for target detection and tracking. Frontiers in Computational Neuroscience 16. doi:10.3389/fncom.2022.930827.
* [70] Zhu, J.Y., Park, T., Isola, P., Efros, A.A., 2017a.
Unpaired image-to-image translation using cycle-consistent adversarial networks, in: 2017 IEEE International Conference on Computer Vision (ICCV), IEEE. URL: [https://doi.org/10.1109/iccv.2017.244](https://doi.org/10.1109/iccv.2017.244), doi:10.1109/iccv.2017.244. * [71] Zhu, J.Y., Zhang, R., Pathak, D., Darrell, T., Efros, A.A., Wang, O., Shechtman, E., 2017b. Toward multimodal image-to-image translation. URL: [https://arxiv.org/abs/1711.11586](https://arxiv.org/abs/1711.11586), doi:10.4855/ARXIV.1711.11586.
Synthetic infrared (IR) scene and target generation is an important computer vision problem, as it allows the generation of realistic IR images and targets for training and testing applications such as remote sensing, surveillance, and target recognition. It also helps reduce the cost and risk associated with collecting real-world IR data. This survey paper provides a comprehensive overview of the conventional mathematical-modelling-based methods and the deep-learning-based methods used for generating synthetic IR scenes and targets. The paper discusses the importance of synthetic IR scene and target generation, briefly covers the mathematics of blackbody and grey-body radiation, and reviews IR image-capturing methods. The potential use cases of synthetic IR scene and target generation are also described, highlighting the significance of these techniques in various fields. Additionally, the paper explores possible directions for developing new techniques to enhance the efficiency and effectiveness of synthetic IR scene and target generation, while highlighting the need for further research to advance this field.
Using Satellite Imagery for Good: Detecting Communities in Desert and Mapping Vaccination Activities Anza Shakeel Information Technology University [email protected] Mohsen Ali Information Technology University [email protected] ## 1 Introduction In today's era, geo-located maps fuel a considerable portion of the digital economy. From ride-sharing services like Uber to finding restaurants in your neighborhood, we depend upon comprehensive, up-to-date and accurate maps. Aside from these commercial examples, accurate maps play a vital role in running many functions of government. They are used for city planning, monitoring development, and organizing rescue operations in times of disaster. The sheer usefulness and economic importance of these maps have pushed both government and non-government bodies to invest in creating, managing and updating them. Ideas like using mobile-phone activity to geo-locate roads or directly crowdsourcing the map information (OpenStreetMap, Wikimapia, etc.) have been used to build and update maps. In developing countries, government-maintained maps are mostly not geo-located or are not readily accessible in digitized format. This is especially true for remote regions of developing countries, where crowdsourced information mostly fails to be helpful. Satellite imagery provides the opportunity to extract much of this information, such as urban and non-urban population, roads and crop yield, without placing humans on the ground. However, having humans tag information by eyeballing satellite imagery is a painstakingly difficult task. The Government of Punjab has been revamping its vaccination program to increase its geographical coverage and to better manage the vaccination staff spread across the whole of Punjab. To analyze the coverage of vaccination activities, especially in areas far away from the urban centers of Punjab, we need information about the locations and sizes of communities in those regions. Since this information is not readily available, we have employed deep learning to detect house-like structures in the freely available low-resolution satellite imagery (visible spectrum) of the Cholistan Desert (Figure 1). Our algorithm detects areas that are clustered together (representing housing clusters), and polygon fitting is performed so that we can create manageable representations (you can visualize our results on the interactive map below; technical details are discussed in the last section). Finally, we map vaccination and public-school location data over these detected regions.
Figure 1: (Top) Polygon of the Cholistan region of Punjab, zoomed in and displayed on the map view of Pakistan. (Bottom) Satellite view of the Cholistan region.
For vaccination data we calculate both the count of vaccination activity per quarter per region and the coverage, to understand how geographically diverse the activities performed in those regions were. Our analysis shows that although not all the detected regions appear to be covered, large numbers of them are. Some receive vaccination activity in all quarters; others are missed in one or two of the quarters. A large number of detected regions had at least one public school as per government records. Certain detected regions were found to be receiving very little or no vaccination activity; their location and clustering indicate to us that there could be an anomaly in the data, which we hope to verify and correct in a future version.
We are in the process of conducting such a survey for the whole of Punjab, on much higher-resolution satellite imagery. ## 2 Data-Set Details In order to deeply investigate built-structure detection in satellite imagery, we had to be selective in terms of the data-set. Villages in desert areas are difficult to detect, as the texture of rooftops often merges with the ground; hence these regions demand very fine detection. Keeping this issue in mind, we opted for the VillageFinder data-set for training [1]. The data is marked for segmentation and includes nucleated villages from fifteen countries located mainly in Africa and Asia. The training data-set has a total of 300 images of size 512\\(\\times\\)512 with their respective ground truths as binary images (pixels that represent a village area are 1 and others are marked 0). For our detection setting, we extracted multiple patches of size 128\\(\\times\\)128 from random locations in each image. These patches were then resized to 64\\(\\times\\)64 and were also jittered. During patch extraction, the corresponding patches from the ground-truth binary images were also generated, and a threshold of 0.75 was selected to decide whether a patch contained a building or not: patches whose fraction of ones was greater than 0.75 were assigned class label 1 (built), and patches whose fraction of ones was less than 0.1 were assigned class label 0 (non-built). Sample patches from the training set are shown in figure 2. The training data is normalized channel-wise: a random set of patches was selected from the training set and a mean image was computed from this set. The mean image was then subtracted from every training example as well as from each testing area. We have used a Matlab library that saves geo-located images from Google Earth when given the maximum and minimum coordinates of the area. This library is used to obtain images of the Cholistan region of Punjab to test our trained model.
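To make the patch-generation procedure concrete, the following is a minimal sketch of the sampling and labeling logic described above. The function name, the use of OpenCV for resizing, and the bounded number of sampling attempts are our assumptions for illustration; the 0.75/0.1 thresholds follow the text.

```python
import cv2
import numpy as np

def extract_labeled_patches(image, mask, n_patches, patch=128, out=64,
                            built_thr=0.75, nonbuilt_thr=0.1, seed=None):
    """Sample random patches from a training image and label them using the
    binary ground-truth mask (1 = village pixel, 0 = background)."""
    rng = np.random.default_rng(seed)
    h, w = mask.shape
    patches, labels = [], []
    for _ in range(n_patches * 20):          # bounded number of attempts
        if len(patches) >= n_patches:
            break
        y = int(rng.integers(0, h - patch))
        x = int(rng.integers(0, w - patch))
        frac = mask[y:y + patch, x:x + patch].mean()  # fraction of built pixels
        if frac > built_thr:
            label = 1                                  # built
        elif frac < nonbuilt_thr:
            label = 0                                  # non-built
        else:
            continue                                   # ambiguous patch, skipped
        crop = cv2.resize(image[y:y + patch, x:x + patch], (out, out))
        patches.append(crop)
        labels.append(label)
    return np.stack(patches), np.asarray(labels)
```

Channel-wise normalization then amounts to computing a mean image over a random subset of the returned patches and subtracting it from every training and testing example.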
## 3 Implementation Details ### Network Architecture Our initial step was to fine-tune a pre-trained deep neural network, for which we selected the VGG network [2]. After changing the last fully connected layer according to our classes, the error was only propagated up to the fully connected layers; training was turned off for all earlier layers. The results from this model were not good, because the VGG model is trained on ImageNet data and our training examples are very different. We therefore decided to re-train the network on our data. The classification results were reasonable, but since we had a softmax at the end of the fully connected layers, the network made one decision per patch. Although we had two hundred thousand patches in the training set, we suspected that our results might improve further with more data, as convolutional neural networks are known to be data-hungry. To cater to the need for more data, we augmented data during training: training was paused after every ten epochs and the model was tested on a portion of the training data; all false-negative examples were then added to the training set. This procedure was repeated five times. In images taken from above, each pixel represents some area on the ground. We wanted to preserve that information, so we converted all three fully connected layers to convolutional layers [3]. In this way the network is able to make a decision for each pixel. Instead of adding an up-sampling layer to the network, we use bilinear interpolation to up-sample the probability maps to the size of the input. For our final network (shown in figure 3) we did not use batch normalization layers, as they distorted the results: satellite imagery has much more variance than the batch statistics could handle, so the batch normalization layers were not working. We also initially applied a ReLU layer at the end of the last convolutional layer, which was later removed because it made the output noisy; we realized this after visualizing the textures each layer was learning.
Figure 2: (Top) Some of the patches from class label 1. (Bottom) Patches from class label 0.
Figure 3: Network architecture.
All model files are written in Python using the Chainer framework. The network was trained on an Nvidia GPU with 3 GB of memory and took a week to train fully. ### Post Processing Details Once the probability maps are interpolated, they are mapped onto their corresponding input images to inspect the detected regions. Initially the model is tested on both the VillageFinder testing set and the Model Town Lahore area, using images saved from Google Earth. Results are shown in figure 4.
Figure 4: (Top-Left) Final results on Model Town; blue areas are detected built structures. (Top-Right) Final results on a VillageFinder testing image. (Bottom) Some initial results computed by fine-tuning only the fully connected layers of the VGG network; red areas are detected built structures.
To define neighborhoods we fitted polygons to the results. This was done by thresholding the probability maps: probabilities greater than or equal to 0.5 were set to 1, and values below the threshold were set to 0. Dilation and erosion were applied to clean the binary maps before fitting polygons to them. As the testing images from Google Earth are geo-referenced, we converted our results to world coordinates and wrote a KML file of the polygons. Before polygon fitting we also divided the world coordinates of our region of interest into small cells. Each cell was given a value equal to the average probability of the pixels covering it. While writing the KML for this, the cell probability values were multiplied with the color of each cell; in this way we were able to visualize the areas that the network detected with higher probability as well as the regions it detected with lower probability.
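A minimal sketch of this post-processing chain follows, assuming OpenCV as the image-processing backend (the original pipeline may have used different primitives). The 0.5 threshold and the erosion/dilation cleanup follow the text, while the kernel size and the polygon-approximation tolerance are illustrative choices.

```python
import cv2
import numpy as np

def probability_map_to_polygons(prob_map, image_hw, thr=0.5, kernel_size=5):
    """Up-sample a coarse probability map, threshold it, clean it with
    erosion/dilation, and fit polygons to the surviving regions."""
    h, w = image_hw
    # Bilinear interpolation to the input image size (cv2 expects (width, height)).
    prob = cv2.resize(prob_map, (w, h), interpolation=cv2.INTER_LINEAR)
    binary = (prob >= thr).astype(np.uint8)
    k = np.ones((kernel_size, kernel_size), np.uint8)
    binary = cv2.erode(binary, k)    # drop isolated false positives
    binary = cv2.dilate(binary, k)   # restore the surviving regions
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Approximate each contour by a polygon before writing it out as KML.
    return [cv2.approxPolyDP(c, 2.0, True) for c in contours]
```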
### Testing Network on Cholistan Region The Cholistan region of Punjab, Pakistan, was selected for the testing experiments due to its difficult landscape. The exact location of Cholistan was noted from the OpenStreetMap website and images were saved using Matlab. The images were then passed through our trained model and a KML was generated as discussed in the previous section (shown in figure 5).
Figure 5: (Top) Detected segments of the Cholistan region of Punjab; pink areas show detected regions. (Bottom) Heatmap of the test region.
### Correlating Vaccination Activity Pakistan has been aggressively running vaccination programs to decrease the child mortality rate and eradicate diseases like polio. Government-trained vaccinators travel to different parts of cities and to far-away villages to inoculate children. The Punjab Information Technology Board (PITB) has distributed smartphones to these officials, and a phone-based application records information about each inoculation activity. Using this vaccination activity data, we analyze how often and in what ratio vaccinators visit the different identified regions. We use only data from August 2015 to December 2016, divided into three-month intervals. The number of vaccinators who accessed a detected cluster is computed, and a histogram is plotted to show the distribution of assigned vaccinators in the Cholistan region. Further, a buffer of 200 meters is created around each location tagged by a vaccinator, and the intersection of these buffers is computed, approximating the movement of vaccinators. Percentage coverage is computed from the movement of vaccinators within the fixed area of each detected segment. As per the Government's vaccination programme, each vaccinator maintains a record containing the ages of children and their pending vaccines. Following this record, vaccinators visit their assigned vicinities to complete a vaccine's round according to its specificity. For each activity a histogram (shown in figure 7) is plotted showing the number of vaccinators who accessed a detected community. These numbers support the observation that the regions with larger areas were accessed the most. The movement of vaccinators covering an area in each community is proportioned over the area of the detected region. These coverage percentages are graphed, giving us the segments that are covered most (shown in figure 8). A pattern of coverage can be seen based on the movement of vaccinators over the span.
Figure 7: Histogram of vaccinators per segment.
Figure 8: Histogram of area covered per detected segment during an activity (three-month interval).
### Mapping Public Schools Data Just like the vaccination activity information, government schools data was correlated with the detected regions to understand accessibility to education. It was discovered that a large number of the detected regions did have a government school in them. Red areas show polygons with no schools, whereas shades of green represent an increasing number of schools, with light green being the least (one or two) and dark green the most (more than 10).
Figure 6: Strength of public schools mapped on detected segments.
## 4 Future Work Satellite imagery offers a vast research area. To take full advantage of the diversity these images contain, we plan to study the variety of bands that hyperspectral imagery presents. Detecting built structures successfully has opened up problems like house counting, tracking urbanization and detecting damaged structures. We intend to deeply analyze the detected segments in order to locate hospitals, schools and parks near each community. ## 5 Acknowledgements We are grateful to the Punjab Information Technology Board (PITB), especially Ms. Maria Badar, for sharing the vaccination activity data as well as the public schools data. ## References * [1] K. Murtaza, S. Khan and N. Rajpoot, "VillageFinder: Segmentation of nucleated villages in satellite imagery," British Machine Vision Conference, 2009. * [2] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," CoRR, abs/1409.1556, 2014. * [3] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in CVPR, 2015.
Deep convolutional neural networks (CNNs) have outperformed existing object recognition and detection algorithms. Satellite imagery, on the other hand, captures highly diverse scenes. This paper describes a deep learning approach that analyzes a geo-referenced satellite image and efficiently detects the built structures in it. A Fully Convolutional Network (FCN) is trained on low-resolution Google Earth satellite imagery to achieve the end result. The detected built communities are then correlated with vaccination activity, which furnishes some useful statistics.
# RELAX: Reinforcement Learning Enabled 2D-LiDAR Autonomous System for Parsimonious UAVs Guanlin Wu\\({}^{1}\\), Zhuokai Zhao\\({}^{2}\\), and Yutao He\\({}^{3}\\) \\({}^{1}\\)Guanlin Wu is with the School of Data Science, The Chinese University of Hong Kong, Shenzhen, Shenzhen, GD 518172, China [email protected] \\({}^{2}\\)Zhuokai Zhao is with the Department of Computer Science, University of Chicago, Chicago, IL 60637, USA [email protected] \\({}^{3}\\)Yutao He is with the Computer Science Department, University of California, Los Angeles, Los Angeles, CA 90095, USA [email protected] ## I Introduction Unmanned Aerial Vehicles (UAVs), commonly known as drones, have gained immense importance and become a transformative technology across many application domains [1]. In addition to more commonly known use cases such as military navigation [2], search-and-rescue [3], and commercial package delivery [4], UAVs are also widely used in metrology [5], agriculture [6], and mining [7], thanks to their compact size and relatively high cost-efficiency, especially when compared to piloted aircraft. Although tasks from different applications pose different, often specific, challenges, one of the key challenges shared across all domains is the ability to operate autonomously. This covers many aspects of UAV operations, including environment perception, path planning and real-time dynamic obstacle avoidance. It is especially important when UAVs are to operate in unknown or dynamically changing environments, where human control is unavailable. UAV autonomous navigation algorithms require on-board sensors to understand the surrounding environment, and optimally navigate the UAV from one place to another [8]. Optimal navigation can be defined in terms of the length of the traveled path [9], traveled time [10] and trajectory smoothness [11], while being collision-free [12]. Numerous efforts have been devoted to advancing this field [13]. However, existing solutions often require UAVs to be equipped with expensive sensor setups, including multiple RGB-D cameras [14, 15, 16, 17] or 3D LiDAR [18]. While these sensors can help build real-time 3D maps for better environment representations, they significantly increase the cost of the autonomous system, as most RGB-D cameras are priced at a few hundred dollars on average [19], and 3D LiDAR often costs well above a thousand US dollars [20]. Consequently, this high cost has become a primary factor preventing wider adoption of UAVs [21]. Therefore, new solutions enabling UAVs to perform _successful and reliable autonomous navigation while using much simpler and cheaper sensor setups_ are urgently needed. A simpler sensor configuration, which changes the system's perception of the surrounding world, poses many new challenges for all system components, including surrounding-environment construction, path planning and dynamic obstacle avoidance, and often calls for an entirely new system design. Specifically, it introduces practical challenges in surrounding detection [22, 23] and RL training [24]. To this end, we introduce the **R**einforcement Learning **E**nabled 2D-**L**iDAR **A**utonomous **S**ystem (**RELAX**), an end-to-end autonomous system presenting novel algorithms that address these intricacies, so that parsimonious UAVs carrying only one 2D-LiDAR sensor can navigate autonomously in unknown environments.
Specifically, RELAX comprises three components: a _map constructor_, which generates occupancy maps using 2D-LiDAR data; a _mission planner_, which creates obstacle-free paths using these maps; and an _online re-planner_, which handles dynamic obstacle avoidance. The main contribution of this paper is that we propose RELAX, _the first UAV autonomous navigation system that requires only a single 2D-LiDAR to support the entire UAV autonomous navigation pipeline_, which includes the initial environment mapping, offline planning and online re-planning for dynamic obstacle avoidance. To address the unique challenges that come with the less feature-rich sensor inputs, we propose novel algorithms to enhance the capability and generalizability of our framework. Experiments show that RELAX achieves success rates comparable to more expensive UAV navigation systems, at only a fraction of the cost. In addition, we advocate RELAX as a _successful proof-of-concept and a platform that boosts future research_ by releasing a real-time training suite in the ROS-Gazebo-PX4 simulator, which supports easy adaptation of RELAX for training newly designed RL algorithms. In other words, the idea of modularization behind the design of RELAX brings large potential for further improvement of its performance. ## II Related Work Existing end-to-end UAV autonomous navigation systems leverage sensor (e.g. RGB-D, 3D-LiDAR) inputs to perceive and understand the surrounding environment, then conduct path planning and automatic dynamic obstacle avoidance [25]. Besides differences in algorithmic aspects, sensor configurations also fundamentally affect the overall design of the system architecture, as well as the specific algorithms within each component. In this section, we briefly discuss UAV navigation systems equipped with different sensor configurations. **Vision-based UAV navigation systems.** Vision-based systems that employ RGB or RGB-D images to capture the environment are arguably the most prevalent configuration in autonomous UAVs [26]. More specifically, RGB images are taken by monocular cameras, while RGB-D images refer to 3D representations of the real world captured by either binocular cameras or a monocular camera with an additional depth sensor. Numerous efforts have been devoted to vision-based UAV systems. For example, Engel et al. [27] developed a quadrotor carrying a monocular camera that is capable of visual navigation in unstructured environments. Although low in cost, the proposed system does not support obstacle avoidance, which is a major disadvantage for many modern tasks. As a result, many works choose to use binocular cameras [28, 29]. However, such systems are very prone to weather changes and are hard to operate at night, greatly limiting their working scenarios. Because of the aforementioned disadvantages of monocular and binocular configurations, RGB-D, which involves both RGB and infrared depth cameras, quickly attracted much attention, resulting in various UAV applications [30, 15]. While effective, the use of RGB-D cameras inevitably increases both the cost and the on-board computational requirement, posing limitations and preventing designs of simple, low-cost and light-weight UAVs for wider adoption. **LiDAR-based UAV navigation systems.** Thanks to its robust performance under various weather and lighting conditions, LiDAR has quickly become the mainstream sensor in many modern UAV navigation systems [31, 18].
LiDAR sensors can be divided into two categories, single-line and multi-line: single-line LiDAR scans one plane of the obstacles to obtain a 2D map, while multi-line LiDAR scans multiple surfaces to obtain a 3D point cloud of the environment. Based on their output types, single- and multi-line LiDAR are also called 2D and 3D LiDAR. Attracted by the richer environment representations that 3D LiDAR produces, most existing UAV autonomous systems utilize 3D LiDAR as their sensor configuration [18, 32, 33]. However, despite existing work's preference for 3D LiDAR, the rich 3D environmental representations may not all be necessary for UAV path planning and robust obstacle avoidance, leaving room for more cost-effective designs. In other words, configurations that utilize 2D LiDAR, which we call a _parsimonious configuration_, may achieve a more balanced trade-off between performance and cost. For example, Gabriel et al. [34] leverage 2D LiDAR and propose an adaptive path-planning solution that combines Rapidly Exploring Random Trees (RRT) and deep RL for the autonomous trajectory generation of UAVs in agricultural environments. However, it does not depend on UAV-scanned data at all stages, but rather leverages a comprehensive Python environment for its operations, failing to equip the system with efficient obstacle avoidance capabilities. In contrast, RELAX prioritizes enhancing obstacle avoidance by mainly utilizing LiDAR data. More specifically, we employ the ROS-Gazebo-PX4 simulator for development, incorporating a variety of algorithms aimed at overcoming different obstacles and ensuring the training's applicability in a real-time simulation setting. ## III Methodology RELAX is designed specifically for parsimonious UAVs: drones that lack odometers, RGB-D cameras, 3D-LiDAR or gimbal systems, and carry only simple sensors such as 2D-LiDAR and an inertial measurement unit (IMU). More specifically, RELAX utilizes RPLiDAR1, a low-cost 2D laser scanner that performs a \\(360\\)-degree scan within a certain range to produce 2D point clouds of the surroundings. Footnote 1: More details at [https://www.slanttec.ai/product/slanttec-rpldar-a1/](https://www.slanttec.ai/product/slanttec-rpldar-a1/). RELAX consists of five modules, as shown in Fig. 1. The _resources_ module contains the necessary sensor outputs, including point clouds captured by the 2D-LiDAR, the velocity and pose of the UAV obtained from the IMU, and the map generated by the _map constructor_. Specifically, the _map constructor_ synthesizes an occupancy map of the environment using point clouds from the 2D-LiDAR. The _mission planner_ provides an obstacle-free path from the starting point to the target on the occupancy map. The _online re-planner_ then navigates the drone (illustrated as the _UAV_ module) to move along this planned path and performs online re-planning to avoid dynamic obstacles.
Fig. 1: System overview: RELAX starts by checking whether the occupancy grid map exists. If there is no map, it runs the _map constructor_ to enter map-constructing mode: while we manually operate the drone to fly one complete circuit around the environment at a specific altitude, the _map constructor_ processes the data from the 2D-LiDAR and integrates them to create an occupancy grid map. This map is then sent back to _resources_ and made available to other modules. Next, the _mission planner_ subscribes to this map, uses it to plan an obstacle-free path from start to target, and sends the path to the _online re-planner_ for dynamic obstacle avoidance using real-time 2D-LiDAR inputs.
The \"dynamic obstacles\" in this paper refers to the static obstacles that are not included in the map produced by _map constructor_. Following [34], we separate static path planning from online path re-planning, with the underlying intuition that the environment does not undergo significant changes in a short period. And separation in the different path planning stages significantly reduces the time cost. We illustrate _map constructor_, _mission planner_ and _online re-planner_ with details in SSIII-A, SSIII-B, and SSIII-C, respectively. ### _Map Constructor_ Map constructor leverages 2D-LiDAR in tandem with Hector-SLAM [35] to construct a grid occupancy map of the environment. **LiDAR scanning.** LiDAR scanning module generates raw images of the surrounding environment at a particular UAV position, as shown in the left of Fig. 2. In 2D-LiDAR system, the scanned images adopts a structure that aligns with \\(x\\)- and \\(y\\)-axis after a reshape operation performed on the one-dimensional LiDAR data array. Then the image is processed into an occupancy grid map, as shown in the right of Fig. 2, where the level of confidence regarding obstacle existence is represented through dark (low) to light (high). While flying through the environment, the drone constantly generates \"raw\" images, contributing to the ongoing construction of the environment. **Hector-SLAM.** Map constructor employs Hector-SLAM [35] to integrate all the LiDAR-scanned \"raw\" images into a single map that represents the entire environment. More specifically, Hector-SLAM operates across three primary phases, which are _map access_, _scan matching_ and _multi-resolution map representation_. In _map access_, the initial occupancy grid map takes shape, driving from the first \"raw\" image. Next, _scan matching_ matches the \"raw\" image taken at time \\(t\\) to the previous occupancy grid map from \\(t-1\\) through points correspondence. To lower the risk of getting stuck in local optimal solution, Hector-SLAM applies _multi-resolution map representation_ to simultaneously keep different maps and update them based on pose estimations. The resulting map in our case is shown in the left of Fig. 3. ### _Mission Planner_ Mission planner receives the occupancy map from map constructor, and plans an obstacle-free path from start to end. It includes two components, which are _path planner_ and _point transformer_. The complete algorithm of mission planner is detailed in Appendix V-A. **Path planner.** Path planner is responsible for generating a collision-free path from start to end based on the static occupancy map. In our case, this map refers to the output of map constructor. Since the generation of such path does not depend on characteristics unique to parsimonious UAVs, any standard path planning algorithm should suffice. For wider adaptability, lower execution time, and relatively optimal path, we use Rapidly Exploring Random Tree (RRT) [36] in this paper to showcase the feasibility of our proposed framework. An example path is shown in the right of Fig. 3. **Point transformer.** To navigate UAV through real-life environment, a transformation is needed to convert the path from path planner into real-life coordinates. To begin with, we initiate a rotation of the map, as shown in the right of Fig. 3. The rotation angle emerges from the cumulative summation of three lines' shifting angles (\\(\\theta_{1}\\), \\(\\theta_{2}\\), \\(\\theta_{3}\\) in Fig. 3 right), where each bears a weight that minimizes potential errors. 
### _Mission Planner_ The mission planner receives the occupancy map from the map constructor and plans an obstacle-free path from start to end. It includes two components: a _path planner_ and a _point transformer_. The complete algorithm of the mission planner is detailed in Appendix V-A. **Path planner.** The path planner is responsible for generating a collision-free path from start to end based on the static occupancy map; in our case, this map is the output of the map constructor. Since the generation of such a path does not depend on characteristics unique to parsimonious UAVs, any standard path planning algorithm should suffice. For wide adaptability, low execution time, and a relatively optimal path, we use the Rapidly Exploring Random Tree (RRT) [36] in this paper to showcase the feasibility of our proposed framework. An example path is shown in the right of Fig. 3. **Point transformer.** To navigate the UAV through the real-life environment, a transformation is needed to convert the path from the path planner into real-life coordinates. To begin with, we initiate a rotation of the map, as shown in the right of Fig. 3. The rotation angle emerges from the cumulative summation of three lines' shifting angles (\\(\\theta_{1}\\), \\(\\theta_{2}\\), \\(\\theta_{3}\\) in Fig. 3 right), where each bears a weight that minimizes potential errors. After rotation, every intermediate point along the trajectory is computed based on the ratio between distances in the occupancy grid map and their counterparts in the real environment. Four example points, UL, LL, UR and LR, are illustrated in Fig. 3; their corresponding points in the real-life environment are xMinG, xMaxG, yMinG and yMaxG in Fig. 2.
Fig. 2: Left: environment of the UAV at a particular position; Right: "raw" 2D-LiDAR scanning image of the left environment.
Fig. 3: Left: occupancy map constructed by Hector-SLAM. Right: "raw" 2D-LiDAR scanning image of the left environment.
Let \\(\\theta\\) denote the weighted sum of the individual-axis rotation angles, \\(r\\) denote the distance between the origin and point \\(p\\), and (\\(x_{pr},y_{pr}\\)) be the \\(x\\) and \\(y\\) of \\(p\\) after rotation; then the real-life environment coordinates (\\(x_{p}^{\\text{new}},y_{p}^{\\text{new}}\\)) are: \\[x_{p}^{\\text{new}}=(r\\cdot\\cos(\\theta)+x_{pr})\\cdot\\frac{\\text{xMaxG}-\\text{xMinG}}{\\sqrt{(x_{ur}-x_{ul})^{2}+(y_{ur}-y_{ul})^{2}}} \\tag{1}\\] \\[y_{p}^{\\text{new}}=(r\\cdot\\sin(\\theta)+y_{pr})\\cdot\\frac{\\text{yMaxG}-\\text{yMinG}}{\\sqrt{(x_{ur}-x_{lr})^{2}+(y_{ur}-y_{lr})^{2}}} \\tag{2}\\]
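To make Eqs. (1)-(2) concrete, here is a minimal sketch of the point transformation. The dictionary-based interface and variable names are our assumptions, with `corners` holding the UL/UR/LR map points and `bounds` the Gazebo-world extents.

```python
import numpy as np

def grid_to_world(x_pr, y_pr, r, theta, corners, bounds):
    """Rescale a rotated path point from occupancy-grid units to Gazebo-world
    coordinates, following Eqs. (1)-(2)."""
    (x_ul, y_ul) = corners["UL"]
    (x_ur, y_ur) = corners["UR"]
    (x_lr, y_lr) = corners["LR"]
    xMinG, xMaxG, yMinG, yMaxG = bounds
    # Ratio between real-world extent and grid-map edge length per axis.
    sx = (xMaxG - xMinG) / np.hypot(x_ur - x_ul, y_ur - y_ul)
    sy = (yMaxG - yMinG) / np.hypot(x_ur - x_lr, y_ur - y_lr)
    x_new = (r * np.cos(theta) + x_pr) * sx   # Eq. (1)
    y_new = (r * np.sin(theta) + y_pr) * sy   # Eq. (2)
    return x_new, y_new
```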
### _Online Re-planner_ As the multitude of dynamic obstacle scenarios makes it impractical to establish comprehensive avoidance rules, a learning-based planning algorithm is designed to perform autonomous obstacle avoidance in dynamic, unknown environments. More specifically, we propose a novel RL-based online re-planner combining Double Deep Q-networks (DDQN) [37] and the dueling architecture [38]. **Network structure.** Dueling Double Deep Q-networks (D3QN) improve upon Deep Q-networks (DQN) [39] and Double Deep Q-networks (DDQN) [37] by incorporating the dueling architecture [38]. More specifically, the Q-value estimation is split into two separate functions: a value function, \\(V(s)\\), estimating the reward collected from state \\(s\\), and an advantage function, \\(A(s,a)\\), estimating whether action \\(a\\) is better than the other actions at state \\(s\\). Both the value and advantage functions are constructed with a set of dense layers and are later combined to output Q-values for each action, with the combination operator shown in Eq. (3). \\[Q(s,a)=V(s)+\\left(A(s,a)-\\frac{1}{|A|}\\sum_{a^{\\prime}}A(s,a^{\\prime})\\right) \\tag{3}\\] **State design.** We integrate the orientation vector spanning from the current location to the target, along with real-time LiDAR data, into our state design. Specifically, the real-time LiDAR data, a 360-vector with one reading per degree, is partitioned into 8 sectors through thresholded min-pooling, where each sector corresponds to a specific direction, such as "forward-left" or "forward-right", as shown in the left of Fig. 4. More precisely, we have: \\[d_{i}=\\min\\left(all\\_dist\\in region_{i},det\\_range\\right),i\\in[0,7] \\tag{4}\\] where \\(det\\_range\\) denotes the threshold value and \\(all\\_dist\\) represents the distances of all points (in our case \\(\\frac{360}{8}=45\\) points) in \\(region_{i}\\). Let (\\(x_{c},y_{c},z_{c}\\)) and (\\(x_{t},y_{t},z_{t}\\)) denote the current and target positions; the direction vector is defined as: \\[(x_{d},y_{d},z_{d})=(x_{t},y_{t},z_{t})-(x_{c},y_{c},z_{c}) \\tag{5}\\] Finally, the current state is defined as: \\[state=[x_{d},y_{d},z_{d},dist_{0},dist_{1},\\ldots,dist_{6},dist_{7}] \\tag{6}\\] One of the biggest challenges in using 2D-LiDAR is that the data may be extremely noisy due to disturbances from UAV maneuvers. To address this challenge, we propose a novel data filtering mechanism, as illustrated in Algorithm 3 in Appendix V, to enhance the accuracy of the acquired data. The core idea for judging whether a reading is noisy is that, over all potential actions, the greatest conceivable variation in distance between two states should be \\(\\leq\\sqrt{2}<1.5\\). The rule of \\(\\sqrt{2}\\) comes from our definition of actions, which is illustrated in more detail later in this section. Let \\(x_{i}\\), \\(y_{i}\\) denote the coordinate differences between step \\(i-1\\) and step \\(i\\) in the \\(x\\)- and \\(y\\)-axes, respectively; we have \\(\\sqrt{\\max|x_{i}|^{2}+\\max|y_{i}|^{2}}\\leq\\sqrt{2}\\). On the other hand, if the disparity is larger than 1.5, it is deemed spurious noise and is handled more carefully, as shown in Algorithm 3. The heuristic behind Algorithm 3 is that there is a very small likelihood of continuously obtaining noisy data more than \\(det\\_range/2\\) times within the same region. Thus, we maintain an \\(index\\_list\\) to record the number of times noisy data occurred for each region, which dynamically decreases when noise-free data is received. In addition, after getting noisy data, we set the distance to \\((det\\_range/2)+2\\) or subtract 1 from it, depending on the corresponding count in \\(index\\_list\\).
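A minimal sketch of the state assembly of Eqs. (4)-(6) follows. The function name and the default `det_range` are assumptions for illustration, and the noise filtering of Algorithm 3 is assumed to have been applied to `lidar_ranges` beforehand.

```python
import numpy as np

def build_state(lidar_ranges, current, target, det_range=8.0):
    """Assemble the 11-dimensional state of Eq. (6) from 360 LiDAR readings
    (one per degree) and the current/target positions."""
    direction = np.asarray(target, float) - np.asarray(current, float)  # Eq. (5)
    sectors = np.asarray(lidar_ranges, float).reshape(8, 45)  # 8 regions x 45 beams
    dists = np.minimum(sectors.min(axis=1), det_range)        # Eq. (4): min-pooling
    return np.concatenate([direction, dists])                 # Eq. (6)
```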
**Action space.** As we constrain the UAV from moving vertically due to sensor limitations (RPLiDAR can only scan horizontally), the action space \\(A\\), as illustrated in Eq. (7) and Fig. 4 (left), contains only 8 actions. Specifically, we have \\[A=\\{[1,-1,0],[1,0,0],[1,1,0],[0,1,0],[-1,1,0],[-1,0,0],[-1,-1,0],[0,-1,0]\\} \\tag{7}\\]
Fig. 4: Left: state-action correspondence, where the agent can choose or exclude action [\\(1,-1,0\\)] based on the distance. Right: training environment of the model.
**Reward function design.** The temporal reward \\(r_{t}\\), which trains the model to learn optimal actions for reaching the target point, is illustrated in Eq. (8). More specifically, let \\(d_{current}\\) denote the distance between the current and target positions, and \\(d_{last}\\) the distance between the previous and target positions; we have: \\[r_{t}=\\begin{cases}3000,&d_{current}\\leq 3\\\\ -3000,&num\\_steps\\_taken\\geq max\\_num\\_steps\\\\ -50,&d_{current}>d_{last}\\\\ -4000,&collision\\end{cases} \\tag{8}\\] The final reward \\(r\\) for each chosen action is then simply: \\[r=r_{t}-\\frac{d_{current}^{2}}{100} \\tag{9}\\] The rationale behind this reward configuration is to make collision avoidance the paramount learning objective, while also discouraging the drone from continuously circling a point close to the target. **Check done.** Timely notification of episode completion is of immense significance in RL training. Since we incorporate real-time LiDAR data into the state representation, training must proceed in the Gazebo-ROS-PX4 simulator, which complicates the reset process after collision events. Upon the drone's collision with an obstacle, automatic disarming occurs, and manual restarts of ROS, Gazebo, and PX4 are required for re-arming, rendering continuous training infeasible. To expedite model convergence and streamline the intricate reset procedure, we devise the check-done function shown in Eq. (10), which, among other conditions, terminates an episode when the distance between the UAV and an obstacle is less than a predefined collision threshold \\(col\\_threshold\\). \\[done=\\begin{cases}True,&d_{current}\\leq 3\\\\ True,&\\left|x_{c}\\right|>\\left|limit\\_x\\right|\\ or\\ \\left|y_{c}\\right|>\\left|limit\\_y\\right|\\\\ True,&counter\\geq step\\_threshold\\\\ True,&\\exists i\\in[0,7]\\ s.t.\\ dist_{i}\\leq col\\_threshold\\\\ False,&else\\end{cases} \\tag{10}\\]
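A minimal sketch combining the reward of Eqs. (8)-(9) with the termination checks of Eq. (10). Since the cases of Eq. (8) are not mutually exclusive, the priority ordering below (collision first) is our assumption; Eq. (8) also does not specify a reward for the out-of-bounds termination of Eq. (10) or a zero default case, so both are assumptions here, as are the default thresholds.

```python
def reward_and_done(d_current, d_last, dists, pos, steps,
                    limit_x, limit_y, col_threshold=0.5, max_steps=50):
    """Return (reward, done) for one step. dists are the eight sector
    distances of Eq. (4); pos is the current (x, y) position."""
    collided = min(dists) <= col_threshold
    out_of_bounds = abs(pos[0]) > limit_x or abs(pos[1]) > limit_y
    if collided:
        r_t, done = -4000, True
    elif d_current <= 3:
        r_t, done = 3000, True              # target reached
    elif steps >= max_steps or out_of_bounds:
        r_t, done = -3000, True             # step budget exhausted or area left
    elif d_current > d_last:
        r_t, done = -50, False              # moved away from the target
    else:
        r_t, done = 0, False
    return r_t - d_current ** 2 / 100, done  # Eq. (9): dense distance penalty
```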
**Reset.** The reset operation is the guarantee of a well-trained model. During training, at the end of each episode, the drone is set to (\\(0,0,4.4\\)) in the Gazebo world. However, according to the settings of ROS, the position of the drone will not immediately be set to (\\(0,0,4.4\\)): it is either set to (\\(0,0,4.4\\)) directly after some time, depending on the distance between the drone's previous position and (\\(0,0,4.4\\)), or decreased from the previous position to (\\(0,0,4.4\\)) step by step. For example, if the drone was at position (\\(5,6,4.4\\)), after setting it to (\\(0,0,4.4\\)) in the Gazebo world, the position of the drone in the ROS topic might change as \\((5,6,4.4)\\rightarrow(4,6,4.4)\\rightarrow(3,6,4.4)\\rightarrow\\cdots\\rightarrow(0,0,4.4)\\). This setting raises a fatal problem: before the position of the drone in ROS becomes (\\(0,0,4.4\\)), the drone moves uncontrollably and has a high risk of colliding with obstacles. To solve this problem, we propose a novel reset algorithm, shown in Algorithm 4 in Appendix V, in which (\\(x_{c},y_{c},z_{c}\\)) denotes the current drone position in the ROS topic. The core idea of Algorithm 4 is to steer the drone into a specific range of space within which we can guarantee that no collision will happen. The parameters \\(\\text{a}_{thr}\\), \\(\\text{b}_{thr}\\), \\(\\text{offset}_{a}\\) and \\(\\text{offset}_{b}\\) are manually set to serve this purpose and should be modified when the training environment is different. **Training details.** To ensure the model learns a policy that is independent of absolute positions (specified as \\(x,y,z\\)), the target position for each episode is generated randomly within a predetermined range for \\(x\\) and \\(y\\), while \\(z\\) is kept constant. The complete training procedure, which summarizes the core algorithm of our proposed RELAX, is illustrated in Algorithm 1. Detailed hyperparameter values are shown in Table III in Appendix V.
```
Input: limits, start, max_vel, max_acc, max_jerk, det_range
Output: none
for number of episodes do
  (x_t, y_t, z_t) <- randomly generated target position
  score <- 0, counter <- 0, drone takes off and goes to start, last_req <- Time.now()
  initial state s_0 <- LiDAR data after Alg. 3 (LiDAR data filtering)
  while not done do
    if drone.armed and Time.now() - last_req > 6 then
      select an action a_t with the epsilon-greedy algorithm
      drone executes the action a_t, detect_flag <- True
      new state s_{t+1} <- LiDAR data after Alg. 3 (LiDAR data filtering)
      done <- Eq. (10) check done; reward r_t <- Eq. (9) reward function; score += r_t
      push (s_t, a_t, r_t, s_{t+1}, done) into the memory buffer
      if size(memory buffer) >= B then
        sample B transitions from the memory buffer
        every f_u steps update theta_t with theta_p, alpha_t with alpha_p, beta_t with beta_p
        do the forward operation of Eq. (3) for Q_policy and Q_target to get q_pred, q_next and q_eval
        q_target = r_t + gamma * q_next
        update the parameters theta_p, alpha_p, beta_p of Q_policy on loss(q_target, q_pred)
      end if
      s_t = s_{t+1}, counter += 1
    else
      maintain connection with the drone
    end if
  end while
  call Alg. 4 (Reset)
end for
```
**Algorithm 1** RELAX
To emphasize the novelty of our proposed 2D-LiDAR UAV system, we summarize the differences between the most up-to-date existing frameworks, to the best of our knowledge, and ours in Table I. RELAX presents several advantages over the others. Firstly, the mapping function, as illustrated in §III-A, enables the use of our system in a wide range of environments without the need to manually reconstruct the environments. Secondly, the ability to conduct real-time training in the Gazebo-ROS-PX4 simulator ensures the learning of dynamic obstacle avoidance. More specifically, the dynamic obstacle avoidance utilizing real-time LiDAR data, as illustrated in §III-C, improves significantly compared to the existing system [34], which utilizes only \\((x,y,z)\\) positions.

| Framework | Mapping | PP | Alg | DOA | TrInSim |
| --- | --- | --- | --- | --- | --- |
| Gabriel's [34] | - | RRT | DQN | - | - |
| **RELAX (Ours)** | **H-S** | **RRT** | **D3QN** | ✓ | ✓ |

TABLE I: Comparison between RELAX and other 2D-LiDAR UAV frameworks. PP means path planning algorithm; Alg means the RL algorithm; DOA means dynamic obstacle avoidance; TrInSim means real-time training in the Gazebo-ROS-PX4 simulator; and H-S means the Hector-SLAM algorithm.

## IV Experiment - A Case Study In this section, we demonstrate RELAX on a real-life challenge within the agricultural context, specifically catering to scenarios where farmers seek to do equipment checks during nocturnal hours or after extreme weather such as storms. **Hardware and software setup.** The experiment is conducted on a desktop with an Intel Core i5-13400 CPU, an Nvidia GeForce RTX 4070 Ti GPU and 64 GB of RAM. The operating system is Ubuntu 20.04. The simulator is executed on Gazebo 11, ROS Noetic and PX4 v1.12.3. **Experimental environment setup.** We first train our RL model in the environment shown in Fig. 4 (right) and fine-tune it in the environment shown in Fig. 5 (left). To examine the feasibility of our proposed framework, we established a test environment illustrated in Fig. 5 (right). The agricultural land is divided into smaller zones by movable iron bars and wires to facilitate diverse crop cultivation.
However, the dynamic nature of these iron bars, which are subject to seasonal rearrangement by farmers to accommodate varying crop types, presents noteworthy challenges, under which static path planning based on a pre-scanned map becomes impractical. The iron bars, which are not present during the map construction stage, are considered dynamic obstacles in our experiments. **Results.** To comprehensively check the generalizability of our trained model, we conduct 50 runs, each with around 20 iron bars distributed randomly in the testing environment, and report the average success rate. The results are shown in Table II, where the better variant of RELAX (with D3QN) achieves an average success rate of 90%, which is 8 times higher than the other 2D-LiDAR-based algorithm, Gabriel's algorithm [34], making RELAX a practically usable solution in such agricultural applications. Additionally, the performance of RELAX is on par with other state-of-the-art algorithms requiring 3D LiDAR or RGB-D cameras, such as Deep PANTHER [40] and FAST-LIO [41], showcasing RELAX's competitiveness while keeping the total cost much lower. As an example, results from six sample experiments are shown in Fig. 6. The oscillatory patterns observed in the movements of our parsimonious UAV in close proximity to obstacles, as depicted in the figures, distinctly illustrate its dynamic obstacle avoidance behavior. This behavior becomes particularly evident when the drone's distance from obstacles falls below the predefined threshold established during the training phase. Moreover, we frequently find that certain algorithms excel in specific areas. For instance, as demonstrated in Table II, Dijkstra's algorithm, despite achieving a lower success rate, boasts significant time efficiency. This variability in performance underscores the importance of allowing users to select algorithms tailored to their unique requirements, which highlights the value of the modular design of our framework: RELAX facilitates easy integration and experimentation with emerging RL algorithms, offering a versatile platform for future research endeavors. ## V Conclusion And Future Works In this paper, we introduce **RELAX**, an RL-based autonomous system for parsimonious UAVs that carry only a single 2D-LiDAR, enabling them to navigate successfully in unknown environments. Rigorous feasibility tests confirm its effectiveness, showcasing a remarkable success rate of 90% in diverse scenarios and outperforming existing algorithms by a significant margin. In addition, we demonstrate RELAX's great potential, as both RRT and D3QN can be replaced by more advanced algorithms and network structures to achieve more desirable performance. Despite this success, ensuring precise 2D-LiDAR detection requires conservative speed settings, which limits our system's versatility. To mitigate this, we are investigating the integration of multiple 2D-LiDARs collecting data from different angles. By fusing these data, we aim to counteract the influence of imperfect LiDAR readings, thus further broadening the application potential of RELAX.
| Algorithm | Time (SPP) | Time (ORP) | Success Rate |
| --- | --- | --- | --- |
| Genetic Alg [42] | 17.4 s | - | 12% |
| Dijkstra's Alg [43] | **4.8 s** | - | 8% |
| Gabriel's Alg [34] | 13.5 s | - | 10% |
| Deep PANTHER [40] | - | [0.01, 0.03] | **100%** |
| FAST-LIO [41] | - | **[0.003, 0.013]** | 98% |
| **RELAX (DQN)** | 13.5 s | [0.0003, 0.1] | 82% |
| **RELAX (D3QN)** | 13.5 s | [0.0003, 0.05] | 90% |

TABLE II: Performance comparison between several algorithms and ours. For SPP (Static Path Planning), we mean planning a path on the map without iron bars, which mainly illustrates the performance difference between RRT and the other algorithms. ORP (Online Re-Planning) denotes the time needed for online re-planning to avoid the dynamic obstacles based on real-time 2D-LiDAR data, which the Genetic Algorithm [42], Dijkstra's Algorithm [43] and Gabriel's Algorithm [34] are not able to handle. Deep PANTHER [40] is camera-based and FAST-LIO [41] is based on 3D-LiDAR. Since the per-step inference time of a trained RL model differs considerably depending on the state, we record the range of times needed for a single inference for illustration purposes. The success rate is the percentage of dynamic obstacles (iron bars) the drone avoided along the path it took from the starting point to the target; the average success rate over 50 tests for each algorithm is shown. The bold number in each column indicates the best performance for that criterion among all algorithms.

Fig. 5: Left: training environment used for fine-tuning after training in the environment shown in Fig. 4 (right). Right: a typical farmland environment, where delineated regions are labeled \\(a,b,c\\), etc., and are separated by movable iron bars and wires (shown as red dotted lines). The objective of the case study is to navigate a parsimonious UAV from the house to the tower for nocturnal inspections.
## Appendix ### _Mission Planner Algorithm_
```
Input: start, target, number of iterations, grid, step size, test range
Output: path between start and target in the real environment
Initialization:
  N_start <- treeNode(start); N_target <- treeNode(target)
  R_tree <- RRT algorithm(N_start, N_target, numOfIterations, grid, stepSize, testRange)
  upperLeftPoint, upperRightPoint, lowerRightPoint, lowerLeftPoint <- scanned occupancy grid map
  xMinG, xMaxG, yMinG, yMaxG <- Gazebo world environment
for i = 0 to numOfIterations do
  R_tree.resetNearestValues(); point <- R_tree.sampleAPoint()
  N_nearest <- R_tree.findNearestPoint()
  N_new <- R_tree.steerToPoint(N_nearest)
  flag <- check if there are obstacles between N_new and N_nearest
  if not reached target then
    add N_new to R_tree; if N_new is within the test range of the target, break
  end if
end for
R_tree.WayPoints <- R_tree.retraceRRTPath()
WaypointsTransformed <- Eq. (1), Eq. (2)
return WaypointsTransformed
```
**Algorithm 2** Mission Planner
### _Real-time 2D-LiDAR Filtering Algorithm_
```
Input: LiDAR data from the previous step (lidar_data_t) and the current step (lidar_data); a recording list (index_list)
Output: state
for i = 0 to len(lidar_data) - 1 do
  if lidar_data_t[i] - lidar_data[i] >= 1.5 then
    index_list[i] += 1
    d_r = floor(0.5 * det_range)
    if index_list[i] >= d_r then
      lidar_data[i] = det_range - d_r + 1
      index_list[i] -= (det_range - d_r - 1)
    else
      lidar_data[i] = lidar_data_t[i] - 1
    end if
  else
    if index_list[i] > 0 then
      index_list[i] -= 1
    end if
  end if
end for
state = lidar_data
return state
```
**Algorithm 3** Real-time 2D-LiDAR Data Filtering
### _Reset Algorithm_
```
Input: a_thr, b_thr, offset_a, offset_b
Output: none
while True do
  if distance between (x_c, y_c, z_c) and (0, 0, 4.4) <= 1 then
    break
  else
    if x_t != x_c or y_t != y_c then
      if |x_t| >= a_thr then
        if |x_t| >= b_thr then
          x_t <- |x_t| - offset_b
        end if
      else
        x_t <- x_c
      end if
      if |y_t| >= a_thr then
        if |y_t| >= b_thr then
          y_t <- |y_t| - offset_b
        else
          y_t <- |y_t| - offset_a
        end if
      else
        y_t <- y_c
      end if
      start <- [x_t, y_t, 4.4]
      let the drone move to start
    end if
  end if
end while
```
**Algorithm 4** Reset
### _Parameters used in Training of RL Agent_ As deep RL algorithms are sensitive to hyperparameters, we provide the hyperparameters used throughout our training in Table III.
| Parameter | Value |
| --- | --- |
| State dimensions \\(N_{dim}\\) | 11 |
| Action dimensions \\(A_{dim}\\) | 8 |
| Training episodes \\(N_{eps}\\) | 500 |
| Maximum steps for one episode \\(N_{step}\\) | 50 |
| Memory pool size \\(M\\) | \\(1\\times 10^{6}\\) |
| Batch size \\(B\\) | 96 |
| Target network parameter update frequency \\(f_{u}\\) | 1000 |
| Discount factor \\(\\gamma\\) | 0.99 |
| Learning rate \\(\\alpha_{l}\\) | \\(5\\times 10^{-4}\\) |
| \\(\\epsilon\\)-greedy possibility max \\(\\epsilon_{max}\\) | 1.0 |
| \\(\\epsilon\\)-greedy possibility min \\(\\epsilon_{min}\\) | 0.01 |
| \\(\\epsilon\\)-greedy decay factor \\(\\epsilon_{decay}\\) | \\(1\\times 10^{-4}\\) |

TABLE III: Parameters used in training of the obstacle handler

## References * A. Bachrach, S. Prentice, R. He, P. Henry, A. S. Huang, M. Krainin, D. Maturana, D. Fox, and N. Roy, "Estimation, planning, and mapping for autonomous flight using an RGB-D camera in GPS-denied environments," The International Journal of Robotics Research, 31(11), pp. 1320-1343, 2012. * L. M. Gonzalez-de Santos and H. Gonzalez-Jorge, "Lidar based detect and avoid system for UAV navigation in UAM corridors," Drones, 6(8), 185, 2022.
Unmanned Aerial Vehicles (UAVs) have become increasingly prominent in recent years, finding applications in surveillance, package delivery, and many other domains. Despite considerable efforts to develop algorithms that enable UAVs to navigate complex unknown environments autonomously, such systems often require expensive hardware and sensors, such as RGB-D cameras and 3D-LiDAR, leading to a persistent trade-off between performance and cost. To this end, we propose RELAX, a novel end-to-end autonomous framework that is exceptionally cost-efficient, requiring only a single 2D-LiDAR to enable UAVs to operate in unknown environments. Specifically, RELAX comprises three components: a pre-processing _map constructor_; an offline _mission planner_; and a reinforcement learning (RL)-based _online re-planner_. Experiments demonstrate that RELAX offers more robust dynamic navigation than existing algorithms, at a fraction of their cost. The code will be made public upon acceptance.
# The dynamical mass ejection from binary neutron star mergers: Radiation-hydrodynamics study in general relativity

Yuichiro Sekiguchi\\({}^{1}\\), Kenta Kiuchi\\({}^{1}\\), Koutarou Kyutoku\\({}^{2}\\), Masaru Shibata\\({}^{1}\\)

\\({}^{1}\\)Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto, 606-8502, Japan; \\({}^{2}\\)Department of Physics, University of Wisconsin-Milwaukee, P.O. Box 413, Milwaukee, Wisconsin 53201, USA

November 7, 2021

## I Introduction

The merger of binary neutron stars (BNS) is one of the most promising sources of gravitational waves for advanced LIGO [1], advanced VIRGO [2], and KAGRA [3], which will start operation in a few years. Recent statistical studies suggest that these gravitational-wave detectors will observe gravitational waves from merger events as frequently as \\(\\sim 1\\)-\\(100\\)/yr [4; 5]. The merger of BNS is also a promising candidate for the central engine of short-hard gamma-ray bursts. If gravitational waves are observed simultaneously with them, a long-standing puzzle on the central engine of short-hard gamma-ray bursts may be resolved. In addition to these aspects, BNS are attracting attention as a nucleosynthesis site of heavy elements by the \\(r\\)-process [6], which may proceed in the neutron-rich matter ejected during the merger. Recent observations of metal-poor stars [7] strongly suggest that there should exist 'main' \\(r\\)-stars affected by 'universal' \\(r\\)-process events in which the resulting abundance is close to the solar-abundance pattern for nuclei with atomic number \\(Z\\gtrsim 38\\) (\\(A\\gtrsim 90\\)). It has recently been revealed [8; 9] that the supernova explosion, which was previously considered the most promising candidate for the site of the \\(r\\)-process, may not be a viable origin in this regard, and BNS mergers are getting attention instead. Furthermore, a strong electromagnetic emission may accompany the radioactive decay of the \\(r\\)-process elements [10; 11; 12], and it could be an electromagnetic counterpart of gravitational waves from BNS mergers. An infrared transient event associated with GRB 130603B is the first candidate for such events [13]. These facts strongly encourage the gravitational-wave astronomy community to explore the \\(r\\)-process nucleosynthesis and the associated electromagnetic emission in the BNS merger. For a quantitative study of these topics, we have to clarify the merger dynamics, the subsequent mass ejection, and the physical conditions of the ejecta, which are necessary to study the nucleosynthesis, the subsequent decay of the heavy elements in the ejecta, and the electromagnetic emission from the ejecta. For this purpose, we have to perform BNS merger simulations taking into account both general relativistic gravity and detailed microphysical processes. For the former, recent numerical relativity simulations (e.g., [14]; see also [15] for simulations in approximate general relativistic gravity) have clarified that general relativistic gravity can be the key for the mass ejection: in general relativity, shock heating plays a prominent role in the merger process, and consequently, the ejecta that are dynamically expelled during the merger (dynamical ejecta) are composed not only of matter driven out by the tidal interactions but also of matter driven out by the thermal pressure, in contrast with the result of Newtonian simulations (e.g., [16]), for which the tidal component is the major one.
For the latter, we have recently developed a neutrino-radiation hydrodynamics code, and we can now perform simulations that both employ a wide variety of equations of state (EOS) for nuclear matter, in which finite-temperature effects are incorporated, and handle neutrino cooling and heating with reasonable sophistication. This is the first study of the merger dynamics in general relativity based on these modern aspects and taking into account the microphysics. In this paper, we report the latest results of our simulations for equal-mass BNS mergers of typical neutron-star mass (\\(1.35M_{\\odot}\\)) for three representative EOS, for which the radius of neutron stars differs appreciably. In this paper, we only consider the case of equal-mass binaries. The dependence on the mass ratio and the total mass will be studied in a future work. We will show that the physical properties of the dynamical ejecta, such as the mass and neutron fraction, depend strongly on the EOS. We find that for producing mildly neutron-rich dynamical ejecta of large mass with a broad range of the electron fraction, a relatively soft EOS that yields small-radius (\\(\\lesssim 12\\,\\)km) neutron stars is necessary. Because of such a broad distribution of the electron fraction, the universal [7] solar-abundance pattern of the \\(r\\)-process elements may be reproduced without need for other contributions [17].

## II Method, EOS, initial models, and grid setup

We solve Einstein's equation by the puncture-BSSN (Baumgarte-Shapiro-Shibata-Nakamura) formalism as before [18; 19]. The 4th-order finite-differencing scheme is applied to discretize the field equations. The radiation hydrodynamics equations are solved by a recently developed code which is updated from the previous version: in this new code, neutrino transport is computed in a leakage-based scheme [20] incorporating Thorne's moment formalism with a closure relation for a free-streaming component [21]. For neutrino heating, absorption on free nucleons is taken into account. We employ three EOS for nuclear matter derived recently by Hempel and his collaborators, which are referred to as SFHo [22], DD2 [23], and TM1 [24] in the following. The TM1 EOS, which is also known as the Shen EOS [25], is based on the relativistic mean field theory with a parameter set of Ref. [26] and has been used widely in both supernova and compact-binary merger simulations. The SFHo EOS is constructed so that the predicted neutron star radius matches recent neutron star observations by extending the nonlinear Walecka model [22]. The DD2 EOS is based on a relativistic mean field model with a density-dependent coupling [27]. Some characteristic properties of the EOS are listed in Table 1. For all of them, the predicted maximum mass for spherical neutron stars is larger than the largest well-measured mass of neutron stars, \\(\\approx 2M_{\\odot}\\)[28]. For these EOS, the radius of neutron stars with mass \\(1.35M_{\\odot}\\) is \\(R_{1.35}=11.9\\) km (SFHo), \\(13.2\\) km (DD2), and \\(14.5\\) km (TM1), respectively (see Table 2). We refer to an EOS with a small neutron star radius (\\(R_{1.35}\\leq 12\\) km) like SFHo as a _soft_ EOS and an EOS with a large radius (\\(R_{1.35}\\gtrsim 13\\) km) as a _stiff_ EOS. The stellar radius plays a key role in determining the merger remnant and the properties of the dynamical ejecta. In numerical simulations, we have to follow the ejecta with velocity \\(0.1\\)-\\(0.3c\\) (\\(c\\) is the speed of light), which expand to \\(>10^{3}\\) km in the simulation time.
To follow the ejecta motion as well as to resolve the neutron stars, we employ a fixed mesh-refinement algorithm. In this work, we prepare 9 refinement levels with the grid spacing varying as \\(\\Delta x_{l}=2^{9-l}\\Delta x_{9}\\) (\\(l=1,2,\\cdots,9\\)), and all the refinement levels have the same coordinate origin. Here, \\(\\Delta x_{l}\\) is the grid spacing for the \\(l\\)-th level in the Cartesian coordinates. For each level, the computational domain covers the region \\([-N\\Delta x_{l},N\\Delta x_{l}]\\) for the \\(x\\)- and \\(y\\)-directions, and \\([0,N\\Delta x_{l}]\\) for the \\(z\\)-direction (reflection symmetry with respect to \\(z=0\\) is imposed). In the highest-resolution run, we assign \\(N=285\\), \\(\\Delta x_{9}=150\\)-\\(200\\,\\)m, and utilize \\(\\approx 7,000\\) CPUs on the K computer. To check that the numerical results depend only weakly on the grid resolution, we also performed lower-resolution simulations. For this case, \\(N=160\\) and \\(\\Delta x_{9}=250\\)-\\(300\\,\\)m. As listed in Table 2, we found that results such as the total ejecta mass and the averaged value of \\(Y_{e}\\) depend very weakly on the grid resolution. Furthermore, to confirm the importance of the neutrino heating, we also performed simulations in which the neutrino absorption is switched off (denoted as 'no-heat' in Table 2) and compared the results for the first time. We consider equal-mass BNS with each mass \\(1.35M_{\\odot}\\). Observed neutron stars in BNS typically have a mass ratio close to unity and masses in the range \\(1.20\\)-\\(1.45M_{\\odot}\\)[29]. Thus, our choice reasonably reflects the observational facts. The initial orbital separation is chosen so that the orbital angular velocity, \\(\\Omega\\), satisfies \\(Gm_{0}\\Omega/c^{3}=0.028\\), where \\(m_{0}=2.7M_{\\odot}\\) is the sum of the two masses in isolation and \\(G\\) is the gravitational constant. Table 2 lists the key parameters of our models and simulation setup.

\\begin{table} \\begin{tabular}{l c c c c c} \\hline \\hline \\multicolumn{1}{c}{} & \\(R_{1.35}\\) (km) & \\(\\Delta x_{9}\\) (m) & \\(N\\) & \\(M_{\\rm ej}\\) (\\(M_{\\odot}\\)) & \\(\\langle Y_{e}\\rangle\\) \\\\ \\hline SFHo (high) & 11.9 & 150 & 285 & \\(1.1\\times 10^{-2}\\) & 0.31 \\\\ SFHo (low) & & 250 & 160 & \\(1.3\\times 10^{-2}\\) & 0.32 \\\\ SFHo (no-heat) & & 250 & 160 & \\(1.0\\times 10^{-2}\\) & 0.29 \\\\ \\hline DD2 (high) & 13.2 & 160 & 285 & \\(2.1\\times 10^{-3}\\) & 0.29 \\\\ DD2 (low) & & 270 & 160 & \\(1.9\\times 10^{-3}\\) & 0.29 \\\\ DD2 (no-heat) & & 270 & 160 & \\(0.9\\times 10^{-3}\\) & 0.26 \\\\ \\hline TM1 (high) & 14.5 & 200 & 285 & \\(1.2\\times 10^{-3}\\) & 0.26 \\\\ TM1 (low) & & 300 & 160 & \\(0.8\\times 10^{-3}\\) & 0.25 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: \\(R_{1.35}\\): the radius of spherical neutron stars of mass \\(1.35M_{\\odot}\\). \\(\\Delta x_{9}\\): the grid spacing in the finest refinement level. \\(N\\): the grid number in one positive direction for each refinement level. \\(M_{\\rm ej}\\) and \\(\\langle Y_{e}\\rangle\\) denote the ejecta mass and the averaged value of \\(Y_{e}\\) measured at the end of the simulations. Model names follow the EOS.
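The grid hierarchy above is easy to sanity-check numerically. The short Python sketch below, using the high-resolution SFHo values from Table 2 (\\(\\Delta x_{9}=150\\) m, \\(N=285\\)), prints the spacing and half-width of every refinement level; it is an illustration of the stated formula, not code from the simulation.

```python
# Fixed mesh refinement: 9 levels, spacing doubling per coarser level,
# level l covering [-N*dx_l, N*dx_l] in x and y (values from Table 2).
dx9 = 150.0  # finest grid spacing in metres (SFHo high-resolution run)
N = 285      # grid points in one positive direction per level

for l in range(1, 10):
    dx_l = 2.0 ** (9 - l) * dx9      # Delta x_l = 2^(9-l) * Delta x_9
    half_width_km = N * dx_l / 1.0e3
    print(f"level {l}: dx = {dx_l:7.1f} m, domain half-width = {half_width_km:8.1f} km")
```

The coarsest level then extends to roughly \\(10^{4}\\) km, comfortably containing ejecta that expand beyond \\(10^{3}\\) km.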
\\begin{table} \\begin{tabular}{l c c c c c} \\hline \\hline EOS & \\(n_{0}\\) (fm\\({}^{-3}\\)) & \\(E_{0}\\) (MeV) & \\(K\\) (MeV) & \\(S\\) (MeV) & \\(L\\) (MeV) \\\\ \\hline SFHo & 0.1583 & 16.19 & 245.5 & 31.57 & 47.10 \\\\ DD2 & 0.1491 & 16.02 & 242.7 & 31.67 & 55.03 \\\\ TM1 & 0.145 & 16.3 & 281 & 36.9 & 110.8 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: Characteristic properties of the EOS at the nuclear saturation density. \\(n_{0}\\): the nuclear saturation density. \\(E_{0}\\): the binding energy. \\(K\\): the incompressibility. \\(S\\): the symmetry energy. \\(L\\): the logarithmic derivative of the symmetry energy.

## III Results

For all the models, a massive neutron star (MNS) is formed after the onset of merger, as expected from our previous results [30]. The MNS are long-lived in the sense that their lifetime is much longer than their rotation period of \\(\\lesssim 1\\,\\mathrm{ms}\\). For SFHo, the MNS eventually collapses to a black hole (BH) in \\(\\sim 10\\,\\mathrm{ms}\\) because the maximum mass of spherical neutron stars is relatively small, \\(\\approx 2.0M_{\\odot}\\). The mass and spin parameter of the BH are \\(M_{\\rm BH}\\approx 2.6M_{\\odot}\\) and \\(a_{\\rm BH}\\approx 0.70\\), and a torus with mass \\(M_{\\rm torus}\\approx 0.05M_{\\odot}\\) is formed around it. Such a system may be a central engine of short-hard gamma-ray bursts. For the other two cases, the remnant MNS does not collapse to a BH within our simulation time of \\(\\sim 30\\)-\\(40\\,\\mathrm{ms}\\). Because the maximum mass of spherical neutron stars for DD2 and TM1 is \\(\\approx 2.4\\) and \\(2.2M_{\\odot}\\), respectively, the hot and rapidly rotating MNS with mass \\(\\sim 2.6M_{\\odot}\\) formed there will not collapse to a BH unless a substantial fraction of the angular momentum and thermal energy is dissipated by some transport process and by the neutrino emission, respectively (e.g., [19; 30]). Figure 1 plots the evolution of the rest mass \\(M_{\\rm ej}\\) and the characteristic velocity \\(V_{\\rm ej}\\) of the ejecta. Here, \\(t_{M-6}\\) denotes the time at which \\(M_{\\rm ej}\\) exceeds \\(10^{-6}M_{\\odot}\\) (hereafter we will use \\(t_{M-6}\\) as the time of the onset of merger). We specify matter as ejecta if the time component of the fluid four-velocity, \\(u_{t}\\), is smaller than \\(-1\\). Note that another ejecta condition [31], \\(hu_{t}<-1\\), where \\(h\\) is the specific enthalpy, which may be more appropriate for hot matter, gives a slightly larger ejecta mass. \\(V_{\\rm ej}\\) is defined by \\(\\sqrt{2E_{\\rm kin}/M_{\\rm ej}}\\), where \\(E_{\\rm kin}\\) is the kinetic energy of the ejecta. Figure 1 shows that the ejecta mass depends strongly on the EOS: for softer EOS (i.e., for smaller values of \\(R_{1.35}\\)), the ejecta mass is larger. Remarkably, with a decrease of \\(R_{1.35}\\) by \\(\\sim 3\\,\\mathrm{km}\\), the ejecta mass increases by more than one order of magnitude, and only for \\(R_{1.35}\\lesssim 12\\,\\mathrm{km}\\) does the ejecta mass exceed \\(0.01M_{\\odot}\\), as already indicated in [14; 15]. The averaged ejecta velocity is \\(\\sim 0.1\\)-\\(0.2c\\), as also found in [14; 15]. In the later phase, the total ejecta mass relaxes approximately to a constant, and the ejecta are in a free expansion phase for all the models. There are two major mass ejection mechanisms during the merger phase. One is tidal interaction and the other is shock heating. By the tidal interaction, the matter tends to be ejected near the orbital plane. On the other hand, by the shock heating, the matter is ejected in a quasi-spherical manner. Because both effects play a role, the ejecta usually have a spheroidal morphology. For small values of \\(R_{1.35}\\), the shock heating plays a stronger role and the ejecta in this case have a quasi-spherical morphology.
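The ejecta diagnostics defined above reduce to a few array operations. The Python sketch below illustrates the \\(u_{t}<-1\\) selection and \\(V_{\\rm ej}=\\sqrt{2E_{\\rm kin}/M_{\\rm ej}}\\) on placeholder arrays; a real analysis would use the simulation's density, four-velocity and proper-volume data, and a relativistic kinetic-energy expression rather than the simple Newtonian stand-in used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder fluid data on a uniform grid (stand-ins for simulation output).
rho = rng.random((64, 64, 64))                 # rest-mass density
u_t = -1.05 + 0.1 * rng.random((64, 64, 64))   # time component of the four-velocity
v = 0.2 * rng.random((64, 64, 64))             # velocity magnitude in units of c
dV = 150.0 ** 3                                # cell volume on the finest level

unbound = u_t < -1.0                           # geodesic unbound criterion

M_ej = np.sum(rho[unbound]) * dV               # ejecta rest mass
E_kin = 0.5 * np.sum(rho[unbound] * v[unbound] ** 2) * dV  # Newtonian stand-in
V_ej = np.sqrt(2.0 * E_kin / M_ej)             # characteristic ejecta velocity
print(f"unbound fraction = {unbound.mean():.2f}, V_ej = {V_ej:.3f} c")
```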
Figure 1: Mass (upper panel) and characteristic velocity (lower panel) of the ejecta as functions of time for SFHo (red solid), DD2 (blue dashed), and TM1 (green dotted-dashed). \\(t_{M-6}\\) approximately denotes the time at the onset of merger (see text).

Figure 2 plots the profiles of the electron fraction, \\(Y_{e}\\) (left half), and entropy per baryon, \\(s\\) (right half), of the ejecta on the \\(x\\)-\\(y\\) and \\(x\\)-\\(z\\) planes for DD2 (left panel) and SFHo (middle and right panels). For DD2, the ejecta are composed of (i) tidally ejected matter with low values of \\(Y_{e}\\) and \\(s\\) near the orbital plane and (ii) shock-heated matter with relatively high values of \\(Y_{e}\\). The shock-heated ejecta are less neutron-rich because the temperature gets much higher than \\(\\sim 1\\,\\mathrm{MeV}\\) as a result of the shock heating, producing copious \\(e^{-}e^{+}\\) pairs that activate \\(e^{-}\\) and \\(e^{+}\\) captures by protons and neutrons, respectively. As a result of the \\(e^{-}\\) and \\(e^{+}\\) captures, the luminosities of \\(\\nu_{e}\\) and \\(\\bar{\\nu}_{e}\\) become as high as \\(\\gtrsim 10^{53}\\,\\mathrm{ergs/s}\\) (see Fig. 3) as long as the remnant MNS is present. Because the original ejecta are neutron-rich, \\(e^{+}\\) capture dominates \\(e^{-}\\) capture, and hence, the luminosity of \\(\\bar{\\nu}_{e}\\) is higher than that of \\(\\nu_{e}\\)[19] and the ejecta become less neutron-rich. In addition to the tidal-driven and shock-heated components explained above, we found a third component in a later phase, namely a _neutrino-heated_ component with even higher values of \\(Y_{e}\\) and \\(s\\) in the region above the MNS pole (see the high-entropy region in the left panel (\\(x\\)-\\(z\\) plot) of Fig. 2). Furthermore, some fraction of the material obtains enough energy to become additional _neutrino-driven_ ejecta. The possible existence of such a component was recently reported in an MNS system [32; 33] and in a BH-torus system, which is expected to be formed after BNS mergers [34]. We confirmed the existence of the neutrino-driven component in self-consistent numerical-relativity simulations of the merger for the first time. For TM1, the results are basically similar to those for DD2 except that the tidally ejected component is more dominant and the \\(e^{+}\\) capture is less efficient. Also, the neutrino-driven wind appears to play a major role for the mass ejection (see the curve for \\(t-t_{M-6}>5\\) ms of Fig. 1) because the total ejecta mass for this EOS is rather small. Here, note that it is not easy to exclude the effect of the artificial atmosphere in grid-based simulations, in particular when the ejecta mass is low (\\(\\lesssim 10^{-3}M_{\\odot}\\)) as in the case of TM1. The contamination in mass would be \\(\\sim 10^{-4}M_{\\odot}\\) when the ejecta expand to \\(\\sim 2000\\) km in our setting of the atmosphere with density \\(\\sim 10^{3}\\) g/cm\\({}^{3}\\), while it would be of the order of a percent if the ejecta are as massive as \\(\\sim 10^{-2}M_{\\odot}\\). The contamination in \\(Y_{e}\\) would be at a similar level.
For this reason, in the following, we will basically consider DD2 as a representative stiff (or moderately stiff) EOS. For SFHo, shock waves are formed several times during the merger phase as the MNS oscillates with a high amplitude, and hence, a certain fraction of the matter originally ejected by the tidal interaction is subsequently heated up by shocks (\\(s\\) increases), resulting in an increase of the values of \\(Y_{e}\\) via weak interactions. On the other hand, other parts less influenced by the shock heating preserve the neutron-rich nature of the original neutron stars. As a result of these two facts, the ejecta can have higher values of \\(s\\) and \\(Y_{e}\\) than for DD2 and TM1 even in the orbital plane, with an appreciably inhomogeneous distribution of \\(Y_{e}\\) (see the middle panel of Fig. 2). Because a BH is formed at \\(\\sim 10\\) ms after the onset of merger for SFHo, the strong neutrino emission region is swallowed into the BH and the neutrino luminosity decreases to \\(\\lesssim 10^{53}\\) ergs/s. Hence, there is a less clear neutrino-driven ejecta component for this EOS (see the bottom panel of Fig. 3). The upper panel of Fig. 4 shows the time evolution of the averaged value of \\(Y_{e}\\) (\\(\\langle Y_{e}\\rangle\\)), from which the effect of the shock heating and the resulting positron capture on \\(Y_{e}\\) can be seen more clearly. The several distinct changes in \\(\\langle Y_{e}\\rangle\\) observed for SFHo within \\(\\lesssim 5\\) ms after the onset of merger reflect the strong \\(e^{+}\\) capture activated by the shock heating. During this phase, \\(\\langle Y_{e}\\rangle\\) for SFHo increases drastically to \\(\\approx 0.3\\). After this phase, on the other hand, \\(\\langle Y_{e}\\rangle\\) for SFHo is approximately constant because the \\(e^{-}\\) and \\(e^{+}\\) captures balance and because the neutrino luminosity decreases to \\(\\sim 10^{52}\\) ergs/s due to the BH formation, which is not sufficient to change \\(\\langle Y_{e}\\rangle\\) of the massive ejecta. Thus, for softer EOS like SFHo, \\(Y_{e}\\) is likely to be increased primarily by the \\(e^{+}\\) capture. On the other hand, \\(\\langle Y_{e}\\rangle\\) for DD2 and TM1 in the early stage is low, \\(Y_{e}\\lesssim 0.1\\)-0.2, while it increases in time. This is simply because the shock heating at the first contact is not strong enough to increase \\(\\langle Y_{e}\\rangle\\) significantly for these stiffer EOS; i.e., the original composition of the ejecta driven by the tidal torque, which is composed primarily of neutron-rich matter with low temperature, is temporarily preserved, as found in [15; 16]. In the later phase, however, the ejecta become less neutron-rich. This is partly due to the positron capture discussed above. In addition, the electron neutrinos emitted from the remnant MNS convert some fraction of the neutrons to protons via the electron neutrino capture (see below for a more detailed discussion). For stiffer EOS, the importance of the electron neutrino capture in increasing \\(Y_{e}\\) of the ejecta is enhanced because of their lower temperature and the maintained high neutrino luminosity from the long-lived MNS.

Figure 2: Contours of the electron fraction, \\(Y_{e}\\) (left half), and the entropy per baryon, \\(s\\) (right half), in the \\(x\\)-\\(y\\) (lower) and \\(x\\)-\\(z\\) (upper) planes. _Left panel_: DD2 at 8.5 ms after the merger. _Middle panel_: SFHo at 5.0 ms after the merger. _Right panel_: SFHo at 15.0 ms after the merger.
The lower panel of Fig. 4 plots the mass-distribution histograms of \\(Y_{e}\\) normalized by the total mass of the ejecta at \\(\\approx 25\\) ms after the onset of merger. For all of the models, \\(Y_{e}\\) is distributed in a broad range between \\(\\sim 0.05\\) and \\(0.45\\). This result is completely different from that found in the previous studies [15; 16], in which the distribution of \\(Y_{e}\\) is very narrow with a lower average value \\(\\lesssim 0.1\\). This disparity can be explained as follows. In the previous approximate general relativistic study [15], the weak interaction processes were not taken into account, and hence, the ejecta remain neutron-rich because there is no way to change \\(Y_{e}\\). The previous Newtonian studies [16] took into account the neutrino cooling (\\(e^{-}\\) and \\(e^{+}\\) captures). However, as we mentioned already, the effect of the shock heating is underestimated significantly in Newtonian gravity, and hence, the effect of the \\(e^{+}\\) capture would be much weaker than in our simulations due to the underestimated temperature. In addition, they did not take into account the neutrino heating (absorption), which is expected to play a role for stiffer EOS in which the positron capture is relatively less important due to the lower temperature. To see the effects of the neutrino heating more quantitatively, we performed simulations without neutrino heating ('no-heat') for SFHo and DD2. We found that for both EOS, the contribution of the neutrino-driven component to the ejecta mass is \\(\\sim 10^{-3}M_{\\odot}\\) at the end of the simulation (see Table 2), which is consistent with that found in [33]. The amount of the neutrino-driven ejecta is minor for SFHo but comparable to the amount of the dynamical ejecta for DD2. This result suggests that the neutrino heating plays a relatively more important role for stiffer EOS like DD2 and TM1, in which the amount of the dynamical ejecta is \\(\\sim 10^{-3}M_{\\odot}\\). The neutrino heating plays an important role in changing the chemical composition (\\(Y_{e}\\)) of the ejecta. As shown in Fig. 3, the luminosities of \\(\\nu_{e}\\) and \\(\\bar{\\nu}_{e}\\) are as high as \\(\\gtrsim 10^{53}\\) ergs/s. Due to the absorption of neutrinos with this high luminosity, the ejecta become more proton-rich because the electron neutrinos convert some fraction of the neutrons to protons via the reactions \\(n+\\nu_{e}\\leftrightarrow p+e^{-}\\). Note again that \\(\\nu_{e}\\) capture is more efficient than \\(\\bar{\\nu}_{e}\\) capture since the ejecta are neutron-rich. Figure 5 compares the time evolution of \\(\\langle Y_{e}\\rangle\\) (upper panel) and the mass-distribution histograms of \\(Y_{e}\\) at \\(\\approx 25\\) ms after the onset of merger (lower panel) between simulations with and without neutrino heating for SFHo and DD2. The results indicate that for SFHo, \\(\\langle Y_{e}\\rangle\\) is increased to \\(\\approx 0.29\\) by the positron capture, and the neutrino heating pushes it up further by \\(\\approx 0.02\\) at the end of the simulations. For DD2, the effect of the positron capture is weaker and the neutrino heating plays a relatively important role, increasing \\(\\langle Y_{e}\\rangle\\) by \\(\\approx 0.03\\). Such enhancements of \\(\\langle Y_{e}\\rangle\\) due to the neutrino heating would be important in considering the \\(r\\)-process nucleosynthesis [17]. The mass-distribution histograms also shift towards the higher \\(Y_{e}\\) side due to the neutrino heating.
However, the distributions still show a broad feature even without the neutrino heating. This suggests that the positron capture resulting from the strong shock heating due to general relativistic gravity is primarily responsible for making the \\(Y_{e}\\) distribution broad for DD2 and SFHo. For much stiffer EOS like TM1, the neutrino heating would play a relatively major role. Although our treatment of the neutrino transfer is an approximate one, our results indicate that the neutrino heating plays an important role in determining the _chemical_ properties of the ejecta.

Figure 3: Luminosity curves of \\(\\nu_{e}\\) (red solid), \\(\\bar{\\nu}_{e}\\) (blue dashed), and heavy (green dotted-dashed) neutrinos for TM1 (top), DD2 (middle), and SFHo (bottom).

Figure 4: _Upper panel_: The time evolution of the averaged value of \\(Y_{e}\\) for SFHo (red solid), DD2 (blue dashed), and TM1 (green dotted-dashed). _Lower panel_: The mass-distribution histograms of \\(Y_{e}\\) normalized by the total mass of ejecta measured at \\(\\approx 25\\) ms after the onset of merger for SFHo, DD2, and TM1.

## IV Summary and discussion

We have reported the first numerical results of radiation-hydrodynamics simulations in general relativity focusing on the properties of the dynamical ejecta of equal-mass BNS mergers with the typical mass of each neutron star (\\(1.35M_{\\odot}\\)). Three modern finite-temperature EOS were employed to clarify the dependence of the ejecta properties on the EOS. We found that the total mass of the ejecta is larger for softer EOS (giving smaller-radius neutron stars), and it exceeds \\(0.01M_{\\odot}\\) only for the case that \\(R_{1.35}\\lesssim 12\\,\\)km, as indicated in [14]. As shown in [10; 12], the electromagnetic luminosity of the ejecta by the radioactive decay of the \\(r\\)-process elements would depend sensitively on the ejecta mass, and hence, the predicted luminosity spans a wide range due to the uncertainty in the nuclear-matter EOS. We also found that the averaged value of \\(Y_{e}\\) of the ejecta is higher for a softer EOS like SFHo, in which \\(R_{1.35}\\) is smaller, reflecting the fact that the shock heating is more efficient. For all of the models, the value of \\(Y_{e}\\) for the ejecta has a broad distribution between \\(\\sim 0.1\\) and \\(0.45\\), in contrast with the previous studies [15; 16]. Both the strong shock associated with general relativistic gravity and the weak interactions play crucial roles here. Such a broad distribution may be well-suited for producing the universal [7] solar-abundance pattern of \\(r\\)-process elements, as illustrated in [17]. For the EOS other than SFHo, the _dynamical_ ejecta mass is of order \\(10^{-3}M_{\\odot}\\). In this case, a merger rate of \\(\\gtrsim 10^{-4}\\) yr\\({}^{-1}\\), rather higher than the present estimates of the Galactic rate (a few \\(10^{-5}\\) yr\\({}^{-1}\\)) [35], is necessary to explain the amount of heavy \\(r\\)-process elements [36; 37], if the dynamical ejecta from binary neutron star mergers are responsible for their production. In this regard, SFHo is an attractive EOS. We will study the consequences of our results for the synthesis of heavy elements in a forthcoming paper. If the EOS is not as soft as SFHo, some other contributions, such as mergers of black hole-neutron star binaries [38], disk winds from the accretion torus around a merger remnant black hole [34; 39], and magnetorotational supernova explosions [40], may be necessary.
In such cases, however, it is not clear whether the universality requirement can be achieved or not. In this work, we focused only on the equal-mass binary case and did not explore the dependence of the results on the binary parameters such as the total mass and the mass ratio. As reported in [14], the relative importance of the tidal interactions and the shock heating in the dynamical mass ejection depends on the binary parameters. It is interesting to explore the dependence of the results on the binary parameters for SFHo and the resulting abundance profile in future work, because the observed abundance patterns of the metal-poor, \\(r\\)-rich stars show some diversity in the lower mass-number region [7]. Also, we did not continue our simulations beyond 30-40 ms after the onset of merger. On longer time scales, magnetohydrodynamic processes [41], viscous heating, and nuclear recombination [42] could be important. Self-consistent studies of these effects in the BNS merger also have to be done in the future.

###### Acknowledgements.

We are grateful to M. Hempel for providing the EOS table data and S. Wanajo for discussions. Numerical computations were performed on the supercomputer K at AICS, XC30 at CfCA of NAOJ, FX10 at the Information Technology Center of Tokyo University, and SR16000 at YITP of Kyoto University. This work was supported by Grant-in-Aid for Scientific Research (24244028, 24740163, 25103510, 25105508), for Scientific Research on Innovative Area (24103001), and by the HPCI Strategic Program of Japanese MEXT/JSPS (Project No. hpci130025, 140211). Koutarou Kyutoku is supported by the JSPS Postdoctoral Fellowship for Research Abroad.

Figure 5: Same as Fig. 4 but for simulations with and without (denoted as no-heat) the neutrino heating for SFHo (red and magenta (no-heat)) and DD2 (blue and light blue (no-heat)).

## References

* (1) J. Abadie _et al._ (LIGO Scientific Collaboration), Nucl. Instrum. Methods Phys. Res., Sect. A **624**, 223 (2010). * (2) T. Accadia _et al._ (Virgo Collaboration), Classical Quantum Gravity **28**, 025005 (2011). * (3) K. Kuroda (LCGT Collaboration), Class. Quant. Grav. **27**, 084004 (2010). * (4) V. Kalogera _et al._ Phys. Rep. **442**, 75 (2007). * (5) J. Abadie _et al._ (The LIGO Scientific Collaboration and Virgo Collaboration), Classical Quantum Gravity **27**, 173001 (2010). * (6) J. M. Lattimer and D. N. Schramm, Astrophys. J. **192**, L145 (1974). * (7) C. Sneden, J. J. Cowan, and R. Gallino, Annu. Rev. Astron. Astrophys. **46**, 241 (2008); C. Siqueira Mello _et al._ Astron. Astrophys. **565**, A93 (2014). * (8) L. F. Roberts, S. Reddy, and G. Shen, Phys. Rev. C **86**, 065803 (2012). * (9) S. Wanajo, H.-T. Janka, and B. Muller, Astrophys. J. **726**, L15 (2011). * (10) L.-X. Li and B. Paczynski, Astrophys. J. **507**, L59 (1998). * (11) D. Kasen, N. R. Badnell, and J. Barnes, Astrophys. J. **774**, 25 (2013); J. Barnes and D. Kasen, Astrophys. J. **775**, 18 (2013). * (12) M. Tanaka and K. Hotokezaka, Astrophys. J. **775**, 113 (2013). * (13) N. R. Tanvir _et al._ Nature **500**, 547 (2013); E. Berger _et al._ Astrophys. J. **774**, L23 (2013). * (14) K. Hotokezaka _et al._ Phys. Rev. D **87**, 024001 (2013). * (15) R. Oechslin, H.-T. Janka, and A. Marek, Astron. Astrophys. **467**, 395 (2007); A. Bauswein, S. Goriely, H.-T. Janka, Astrophys. J. **773**, 78 (2013). * (16) O. Korobkin _et al._ Mon. Not. Royal Astron. Soc. **426**, 1940 (2012); S. Rosswog _et al._ Mon. Not. Royal Astron. Soc. **439**, 744 (2014). * (17) S. Wanajo, Y.
Sekiguchi _et al._ Astrophys. J. **789**, L39 (2014). * (18) M. Shibata and T. Nakamura, Phys. Rev. D **52**, 5428 (1995); T. W. Baumgarte and S. L. Shapiro, Phys. Rev. D **59**, 024007 (1998); M. Campanelli, C. O. Lousto, P. Marronetti, and Y. Zlochower, Phys. Rev. Lett. **96**, 111101 (2006); J. G. Baker, J. Centrella, D. I. Choi, M. Koppitz, and J. van Meter, Phys. Rev. Lett. **96**, 111102 (2006). * (19) Y. Sekiguchi, K. Kiuchi, K. Kyutoku, and M. Shibata, Phys. Rev. Lett. **107**, 051102; _ibid._ **107**, 211101 (2011). * (20) Y. Sekiguchi, Prog. Theor. Phys. **124**, 331 (2010); Y. Sekiguchi and M. Shibata, Astrophys. J. **737**, 6 (2011); Y. Sekiguchi _et al._ Prog. Theor. Exper. Phys. **01** A301 (2012). * (21) K. S. Thorne, Mon. Not. Royal Astron. Soc. **194**, 439 (1981); M. Shibata, K. Kiuchi, Y. Sekiguchi, and Y. Suwa, Prog. Theor. Phys. **125**, 1255 (2011). * (22) A. Steiner, M. Hempel, and T. Fischer, Astrophys. J. **774**, 17 (2013). * (23) S. Banik, M. Hempel, and D. Bandyopadhyay, Astrophys. J. Suppl. **214**, 22 (2014). * (24) M. Hempel _et al._ Astrophys. J. **748**, 70 (2012). * (25) H. Shen, H. Toki, K. Oyamatsu, and K. Sumiyoshi, Nucl. Phys. **A637**, 435 (1998). * (26) Y. Sugahara and H. Toki, Nucl. Phys. **A579**, 557 (1994). * (27) S. Typel, G. Ropke, T. Klahn, D. Blaschke, and H. H. Wolter, Phys. Rev. C **81**, 015803 (2010). * (28) P. Demorest _et al._ Nature **467**, 1081 (2010); J. Antoniadis _et al._ Science **340**, 6131 (2013). * (29) E.g., D. R. Lorimer, Living Rev. Relativity **11**, 8 (2008). * (30) K. Hotokezaka _et al._ Phys. Rev. D **88**, 044026 (2013). * (31) R. Narayan _et al._ Mon. Not. Royal Astron. Soc. **426**, 3241 (2012). * (32) L. Dessart _et al._ Astrophys. J. **690**, 1681 (2009). * (33) A. Perego _et al._ Mon. Not. Royal Astron. Soc. **443**, 3134 (2014). * (34) O. Just _et al._ arXiv:1406.2687. * (35) M. Dominik _et al._ Astrophys. J. **759**, 52 (2012); M. Dominik _et al._ Astrophys. J. **779**, 72 (2013). * (36) S. Goriely, Astron. Astrophys. **342**, 881 (1999). * (37) Y.-Z. Qian, Astrophys. J. **534**, L67 (2000). * (38) K. Kyutoku, K. Ioka, M. Shibata, Phys. Rev. D **88**, 041503 (2013); F. Foucart _et al._ Phys. Rev. D **90**, 024026 (2014). * (39) R. Surman, G. C. McLaughlin, M. Ruffert, H.-T. Janka, and W. R. Hix, Astrophys. J. **679**, L117 (2008). * (40) C. Winteler _et al._ Astrophys. J. **750**, L22 (2012). * (41) K. Kiuchi, K. Kyutoku, Y. Sekiguchi, M. Shibata, and T. Wada, Phys. Rev. D **90**, 041502 (2014). * (42) R. Fernandez and B. Metzger, Mon. Not. Royal Astron. Soc. **435**, 502 (2013).
We perform radiation-hydrodynamics simulations of binary neutron star mergers in numerical relativity on the Japanese "K" supercomputer, taking into account neutrino cooling and heating by an updated leakage-plus-transfer scheme for the first time. Neutron stars are modeled by three modern finite-temperature equations of state (EOS) developed by Hempel and his collaborators. We find that the properties of the dynamical ejecta of the merger, such as the total mass, average electron fraction, and thermal energy, depend strongly on the EOS. Only for a soft EOS (the so-called SFHo) does the ejecta mass exceed \\(0.01M_{\\odot}\\). In this case, the distribution of the electron fraction of the ejecta becomes broad due to the shock heating during the merger. These properties are well-suited for the production of the solar-like \\(r\\)-process abundance. For the other, stiff EOS (DD2 and TM1), for which a long-lived massive neutron star is formed after the merger, the ejecta mass is smaller than \\(0.01M_{\\odot}\\), although broad electron-fraction distributions are achieved by the positron capture and the neutrino heating.

pacs: 04.25.D-, 04.30.-w, 04.40.Dg
# Classification Algorithms for Big Data Analysis, a MapReduce Approach

V. A. Ayma, R. S. Ferreira, P. Happ, D. Oliveira, R. Feitosa, G. Costa, A. Plaza, P. Gamba

## 1 Introduction

The amount of data generated in all fields of science is increasing extremely fast [1][2][3]. MapReduce frameworks [3], such as Hadoop [1], are becoming a common and reliable choice to tackle the so-called _big data_ challenge. Due to its nature and complexity, the analysis of _big data_ raises new issues and challenges [11][1]. Although many machine learning approaches have been proposed so far to analyse small to medium size data sets, in a supervised or unsupervised way, only a few of them have been properly adapted to handle large data sets [2][3][4][5]. An overview of some data mining approaches for very large data sets can be found in [1]. There are two main steps in the supervised classification process. The first is the training step, where the classification model is built. The second is the classification itself, which applies the trained model to assign unknown data to one out of a given set of class labels. Although the training step is the one that draws more scientific attention [13][14][15][16], it usually relies on a small, representative data set that does not pose an issue for _big data_ applications. Thus, the _big data_ challenge affects mostly the classification step. This work introduces _ICP: Data Mining Package_, an open-source, MapReduce-based tool for the supervised classification of large amounts of data. The remainder of the paper is organized as follows: Section 2 presents a brief overview of Hadoop; the tool is presented in Section 3; a case study is presented in Section 4 and, finally, the conclusions are discussed in Section 5.

## 2 Hadoop Overview

Apache Hadoop is an open-source implementation of the MapReduce framework proposed by Google (Intel IT Center, 2012). It allows the distributed processing of datasets in the order of petabytes across hundreds or thousands of commodity computers connected to a network [1]. As presented in [3], it has been commonly used to run parallel applications for big data processing and analysis [2][17]. The next two sections present Hadoop's two main components: HDFS and MapReduce.

### Hadoop Distributed File System

The Hadoop Distributed File System (HDFS) is the storage component of Hadoop. It is designed to reliably store very large data sets on clusters, and to stream those data at high throughput to user applications [20]. HDFS stores file system metadata and application data separately. By default, it stores three independent copies of each data block (_replication_) to ensure reliability, availability and performance [18].

### Hadoop MapReduce

Hadoop MapReduce is a parallel programming technique for distributed processing, implemented on top of HDFS (Grolinger et al., 2014). The Hadoop MapReduce engine consists of a _JobTracker_ and several _TaskTrackers_. When a MapReduce job is executed, the JobTracker splits it into smaller tasks (map and reduce) handled by the TaskTrackers. In the Map step, the master node takes the input, divides it into smaller sub-problems and distributes them to worker nodes. Each worker node processes a sub-problem and writes its results as key-value pairs. In the Reduce step, the values with the same key are grouped and processed by the same machine to form the final output (Kiran et al., 2013).
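The map/reduce contract described above (emit key-value pairs, then group all values that share a key and reduce them) can be mimicked in a few lines of plain Python. The sketch below is purely illustrative, with no Hadoop involved, and uses the classic word-count job as its example:

```python
from itertools import groupby
from operator import itemgetter

def map_step(record):
    # Emit (key, value) pairs; here one (word, 1) pair per word.
    for word in record.split():
        yield (word, 1)

def reduce_step(key, values):
    # All values sharing a key are processed by the same reducer.
    return (key, sum(values))

records = ["big data on hadoop", "big clusters process big data"]
pairs = sorted(p for r in records for p in map_step(r))  # map + shuffle/sort
result = [reduce_step(key, [v for _, v in group])        # reduce
          for key, group in groupby(pairs, key=itemgetter(0))]
print(result)
```

Hadoop parallelizes exactly this pattern: map tasks run on the nodes holding the HDFS blocks, and the shuffle/sort stage routes each key to a single reducer.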
### Pig

_Pig_ is a framework for executing data flows in parallel on Hadoop. It has two components: a language and an engine. _Pig's_ language, called Pig Latin, makes it easier for non-technical users to interact with MapReduce by providing a high-level language that is also extensible (Apache PIG, 2014) (Olston et al., 2008). Pig Latin can be extended through the use of User Defined Functions (_UDFs_), which can be written in Java, Jython, Python, JavaScript, Ruby and Groovy. Through _UDFs_, users can create custom functions that meet their specific needs. Pig's engine takes a Pig Latin script and compiles it automatically into MapReduce jobs.

## 3 ICP: Data Mining Package

_InterIMAGE Cloud Platform (ICP)_ is an open-source, distributed framework for the automatic interpretation of remote sensing and medical image data built on top of Hadoop (Ferreira et al., 2014). _ICP: Data Mining Package_ is one of the tools within the scope of this framework; it is an open-source software tool implemented in Java and freely available at [http://www.lvc.ele.puc-rio.br/wp/?p=1831](http://www.lvc.ele.puc-rio.br/wp/?p=1831). Up to now, it embodies four classification algorithms taken from the _WEKA_ (Machine Learning Group at the University of Waikato, 2014) Java library: Naive Bayes, Decision Trees, Random Forest and Support Vector Machines (SVM). The parallel procedure works as follows. The data to be classified, henceforth called the _big data set_, is stored on HDFS. The training set is stored on an auxiliary storage system. When the execution starts, each HDFS block is processed by a different map task. The map task first reads the training data set and trains the classifier. After that, the trained classification model is used to classify the _big data set_. The multiple executions of the training step (one per map task) should not impact the computational performance substantially, because the amount of training data is small compared to the _big data set_, which accounts for most of the processing time. A brief example of a _Pig Latin_ script is presented in Table 1. In this script, an SVM classification is performed using given training and testing sets, and the result is saved to a defined output file.
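The per-block logic each map task executes (train on the small shared training set, then classify every record of its block) can be sketched in a few lines. The Python below is a conceptual stand-in, not the actual ICP/WEKA API: it uses a toy nearest-centroid classifier and random arrays in place of HDFS blocks.

```python
import numpy as np

def train(features, labels):
    # Stand-in for the WEKA training call: one centroid per class.
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def map_task(block, model):
    # Classify every sample of one HDFS block with the trained model.
    return [min(model, key=lambda c: np.linalg.norm(x - model[c])) for x in block]

# Small shared training set; the big data set arrives split into blocks.
X_train = np.array([[0.0, 0.0], [1.0, 1.0]])
y_train = np.array(["soil", "urban"])
blocks = [np.random.rand(4, 2) for _ in range(3)]  # stand-ins for HDFS blocks

model = train(X_train, y_train)  # in ICP this step repeats inside every map task
labels = [map_task(block, model) for block in blocks]
print(labels)
```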
## 4 Experiments and Results

This section reports some experiments conducted with _ICP: Data Mining Package_. These experiments were carried out on the Mortar platform (Mortar Data, 2014) (Amazon Web Services, 2014), a cloud-computing, open-source framework for organizing, developing, testing, and deploying big data processing applications based on Hadoop. This platform relies on Amazon Elastic MapReduce (Amazon EMR, 2014), which uses the Hadoop framework to distribute the data and processing across a resizable Amazon Elastic Compute Cloud cluster (Amazon EC2, 2014), and on the Amazon Simple Storage Service (Amazon S3, 2014). On Mortar one can work directly with _Pig_ on Hadoop and configure the number of cluster nodes on which the _Pig Latin_ script will be executed in a simple and flexible way. The next sections present the datasets used in this work.

### Urban Hyperspectral Data Set

The tests reported henceforward were performed on the Pavia hyperspectral data set (Hypercomp Research Group, 2014). It consists of a hyperspectral image collected by the ROSIS optical sensor over the University of Pavia, Italy. The image contains 610 × 340 pixels at 1.3 meters per pixel resolution over 103 spectral bands (from 0.43 to 0.86 μm). The data set has nine ground truth classes of interest, comprising urban, soil and vegetation classes. In our experiments 3921 pixels were selected for training and 42776 pixels for testing, as shown in Figure 1(b) and Figure 1(c) respectively; a Pavia hyperspectral false color composition is presented in Figure 1(a). The classes' distribution within each data set is presented in Table 2. The size of the Pavia hyperspectral ground truth is approximately 20 MB. Synthetic data sets were built from it, with 100, 200 and 500 times the original data set size, yielding data files of around 2 GB, 4 GB and 10 GB respectively. Only these three data sets were considered in the experiments, since the original data set is too small for Hadoop.

\\begin{table} \\begin{tabular}{l} \\hline _Pig Latin_ script that executes an SVM classification \\\\ \\hline REGISTER /pathTo/weka.jar; \\\\ REGISTER /pathTo/interimage-pig-datamining.jar; \\\\ DEFINE II\\_SVMClassifier \\\\ br.puc\\_rio.ele.lvc.interimage.datamining.udf.SVMClassifier \\\\ ('/pathTo/trainSet.csv', 'configurationOptions'); \\\\ dataTest = LOAD '/pathTo/testSet.csv' USING \\\\ org.apache.pig.piggybank.storage.CSVExcelStorage(',', \\\\ 'YES\\_MULTILINE', 'NOCHANGE', \\\\ 'SKIP\\_INPUT\\_HEADER') AS (Att\\_1:float, ..., Att\\_n:float); \\\\ classes = FOREACH dataTest GENERATE \\\\ II\\_SVMClassifier(Att\\_1, ..., Att\\_n) AS classOutcome; \\\\ STORE classes INTO '/pathTo/output.csv' USING \\\\ org.apache.pig.piggybank.storage.CSVExcelStorage(',', \\\\ 'YES\\_MULTILINE', 'NOCHANGE'); \\\\ \\hline \\end{tabular} \\end{table} Table 1: Pig Latin script that performs an SVM classification.

### Experimental Results

The SVM classification algorithm was used to evaluate the tool. WEKA uses John Platt's sequential minimal optimization algorithm for training the SVM (Platt, 1998). In the experiments, a multi-class pairwise (one versus one) SVM classification with a polynomial kernel was performed, with a complexity parameter C = 1.0 and exponent value \\(\\gamma\\) = 1.0, over a 5-fold cross-validation procedure. The SVM inputs were the first nine principal components computed from the 103 bands of the Pavia hyperspectral image. Figure 2 shows the outcome and the overall accuracy. The classification algorithm was applied on the 2 GB, 4 GB and 10 GB data sets, in a local mode configuration (used as a baseline) and in clusters with 10, 20 and 50 nodes on the Mortar platform. Each node in the cluster had 4 64-bit virtual cores, 15 GB of RAM and a high-performance network.

## 5 Conclusions

In this paper, _ICP: Data Mining Package_ is presented, a tool able to perform classification processes on huge amounts of data, exploiting the benefits of working on clusters with the Hadoop framework. An experimental analysis indicated that the speedup achieved by the tool increases with the amount of data being processed. Additionally, the results showed that increasing the number of nodes in the cluster does not necessarily provide a corresponding reduction of execution times. Thus, the proper cluster configuration depends not only on the operations to be executed but also on the amount of input data; there must be a balance between the amount of data to be processed and the number of nodes used to achieve the best performance.

## Acknowledgements

The authors acknowledge the support provided by CNPq (Conselho Nacional de Desenvolvimento e Pesquisas), CAPES (Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior) and FP7 (Seventh Framework Programme) in the scope of the TOLOMEO Project.

## References

* Amazon Web Services (2014) Amazon Web Services, 2014. _Amazon EC2_.
[http://aws.amazon.com/ec2/](http://aws.amazon.com/ec2/) (3 Nov. 2014). * Amazon Web Services (2014) Amazon Web Services, 2014. _Amazon EMR_. [http://aws.amazon.com/elasticmapreduce/](http://aws.amazon.com/elasticmapreduce/) (3 Nov. 2014). * Amazon Web Services (2014) Amazon Web Services, 2014. _AWS Case Study: Mortar Data_. [http://aws.amazon.com/solutions/case-studies/mortar-data/](http://aws.amazon.com/solutions/case-studies/mortar-data/) (3 Nov. 2014). * Apache Hadoop (2014) Apache Hadoop, 2014. Welcome to Apache(tm) Hadoop(r). [http://hadoop.apache.org](http://hadoop.apache.org) (3 Nov. 2014). * Apache PIG (2014) Apache PIG, 2014. _Welcome to Apache Pig_. [http://pig.apache.org/](http://pig.apache.org/) (7 Nov. 2014). * Bekkerman et al. (2012) Bekkerman, R., Bilenko, M., and Langford, J., 2012. _Scaling up Machine Learning: Parallel and Distributed Approaches_. Cambridge University Press. * Dai and Ji (2014) Dai, W., and Ji, W., 2014. A MapReduce Implementation of C4.5 Decision Tree Algorithm. _International Journal of Database Theory and Application_, 7(1), pp. 49-60. * Dean and Ghemawat (2004) Dean, J., and Ghemawat, S., 2004. MapReduce: Simplified Data Processing on Large Clusters. _Proceedings of the 6th Conference on Symposium on Operating Systems Design and Implementation_, 6, pp. 137-149. * Dhillon and Kaur (2014) Dhillon, S., and Kaur, K., 2014. Comparative Study of Classification Algorithms for Web Usage Mining. _International Journal of Advanced Research in Computer Science and Software Engineering_, 4(7), pp. 137-140. * Ferreira et al. (2014) Ferreira, R., Oliveira, D., Happ, P., Costa, G., Feitosa, R., and Bentes, C., 2014. InterIMAGE 2: The Architecture of an Open Source, High Performance Framework for Automatic, Knowledge-Based Image Interpretation. _International Geographic Object-Based Image Analysis Conference_. Thessaloniki, Greece. * Grolinger et al. (2014) Grolinger, K., Hayes, M., Higashino, W., L'Heureaux, A., Allison, D., and Capretz, M., 2014. Challenges for MapReduce in Big Data. _IEEE 10th World Congress on Services_, pp. 182-189. * Han et al. (2013) Han, J., Liu, Y., and Sun, X., 2013. A Scalable Random Forest Algorithm Based on MapReduce. _4th IEEE International Conference on Software Engineering and Service Science_, pp. 849-852. * He et al. (2010) He, Q., Zhuang, F., Li, J., and Shi, Z., 2010. Parallel Implementation of Classification Algorithms Based on MapReduce. _International Conference on Rough Set and Knowledge Technology_, pp. 655-662. * Hypercomp Research Group (2014) Hypercomp Research Group., 2014. New Digital Repository for Remotely Sensed Hyperspectral Imagery with Unmixing Based Retrieval Functionality. [http://www.hypercomp.es/repository/](http://www.hypercomp.es/repository/) (3 Nov. 2014) * Intel IT Center (2012) Intel IT Center., 2012. _Apache Hadoop Community Spotlight: MapReduce_[http://www.intel.com/content/www/us/en/big-data/hadoop-spotlight-apache-mapreduce-paper.html](http://www.intel.com/content/www/us/en/big-data/hadoop-spotlight-apache-mapreduce-paper.html) (3 Nov. 2014). * Kiran et al. (2013) Kiran, M., Kumar, A., Mukherjee, S., and Prakash, R., 2013. Verification and Validation of MapReduce Program Model for Parallel Support Vector Machine. _International Journal of Computer Science Issues_, 10(3), pp. 317-325. * Kishor (2013) Kishor, D., 2013. Big Data: The New Challenges in Data Mining. _International Journal of Innovative Research in Computer Science & Technology_, 1(2), pp. 39-42. * Li et al. 
(2014) Li, J., Xu, Z., Jiang, Y., and Zhang, R., 2014. The Overview of Big Data Storage and Management. _International Conference on Cognitive Informatics and Cognitive Computing_, pp. 510-513. * Liu et al. (2013) Liu, B., Blasch, E., Chen, Y., Shen, D., and Chen, G., 2013. Scalable Sentiment Classification for Big Data Analysis Using Naive Bayes Classifier. _IEEE International Conference on Big Data_, pp. 99-104. * Machine Learning Group at the University of Waikato (2014) Machine Learning Group at the University of Waikato, 2014. _Weka 3: Data Mining Software in Java_. [http://www.cs.waikato.ac.nz/ml/weka/index.html](http://www.cs.waikato.ac.nz/ml/weka/index.html) (3 Nov. 2014) * Mortar Data (2014) Mortar Data, 2014. _Mortar_. [https://www.mortardata.com/](https://www.mortardata.com/) (3 Nov. 2014). * Nandakumar and Yambem (2014) Nandakumar, A., and Yambem, N., 2014. A Survey on Data Mining Algorithms on Apache Hadoop Platform. _International Journal of Emerging Technology and Advanced Engineering_, 4(1), pp. 563-565. * Olston et al. (2008) Olston, C., Reed, B., Srivastava, U., Kumar, R., and Tomkins, A., 2008. Pig Latin: A Not-So-Foreign Language for Data Processing. _Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data_, pp. 1099-1110. * Pakize and Gandomi (2014) Pakize, S., and Gandomi, A., 2014. Comparative Study of Classification Algorithms Based On MapReduce Model. _International Journal of Innovative Research in Advanced Engineering_, 1(7), pp. 215-254. * Platt (1998) Platt, J., 1998. Fast Training of Support Vector Machines using Sequential Minimal Optimization. _Advances in Kernel Methods_. MIT Press, pp. 185-208. * Sagiroglu and Sinanc (2013) Sagiroglu, S., and Sinanc, D., 2013. Big Data: A Review. _International Conference on Collaboration Technologies and Systems (CTS)_, pp. 42-47. * Shvachko et al. (2010) Shvachko, K., Kuang, H., Radia, S., and Chansler, R., 2010. The Hadoop Distributed File System. _IEEE 26th Symposium on Mass Storage Systems and Technologies_, pp. 1-10. * Suthaharan (2014) Suthaharan, S., 2014. Big Data Classification: Problems and Challenges in Network Intrusion Prediction with Machine Learning. _ACM SIGMETRICS Performance Evaluation Review_, 41(4), pp. 70-73. * A Survey. _International Journal of Computer Science and Network_, 2(3), pp. 37-41. * Zaslavsky et al. (2012) Zaslavsky, A., Perera, C., and Georgakopoulos, D., 2012. Sensing as a Service and Big Data. _Proceedings of the International Conference on Advances in Cloud Computing (ACC)_, pp. 21-29.
For many years, the scientific community has been concerned with how to increase the accuracy of different classification methods, and major achievements have been made so far. Besides this issue, the increasing amount of data generated every day by remote sensors raises more challenges to be overcome. In this work, a tool within the scope of the _InterIMAGE Cloud Platform (ICP)_, an open-source, distributed framework for automatic image interpretation, is presented. The tool, named _ICP: Data Mining Package_, is able to perform supervised classification procedures on huge amounts of data, usually referred to as _big data_, on a distributed infrastructure using Hadoop MapReduce. The tool has four classification algorithms implemented, taken from WEKA's machine learning library, namely: Decision Trees, Naive Bayes, Random Forest and Support Vector Machines (SVM). The results of an experimental analysis using an SVM classifier on data sets of different sizes for different cluster configurations demonstrate the potential of the tool, as well as the aspects that affect its performance.

Big Data, MapReduce Framework, Hadoop, Classification Algorithms, Cloud Computing
Provide a brief summary of the text.
230
arxiv-format/2108_04643v3.md
# \\(f\\)-mode oscillations of compact stars with realistic equations of state in dynamical spacetime

Swarnim Shashank, Fatemeh Hossein Nouri, Anshu Gupta

Inter-University Centre for Astronomy and Astrophysics, Post Bag 4, Ganeshkhind, Pune 411 007, India; Center for Field Theory and Particle Physics and Department of Physics, Fudan University, 200438 Shanghai, China; Center for Theoretical Physics, Polish Academy of Sciences, Al. Lotnikow 32/46, 02-668 Warsaw, Poland

## 1 Introduction

Asteroseismology, which enables us to understand the interior structure of neutron stars, is coming to the forefront, with the added tools of gravitational wave astronomy applied in detecting the gravitational wave signals from two binary neutron star mergers (GW170817 [1] & GW190425 [2]) and electromagnetic observations such as the NICER data. This assists in resolving degenerate parameters, puts stricter bounds on the internal composition of the star [3; 4; 5; 6; 7] and tests universal relations (UR). Parameters of neutron stars that can be inferred through multi-messenger astronomy include the mass, radius, spin and tidal deformability, based on the mode frequencies extracted from the signals. While the joint mass and radius estimates using NICER data have put limits on the possible equations of state (EoS) which describe the internal composition [3; 4; 5], incorporating tidal deformability information based on the GW170817 constraints further tightens the bounds on the permissible EoS [6; 8]. Some of the earlier works on gravitational asteroseismology include Refs. [9; 10; 11; 12] (see Ref. [13] for the pulsations of relativistic stars and references therein). Through multiple studies it has been shown that the fundamental oscillation mode (\\(f\\)-mode) of the star has a strong resonance with the orbital frequencies close to the merger of a coalescing binary neutron star. Resonant tidal excitation of oscillation modes in merging binary neutron stars has been studied in Refs. [14; 15; 16]. Other oscillation modes of the star, like the \\(p\\)- and \\(g\\)-modes, may also become relevant throughout the inspiral, due to nonlinear coupling to the tide, as discussed in Refs. [17; 18; 19; 20]. Phenomenological models connecting tidal deformability with frequency have been discussed in Refs. [21; 22]. For the GW170817 data, the \\(f\\)-mode frequency has been estimated in Ref. [23] by using an \\(f\\)-mode tidal model (fmtidal) in the frequency domain, described in Ref. [24]. The contribution of spin to \\(f\\)-modes and dynamical tides during binary neutron star mergers has recently been studied in Refs. [25; 26]. Gravitational-wave asteroseismology with \\(f\\)-modes from neutron star binaries at the merger phase has also been carried out recently [27], using the NR BNS simulations of Refs. [28; 29; 30] and finding less than one percent difference between the BNS merger frequency (based on the merger peak amplitude) and the \\(f\\)-modes computed for isolated neutron stars. Frequency deviations in the universal relations of isolated neutron stars and postmerger remnants have been discussed in Ref. [31], whereas URs for the damping time have been derived in Ref. [32]. Universal relations among compactness (\\(\\mathcal{C}=M/R\\)), moment of inertia (\\(I\\)) and tidal deformability, and their validity and deviations, have been studied in Refs. [33; 34; 35; 36] for various choices of equations of state (see Ref. [37] for a review).
Series of studies have been performed computing the oscillation modes of neutron stars using linear perturbation theory [38; 39; 35]. These studies are based on the standard approach laid out by Thorne and Campolattaro (1967) [40] to solve the stellar perturbation equations in curved spacetime by applying the appropriate boundary conditions. The \\(f\\)-mode oscillations have also been studied using numerical simulations: employing the Cowling approximation, i.e., by evolving the hydrodynamic equations in a fixed general relativistic background spacetime for a single perturbed neutron star [41; 42; 43; 44; 35; 45]; in full GR to study the bar-mode instability [46]; as well as under the conformal flatness condition (CFC) [47; 48; 49], where the 3-metric is assumed to be conformally flat and the spacetime dynamics is coupled with the fluid dynamics. Recently, Rosofsky et al. [50] carried out studies in fully dynamical spacetime and extracted the fundamental modes for polytropic equations of state. There have also been studies using piece-wise polytropic EoS to impose constraints on the neutron star structure [51; 8], or using parametrised EoS with continuous sound speed [52]. For asteroseismology studies in curved spacetime, it is important to utilize full general relativistic hydrodynamical simulations to study neutron stars' EoS. The current standard nuclear EoS used in numerical simulations (e.g., the tables at [https://stellarcollapse.org](https://stellarcollapse.org)) are in tabulated form. This means that the state of the fluid is identified by three quantities \\((\\rho,T,Y_{e})\\), where \\(\\rho\\) is the baryonic density, \\(T\\) is the temperature and \\(Y_{e}\\) is the electron fraction. Obviously, the accuracy of numerical simulations with this type of EoS is affected by the resolution of the three-dimensional \\((\\rho,T,Y_{e})\\) table. Moreover, the non-smoothness of various physical quantities in such EoS can be another source of numerical error for long-term evolutions. Therefore, it is crucial to perform numerical simulations with tabulated EoS to examine the accuracy of current general relativistic hydrodynamical codes for measuring oscillation mode frequencies, and to test their results against analytical results. In the current work, we present a few numerical tests to investigate the accuracy challenges mentioned above, and then we extract the \\(f\\)-mode frequencies by evolving a nonrotating neutron star in dynamical spacetime while considering different tabulated nuclear EoS. We also probe the possibility of using these single-star simulations to study the tidal effects expected during a binary inspiral. This is done by perturbing the initial density field of the star with the dominant quadrupolar term, \\(Y_{22}\\), in the spherical harmonics expansion of the tidal field. Single-star simulations are computationally less expensive to carry out, providing the possibility of reaching higher resolutions in the future, which may be important for studying higher order harmonics. We further study the validity of our results using the URs associated with tidal deformability and stellar parameters. This is a step towards extending our study to incorporate spin effects, which we shall report in follow-up work. Our work provides a pathway for future studies of tidal effects of rotating stars in an inspiral and also of post-merger remnants with differential rotation.
In Sec. 2 we briefly outline the mathematical and numerical framework for the general relativistic hydrodynamical system, the initial setup and the matter configuration as described by the set of equations of state under consideration. In Sec. 3 we discuss the accuracy challenges of mode frequency measurements for tabulated EoS, and we present the results of several numerical tests to demonstrate the accuracy of our measurements. We analyse our simulation data and describe our findings in Sec. 4: we compute the fundamental mode (\\(f\\)-mode) frequency and its relations as a function of compactness and mass, then compare our fits with some recently carried out works. Our findings match the described universal relations with a deviation of less than a few percent. Sec. 5 discusses our results and conclusions.

## 2 Basic Numerical Framework

We solve the Einstein equations using the Numerical Relativity methods of 3+1 decomposition of the spacetime,

\\[G_{\\mu\\nu}\\equiv R_{\\mu\\nu}-\\frac{1}{2}g_{\\mu\\nu}R=8\\pi T_{\\mu\\nu} \\tag{1}\\]

\\[ds^{2}=(-\\alpha^{2}+\\beta_{i}\\beta^{i})dt^{2}+2\\beta_{i}dx^{i}dt+\\gamma_{ij}dx^{i}dx^{j} \\tag{2}\\]

[53; 54; 55; 56]. Here, \\(\\alpha\\) is the lapse function, \\(\\beta^{i}\\) is the shift vector and \\(\\gamma_{ij}\\) is the spatial metric. The units used are \\(G=c=1\\) unless mentioned otherwise.

### Dynamical evolution

The spacetime evolution is achieved using the Baumgarte-Shapiro-Shibata-Nakamura-Oohara-Kojima (BSSNOK) formalism [57; 58; 59; 60], which is a conformal formulation of the ADM equations [61; 62]. The (1+log) and Gamma-driver gauge conditions are adopted for the evolution of the lapse and shift [53; 63]. We use the McLachlan code [64] for the evolution of the spacetime variables, which is publicly available in the EinsteinToolkit [65; 66; 67; 68] suite. Kreiss-Oliger dissipation [54; 53; 56] is added to the spacetime variables to remove high-frequency noise. To model neutron stars, a relativistic perfect fluid is assumed. The conservation equations for the energy-momentum tensor \\(T_{\\mu\\nu}\\) and the matter current density \\(J_{\\mu}\\) are solved numerically after being recast into a flux-conservative formulation [54; 69; 63]:

\\[\\nabla_{\\mu}J^{\\mu}=0,\\qquad\\nabla_{\\mu}T^{\\mu\\nu}=0. \\tag{3}\\]

The system of equations is completed with an equation of state of the type \\(p=p(\\rho,Y_{e},T)\\), which for our models is described in Sec. 2.3. (For more details readers can refer to Refs. [54; 69; 63; 56].) We carry out the hydrodynamics using the publicly available WhiskyTHC code [70; 71; 72], which works within the EinsteinToolkit framework and uses high-resolution shock-capturing methods. For the time integration, the method of lines is used with a fourth-order Runge-Kutta scheme. The fifth-order MP5 flux-reconstruction method is used along with the Harten-Lax-van Leer-Einfeldt (HLLE) Riemann solver.

### Initial data

We use a perturbed Tolman-Oppenheimer-Volkoff (TOV) star for the initial data. The initial data is generated using the PizzaTOV thorn for both polytropic and tabulated equations of state. A perturbation is added to the density as

\\[\\delta\\rho=\\mathcal{A}\\rho(r/R)Y_{22}, \\tag{4}\\]

where \\(\\delta\\rho\\) is the perturbed density, \\(\\rho\\) is the density, \\(\\mathcal{A}\\) is the perturbation amplitude, which we set to \\(0.01\\) to introduce a small perturbation, \\(r\\) is the radial distance from the centre of the star and \\(R\\) is the radius of the star.
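For concreteness, the seeding of Eq. (4) can be sketched in a few lines of Python. This is an illustrative re-implementation, not the PizzaTOV code: the function name is ours, and taking the real part of \\(Y_{22}\\) (via SciPy's `sph_harm`) is an assumption.

```python
import numpy as np
from scipy.special import sph_harm

def seed_perturbation(rho, r, theta, phi, R, amp=0.01):
    """Apply Eq. (4): rho -> rho * (1 + A * (r/R) * Re[Y_22])."""
    # SciPy's convention is sph_harm(m, l, azimuthal, polar),
    # so the azimuthal angle phi is passed before the polar angle theta.
    y22 = np.real(sph_harm(2, 2, phi, theta))
    return rho * (1.0 + amp * (r / R) * y22)
```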
We use the \\((2,2)\\) eigenfunction since it is expected to be similar to the tidal interactions of binary neutron stars [50; 23]. For all the models we choose the artificial background atmosphere density as \\(\\rho_{atm}=10^{-14}\\ M^{-2}\\approx 6.17\\times 10^{3}\\ g\\ cm^{-3}\\). The simulations are performed at a minimum grid spacing of \\(0.105M\\approx 155\\ m\\). For the tabulated EoS \\(1.4M_{\\odot}\\) cases we also perform simulations at \\(0.07M\\), and for the polytrope at \\(0.06M\\). The convergence test for the DD2 equation of state at three different resolutions is given in Sec. 3.2.

### Equations of State

We use several finite-temperature, composition-dependent, nuclear-theory-based equations of state (Fig. 1). Three of them are based on relativistic mean field (RMF) models. These equations of state are publicly available in tabulated form at [https://stellarcollapse.org](https://stellarcollapse.org). The resolution of the tables used is around 250-300 points.

1. LS220 [73] (Lattimer & Swesty EoS with incompressibility K = 220 MeV): contains neutrons, protons, alpha particles and heavy nuclei. It is based on the single nucleus approximation for heavy nuclei. LS220 has been widely used in many supernova simulations.
2. DD2 [74]: contains neutrons, protons, light nuclei such as deuterons, helions, tritons and alpha particles, and heavy nuclei. DD2 is an RMF model with a density-dependent nucleon-meson coupling for treating high-density nuclear matter.
3. SFHo [75]: another RMF model; similar to DD2, it contains neutrons, protons, light nuclei such as deuterons, helions, tritons and alpha particles, and heavy nuclei. However, the RMF parameters are tuned to fit NS mass-radius observations.
4. BHB [76]: another RMF model similar to DD2 and SFHo with the same particle composition, but BHB additionally includes \\(\\Lambda\\) hyperons and hyperon-hyperon interactions mediated by \\(\\phi\\) mesons.
5. SLy [77]: contains only protons, neutrons and electrons. It is developed from a refined Skyrme-like effective potential, originating from the shell-model description of nuclei.
6. APR4: the most complete version of the four-realistic-EoS series developed by Akmal, Pandharipande and Ravenhall [78], named APR1 through APR4, obtained from potentials resulting from fits to nucleon-nucleon scattering. APR4 additionally includes the relativistic corrections and the three-nucleon interaction potential, which makes it more complete in comparison with the other APRs. This EoS is commonly used in neutron star simulations, as it is compatible with astronomical observations.
7. SRO-APR [79]: based on the APR potential model. However, similar to the Lattimer & Swesty EoS, it assumes a compressible liquid droplet model of nuclei. It contains neutrons, protons, a single type of heavy nucleus, plus alpha particles representing light nuclei.

For testing our methods, we also use a polytropic equation of state model

\\[p=\\kappa\\rho^{\\Gamma}. \\tag{5}\\]

We use \\(\\Gamma=2\\) and, choosing the values of \\(\\kappa\\) and the central density accordingly, create models for stars of masses \\(1.35M_{\\odot}\\) and \\(1.4M_{\\odot}\\). The black dashed line in Fig. 1 represents the M-R relation of a polytrope with \\(\\kappa=94.29\\); this parameter is used to create the model of the star with \\(1.35M_{\\odot}\\). The brown dotted line in Fig. 1 represents the M-R relation of a polytropic star with \\(\\kappa=100\\), used for a star with mass \\(1.4M_{\\odot}\\).
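To illustrate how such background models are constructed, the following is a minimal TOV integrator for the \\(\\Gamma=2\\) polytrope of Eq. (5) in geometric units (\\(G=c=1\\)). It is a sketch, not the PizzaTOV thorn: the step size, surface threshold and central density are illustrative choices (\\(\\rho_{c}\\approx 1.28\\times 10^{-3}\\) is the value commonly quoted for the \\(\\kappa=100\\), \\(\\Gamma=2\\), \\(1.4M_{\\odot}\\) model).

```python
import numpy as np

KAPPA, GAMMA = 100.0, 2.0  # polytropic constants of Eq. (5), G = c = 1

def energy_density(rho):
    """Total energy density e = rho * (1 + eps) for p = kappa * rho**Gamma."""
    eps = KAPPA * rho**(GAMMA - 1.0) / (GAMMA - 1.0)  # specific internal energy
    return rho * (1.0 + eps)

def tov_rhs(r, y):
    """Right-hand side of the TOV equations for y = (p, m)."""
    p, m = y
    if p <= 0.0:                       # outside the star: nothing to integrate
        return np.zeros(2)
    rho = (p / KAPPA) ** (1.0 / GAMMA)
    e = energy_density(rho)
    dpdr = -(e + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * e
    return np.array([dpdr, dmdr])

def solve_tov(rho_c, dr=1.0e-3):
    """RK4 integration outward from the centre until the pressure vanishes."""
    r, y = dr, np.array([KAPPA * rho_c**GAMMA, 0.0])
    while y[0] > 1.0e-12:
        k1 = tov_rhs(r, y)
        k2 = tov_rhs(r + 0.5 * dr, y + 0.5 * dr * k1)
        k3 = tov_rhs(r + 0.5 * dr, y + 0.5 * dr * k2)
        k4 = tov_rhs(r + dr, y + dr * k3)
        y = y + (dr / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        r += dr
    return r, y[1]                     # areal radius and gravitational mass

R_star, M_star = solve_tov(1.28e-3)    # expect M_star close to 1.4 (solar masses)
```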
These configurations are considered in addition to the other, realistic EoSs in order to compare with the literature and with earlier studies that considered polytropes. We evolve the polytropic models as well as the SLy and APR4 models using a hybrid equation of state approach [80; 81], viz. a gamma-law correction for finite temperature with \\(\\Gamma_{th}=2\\). The evolution equation of state becomes:

\\[P(\\rho,\\epsilon)=P_{\\rm cold}(\\rho)+\\rho[\\epsilon-\\epsilon_{\\rm cold}(\\rho)](\\Gamma_{\\rm th}-1) \\tag{6}\\]

We find the results obtained from the hybrid equation of state method for our \\(1.4M_{\\odot}\\) polytrope to be consistent with the results from Ref. [50].

### Extracting the \\(f\\)-mode frequencies

In order to compute the \\(f\\)-modes, we use the Fourier transform of the time series data of the gravitational waveform generated during the evolution of each simulation (except for the Cowling case, where the \\(f\\)-mode frequency is extracted from the Fourier transform of the central density). For our analysis we consider the waveform extracted at radius \\(r=10M\\) for the \\(l=2\\), \\(m=2\\) mode of the \\(\\Psi_{4}\\) data, which is calculated using the Newman-Penrose formalism [84; 85]. We choose this radius since it has the least amount of noise among all the considered extraction radii, while the computed fundamental mode stays the same, as can be seen in Fig. 3 (bottom right).

Figure 1: A comparison of all the equations of state used. The dotted line represents our polytropic model with \\(\\kappa=100\\) and the dashed line the polytropic model with \\(\\kappa=94.29\\) (see Eq. 5). The shaded region is the range of the observed mass of the pulsar \\(J0740+6620\\) [82]. The purple-blue region in the middle is the parametrised EoS \\(M-R\\) relation obtained from GW170817 [83].

## 3 Numerical accuracy tests

Measuring the frequency of the oscillation modes from a fully general relativistic hydrodynamic evolution can encounter several numerical challenges. For an accurate frequency measurement,
### Testing \\(f\\)-mode frequency in Cowling approximation As a part of the accuracy investigations, we test the hydrodynamic evolution of our numerical setup verses the analytical results from the linear perturbation theory. We run a test simulation in the Cowling approximation i.e. with spacetime evolution turned-off. In the perturbation theory, the Cowling approximation is taken into account by neglecting the perturbation of the metric components. The mode frequencies are derived by solving a set of coupled ODEs for hydro perturbation equations by applying appropriate boundary conditions (see Ref. [87; 20] for more details). We test the accuracy of our code at our standard resolution for LS220 EoS case with mass of \\(1.4M_{\\odot}\\). Applying the fast Fourier transform on the density oscillations at the neutron star's centre to extract modes frequencies, we find our numerical results close enough to the \\(f\\)-mode and the first harmonic \\(H1\\) derived from the perturbation equations. For \\(f\\)-mode we get \\(4.059\\) kHz from simulation and \\(4.065\\) kHz from the perturbation equations and for \\(H1\\)\\(6.959\\) kHz from simulation and \\(7.043\\) kHz from perturbation equations. Fig. 2 illustrates the frequency spectrum from our numerical simulation compared with the results from the analytical solution of the perturbation equations. ### \\(f\\)-mode frequency at different resolutions The performance of WhiskyTHC code has been tested before [70]. Here we only test the accuracy of our results within the selected resolution for our simulations. In order to claim that the resolution is in the convergence regime, we perform a multiple resolutions test for a single case, i.e. DD2 EoS with \\(M=1.4M_{\\odot}\\). We choose our standard resolution with grid spacing equals to \\(0.105M\\) as the intermediate level, and \\(0.07M\\) and \\(0.14M\\) as the higher and lower resolutions respectively. The \\(\\Psi_{4}\\) data extracted at \\(10M\\) for this test. The results of these tests are presented in Fig. 3. The top-left panel shows the lapse function varying with time in the simulation. One can see that increasing the resolution result in the convergence toward one solution. On the top-right panel, we present \\(\\Psi_{4}\\) for different resolutions. This plot shows that increasing the resolution does not change the gravitational wave significantly, but it reduces the noise as expected. The bottom-left panel shows the FFT of \\(\\Psi_{4}\\). This figure shows that the changes in frequency are negligible by varying the resolution, which confirms the accuracy of our \\(f\\)-mode frequency measurements. These results indicate that the observed quantities converge to one solution as we move from low to high resolutions, and the value of \\(f\\)-mode frequencies do not change significantly across different resolutions. This convergence Figure 3: The central lapse for three different resolutions (top left); \\(\\Psi_{4}\\) plotted at three different resolution. The change in resolution does not change the \\(\\Psi_{4}\\) data (top right). The FFT of \\(\\Psi_{4}\\) at three different resolution shows the \\(f\\)-mode value does not change. The black dashed line represents the value of \\(f\\)-mode frequency (bottom left). The FFT of \\(\\Psi_{4}\\) at 5 different radii with a refinement layer between each (bottom right). Figure 2: Test simulation in Cowling approximation to check against analytical values (red lines) of \\(f\\)-mode frequencies. 
### \\(f\\)-mode frequency at different \\(\\Psi_{4}\\) extraction radii

As the last numerical test, we investigate the accuracy of the \\(\\Psi_{4}\\) functions extracted at different radii. For this test we choose our polytropic case with \\(\\Gamma=2\\), \\(\\kappa=100\\) and \\(M=1.4M_{\\odot}\\), with a minimum grid spacing equal to \\(0.06M\\). The \\(\\Psi_{4}\\) outputs are extracted at \\(r=10,40,70,100,130M\\) for the Fourier transform. In Fig. 3 (bottom right), we show that the \\(f\\)-mode frequency does not change with the extraction radius of \\(\\Psi_{4}\\) for the polytropic equation of state. We also notice that the \\(f\\)-mode is most prominent and has the least noise at \\(r=10M\\), and hence we chose this radius for the comparison with the other equations of state.

## 4 Analysis and Results

We evolve a set of non-rotating configurations in the mass range of \\(1.2-2.0M_{\\odot}\\) for each of the EoS described in Sec. 2.3 and test our results against the established results from the perturbative approach. The mass, radius and central density for these are listed in the first four columns of Tab. 2. We also notice that a change in resolution does not affect the extracted \\(f\\)-mode frequencies (Fig. 3). We run the simulations for \\(2100M\\approx 10.35\\) ms.

### Fundamental (\\(f\\))-modes

We present our models and the results of our simulations in Tab. 2. In Fig. 4 (top panels) we plot the \\(f\\)-mode frequency in terms of mass and compactness for the considered set of equations of state. We notice that the \\(f\\)-mode frequency is much smaller for stiff EoS such as DD2 and BHB. It increases for higher masses and softer EoS. Based on the frequency band of the observed gravitational signal and the inferred mass, it is possible to put bounds on the softness/stiffness of the EoS that characterises the interior of the neutron star. For example, if the detected \\(f\\)-mode frequency is below 1.8 kHz for a star having mass above \\(1.4M_{\\odot}\\), then all the considered soft EoS would be ruled out. On the other hand, a frequency above 1.8 kHz would permit only some stiff EoS if the object is more massive, i.e., \\(\\approx 1.8M_{\\odot}\\) or more, with compactness above 0.20. We see similar trends in terms of the effective compactness \\(\\eta\\) and the tidal deformability \\(\\lambda_{2}\\) (see bottom panels of Fig. 4). We also observe that while a few of the considered EoS show a linear trend, some others, such as BHB, LS220, SLy and SFHo, deviate and show a faster rise in \\(f\\)-mode values for larger parameter values (as can be seen in all four panels of Fig. 4). This does not seem to depend only on the softness or stiffness of the matter, but could be due to the finer micro-physics involved. To understand this better, in our follow-up studies we plan to consider a larger set of EoS, including ones having hyperons, strange quark matter, etc. In Fig. 5 we present the fits of our models with the relation given in Ref. [35]:

\\[f=a+b\\ \\sqrt{\\frac{M}{R^{3}}} \\tag{7}\\]

We find a good linear fit for our data across the EoS used in our study and list the values of \\(a\\) and \\(b\\) in Tab. 1.
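The linear fit of Eq. (7) amounts to ordinary least squares in the variable \\(\\sqrt{M/R^{3}}\\). A minimal sketch (the array names, and the units, kHz for \\(f\\) with \\(M\\) and \\(R\\) in geometric units as in Tab. 1, are our assumptions):

```python
import numpy as np

def fit_eq7(M, R, f_khz):
    """Least-squares fit of Eq. (7): f = a + b * sqrt(M / R**3)."""
    x = np.sqrt(M / R**3)
    b, a = np.polyfit(x, f_khz, 1)   # np.polyfit returns [slope, intercept]
    return a, b
```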
The LS220 EoS shows the steepest change (the red line), followed by BHB (the beige line) in Fig. 5. We compare the fit coefficients in Tab. 1 to Table II in Ref. [35] for the LS220 and APR4 equations of state, and find our results to be in agreement (within 10%).

Table 1: Fit coefficients of Eq. 7 for the various compact star models. Our results are in agreement with Ref. [35].

| EoS | \\(a\\) | \\(b\\) |
| --- | --- | --- |
| DD2 | 0.657 | 32.061 |
| BHB | 0.429 | 39.801 |
| LS220 | 0.286 | 44.653 |
| SFHo | 0.663 | 35.101 |
| SRO-APR | 0.881 | 29.958 |
| SLy | 0.557 | 36.544 |
| APR4 | 0.795 | 30.791 |

Figure 4: \\(f\\)-mode frequency for the different EoS, shown with respect to the mass \\(M\\) (top left); the compactness \\(\\mathcal{C}=M/R\\), \\(R\\) being the radius of the star (top right); the effective compactness \\(\\eta=\\sqrt{M^{3}/I}\\), where \\(I\\) is the moment of inertia (bottom left); and the tidal deformability \\(\\lambda_{2}\\) as in Eq. 9 (bottom right). The softer EoS have larger \\(f\\)-mode frequencies compared to the intermediate (LS220) and stiffer (DD2 and BHB) EoS.

Figure 5: Fit from Eq. 7. The \\(a\\) and \\(b\\) values are listed in the figure and in Tab. 1.

### Universal Relations

In order to study whether the EoS-specific trends that we notice above contribute to deviations from the established universal relations, we verify some of the URs. First, we compute the relation between the \\(f\\)-mode frequency and the tidal deformability as given in Refs. [23; 34],

\\[M\\omega=\\sum_{i}a_{i}\\,\\xi^{i}, \\tag{8}\\]

where the \\(a_{i}\\) are the numerical coefficients presented in Tab. 3, \\(M\\) is the mass of the star and \\(\\omega\\) is the angular \\(f\\)-mode frequency. Here \\(\\xi=\\log(\\lambda_{2})\\), where \\(\\lambda_{2}\\) is the dimensionless electric tidal deformability calculated from the tidal Love number \\(k_{2}\\) as [34]:

\\[\\lambda_{2}=\\frac{2}{3}\\frac{k_{2}}{(M/R)^{5}} \\tag{9}\\]

The tidal Love number \\(k_{2}\\) is calculated by solving the metric perturbation equation as defined in Ref. [88]. For this computation we integrate Eq. (15) of [88] for the metric perturbation function \\(H\\), from centre to surface, using a fourth-order Runge-Kutta method. We use the Runge-Kutta ODE solver with the adaptive step size routine from Numerical Recipes [89]. Finally, the tidal Love number \\(k_{2}\\) is computed from Eq. (23) of [88]. Figure 4 (bottom-right panel) shows the relation of the tidal deformability \\(\\lambda_{2}\\) with the \\(f\\)-mode. The comparison and deviation for the \\(f\\)-Love UR, Eq. 8, is carried out in two ways: first, using these equations we compute the coefficients for our data; second, we use the same coefficient values as given in Ref. [34] but with the \\(f\\), \\(\\lambda_{2}\\) and \\(\\eta\\) that we calculate for each of our simulations. The last three columns of Tab. 2 show the comparison and percent deviations with the \\(f\\)-Love URs. Table 3 lists the coefficients of Ref. [34] and the ones computed for our data. In Fig. 6 we compare our results with those of Ref. [34].
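In code, Eq. (9) and the polynomial of Eq. (8) are straightforward to evaluate. The sketch below assumes a natural logarithm for \\(\\xi\\) (the convention of Ref. [34] should be checked) and takes the \\(a_{i}\\) of Tab. 3, which are not reproduced here, as input:

```python
import numpy as np

def lambda2(k2, C):
    """Dimensionless tidal deformability from Eq. (9), with C = M/R."""
    return (2.0 / 3.0) * k2 / C**5

def f_love(lam2, a):
    """M * omega from the f-Love relation of Eq. (8): sum_i a_i * xi**i."""
    xi = np.log(lam2)                 # natural log assumed
    return sum(a_i * xi**i for i, a_i in enumerate(a))
```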
It has been observed that universal behaviour also exists between the \\(f\\)-mode and the effective compactness \\(\\eta\\) [90; 35]. We also test these URs, described by the model

\\[M\\omega=c_{1}+c_{2}\\ \\eta+c_{3}\\ \\eta^{2}, \\tag{10}\\]

where \\(\\eta=\\sqrt{M^{3}/I}\\) and \\(I\\) is the moment of inertia. We obtain the fit as \\(c_{1}=-0.00747\\), \\(c_{2}=0.1471\\) and \\(c_{3}=0.55328\\). We show this universality in Fig. 7. Our results agree well with the results of the earlier works [90; 35]. The fundamental (\\(f\\))-mode and the effective compactness \\(\\eta\\) that we compute for our configurations are also plotted in Fig. 4 (bottom left panel). The universal relation between the compactness and the tidal Love number is discussed in Appendix A.

### Calculation of damping times

We compute the fundamental frequency from the \\(\\Psi_{4}\\) data of our simulations (see the discussion at the beginning of Sec. 4) by evolving each initial configuration for about \\(2100M\\) (\\(\\approx 10\\) ms). This duration is insufficient for extracting reliable damping times, which would require longer simulations and higher resolution. Thus, we use the recently established relations, which employ the compactness and the effective compactness for calculating the damping times [32; 35]:

\\[\\frac{M}{\\tau_{1}}=0.112\\left(\\frac{M}{R}\\right)^{4}-0.53\\left(\\frac{M}{R}\\right)^{5}+0.628\\left(\\frac{M}{R}\\right)^{6}, \\tag{11}\\]

\\[\\frac{I^{2}}{M^{5}\\tau_{2}}=0.0068-0.025\\ \\eta^{2}, \\tag{12}\\]

where \\(\\tau_{1}\\) and \\(\\tau_{2}\\) are the damping times in terms of the compactness \\(\\mathcal{C}=\\frac{M}{R}\\) and the effective compactness \\(\\eta\\), respectively. Figure 8 shows the damping time vs frequency plot, where the damping time is taken to be the average of \\(\\tau_{1}\\) and \\(\\tau_{2}\\). We find that for the polytrope case one of the relations overestimates the damping time obtained from linear perturbation methods [91; 50] while the other underestimates it; taking the average of \\(\\tau_{1}\\) and \\(\\tau_{2}\\) gives a value close to the expected one for the polytrope, hence we report the average values. In a subsequent study, we intend to extract the damping times by evolving our systems for a much longer time and at a much higher resolution, as one needs to be sure that the damping obtained is not due to numerical errors.

Figure 6: The universality observed between the tidal deformability and the \\(f\\)-mode. Here, we compare our fit with that of Ref. [34].

Figure 7: The universality observed between \\(\\eta\\) and the \\(f\\)-mode. The black dashed line is the fit that we obtained for Eq. 10; the brown dashed line is the one presented in Ref. [35].

Figure 8: \\(f\\)-mode vs the damping time for all the models. The damping time here is calculated as \\((\\tau_{1}+\\tau_{2})/2\\) from Eq. 11 and Eq. 12.
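The damping-time estimates used for Fig. 8 follow directly from Eqs. (11) and (12); a minimal sketch in geometric units is:

```python
def damping_times(M, R, I):
    """tau_1, tau_2 and their average from Eqs. (11)-(12), geometric units."""
    C = M / R                          # compactness
    eta = (M**3 / I) ** 0.5            # effective compactness
    tau1 = M / (0.112 * C**4 - 0.53 * C**5 + 0.628 * C**6)
    tau2 = I**2 / (M**5 * (0.0068 - 0.025 * eta**2))
    return tau1, tau2, 0.5 * (tau1 + tau2)
```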
### Comparison with compact binary merger simulations

In the context of a binary system, the tidal force comes from a companion star in circular motion. The spherical harmonic expansion of the full tidal potential is given by Eq. (2.2) of Lai (1994) [14]:

\\[U=-GM\\sum_{l,m}W_{lm}\\frac{r^{l}}{A(t)^{l+1}}Y_{lm}(\\theta,\\phi)e^{-im\\Omega t}, \\tag{13}\\]

where \\(G\\) is the gravitational constant, \\(M\\) is the mass of the companion star, the coefficients \\(W_{lm}\\) depend on \\((l,m)\\) as in equation (2.3) of [14], \\(A(t)\\) is the binary separation and \\(\\Omega\\) is the orbital frequency. The initially seeded perturbation with the \\(Y_{22}\\) harmonic given by Eq. (4) corresponds to the leading quadrupole term of the spherical harmonic expansion of the tidal potential in Eq. (13). Therefore, the initial setup of our single neutron star simulation helps us to artificially mimic the perturbation caused by the tidal field of the companion star during the inspiral phase to an acceptable order of accuracy. As a comparison, we simulate a couple of configurations to compare the \\(f\\)-mode frequency with Ref. [26], where the \\(f\\)-mode frequency is extracted from numerical relativity simulations of binary systems. We obtain an excellent match with their results: compared to their black hole-neutron star binary simulation using a \\(1.35M_{\\odot}\\) polytrope, we find a deviation of \\(0.6\\%\\), and with their neutron star-neutron star binary simulation using the \\(1.35M_{\\odot}\\) SLy EoS we see a deviation of only \\(0.03\\%\\). We further note that even by interpolating the mass vs \\(f\\)-mode data from our simulations to get the \\(f\\)-mode frequency at \\(1.35M_{\\odot}\\), the value we obtain for the SLy EoS differs by only \\(0.9\\%\\) from the one reported in Ref. [26].

## 5 Conclusions

In this work, we evolve isolated non-spinning neutron stars in fully dynamical spacetime with nuclear EoS. We investigate the accuracy of our numerical methods by conducting several tests, including the convergence and Cowling evolutions described in Sec. 3. These tests confirm that our selected grid resolution is in the convergence regime, and their results match very well with the linear perturbation theory solution for the \\(f\\)-mode frequency in the Cowling approximation. For the main simulations, using full general relativistic hydrodynamic evolution, we consider a set of realistic EoS, as listed in Sec. 2.3, to describe the internal composition of the star. For each of the considered EoS, we evolve 5-6 configurations having masses in the range of \\(1.2-2.0M_{\\odot}\\). We compute the \\(f\\)-mode frequency for each case and its dependence on the equation of state. We also compute the tidal deformability for each case using the perturbative approach. Our analysis shows evidence of approximate universal relations between the \\(f\\)-mode and other neutron star parameters such as the tidal deformability and compactness. However, there is still a possibility to distinguish and constrain possible stiff or soft equations of state based on an observed gravitational wave signal, as the \\(f\\)-mode frequency differs by almost \\(0.5\\) kHz between a soft EoS (APR4) and a stiff EoS (DD2 or BHB) when the mass of the neutron star is \\(1.4M_{\\odot}\\). The difference is larger for the more massive cases; in fact, for \\(M\\geq 1.8M_{\\odot}\\), the frequency differs by \\(\\approx 300\\) Hz between DD2 (stiff) and LS220 (intermediate stiffness). The computed frequencies are also well within the bounds reported by Ref. [23] through Bayesian estimates. Further, the computed frequencies match very well with the studies performed by other groups on the fundamental modes during binary neutron star mergers [26], as well as the frequencies extracted through perturbative studies [35]. These findings indicate that single perturbed-star simulations can give results similar to those of a binary simulation, saving computational time and resources for measuring the \\(f\\)-mode frequency in the inspiral phase. We also observe similar tidal behaviour for the SRO-APR and APR4 EoS; the former is the most up-to-date EoS in the APR family, hence it could be a good choice for binary merger simulations in future studies.
Our \\(f\\)-mode study of a perturbed neutron star in this paper is limited in many ways. Our numerical scheme for the hydrodynamic evolution is limited to a second-order convergent finite volume method. In future studies, more advanced numerical schemes, such as the discontinuous Galerkin method, with higher accuracy and better efficiency, can be used to study the evolution of perturbed neutron stars [45; 92]. We considered only non-rotating stars, and our EoS selection includes only one EoS with exotic matter, i.e., BHB with hyperons. In follow-up studies, it is important to perform dynamical spacetime simulations of a perturbed rotating neutron star to investigate the relativistic corrections to the \\(f\\)-mode frequency shift due to spin effects [26]. Further, these studies can be enhanced by including a broad range of EoS with hyperons [93] and muons [94], and also by considering parametrized EoS, either piece-wise polytropic [51; 8] or with continuous sound speed [52], to impose constraints on the EoS, as in some recent studies. In addition, we intend to consider longer evolutions with higher resolution for better accuracy in the higher-mode measurements, as well as in the \\(f\\)-mode damping time measurements. Similar asteroseismological studies can also be extended to theories of gravity beyond general relativity. Many models for compact stars in different types of modified gravity theories exist (for example, see Refs. [95; 96; 97; 98]). Large-scale cosmological and gravitational wave observations may help probe these alternative gravity theories [99].

## Acknowledgments

The authors thank Sukanta Bose for frequent helpful discussions, scientific advice and comments over the entire course of this project. We thank D. Radice for help and support with the WhiskyTHC code. We are also grateful to Sukanta Bose and P. Pnigouras for their useful inputs on the manuscript. S.S. acknowledges L. Baiotti, I. Hawke and the International Centre for Theoretical Sciences (ICTS) for discussions during the program Gravitational Wave Astrophysics (Code: ICTS/Prog-gws2020/05). Part of this research is supported by the Navajbai Ratan Tata Trust and LIGO-India funds at IUCAA, India. F.H. acknowledges grant No. 2019/35/B/ST9/04000 from the Polish National Science Center, Poland. S.S. also acknowledges support from the China Scholarship Council (CSC), Grant No. 2020GXZ016646. The numerical simulations for this work were performed on the Pegasus cluster, which is a part of the high performance computing (HPC) facility at the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, India.

## Appendix A Compactness - Tidal deformability universal relations

For our compact star models, we also study another universal relation, in addition to the discussion in Sec. 4.2, between the compactness and the tidal Love number, using the initial data of our simulations. We also use this as a check of our initial data against already established results [33; 100]. The universal relation is given as

\\[\\mathcal{C}=\\sum_{i}b_{i}\\ (\\log(\\lambda_{2}))^{i} \\tag{A.1}\\]

The universal relation between the compactness and the tidal deformability provides a constraint on the radius of the star, as a less compact star will be deformed more by a tidal potential for a given mass [100], providing a relation between the radius and \\(\\lambda_{2}\\). We present our obtained fit for the \\(b_{i}\\) in Tab. 4. In Fig. 9, we compare our results to a recent study, Ref. [100].
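For reference, the quadratic \\(\\mathcal{C}\\)-Love fit can be evaluated directly. The sketch below uses our Maselli-style coefficients from Tab. 4 and assumes, as in Ref. [33], that the logarithm in Eq. A.1 is natural:

```python
import numpy as np

# Maselli-style C-Love coefficients fitted in this work (Tab. 4).
B = (3.770e-1, -4.137e-2, 1.149e-3)

def compactness_from_love(lam2):
    """Evaluate Eq. A.1: C = sum_i b_i * (log lambda_2)**i."""
    x = np.log(lam2)
    return sum(b * x**i for i, b in enumerate(B))
```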
Our results are consistent with the results of Ref. [33], as presented in Tab. 4.

Table 4: Fitting parameters \\(b_{i}\\) from Eq. A.1 for the \\(\\mathcal{C}\\)-Love universal relations, comparing this work with the coefficient values in Ref. [33] and Ref. [100]. The polytrope cases are excluded from the calculation of the \\(b_{i}\\).

| Fit | Maselli et al. [33] | This work | Godzieba et al. [100] | This work |
| --- | --- | --- | --- | --- |
| \\(b_{0}\\) | \\(3.71\\times 10^{-1}\\) | \\(3.770\\times 10^{-1}\\) | \\(3.388\\times 10^{-1}\\) | \\(7.951\\times 10^{-1}\\) |
| \\(b_{1}\\) | \\(-3.91\\times 10^{-2}\\) | \\(-4.137\\times 10^{-2}\\) | \\(-2.30\\times 10^{-2}\\) | \\(-5.839\\times 10^{-1}\\) |
| \\(b_{2}\\) | \\(1.056\\times 10^{-3}\\) | \\(1.149\\times 10^{-3}\\) | \\(-4.651\\times 10^{-4}\\) | \\(2.877\\times 10^{-1}\\) |
| \\(b_{3}\\) | - | - | \\(-2.636\\times 10^{-4}\\) | \\(-7.908\\times 10^{-2}\\) |
| \\(b_{4}\\) | - | - | \\(5.424\\times 10^{-5}\\) | \\(1.205\\times 10^{-2}\\) |
| \\(b_{5}\\) | - | - | \\(-3.188\\times 10^{-6}\\) | \\(-9.628\\times 10^{-4}\\) |
| \\(b_{6}\\) | - | - | \\(6.181\\times 10^{-8}\\) | \\(3.155\\times 10^{-5}\\) |

Figure 9: The universality observed between the tidal deformability and the compactness. Here, we compare our fit (black dashed line) with that of Ref. [100] (brown dashed line).

## References

* (1) B. P. Abbott, et al. (LIGO Scientific, Virgo), GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral, Phys. Rev. Lett. 119 (2017) 161101. doi:10.1103/PhysRevLett.119.161101. arXiv:1710.05832.
* (2) B. P. Abbott, et al. (LIGO Scientific, Virgo), GW190425: Observation of a Compact Binary Coalescence with Total Mass \\(\\sim 3.4M_{\\odot}\\), Astrophys. J. Lett. 892 (2020) L3. doi:10.3847/2041-8213/ab75f5. arXiv:2001.01761.
* (3) M. C. Miller, et al., PSR J0030+0451 Mass and Radius from \\(NICER\\) Data and Implications for the Properties of Neutron Star Matter, Astrophys. J. Lett. 887 (2019) L24. doi:10.3847/2041-8213/ab50c5. arXiv:1912.05705.
* (4) S. Bogdanov, et al., Constraining the Neutron Star Mass-Radius Relation and Dense Matter Equation of State with \\(NICER\\). I. The Millisecond Pulsar X-Ray Data Set, Astrophys. J. Lett. 887 (2019) L25. doi:10.3847/2041-8213/ab53eb. arXiv:1912.05706.
* (5) S. Bogdanov, et al., Constraining the Neutron Star Mass-Radius Relation and Dense Matter Equation of State with \\(NICER\\). II. Emission from Hot Spots on a Rapidly Rotating Neutron Star, Astrophys. J. Lett. 887 (2019) L26. doi:10.3847/2041-8213/ab5968. arXiv:1912.05707.
* (6) G. Raaijmakers, et al., Constraining the dense matter equation of state with joint analysis of NICER and LIGO/Virgo measurements, Astrophys. J. Lett. 893 (2020) L21. doi:10.3847/2041-8213/ab822f. arXiv:1912.11031.
* (7) J.-L. Jiang, S.-P. Tang, Y.-Z. Wang, Y.-Z. Fan, D.-M. Wei, PSR J0030+0451, GW170817 and the nuclear data: joint constraints on equation of state and bulk properties of neutron stars, Astrophys. J. 892 (2020) 1. doi:10.3847/1538-4357/ab77cf. arXiv:1912.07467.
* (8) M. C. Miller, C. Chirenti, F. K. Lamb, Constraining the equation of state of high-density cold matter using nuclear and astronomical measurements (2019). doi:10.3847/1538-4357/ab4ef9. arXiv:1904.08907.
* (9) N. Andersson, K. D. Kokkotas, Towards gravitational wave asteroseismology, Mon. Not. Roy. Astron. Soc.
299 (1998) 1059-1068. doi:10.1046/j.1365-8711.1998.01840.x. arXiv:gr-qc/9711088. * Allen et al. (1998) G. Allen, N. Andersson, K. D. Kokkotas, B. F. Schutz, Gravitational waves from pulsating stars: Evolving the perturbation equations for a relativistic star, Phys. Rev. D 58 (1998) 124012. doi:10.1103/PhysRevD.58.124012. arXiv:gr-qc/9704023. * Pons et al. (2002) J. A. Pons, E. Berti, L. Gualtieri, G. Miniutti, V. Ferrari, Gravitational signals emitted by a point mass orbiting a neutron star: Effects of stellar structure, Phys. Rev. D 65 (2002) 104021. doi:10.1103/PhysRevD.65.104021. arXiv:gr-qc/0111104. * Benhar and Ferrari (2004) O. Benhar, V. Ferrari, L. Gualtieri, Gravitational wave asteroseismology revisited, Phys. Rev. D 70 (2004) 124015. doi:10.1103/PhysRevD.70.124015. arXiv:astro-ph/0407529. * Kokkotas (1995) K. D. Kokkotas, Pulsating relativistic stars, in: Les Houches School of Physics: Astrophysical Sources of Gravitational Radiation, 1995, pp. 89-102. arXiv:gr-qc/9603024. * Lai (1994) D. Lai, Resonant oscillations and tidal heating in coalescing binary neutron stars, Mon. Not. Roy. Astron. Soc. 270 (1994) 611. doi:10.1093/mnras/270.3.611. arXiv:astro-ph/9404062. * Lai and Wu (2006) D. Lai, Y. Wu, Resonant Tidal Excitations of Inertial Modes in Coalescing Neutron Star Binaries, Phys. Rev. D 74 (2006) 024007. doi:10.1103/PhysRevD.74.024007. arXiv:astro-ph/0604163. * Xu and Lai (2017) W. Xu, D. Lai, Resonant Tidal Excitation of Oscillation Modes in Merging Binary Neutron Stars: Inertial-Gravity Modes, Phys. Rev. D 96 (2017) 083005. doi:10.1103/PhysRevD.96.083005. arXiv:1708.01839. * Weinberg et al. (2013) N. N. Weinberg, P. Arras, J. Burkart, An instability due to the nonlinear coupling of p-modes to g-modes: Implications for coalescing neutron star binaries, Astrophys. J. 769 (2013) 121. doi:10.1088/0004-637X/769/2/121. arXiv:1302.2292. * Weinberg (2016) N. N. Weinberg, Growth rate of the tidal p-mode g-mode instability in coalescing binary neutron stars, Astrophys. J. 819 (2016) 109. doi:10.3847/0004-637X/819/2/109. arXiv:1509.06975. * (19) Y. Zhou, F. Zhang, Equation of State Dependence of Nonlinear Mode-tide Coupling in Coalescing Binary Neutron Stars, Astrophys. J. 849 (2017) 114. doi:10.3847/1538-4357/aa906e. arXiv:1801.09675. * (20) F. H. Nouri, S. Bose, M. D. Duez, A. Das, Nonlinear mode-tide coupling in coalescing binary neutron stars with relativistic corrections (2021). arXiv:2107.13339. * (21) N. Andersson, P. Pnigouras, The phenomenology of dynamical neutron star tides, Mon. Not. Roy. Astron. Soc. 503 (2021) 533-539. doi:10.1093/mnras/stab371.arXiv:1905.00012. * (22) N. Andersson, P. Pnigouras, Exploring the effective tidal deformability of neutron stars, Phys. Rev. D 101 (2020) 083001. doi:10.1103/PhysRevD.101.083001. arXiv:1906.08982. * (23) G. Pratten, P. Schmidt, T. Hinderer, Gravitational-Wave Asteroseismology with Fundamental Modes from Compact Binary Inspirals, Nature Commun. 11 (2020) 2553. doi:10.1038/s41467-020-15984-5. arXiv:1905.00817. * (24) P. Schmidt, T. Hinderer, Frequency domain model of \\(f\\)-mode dynamic tides in gravitational waveforms from compact binary inspirals, Phys. Rev. D 100 (2019) 021501. doi:10.1103/PhysRevD.100.021501. arXiv:1905.00818. * (25) S. Ma, H. Yu, Y. Chen, Excitation of f-modes during mergers of spinning binary neutron star, Phys. Rev. D 101 (2020) 123020. doi:10.1103/PhysRevD.101.123020. arXiv:2003.02373. * (26) J. Steinhoff, T. Hinderer, T. Dietrich, F. 
Foucart, Spin effects on neutron star fundamental-mode dynamical tides: Phenomenology and comparison to numerical simulations, Phys. Rev. Res. 3 (2021) 033129. doi:10.1103/PhysRevResearch.3.033129. arXiv:2103.06100. * (27) H. H.-Y. Ng, P. C.-K. Cheong, L.-M. Lin, T. G. F. Li, Gravitational-wave Asteroseismology with f-modes from Neutron Star Binaries at the Merger Phase, Astrophys. J. 915 (2021) 108. doi:10.3847/1538-4357/ac0141. arXiv:2012.08263. * (28) L. Rezzolla, K. Takami, Gravitational-wave signal from binary neutron stars: a systematic analysis of the spectral properties, Phys. Rev. D 93 (2016) 124051. doi:10.1103/PhysRevD.93.124051. arXiv:1604.00246. * (29) T. Dietrich, M. Ujevic, W. Tichy, S. Bernuzzi, B. Bruegmann, Gravitational waves and mass ejecta from binary neutron star mergers: Effect of the mass-ratio, Phys. Rev. D 95 (2017) 024029. doi:10.1103/PhysRevD.95.024029. arXiv:1607.06636. * (30) T. Dietrich, S. Bernuzzi, M. Ujevic, W. Tichy, Gravitational waves and mass ejecta from binary neutron star mergers: Effect of the stars' rotation, Phys. Rev. D 95 (2017) 044045. doi:10.1103/PhysRevD.95.044045. arXiv:1611.07367. * (31) G. Lioutas, A. Bauswein, N. Stergioulas, Frequency deviations in universal relations of isolated neutron stars and postmerger remnants, Phys. Rev. D 104 (2021) 043011. doi:10.1103/PhysRevD.104.043011. arXiv:2102.12455. * (32) G. Lioutas, N. Stergioulas, Universal and approximate relations for the gravitational-wave damping timescale of \\(f\\)-modes in neutron stars, Gen. Rel. Grav. 50 (2018) 12. doi:10.1007/s10714-017-2331-7. arXiv:1709.10067. * (33) A. Maselli, V. Cardoso, V. Ferrari, L. Gualtieri, P. Pani, Equation-of-state-independent relations in neutron stars, Phys. Rev. D 88 (2013) 023007. doi:10.1103/PhysRevD.88.023007. arXiv:1304.2052. * (34) T. K. Chan, Y. H. Sham, P. T. Leung, L. M. Lin, Multipolar universal relations between f-mode frequency and tidal deformability of compact stars, Phys. Rev. D 90 (2014) 124023. doi:10.1103/PhysRevD.90.124023. arXiv:1408.3789. * (35) C. Chirenti, G. H. de Souza, W. Kastaun, Fundamental oscillation modes of neutron stars: validity of universal relations, Phys. Rev. D 91 (2015) 044034. doi:10.1103/PhysRevD.91.044034. arXiv:1501.02970. * (36) K. Yagi, N. Yunes, Approximate Universal Relations among Tidal Parameters for Neutron Star Binaries, Class. Quant. Grav. 34 (2017) 015006. doi:10.1088/1361-6382/34/1/015006. arXiv:1608.06187. * (37) K. Yagi, N. Yunes, Approximate Universal Relations for Neutron Stars and Quark Stars, Phys. Rept. 681 (2017) 1-72. doi:10.1016/j.physrep.2017.03.002. arXiv:1608.02582. * (38) L. Lindblom, S. L. Detweiler, The quadrupole oscillations of neutron stars, Astrophys. J. Suppl. 53 (1983) 73-92. doi:10.1086/190884. * (39) S. L. Detweiler, L. Lindblom, On the nonradial pulsations of general relativistic stellar models, Astrophys. J. 292 (1985) 12-15. doi:10.1086/163127. * (40) K. S. Thorne, A. Campolattaro, Non-Radial Pulsation of General-Relativistic Stellar Models. I. Analytic Analysis for L?= 2, Astrophys. J. 149 (1967) 591. doi:10.1086/149288. * (41) J. A. Font, N. Stergioulas, K. D. Kokkotas, Nonlinear hydrodynamical evolution of rotating relativistic stars: Numerical methods and code tests, Mon. Not. Roy. Astron. Soc. 313 (2000) 678. doi:10.1046/j.1365-8711.2000.03254.x. arXiv:gr-qc/9908010. * (42) J. A. Font, H. Dimmelmeier, A. Gupta, N. Stergioulas, Axisymmetric modes of rotating relativistic stars in the Cowling approximation, Mon. Not. Roy. Astron. Soc. 325 (2001) 1463. 
doi:10.1046/j.1365-8711.2001.04555.x. arXiv:astro-ph/0012477. * (43) M. Shibata, S. Karino, Numerical evolution of secular bar-mode instability induced by the gravitational radiation reaction in rapidly rotating neutron stars, Phys. Rev. D 70 (2004) 084022. doi:10.1103/PhysRevD.70.084022. arXiv:astro-ph/0408016. * (44) W. Kastaun, B. Willburger, K. D. Kokkotas, On the saturation amplitude of the f-mode instability, Phys. Rev. D 82 (2010) 104036. doi:10.1103/PhysRevD.82.104036. arXiv:1006.3885. * (45) F. Hebert, L. E. Kidder, S. A. Teukolsky, General-relativistic neutron star evolutions with the discontinuous Galerkin method, Phys. Rev. D 98 (2018) 044041. doi:10.1103/PhysRevD.98.044041. arXiv:1804.02003. * (46) R. De Pietri, A. Feo, L. Franci, F. Loffler, Neutron Star instabilities in full General Relativity using a \\(\\Gamma=2.75\\) ideal fluid, Phys. Rev. D 90 (2014) 024034. doi:10.1103/PhysRevD.90.024034. arXiv:1403.8066. * (47) H. Dimmelmeier, N. Stergioulas, J. A. Font, Non-linear axisymmetric pulsations of rotating relativistic stars in the conformal flatness approximation, Mon. Not. Roy. Astron. Soc. 368 (2006) 1609-1630. doi:10.1111/j.1365-2966.2006.10274.x. arXiv:astro-ph/0511394. * Bucciantini and Del Zanna (2011) N. Bucciantini, L. Del Zanna, GRMHD in axisymmetric dynamical spacetimes: the X-ECHO code, Astron. Astrophys. 528 (2011) A101. doi:10.1051/0004-6361/201015945. arXiv:1010.3532. * Pili et al. (2014) A. G. Pili, N. Bucciantini, L. Del Zanna, Axisymmetric equilibrium models for magnetized neutron stars in General Relativity under the Conformally Flat Condition, Mon. Not. Roy. Astron. Soc. 439 (2014) 3541-3563. doi:10.1093/mnras/stu215. arXiv:1401.4308. * Rosofsky et al. (2019) S. Rosofsky, R. Gold, C. Chirenti, E. A. Huerta, M. C. Miller, Probing neutron star structure via f-mode oscillations and damping in dynamical spacetime models, Phys. Rev. D 99 (2019) 084024. doi:10.1103/PhysRevD.99.084024. arXiv:1812.06126. * Bauswein et al. (2020) A. Bauswein, S. Blacker, V. Vijayan, N. Stergioulas, K. Chatziiioannou, J. A. Clark, N.-U. F. Bastian, D. B. Blaschke, M. Cierniak, T. Fischer, Equation of state constraints from the threshold binary mass for prompt collapse of neutron star mergers, Phys. Rev. Lett. 125 (2020) 141103. doi:10.1103/PhysRevLett.125.141103. arXiv:2004.00846. * O'Boyle et al. (2020) M. F. O'Boyle, C. Markakis, N. Stergioulas, J. S. Read, Parametrized equation of state for neutron star matter with continuous sound speed, Phys. Rev. D 102 (2020) 083027. doi:10.1103/PhysRevD.102.083027. arXiv:2008.03342. * Alcubierre (2008) M. Alcubierre, Introduction to 3+1 Numerical Relativity, Oxford University Press, Oxford, 2008. * Rezzolla and Zanotti (2013) L. Rezzolla, O. Zanotti, Relativistic Hydrodynamics, Oxford University Press, Oxford, 2013. * Baumgarte and Shapiro (2010) T. Baumgarte, S. Shapiro, Numerical Relativity: Solving Einstein's Equations on the Computer, Cambridge University Press, 2010. * Shibata (2015) M. Shibata, Numerical Relativity, World Scientific Publishing Company, 2015. * Nakamura et al. (1987) T. Nakamura, K. Oohara, Y. Kojima, General Relativistic Collapse to Black Holes and Gravitational Waves from Black Holes, Prog. Theor. Phys. Suppl. 90 (1987) 1-218. doi:10.1143/PTPS.90.1. * Shibata and Nakamura (1995) M. Shibata, T. Nakamura, Evolution of three-dimensional gravitational waves: Harmonic slicing case, Phys. Rev. D 52 (1995) 5428-5444. doi:10.1103/PhysRevD.52.5428. * Baumgarte and Shapiro (1998) T. W. Baumgarte, S. L. 
In this study, we perform full three-dimensional numerical relativity simulations of non-rotating general relativistic stars. Extending earlier studies based on polytropic equations of state, we investigate the accuracy and robustness of the numerical scheme in measuring the fundamental (\\(f\\))-mode frequency for realistic equations of state (EoS). We use several EoS spanning a range of stiffness and numerically evolve perturbed stellar models for several mass configurations (in the range \\(1.2-2.0~{}M_{\\odot}\\)) for each EoS. Using the gravitational waveforms obtained from the simulations, we extract the \\(f\\)-modes of the stars. The results are tested against pre-existing perturbation methods and show good agreement. We assess the validity of, and deviations from, universal relations, and compare with earlier results obtained under the Cowling approximation as well as with perturbative approaches. We also show that perturbed single-star simulations can provide good agreement with \\(f\\)-modes extracted from the inspiral phase of binary neutron star simulations, which are computationally far more expensive. keywords: neutron stars, stellar oscillations, equation of state, simulations
# Simplifying Two-Stage Detectors for On-Device Inference in Remote Sensing Jaemin Kang\\({}^{1}\\), Hoeseok Yang\\({}^{2}\\), Hyungshin Kim\\({}^{3}\\) (\\({}^{1}\\)[email protected], \\({}^{2}\\)[email protected], \\({}^{3}\\)[email protected]) ## I Introduction Deep learning-based object detection is widely used for remotely sensed images taken from UAVs or satellites. Currently, object detection is performed on the ground once images are transferred to the ground facility. On the ground, powerful GPU clusters can be used to run highly accurate and hence complex deep learning models. However, considering the end-to-end delay from the acquisition of an image to its inference, a large delay of a few minutes up to a few days is incurred before the image arrives at the ground facility [1, 2, 3]. To curtail this delay and process in real time, on-board AI is gaining attention: deep learning inference is run on board UAVs or satellites. Performing on-board inference with high-precision object detection models is difficult due to the constrained computation capability and power budget. State-of-the-art object detection models for remote sensing imagery have a two-stage structure with an Oriented RPN head [4, 5, 6]. Providing real-time performance is difficult due to the computational complexity of these high-accuracy models. Some one-stage detectors provide real-time performance, but they have lower accuracy than high-accuracy detectors [7]. The use of low-accuracy models poses a significant risk when they are applied in mission-critical domains. As a result, researchers are working to create models that are both highly accurate and very fast. This paper focuses on simplifying highly accurate two-stage detectors while maintaining their accuracy. Several studies have proposed methods to lighten and speed up models without compromising accuracy. Pruning lightens a model by deleting redundant weights [8]: weights in structural modules are removed to create a model faster than the original. However, when pruning a two-stage detector, the technique is generally applicable only to the backbone. Thus, even after pruning, many FLOPs remain in the regression part. Figure 1 shows the computational load of each component of a two-stage detector, LSKNet [4], which demonstrates high accuracy with low FLOPs, and the impact of pruning on it. In LSKNet-S (the small model), the backbone, neck, and RPN head exhibit similar FLOPs. From LSKNet-S, the compressed model LSKNet-T (the tiny model) is created with a reduced number of parameters. In LSKNet-T, the FLOPs of the backbone are reduced by 3\\(\\times\\) compared to LSKNet-S, but those of the neck and RPN head remain the same. Computations in the regression part must be further exploited to obtain extra speed-up. The rightmost chart of Figure 1 shows that our compression acts on the regression parts of the detector. YOLOF [9] is a study on simplifying a one-stage detector with a single feature. It demonstrates that a single feature is sufficient to detect objects of various sizes, and it addresses the regression overhead problem by employing dilated convolution and uniform matching on that single feature. However, maintaining an acceptable accuracy level for small objects remains a challenge [10].
On the DOTA-v1.0 dataset [11], which contains many small objects from remotely sensed images, YOLOF only achieves an accuracy of 66.54%, whereas the RetinaNet [12] model achieves 68.69%. Note that a two-stage detector, Oriented R-CNN [6], achieves an accuracy of 75.87% on the same dataset.

Fig. 1: Computation workload analysis of a two-stage detector, LSKNet [4]. The size of the input image is 1024 pixels by 1024 pixels. LSKNet-T is created by reducing the backbone of LSKNet-S. The graph on the right shows that our method is effective for the regression part of the detector.

In this paper, we propose a model compression method that simplifies the regression part of two-stage detectors. Figure 2(b) illustrates the schematic representation of a model implemented using our approach. We perform regression using only a single feature, without constructing a feature pyramid. By removing the feature pyramid structure, we reduce the computational load of the RPN, NMS, and RoIAlign for regression. Simply removing the feature pyramid, however, incurs a large accuracy drop, and two challenges must be overcome to maintain the baseline accuracy. One is that the IoU (Intersection over Union) between the anchor and the object can fall below the positive-anchor threshold, which results in low accuracy since the detector is unable to learn small objects. We therefore choose a feature for which small objects exceed the positive-anchor threshold regardless of the anchor position. The other is that RoIs are generated concentrated on large objects: many RoIs are produced for relatively easy-to-detect large objects, and when several objects appear in an image, small objects tend to go undetected. As a solution, we design a high-pass filter that focuses on the RoIs of small objects. Our approach is advantageous because it is applicable to any two-stage detector with a feature pyramid network. In experiments with state-of-the-art two-stage detectors such as ReDet [13], Oriented R-CNN [6], LSKNet [4], and STD [5], our method reduced computation costs by up to 61.2% with an accuracy loss within 2.1% on the DOTA-v1.5 dataset [11]. ## II Related Work ### _Remote sensing object detectors_ Research has been conducted on efficiently regressing horizontal bounding boxes (HBB) [14, 15, 16]. These approaches have shown excellent efficiency in ground-based detection but face challenges when applied to oriented bounding boxes (OBB). Objects such as buildings, vehicles, or natural features in remote sensing images have larger aspect ratios than objects in images captured from the ground [11]. Moreover, objects in remote sensing images may not align with the horizontal or vertical axes, resulting in arbitrary orientations. Traditional HBB may not accurately capture the extent of the object of interest and can include a lot of surrounding image noise during regression. To address these issues, OBB are used: a rotated bounding box aligns with the object's orientation, allowing more precise positioning. The simplest method to convert an HBB detector to an OBB detector is to regress the rotation angle along with the HBB, but this has relatively low accuracy compared to other methods. Researchers are therefore studying how to regress OBB efficiently. Using rotational anchors [17] is straightforward for regressing OBB, but it requires extra computational load compared to HBB. Ding et al. [18] proposed the RoI transformer.
It regresses OBB using fully connected layers, which makes the network heavy and complex. Xie et al. [6] proposed Oriented RPN, which minimizes the additional computation for regressing OBB without extra parameters for anchors and angles. It is similar to the RPN for HBB but regresses OBB through decoding. Lyu et al. proposed RTMDet [19] for real-time object detection; it achieves high accuracy and fast inference speed with an anchor-free, one-stage detector. However, multi-scale testing cannot be conducted in situations with real-time requirements. The STD [5] and LSKNet [4] models have achieved state-of-the-art accuracy on the DOTA-v1.0 dataset [11] by using an Oriented RPN head in a two-stage detector. We propose a model simplification method applicable to two-stage detectors to achieve real-time performance with high-accuracy models. ### _Detectors using single feature_ Early deep learning object detectors, such as the R-CNN series [20, 21, 22], utilized a single feature. Faster R-CNN [22] introduced anchors of different sizes to detect objects of varying sizes. One-stage detectors like YOLO [23] and YOLOv2 [24] also used only a single feature. In YOLOv2, anchors were extracted based on the dataset to enhance accuracy, but the absence of multi-scale capabilities still resulted in accuracy limitations.

Fig. 2: A two-stage detector implemented using our approach. Layers needed only for constructing the feature pyramid have been removed. Furthermore, because both the RPN and RoIAlign utilize a single feature, operations that are not counted in FLOPs are also reduced.

The FPN [25] is used to achieve high accuracy in subsequent detectors. However, when using an FPN, performing regression and classification on each feature of the feature pyramid requires significant computational and memory resources. Research exists that aims to achieve efficient detection using just a single feature without constructing a feature pyramid. YOLOF [9] used only the C5 feature with a dilated encoder and uniform matching. Zhou et al. [26] proposed CenterNet, an anchor-free object detector that uses a single feature, though it was not developed for efficiency; CenterNet achieved high accuracy while remaining efficient by using the P2 feature. These detectors have accuracy comparable to FPN-based detectors on the COCO dataset, but they reveal a decrease in accuracy when detecting small objects in aerial datasets. We explain the difficulty of detecting small objects when using just one feature in a detector, and we propose a method that uses just one feature while minimizing accuracy loss. Our approach maintains accuracy on small objects while keeping the resource efficiency of a single feature. ## III Methods In this paper, we propose a method that performs regression using just one feature from the existing detector. Our method selects one feature from those that constitute the feature pyramid at the neck. The selected feature is used to find objects of various sizes: we attach the anchors from the removed features to the selected feature by increasing the number of anchors used on that feature. This increases the channel parameter of the convolution layer in the RPN, but the removed features no longer pass through the RPN head, which results in a significant reduction in FLOPs.
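To make the anchor re-attachment concrete, the sketch below (our illustration in plain NumPy, not the authors' mmrotate code) stacks every anchor scale on a single stride-8 (P3) feature map; the specific sizes 16-256 are the adjusted values derived in Section III-A below, and square anchors without extra aspect ratios are assumed for brevity.

```python
import numpy as np

def single_level_anchors(feat_h, feat_w, stride=8,
                         sizes=(16, 32, 64, 128, 256)):
    """Stack all anchor scales on one feature map.

    In a standard FPN each size lives on its own pyramid level; here all
    five scales are placed at every stride-8 location, so the RPN head
    predicts len(sizes) anchors per pixel instead of one.
    """
    # Anchor centres: one per feature-map cell, in input-image coordinates.
    ys = (np.arange(feat_h) + 0.5) * stride
    xs = (np.arange(feat_w) + 0.5) * stride
    cx, cy = np.meshgrid(xs, ys)                      # each (H, W)
    centres = np.stack([cx, cy], axis=-1).reshape(-1, 2)

    anchors = []
    for s in sizes:                                   # square anchors only;
        half = s / 2.0                                # aspect ratios omitted
        anchors.append(np.concatenate([centres - half, centres + half], axis=1))
    return np.concatenate(anchors, axis=0)            # (H*W*len(sizes), 4)

# For a 1024x1024 input, the P3 map is 128x128 -> 81,920 anchors in total.
print(single_level_anchors(128, 128).shape)
```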
In this section, we explain our approach of selecting a single feature to accelerate inference while maintaining accuracy. ### _Adjusting the anchor size_ The performance of an anchor-based detector is tied to the size of its anchors. Because an anchor serves as a reference point for the detector, its size influences the detector's ability to accurately identify and classify objects. Unlike two-stage detectors, the YOLO series takes a different approach by extracting anchors from the dataset. This allows the YOLO series to adapt to the specific characteristics of the dataset, potentially improving detection accuracy. Two-stage detectors, however, typically use the anchor sizes established in early research, which primarily targeted animals and objects on the ground; these differ significantly from the objects found in remote sensing datasets. As a result, the anchor sizes do not match remote sensing datasets well, leading to potential inaccuracies in detection. Figure 3 shows that the previously used pre-generated anchors do not align with the remote sensing dataset. The left graph shows the matched ratio with the original anchor sizes, while the right graph shows the ratio with the adjusted anchor sizes. In the DOTA-v1.5 dataset, 65% of objects are smaller than the objects detected by the P2 feature in the original detector, which suggests that the original anchor sizes are not optimal for this dataset. To address this issue, we adjust the anchor sizes used by the detector to match the objects. The right graph in Figure 3 displays the number of objects detectable with the adjusted anchor size at each feature. Despite these adjustments, 27% of objects are still smaller than the detectable size at P2, indicating that further adjustments could improve detection accuracy. However, if we continue reducing the anchor size, it becomes too small to cover the object area needed for detection. In the next section, we explain the relationship between features and anchors. Based on this explanation, we modify the anchor sizes to enhance the detection of small objects in the DOTA-v1.5 dataset. We choose anchor sizes that are multiples of the stride, considering the anchor's stride. Since using smaller anchors necessitates significantly more computation, we selected a minimum anchor size of 16 for efficient detection. Since the anchor scales were divided based on an IoU of 0.5, we keep consecutive scales at a ratio of 2. Finally, anchor sizes of 16, 32, 64, 128, and 256 are used instead of the previous 32, 64, 128, 256, and 512. ### _Selecting a single feature for inference acceleration_ Detectors utilize anchors for efficient object detection training; training on randomly generated boxes results in longer learning times. Thus, the main approach is to use anchors as proposals during training, enabling the detector to quickly learn object regression. The IoU between anchors and objects is calculated, and anchors with high IoU are chosen as proposals, which enhances the efficiency of the learning process. The downsample factor of a feature determines the density of its anchors: anchors are placed at each pixel of the downsampled feature. When the IoU between objects and anchors is computed, features with a small downsampling ratio have more pixels to compute over due to their larger size.
Conversely, features with a high downsampling ratio have fewer pixels, thus requiring less memory and computation.

Fig. 3: Ratio of matched anchors with objects at each feature. The graph shows the proportion of objects in the validation set of DOTA-v1.5 with an IoU of 0.5 or higher with a matched anchor. The left graph is the result of the original model with anchor sizes of 32, 64, 128, 256, and 512. The right graph is the result with the modified anchor sizes of 16, 32, 64, 128, and 256.

This is particularly beneficial when the goal is to perform regression using a single feature: a higher downsample factor allows the detector to be constructed with less computational workload, making the process more efficient. However, selecting a feature with a high downsample factor significantly decreases the accuracy of the detector. The sparsity of anchors means the IoU between small objects and anchors does not surpass the threshold for being selected as proposals during training. Figure 4 visualizes anchors and objects for features with three different downsampling ratios. Red dots indicate the locations of anchors; the blue box represents an anchor. On the P3 feature, anchors have a high IoU with objects at multiple pixels. On the P4 feature, a high IoU is still achieved with fewer pixels used. However, because of the sparse anchors on the P5 feature, the IoU with the object does not exceed the threshold. We aim to select a feature that avoids detection failure even when the anchor and the object have the same size. For example, in Figure 4, if an object appears between anchors, as on the P5 feature, it does not exceed the positive-anchor threshold. We observe that the IoU between anchors and objects is always higher than the positive-anchor threshold for features whose downsample factor is less than the anchor size. Thus, when constructing a detector with a single feature, it is better to use a downsampling factor smaller than the minimum anchor size. However, using a feature with a downsampling factor that is too small compared to the anchor poses another challenge: accuracy decreases, noise occurs at the NMS stage after RoI generation, and RoIs concentrate on large objects while RoIs for small objects are neglected. In our implementation, we use the feature with a downsample factor of 8, which is less than the minimum anchor size of 16 determined in Section III-A. We retain the P3 feature and remove all other parts except the layers needed to construct it, as shown in Figure 2(b). Before applying our method, the features of the feature pyramid were used as inputs to separate RPN heads, and the detector extracted a number of RoIs from each feature map after the RPN head. Since we keep just one feature, the number of features used as input to the RPN head is reduced; thus, there is a significant decrease in FLOPs at the RPN head, and the number of selected RoIs is also significantly reduced. ### _Applying high-pass filter_ When selecting RoIs in a two-stage detector, RoIs with high scores in the score map after the RPN head are chosen. With a feature pyramid, a large number of RoIs are selected before regression is performed. Additionally, features are divided according to object size, so there is no issue with different-sized objects having different RoI scores.
However, when the detector performs regression using just one intermediate feature, the number of RoIs decreases significantly, and objects are no longer discriminated by size. Large objects, which are relatively easy to learn, get high scores and accumulate scores over many pixels; small objects, on the other hand, tend to have relatively low scores over a small number of pixels. When extracting RoIs based on score, RoIs for small objects can therefore be neglected. Figure 5 visualizes the score map for large objects: an object's score in the classification score map exhibits a bell-shaped curve, and several pixels get high scores for a single object. We attempt to narrow the variance of the score map for large objects and increase the peak for small objects. A high-pass filter in image processing emphasizes high-frequency components and reduces low-frequency components in an image; it is commonly used to enhance edges or noise. We want the filter to sharpen the scores of small objects and blur those of large objects. We designed a 5\\(\\times\\)5 filter, which shows better performance than a 3\\(\\times\\)3 filter. For a blur effect at the boundaries of a large object, we used a filter with uniform weights throughout rather than a filter with values increasing towards the center. Figure 6 displays the high-pass filter we used. By applying the filter, we obtain an accuracy improvement with little computational overhead (a code sketch of this filtering step follows the experiment setup below).

Fig. 4: Visualization of the anchor's stride based on the downscale factor of the features when the image size is 256\\(\\times\\)256. The red dots represent the locations where anchors appear in the feature. To detect a car, an anchor size of \\(32^{2}\\) is sufficient; however, it is not trained on the P5 feature due to low IoU. On the P3 and P4 features, the IoU between the anchor and the object exceeds the threshold for positive anchors.

Fig. 5: Classification score map after the picture has passed through the RPN of the detector. The score was measured in the Oriented R-CNN model. The scores are passed through a sigmoid function before the RoIs are extracted.

Fig. 6: The unsharp masking filter we used. The values around the center were set to 1, unlike the typical 5\\(\\times\\)5 unsharp masking.

## IV Experiments ### _Experiment setup_ Experiments are performed on the DOTA-v1.5 [11] dataset, a remote sensing dataset, to demonstrate the efficacy of our proposed method. DOTA-v1.5 uses the same images as DOTA-v1.0 but includes annotations for objects that are extremely tiny, less than 10 pixels in size. It has 2,806 images and 403,318 instances in total, classified into 16 object classes. Since the DOTA dataset does not provide labels for the test set, accuracy is evaluated on the validation set. The image sizes range from 800 to 4,000 pixels in both width and height. Since a detector's input commonly uses square dimensions, we cropped the images to 1024\\(\\times\\)1024 with a stride of 200 pixels for training. We applied our method to state-of-the-art two-stage models: ReDet [13], Oriented R-CNN [6], LSKNet [4], and STD [5]. Inference speed is measured in FPS on an NVIDIA 2080 Ti board. We used the mmrotate [27] framework for our experiments, and models were trained on a single 3090 GPU. Hyperparameters other than the learning rate remain the same as in the original model training; we modified the learning rate values to enable training on a single GPU instead of several.
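As a concrete illustration of the score-map filtering in Section III-C, the sketch below (ours, in PyTorch, not the released implementation) applies a fixed 5\\(\\times\\)5 unsharp-masking kernel to the RPN objectness map as a depthwise convolution. Following the description above, the blur component is assumed to use uniform weights; the exact coefficients of the kernel in Fig. 6 may differ from this composite, and the `amount` parameter is our assumption.

```python
import torch
import torch.nn.functional as F

def unsharp_mask_scores(score_map: torch.Tensor, amount: float = 1.0) -> torch.Tensor:
    """Sharpen an RPN objectness map with a fixed 5x5 unsharp-masking kernel.

    Unsharp masking: out = x + amount * (x - blur(x)). A *uniform* 5x5 box
    blur is assumed here, matching the paper's choice of uniform weights.
    score_map has shape (N, C, H, W) with one channel per anchor.
    """
    k = 5
    box = torch.full((k, k), 1.0 / (k * k))            # uniform box blur
    delta = torch.zeros(k, k)
    delta[k // 2, k // 2] = 1.0                        # identity kernel
    kernel = (1 + amount) * delta - amount * box       # high-pass composite
    c = score_map.shape[1]
    weight = kernel.repeat(c, 1, 1, 1).to(score_map)   # one copy per channel
    # Depthwise conv keeps each anchor's score channel independent.
    return F.conv2d(score_map, weight, padding=k // 2, groups=c)
```

The filtered map would replace the raw scores before RoI selection; no trainable parameters are added, consistent with the negligible filter FLOPs reported in Table II.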
### _Performance evaluation_ The detection accuracy per object class for the various two-stage detectors is shown in Table I. Models with 'Ours' in front of the name are the models modified with our method; the last two columns show the mAP and FPS of each model. For ReDet, FPS increased by 1.5\\(\\times\\) with a 0.2% accuracy loss under our method. The anchor sizes were modified as explained in Section III-A, resulting in a loss of accuracy when detecting relatively large objects such as ground track fields and roundabouts, while accuracy for tiny objects like small vehicles and storage tanks improved. A decrease in mAP of 0.4% is observed with Oriented R-CNN. When LSKNet-T is used as the backbone, accuracy decreases by 2.1%, and it is observed that accuracy decreases even further when the high-pass filter is used. We note that our approach is influenced by the representation power of the backbone: with the LSKNet-T backbone, our approach showed low accuracy on baseball fields and harbors, and our high-pass filter increases the scores of noisy pixels. A decrease in accuracy of 1.4% is observed when using LSKNet-S as the backbone. Overall, a 1.5\\(\\times\\) speedup in FPS is observed with our method. We have also profiled the computational complexity of the implemented two-stage detectors. Table II shows the FLOPs of each component. After applying our method, the FLOPs of the backbone are unchanged. Because our approach does not construct a feature pyramid, the FLOPs of the neck are reduced to about 20% of the original; likewise, by reducing the number of features used as input to the RPN head, the FLOPs of the RPN drop to about 20%. Our method shows a reduction of up to 61.2% in total FLOPs for LSKNet-T.

\\begin{table} \\begin{tabular}{c|c|c|c|c|c|c|c} \\hline Model & Backbone & Neck & RPN & High-pass filter & RoI Head & Total & FPS \\\\ \\hline Oriented R-CNN [6] & 86.1 & 59.4 & 52.0 & - & 13.9 & 211.4 & 19.8 \\\\ Ours - Oriented R-CNN [6] & 86.1 & **13.4** & **10.1** & 0.003 & 13.9 & **123.5** & 30.4 \\\\ \\hline LSKNet-T [4] & 19.0 & 52.4 & 52.0 & - & 13.9 & 137.3 & 22.3 \\\\ Ours - LSKNet-T [4] & 19.0 & **10.2** & **10.1** & 0.003 & 13.9 & **53.2** & 36.4 \\\\ \\hline LSKNet-S [4] & 54.3 & 53.5 & 52.0 & - & 13.9 & 173.7 & 18.2 \\\\ Ours - LSKNet-S [4] & 54.3 & **10.7** & **10.1** & 0.003 & 13.9 & **89.0** & 26.3 \\\\ \\hline STD-HiViT-B [5] & 354.7 & 55.3 & 52.0 & - & 1258.3 & 1720.3 & 1.5 \\\\ Ours - STD-HiViT-B [5] & 354.7 & **11.4** & **10.1** & 0.003 & **1249.0** & **1625.3** & 2.8 \\\\ \\hline \\end{tabular} \\end{table} TABLE II: Measured computation complexity of two-stage detectors in GFLOPs.
\\begin{table} \\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \\hline Model & PL & BD & BR & GTF & SV & LV & SH & TC & BC & ST & SBF & RA & HA & SP & HC & CC & mAP & FPS \\\\ \\hline ReDet [13] & 90.2 & 78.6 & 50.2 & **67.4** & **50.9** & 76.4 & 81.2 & 90.8 & 68.9 & 69.4 & 64.9 & 74.2 & 76.9 & 64.0 & 55.3 & 0.0 & **66.2** & 14.0 \\\\ Ours - ReDet & 90.2 & 78.5 & 48.3 & **63.4** & **55.9** & 74.9 & 81.2 & 90.7 & 68.4 & 71.1 & 64.4 & 67.6 & 77.1 & 63.7 & 60.7 & 0.0 & **66.0** & 22.0 \\\\ Oriented R-CNN [6] & 90.0 & 76.6 & 55.1 & **74.0** & **50.3** & 76.6 & 88.3 & 90.8 & 72.4 & 62.2 & 57.9 & 72.5 & 75.1 & 65.3 & 51.1 & 0.4 & **66.2** & 19.8 \\\\ Ours - O-RCNN & 89.6 & 76.3 & 46.5 & **67.3** & **54.1** & 75.5 & 87.8 & 90.8 & 63.4 & 69.8 & 62.6 & 71.9 & 75.0 & 64.3 & 57.4 & 0.0 & **65.8** & 30.4 \\\\ \\hline LSKNet-T [4] & 89.4 & 83.6 & 51.2 & **78.4** & **50.5** & 76.0 & 88.4 & 90.6 & 75.2 & 68.9 & 65.1 & 72.4 & 74.2 & 64.2 & 54.0 & 0.1 & **67.7** & 22.3 \\\\ Ours - LSKNet-T & 89.0 & 76.7 & 46.6 & **68.7** & **53.3** & 74.7 & 86.9 & 90.6 & 67.4 & 69.6 & 61.3 & 65.9 & 68.9 & 63.3 & 54.4 & 0.0 & **65.1** & 36.4 \\\\ Ours - LSKNet-T* & 89.1 & 82.0 & 46.7 & **69.0** & **54.1** & 75.4 & 80.4 & 90.7 & 67.5 & 69.7 & 62.2 & 66.6 & 76.1 & 64.3 & 55.4 & 0.0 & **65.6** & 37.1 \\\\ \\hline LSKNet-S [4] & 89.9 & 85.6 & 54.8 & **79.5** & **50.9** & 77.4 & 89.7 & 90.7 & 75.0 & 69.6 & 72.9 & 74.2 & 76.3 & 66.7 & 64.8 & 0.0 & **69.9** & 18.2 \\\\ Ours - LSKNet-S & 89.8 & 85.0 & 55.3 & **72.8** & **55.4** & 76.0 & 88.1 & 90.8 & 74.4 & 70.6 & 71.6 & 67.3 & 76.4 & 63.0 & 60.3 & 0.0 & **68.5** & 26.3 \\\\ \\hline STD-O HiViT-B [5] & 90.2 & 85.6 & 58.6 & **74.2** & **50.6** & 78.1 & 89.5 & 90.8 & 65.7 & 70.2 & 67.8 & 75.4 & 76.6 & 67.0 & 57.0 & 0.1 & **69.2** & 1.5 \\\\ Ours - STD-O [5] & 89.8 & 85.7 & 59.4 & **66.5** & **52.3** & 76.4 & 89.0 & 90.7 & 65.7 & 71.2 & 62.1 & 69.3 & 75.9 & 65.5 & 63.7 & 18.2 & **68.8** & 2.8 \\\\ \\hline \\end{tabular} \\end{table} TABLE I: Detection accuracy (mAP) for each class of the DOTA dataset. The ReDet model uses the ReResNet-50 backbone; the Oriented R-CNN model employs ResNet-50. The LSKNet-T model shows a decrease in accuracy after applying our method when passed through the high-pass filter; models marked with an asterisk (*) report accuracy without the high-pass filter. Every model performed single-scale training.

We have analyzed the relationship between the number of RoIs and the performance of the model. Table III shows the accuracy and inference speed of our models with varying numbers of RoIs. A typical two-stage detector has a pyramid network composed of five features, generating 2,000 RoIs from each feature, 10,000 RoIs in total. Our simplified detector uses just one feature, generating 2,000 RoIs, one fifth of the original count. The same 66.2% accuracy as the baseline model is achieved when using 6,000 RoIs with our HPF-filtered model, at a faster frame rate of 25.6 FPS compared to the baseline's 19.8 FPS with 10,000 RoIs. Even if we increase the number of RoIs further, up to 10,000, the accuracy does not increase; this is due to the generation of duplicate RoIs for the same object, resulting in lower IoU between objects and regression boxes after NMS. Our designed high-pass filter is effective at restoring accuracy when we reduce the number of RoIs. ### _Analysis of the Selected Feature and HPF Filter Design_ The single feature that remains after removing all other features from the detector impacts the overall accuracy of the simplified network.
We measured the detector's accuracy for different feature scales, selecting one feature at a time in Oriented R-CNN; the results are shown in Table IV. The P5 feature has a larger downsample factor than P2, and its anchors become sparse. This makes it difficult for the anchors to detect the small objects of the DOTA dataset, and P5 has the worst accuracy of 54.4%. For the P2 feature, the IoU between object and anchor exceeds the threshold regardless of the anchor's position; however, selecting P2 leads to many RoIs being generated from large objects, thereby reducing accuracy. The anchor sparsity matches the P3 feature, so it has the highest accuracy of 65.3%. The accuracy before and after applying the high-pass filter to the models using our method is shown in Table V. Improvements in accuracy can be observed across the models; however, the accuracy of the LSKNet-T model decreases. Our high-pass filter enhances accuracy for objects with rectangular shapes, such as ships and bridges, while maintaining accuracy for large objects. With LSKNet-T, however, there is a decrease in accuracy for the baseball diamond and harbor classes. Figure 7 provides a visual demonstration of how our high-pass filter improves accuracy. The area corresponding to the red box in Figure 7 contains a ship and a harbor, but the scores of the objects in that area are low; our high-pass filter sharpens the scores so that the objects are detected (blue box). However, it can be seen that the scores also increase in areas unrelated to any object. We compare the results of applying several high-pass filters in Table VI. Even when using a Gaussian filter, a slight improvement in accuracy is observed. The Laplacian filter results in a significant decrease in accuracy, while the Laplacian-of-Gaussian (LoG) hybrid achieves an accuracy of 65.4%. The unsharp masking filter of our approach with a 5\\(\\times\\)5 kernel yields the most significant improvement, an accuracy increase of 0.5% over the baseline.

\\begin{table} \\begin{tabular}{l|c|c|c} \\hline Model & Total \\# of RoIs & Accuracy(mAP) & Speed(FPS) \\\\ \\hline Oriented R-CNN(Baseline) & 10,000 & 66.2 & 19.8 \\\\ Our-Oriented RCNN & 10,000 & 66.3 & 22.9 \\\\ Our-Oriented RCNN + HPF & 10,000 & 66.1 & 21.8 \\\\ Our-Oriented RCNN & 6,000 & 66.2 & 26.6 \\\\ Our-Oriented RCNN + HPF & 6,000 & 66.2 & 25.6 \\\\ Our-Oriented RCNN & 2,000 & 65.3 & 30.8 \\\\ Our-Oriented RCNN + HPF & 2,000 & 65.8 & 30.4 \\\\ \\hline \\end{tabular} \\end{table} TABLE III: Performance measurement with various numbers of RoIs for the Our-Oriented R-CNN model.

TABLE IV: Effectiveness of the selected single feature. The accuracy was measured with the Our-Oriented R-CNN model.

\\begin{table} \\begin{tabular}{c|c|c|c} \\hline Model & High-pass filter & AP & \\(\\Delta\\) \\\\ \\hline ReDet [13] & & 65.5\\% & - \\\\ ReDet [13] & ✓ & 66.0\\% & +0.5 \\\\ Oriented R-CNN [6] & & 65.3\\% & - \\\\ Oriented R-CNN [6] & ✓ & 65.8\\% & +0.5 \\\\ LSKNet-T [4] & & 65.0\\% & - \\\\ LSKNet-T [4] & ✓ & 64.2\\% & -0.8 \\\\ LSKNet-S [4] & & 68.3\\% & - \\\\ LSKNet-S [4] & ✓ & 68.5\\% & +0.2 \\\\ STD-HiViT-B [5] & & 68.6\\% & - \\\\ STD-HiViT-B [5] & ✓ & 68.8\\% & +0.2 \\\\ \\hline \\end{tabular} \\end{table} TABLE V: Comparison of accuracy before and after using the high-pass filter. All models use the 5\\(\\times\\)5 kernel.

\\begin{table} \\begin{tabular}{c|c|c|c|c|c|c|c|c} \\hline Oriented-RCNN & Baseline & Gaussian & Gaussian & Laplacian & Laplacian & LoG & Ours & Ours \\\\ \\hline Filter size & - & 3\\(\\times\\)3 & 5\\(\\times\\)5 & 5\\(\\times\\)5 & 3\\(\\times\\)3 & 3\\(\\times\\)3 & 3\\(\\times\\)3 & 5\\(\\times\\)5 \\\\ \\hline Accuracy(mAP) & 65.3 & 65.4 & 65.5 & 44.6 & 55.2 & 64.0 & 65.4 & **65.8** \\\\ \\hline \\end{tabular} \\end{table} TABLE VI: Study of various high-pass filters. We used an _unsharp masking filter_ in our method.

Fig. 7: Visualization of the RPN score map before and after applying the high-pass filter. The anchor size increases from left to right. When one axis has sizes of 32 and 64, our method points out the harbor objects that had a low score.

## V Conclusion In this paper, we study methods to simplify models for real-time on-board inference on remote sensing images. We reduce computational overhead by not constructing a feature pyramid and instead using a single feature in two-stage detectors that achieve state-of-the-art accuracy. Using a single feature by itself results in a loss of accuracy; we therefore propose a method for selecting the feature that minimizes this loss, and we adjust the anchor sizes to fit the dataset to recover lost accuracy. To avoid RoI generation concentrating on large objects, a high-pass filter is included. Our method reduces FLOPs by 61.2% with a 2.1% decrease in accuracy for the LSKNet-T model, and similar results are seen in the other two-stage models. However, some accuracy loss compared to the baseline remains a limitation. Furthermore, the high-pass filter increases scores even in areas where no object exists, generating noise. Our approach was evaluated only on two-stage detectors; research on methods that can also be used in one-stage detectors is needed, as well as further research on methods to recover or enhance accuracy with only a slight increase in computational workload. ## References * [1] Baogui Qi, Hao Shi, Yin Zhuang, He Chen, and Liang Chen. On-board, real-time preprocessing system for optical remote-sensing imagery. _Sensors_, 18(5):1328, 2018. * [2] Yanyun Shen, Di Liu, Junyi Chen, Zhipan Wang, Zhe Wang, and Qingling Zhang. On-board multi-class geospatial object detection based on convolutional neural network for high resolution remote sensing images. _Remote Sensing_, 15(16):3963, 2023. * [3] Max Ghiglione and Vittorio Serra. Opportunities and challenges of AI on satellite processing units.
In _Proceedings of the 19th ACM international conference on computing Frontiers_, pages 221-224, 2022. * [4] Y. Li, Q. Hou, Z. Zheng, M.-M. Cheng, J. Yang, and X. Li. Large selective kernel network for remote sensing object detection. _arXiv preprint arXiv:2303.09030_, 2023. * [5] Hongtian Yu, Yunjie Tian, Qixiang Ye, and Yunfan Liu. Spatial transform decoupling for oriented object detection. _arXiv preprint arXiv:2308.10561_, 2023. * [6] Xingxing Xie, Gong Cheng, Jiabao Wang, Xiwen Yao, and Junwei Han. Oriented r-cnn for object detection. In _Proceedings of the IEEE/CVF international conference on computer vision_, pages 3520-3529, 2021. * [7] Yiming Zhao, Jinzheng Zhao, Chunyu Zhao, Weiyu Xiong, Qingli Li, and Junli Yang. Robust real-time object detection based on deep learning for very high resolution remote sensing images. In _IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium_, pages 1314-1317. IEEE, 2019. * [8] Zonglei Lyu, Tong Yu, Fuxi Pan, Yilin Zhang, Jia Luo, Dan Zhang, Yiren Chen, Bo Zhang, and Guangyao Li. A survey of model compression strategies for object detection. _Multimedia Tools and Applications_, pages 1-72, 2023. * [9] Qiang Chen, Yingming Wang, Tong Yang, Xiangyu Zhang, Jian Cheng, and Jian Sun. You only look one-level feature. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 13039-13048, 2021. * [10] Yu Yi, Xue Yang, Qingyun Li, Feipeng Da, Junchi Yan, Jifeng Dai, and Yu Qiao. Point2rbox: Combine knowledge from synthetic visual patterns for end-to-end oriented object detection with single point supervision. _arXiv preprint arXiv:2311.14758_, 2023. * [11] Gui-Song Xia, Xiang Bai, Jian Ding, Zhen Zhu, Serge Belongie, Jiebo Luo, Mihai Datcu, Marcello Pelillo, and Liangpei Zhang. Dota: A large-scale dataset for object detection in aerial images. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 3974-3983, 2018. * [12] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollar. Focal loss for dense object detection. In _Proceedings of the IEEE international conference on computer vision_, pages 2980-2988, 2017. * [13] Jiaming Han, Jian Ding, Nan Xue, and Gui-Song Xia. Redet: A rotation-equivariant detector for aerial object detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2786-2795, 2021. * [14] Xingyi Zhou, Vladlen Koltun, and Philipp Krahenbuhl. Probabilistic two-stage detection. _arXiv preprint arXiv:2103.07461_, 2021. * [15] Juan Terven and Diana Cordova-Esparza. A comprehensive review of yolo: From yolov1 to yolov8 and beyond. _arXiv preprint arXiv:2304.00501_, 2023. * [16] Xiyang Dai, Yinpeng Chen, Bin Xiao, Dongdong Chen, Mengchen Liu, Lu Yuan, and Lei Zhang. Dynamic head: Unifying object detection heads with attentions. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 7373-7382, 2021. * [17] Jianqi Ma, Weiyuan Shao, Hao Ye, Li Wang, Hong Wang, Yingbin Zheng, and Xiangyang Xue. Arbitrary-oriented scene text detection via rotation proposals. _IEEE transactions on multimedia_, 20(11):3111-3122, 2018. * [18] Jian Ding, Nan Xue, Yang Long, Gui-Song Xia, and Qikai Lu. Learning roi transformer for oriented object detection in aerial images. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2849-2858, 2019. * [19] Chengqi Lyu, Wenwei Zhang, Haian Huang, Yue Zhou, Yudong Wang, Yanyi Liu, Shilong Zhang, and Kai Chen. Rtmdet: An empirical study of designing real-time object detectors. _arXiv preprint arXiv:2212.07784_, 2022. * [20] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 580-587, 2014. * [21] Ross Girshick. Fast r-cnn. In _Proceedings of the IEEE international conference on computer vision_, pages 1440-1448, 2015. * [22] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. _Advances in neural information processing systems_, 28, 2015. * [23] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 779-788, 2016. * [24] Joseph Redmon and Ali Farhadi. Yolo9000: Better, faster, stronger. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 7263-7271, 2017. * [25] Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 2117-2125, 2017. * [26] Xingyi Zhou, Dequan Wang, and Philipp Krahenbuhl. Objects as points. _arXiv preprint arXiv:1904.07850_, 2019. * [27] Yue Zhou, Xue Yang, Gefan Zhang, Jiabao Wang, Yanyi Liu, Liping Hou, Xue Jiang, Xingzhao Liu, Junchi Yan, Chengqi Lyu, Wenwei Zhang, and Kai Chen. Mmrotate: A rotated object detection benchmark using pytorch. In _Proceedings of the 30th ACM International Conference on Multimedia_, 2022.
Deep learning has been successfully applied to object detection in remotely sensed images. Images are typically processed on the ground rather than on board due to the computation power of the ground system. Such offloaded processing causes delays in acquiring target mission information, which hinders application to real-time use cases. For on-device object detection, research has been conducted on designing efficient detectors or on model compression to reduce inference latency. However, highly accurate two-stage detectors still need further exploitation for acceleration. In this paper, we propose a model simplification method for two-stage object detectors. Instead of constructing a general feature pyramid, we utilize only a single feature in the two-stage detector. To compensate for the accuracy drop, we apply a high-pass filter to the RPN's score map. Our approach is applicable to any two-stage detector that uses a feature pyramid network. In experiments with state-of-the-art two-stage detectors such as ReDet, Oriented R-CNN, and LSKNet, our method reduced computation costs by up to 61.2% with an accuracy loss within 2.1% on the DOTA-v1.5 dataset. Source code will be released.
# How to Solve Contextual Goal-Oriented Problems with Offline Datasets? Ying Fan\\({}^{1}\\), Jingling Li\\({}^{2}\\), Adith Swaminathan\\({}^{3}\\), Aditya Modi\\({}^{3}\\), Ching-An Cheng\\({}^{3}\\) \\({}^{1}\\)University of Wisconsin-Madison \\({}^{2}\\)ByteDance Research \\({}^{3}\\)Microsoft Research ## 1 Introduction Goal-oriented problems [16] are an important class of sequential decision-making problems with widespread applications, ranging from robotics [39] and game-playing [12] to logistics [24]. In particular, many real-world goal-oriented problems are _contextual_, where the objective of the agent is to reach a goal set communicated by a context. For example, consider instructing a truck operator with the context "Deliver goods to a warehouse in the Bay area". Given such a context and an initial state, it is acceptable to reach any feasible goal (a reachable warehouse location) in the goal set (all warehouse locations, including non-reachable ones). We call such problems _Contextual Goal-Oriented_ (CGO) problems; they form an important special case of contextual Markov Decision Processes (MDPs) [10]. CGO is a practical setup that includes goal-conditioned reinforcement learning (GCRL) as a special case (the context in GCRL is just the target goal), but in general, contexts in CGO problems can be more abstract (like the high-level task instruction in the above example), and the relationship between contexts and goals is not known beforehand. CGO problems are challenging because 1) the rewards are sparse, as in GCRL, and 2) the contexts can be difficult to map into feasible goals. Nevertheless, CGO problems have an important structure: the transition dynamics (e.g., navigating a city road network) are independent of the contexts that specify tasks. Therefore, efficient multitask learning can be achieved by sharing dynamics data across tasks. In this paper, we study solving CGO problems in an offline setup. We suppose access to two datasets -- an (unlabeled) _dynamics_ dataset of trajectories, and a (labeled) _context-goal_ dataset containing pairs of contexts and goal examples. Such datasets are commonly available in practice. A typical contextual dataset for imitation learning (IL), which has pairs of contexts and expert trajectories, is one example, since we can convert the contextual IL data into dynamics data and context-goal pairs. Generally, this setup also covers scenarios where expert trajectories are _not_ accessible (e.g., because of diverse contexts and initial states), since it does not assume that goal examples appear in the trajectories or that contexts are readily paired with transitions in expert trajectories; instead, it allows the dynamics dataset and the context-goal dataset to be independently collected. For example, in robotics, task-agnostic play data can be obtained at scale [22; 34] in an unsupervised manner, whereas instruction datasets (e.g., [25]) can provide context-goal pairs. In navigation, self-driving car trajectories (e.g., [35; 32]) also allow us to learn dynamics, whereas landmark datasets (e.g., [24; 9]) provide context-goal pairs. While offline CGO problems as described above are common in practical scenarios, to our knowledge, no algorithms have been specifically designed to solve such problems, and CGO has not been formally studied yet. Some baseline methods can easily be conceptualized from the literature, but their drawbacks are equally apparent.
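Concretely, the two offline datasets have the following shapes (a minimal sketch; the type names are ours, not the paper's):

```python
from typing import Any, List, NamedTuple

class DynTransition(NamedTuple):
    """One sample from the dynamics dataset: no reward, no context."""
    s: Any       # state
    a: Any       # action
    s_next: Any  # next state

class ContextGoal(NamedTuple):
    """One sample from the context-goal dataset: positive examples only."""
    c: Any       # context, e.g. "a warehouse in the Bay area"
    g: Any       # one feasible goal state in the goal set G_c

D_dyn: List[DynTransition] = []   # task-agnostic trajectories
D_goal: List[ContextGoal] = []    # no non-goal (negative) examples
```

Note that nothing ties a goal example in `D_goal` to any particular transition in `D_dyn`; bridging the two is exactly the difficulty discussed next.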
One intuitive approach is to extend the goal prediction methods of GCRL [26; 27]: given a test context, we can predict a goal and navigate to it using a goal-conditioned policy, where the goal prediction model is learned from the context-goal dataset and the goal-conditioned policy is learned from the trajectory dataset. However, the predicted goal might not always be feasible from the initial state, since our context-goal dataset is not necessarily paired with transitions. Alternatively, the offline problem could be formulated as a special case of missing-label problems [41], and we can learn a context-conditioned reward model to label the unsupervised transitions when paired with contexts, as in [14]. However, this approach ignores the goal-oriented nature of the problem and the fact that here only positive data (i.e., goal examples) are available for reward learning, which poses significant extra challenges. CGO can be framed as an offline reinforcement learning (RL) problem with missing labels; however, existing algorithms in this family [42; 14; 21] assume access to both positive data (context-goal pairs) and negative data (contexts and non-goal examples), whereas only positive data are available here. In this work, we present the first precise formalization of the CGO setting, and propose a novel Contextual goal-Oriented Data Augmentation (CODA) technique that can provably solve CGO problems subject to natural assumptions on the datasets' quality. The core idea is to **convert the context-goal dataset and the unsupervised dynamics dataset into a fully labeled transition dataset of an equivalent action-augmented MDP**, which circumvents the drawbacks of the other baseline methods by fully exploiting the CGO structure of the problem. We give a high-level illustration of this idea in Figure 1: given a randomly sampled context-goal pair from the context-goal dataset, we create a fictitious transition from the corresponding goal example to a fictitious terminal state with a fictitious action and reward 1, paired with the corresponding context. We also label all unsupervised transitions with reward 0 and non-terminal, and pair them with contexts randomly. Combining the two, we then have a fully labeled dataset (of an action-augmented contextual MDP, which this data augmentation and relabeling process effectively creates), making it possible to propagate supervision signals from the context-goal dataset to unsupervised transitions via the Bellman equation. We can then apply any offline RL algorithm based on Bellman updates, such as CQL [19], IQL [18], PSPI [37], or ATAC [4]. In comparison with the baseline methods discussed earlier, our method naturally circumvents their intrinsic challenges: 1) CODA directly learns a context-conditioned policy and avoids the need to predict goals; 2) CODA effectively uses a fully labeled dataset, avoiding the need to learn a reward model and the extra costs of inaccurate reward modeling. ## 2 Related Work **Offline RL.** Offline RL methods have proven effective in goal-oriented problems, as they also allow learning a common set of sub-goals/skills [3; 23; 38].
A variety of approaches are used to mitigate the distribution shift between the collected datasets and the trajectories likely to be generated by learned policies: 1) constraining target policies to be close to the dataset distribution [8; 36; 7], 2) incorporating value pessimism for low-coverage or out-of-distribution states and actions [19; 40; 15], and 3) adversarial training via a two-player game [37; 4].

Figure 1: Illustration of CODA: We create fictitious transitions from goal examples to terminal states under the given context in the action-augmented MDP with reward 1, which enables the supervision signal to propagate back to unsupervised transitions via the Bellman equation.

**Offline RL with unlabeled data.** Our CGO setting is a special case of offline RL with unlabeled data, or more broadly the offline policy learning from observations paradigm [21]: only a subset of the offline data is labeled with rewards (in our setting, that is the context-goal dataset, as we do not know which samples in the dynamics dataset are goals). However, the MAHALO scheme of [21] is much more general than necessary for CGO problems, and we show in Section 5 that our CODA scheme has better theoretical guarantees than MAHALO. In our experiments, we compare CODA with several offline RL algorithms designed for unlabeled data: UDS [42], where unlabeled data is assigned zero reward, and PDS [14], where a pessimistic reward function is learned from the labeled dataset. **Goal-conditioned RL (GCRL).** GCRL, which has been extensively studied since [16], is a special case of our CGO setting. There are two critical aspects of GCRL: 1) data relabeling to make better use of available data, and 2) learning reusable skills to solve long-horizon problems by chaining sub-goals or skills. For 1), hindsight relabeling methods [1; 20] are effective, reusing visited states in the trajectories as successful goal examples. For 2), hierarchical methods for determining sub-goals and training goal-reaching policies have been effective in long-horizon problems [28; 30; 3]. Another key objective of GCRL is goal generalization. Popular strategies include universal value function approximators [29], unsupervised representation learning [26; 28; 11], and pessimism-induced generalization in offline GCRL formulations [38]. Our CGO framing enables both data reuse and goal generalization by using contextual representations and a reduction to offline RL to combine the dynamics and context-goal datasets. ## 3 Preliminaries In this section, we introduce the setup of CGO problems, the infinite-horizon formulation for CGO, and the offline learning setup with basic assumptions on our offline datasets. **CGO Setup.** A Contextual Goal-Oriented (CGO) problem describes a multi-task goal-oriented setting with a _shared_ transition kernel. We consider a Markovian CGO problem, defined by the tuple \\(\\mathcal{M}=(\\mathcal{S},\\mathcal{A},P,R,\\gamma,\\mathcal{C},d_{0})\\), where \\(\\mathcal{S}\\) is the state space, \\(\\mathcal{A}\\) is the action space, \\(P:\\mathcal{S}\\times\\mathcal{A}\\rightarrow\\Delta(\\mathcal{S})\\) is the transition kernel, \\(R:\\mathcal{S}\\times\\mathcal{C}\\rightarrow\\{0,1\\}\\) is the sparse reward function, \\(\\gamma\\in[0,1)\\) is the discount factor, \\(\\mathcal{C}\\) is the context space, and \\(\\Delta\\) denotes the space of distributions.
Each context \\(c\\in\\mathcal{C}\\) specifies a goal-reaching task with a goal set \\(G_{c}\\subset\\mathcal{S}\\), and reaching any goal in the goal set \\(G_{c}\\) is regarded as successful, inducing the reward function \\(R(s,c)=\\mathbb{1}(s\\in G_{c})\\). An episode of a CGO problem starts from an initial state \\(s_{0}\\) and a context \\(c\\) sampled from \\(d_{0}(s_{0},c)\\), and terminates when the agent reaches the goal set \\(G_{c}\\). The context \\(c\\) does not change during transitions; only \\(s_{t}\\) changes, according to \\(P(s^{\\prime}|s,a)\\), and the transition kernel is context-independent. **Infinite-horizon Formulation for CGO Setup.** A fictitious zero-reward absorbing state \\(s^{+}\\notin\\mathcal{S}\\) can translate termination upon reaching the goal into an infinite-horizon formulation: _whenever the agent enters \\(G_{c}\\), it transits to \\(s^{+}\\) in the next step (for all actions) and stays at \\(s^{+}\\) forever_. This is a standard technique to convert a goal-reaching problem (with a random problem horizon) into an infinite-horizon problem. The translation does _not_ change the problem, but allows cleaner analyses; we adopt this formulation in the following and give the details of the conversion below. First, we extend the reward and the dynamics. Let \\(\\bar{\\mathcal{S}}=\\mathcal{S}\\bigcup\\{s^{+}\\}\\), \\(\\mathcal{X}\\coloneqq\\mathcal{S}\\times\\mathcal{C}\\), and \\(\\bar{\\mathcal{X}}\\coloneqq\\bar{\\mathcal{S}}\\times\\mathcal{C}\\). Define \\(\\mathcal{X}^{+}:=\\{x:x=(s,c),s=s^{+},c\\in\\mathcal{C}\\}\\). With abuse of notation, we define the reward and transition on \\(\\bar{\\mathcal{X}}\\) as \\(R(x)=\\mathbb{1}(s\\in G_{c})\\) where \\(x=(s,c)\\), and the transition kernel as \\(P(x^{\\prime}|x,a)\\coloneqq P(s^{\\prime}|s,c,a)\\mathbb{1}(c^{\\prime}=c)\\), where \\[P(s^{\\prime}|s,c,a)=\\begin{cases}\\mathbb{1}(s^{\\prime}=s^{+})&\\text{if $s\\in G_{c}$ or $s=s^{+}$,}\\\\ P(s^{\\prime}|s,a)&\\text{otherwise.}\\end{cases}\\] Given a policy \\(\\pi:\\mathcal{X}\\rightarrow\\Delta(\\mathcal{A})\\), the state-action value function (i.e., Q function) is \\(Q^{\\pi}(x,a)\\coloneqq\\mathbb{E}_{\\pi,P}\\left[\\sum_{t=0}^{\\infty}\\gamma^{t}R(x_{t})|x_{0}=x,a_{0}=a\\right]\\), and \\(V^{\\pi}(x)\\coloneqq Q^{\\pi}(x,\\pi)\\) is the value function of \\(\\pi\\), where \\(Q(x,\\pi)\\coloneqq\\mathbb{E}_{a\\sim\\pi}[Q(x,a)]\\in[0,1]\\). The return is \\(J(\\pi)=V^{\\pi}(d_{0})=Q^{\\pi}(d_{0},\\pi)\\), \\(\\pi^{*}\\) is the optimal policy that maximizes \\(J(\\pi)\\), and \\(Q^{*}\\coloneqq Q^{\\pi^{*}}\\), \\(V^{*}\\coloneqq V^{\\pi^{*}}\\). Let \\(G\\) represent the goal set on \\(\\mathcal{X}\\), that is, \\(G\\coloneqq\\{x\\in\\mathcal{X}:x=(s,c),s\\in G_{c}\\}\\). **Offline Learning for CGO.** We aim to solve CGO problems using offline datasets without additional online environment interactions, namely, by offline RL. We identify two types of data that are commonly available: \\(D_{\\text{dyn}}\\coloneqq\\{(s,a,s^{\\prime})\\}\\) is an _unsupervised_ dynamics dataset of agent trajectories collected from \\(P(s^{\\prime}|s,a)\\), and \\(D_{\\text{goal}}\\coloneqq\\{(c,s):s\\in G_{c}\\}\\) is a _supervised_ dataset of context-goal pairs, which can be easier to collect than expert trajectories.
We suppose that there are two distributions \\(\\mu_{\\text{dyn}}(s,a,s^{\\prime})\\) and \\(\\mu_{\\text{goal}}(s,c)\\), where \\(\\mu_{\\text{dyn}}(s^{\\prime}|s,a)=P(s^{\\prime}|s,a)\\) and \\(\\mu_{\\text{goal}}\\) has support within \\(G_{c}\\), i.e., \\(\\mu_{\\text{goal}}(s|c)>0\\Rightarrow s\\in G_{c}\\). We assume that \\(D_{\\text{dyn}}\\) and \\(D_{\\text{goal}}\\) are i.i.d. samples drawn from \\(\\mu_{\\text{dyn}}\\) and \\(\\mu_{\\text{goal}}\\), i.e.,

\\[D_{\\text{dyn}}=\\{(s_{i},a_{i},s^{\\prime}_{i})\\sim\\mu_{\\text{dyn}}\\},\\quad D_{\\text{goal}}=\\{(s_{j},c_{j})\\sim\\mu_{\\text{goal}}\\}.\\]

Notice that we do not assume the goal states in \\(D_{\\text{goal}}\\) appear in \\(D_{\\text{dyn}}\\); thus we cannot in general naively pair transitions in \\(D_{\\text{dyn}}\\) with contexts in \\(D_{\\text{goal}}\\) and assign them reward \\(1\\). To our knowledge, no existing algorithm can provably learn a near-optimal \\(\\pi\\) using only the positive \\(D_{\\text{goal}}\\) data (i.e., without non-goal examples) when combined with \\(D_{\\text{dyn}}\\) data.

## 4 Contextual Goal-Oriented Data Augmentation (CODA)

The key idea of CODA is the construction of an _action_-augmented MDP with which the dynamics and context-goal datasets can be combined into a fully labeled offline RL dataset. In the following, we first describe this action-augmented MDP (Section 4.1) and show that it preserves the optimal policies of the original MDP (Appendix B.1). Then we outline a practical algorithm to convert the two datasets of an offline CGO problem into a dataset for this augmented MDP (Section 4.2), such that any generic offline RL algorithm based on the Bellman equation can be used as a solver.

### 4.1 Action-Augmented MDP

We propose an action-augmented MDP (shown in Figure 1), which augments the action space of the contextual MDP in Section 3 with _a fictitious action_ \\(a^{+}\\notin\\mathcal{A}\\). Let \\(\\bar{\\mathcal{A}}=\\mathcal{A}\\bigcup\\{a^{+}\\}\\). We define the reward of this action-augmented MDP to be _action-dependent_: for \\(x=(s,c)\\in\\mathcal{X}\\), \\(\\bar{R}(x,a)\\coloneqq\\mathbb{1}(s\\in G_{c})\\mathbb{1}(a=a^{+})\\), which means the reward is 1 only if \\(a^{+}\\) is taken in the goal set, and 0 otherwise. We also extend the transition upon taking action \\(a^{+}\\): \\(\\bar{P}(x^{\\prime}|x,a^{+})\\coloneqq\\mathbb{1}(s^{\\prime}=s^{+})\\), and maintain the transition with real actions: \\(\\bar{P}(x^{\\prime}|x,a)\\coloneqq P(s^{\\prime}|s,a)\\mathbb{1}(c^{\\prime}=c)\\). In other words, whenever the agent takes \\(a^{+}\\) it always transits to \\(s^{+}\\), and the transition remains the same as in the original MDP given real actions. Further, we implement \\(s^{+}\\) as \\(\\texttt{terminal}=\\text{True}\\). We define this augmented MDP as \\(\\overline{\\mathcal{M}}\\coloneqq(\\bar{\\mathcal{X}},\\bar{\\mathcal{A}},\\bar{R},\\bar{P},\\gamma)\\).

**Policy conversion.** For a policy \\(\\pi:\\mathcal{X}\\to\\Delta(\\mathcal{A})\\) in the original MDP, define its extension on \\(\\overline{\\mathcal{M}}\\):

\\[\\bar{\\pi}(a|x)=\\begin{cases}\\pi(a|x),&x\\notin G,\\\\ \\mathbb{1}(a=a^{+}),&\\text{otherwise}.\\end{cases} \\tag{1}\\]
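As a concrete illustration of \\(\\overline{\\mathcal{M}}\\), the augmented reward and transition can be sketched in a few lines (pure Python; `goal_indicator` stands in for \\(\\mathbb{1}(s\\in G_{c})\\) with the context held fixed, and `env_step` for a sampler of \\(P(s^{\\prime}|s,a)\\); both are assumed, illustrative names):

```
A_PLUS = "a_plus"  # fictitious action a+, not in the real action space A
S_PLUS = "s_plus"  # fictitious zero-reward absorbing state s+

def reward_aug(s, a, goal_indicator):
    # \bar{R}(x, a) = 1(s in G_c) * 1(a = a+): reward 1 only when the
    # fictitious action is taken inside the goal set, otherwise 0.
    return float(goal_indicator(s) and a == A_PLUS)

def step_aug(s, a, env_step):
    # \bar{P}: taking a+ (or being at s+) leads to s+ deterministically;
    # real actions follow the original kernel P(s'|s, a).
    if a == A_PLUS or s == S_PLUS:
        return S_PLUS
    return env_step(s, a)
```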
**Regret equivalence.** An observation that comes with this construction is that if a policy is optimal in the original MDP, we can use the extension above to create an optimal policy in the augmented one. Conversely, if a policy is optimal in the augmented MDP, it must take \\(a^{+}\\) only when \\(x\\in G\\) (otherwise the return is lower, due to entering \\(s^{+}\\) too early); thus we can revert this optimal policy of the augmented MDP to an optimal policy in the original MDP without changing its behavior or performance. We state this property below; details can be found as Lemma B.3 in Appendix B.1.

**Theorem 4.1** (Informal).: _The regret of a policy extended to the augmented MDP is equal to the regret of the policy in the original MDP, and any policy defined in the augmented MDP can be converted into one in the original MDP without increasing the regret. Thus, solving the augmented MDP yields correspondingly optimal policies for the original problem._

**Remark 4.2**.: _The benefit of using the equivalent \\(\\overline{\\mathcal{M}}\\) is to avoid missing labels: given contexts in \\(D_{\\text{goal}}\\), the rewards in \\(\\overline{\\mathcal{M}}\\) are known from our dataset setup in Section 3, whereas the rewards of the original MDP \\(\\mathcal{M}\\) are missing._

### 4.2 Method

CODA is designed based on the regret equivalence in Theorem 4.1: as described in Figure 1, given a context-goal pair \\((s,c)\\) from the dataset \\(D_{\\text{goal}}\\), we create a fictitious transition from \\(s\\) to \\(s^{+}\\) with action \\(a^{+}\\) and reward \\(1\\) under context \\(c\\). We also label all unsupervised transitions in the dataset \\(D_{\\text{dyn}}\\) with the original action and reward \\(0\\) under \\(c\\). In this way, we obtain a fully labeled transition dataset in the augmented MDP for any \\(c\\) from the context-goal dataset, and can then run offline RL algorithms (based on the Bellman equation) on this dataset. The procedure is formally stated in Algorithm 1. It takes the two datasets \\(D_{\\text{dyn}}\\) and \\(D_{\\text{goal}}\\) as input, and produces a labeled transition dataset \\(\\bar{D}_{\\text{dyn}}\\bigcup\\bar{D}_{\\text{goal}}\\) that is suitable for use by any offline RL algorithm based on the Bellman equation, such as CQL [19], IQL [18], PSPI [37], and ATAC [4].

**Interpretation.** Why does our action augmentation make sense? Consider dynamic programming on the created dataset. Imagine we have a fictitious transition from \\(s\\) to \\(s^{+}\\) with \\(a^{+}\\) under context \\(c\\). When we calculate \\(\\bar{V}^{*}(x)\\) via the Bellman equation, where \\(x=(s,c)\\), it chooses the action with the highest \\(Q^{*}\\) value in the augmented action space. The fictitious action is then the optimal action since it induces the highest \\(Q^{*}\\) value2, meaning \\(s\\) is already in \\(G_{c}\\) and no further action is needed. The value \\(\\bar{V}^{*}(x)\\) then _naturally propagates to a state \\(x_{\\text{prev}}=(s_{\\text{prev}},c)\\) via the Bellman equation if \\(x\\) is reachable starting from \\(x_{\\text{prev}}\\)_, as shown in Figure 1, so \\(x_{\\text{prev}}\\) still has a meaningful value even though its intermediate reward is 0. For \\(x\\) to be reachable from \\(x_{\\text{prev}}\\), we do not require the exact \\(s\\) to appear in the trajectory dataset, due to the generalization ability of the value function (details in Section 5). For non-goal states, the fictitious action never appears in the dataset, so pessimistic offline RL does not select it as the optimal action in the Bellman equation. For example, the fictitious action never appears as a candidate in the argmax in algorithms like IQL, and is penalized as an OOD action in algorithms like CQL. We prove this insight formally in Section 5.
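The following toy value iteration illustrates this propagation on a three-state chain whose last state is a goal (a self-contained numerical example under simplifying assumptions, not our training procedure): the single fictitious transition carries the supervised signal, and Bellman backups propagate it to earlier states, while unobserved state-action pairs, including \\(a^{+}\\) at non-goal states, keep their pessimistic initial value.

```
gamma = 0.9
# Deterministic chain 0 -> 1 -> 2, where state 2 is in G_c.
# Augmented-MDP dataset: real transitions with reward 0, plus one
# fictitious transition (2, a+, 1, s+). Actions: 0 = "right", 1 = a+.
transitions = [
    (0, 0, 0.0, 1),      # (s, a, r, s')
    (1, 0, 0.0, 2),
    (2, 1, 1.0, "s+"),   # fictitious transition created by CODA
]

Q = {(s, a): 0.0 for s in (0, 1, 2) for a in (0, 1)}

def V(s):
    # s+ is absorbing with value 0; elsewhere take the greedy value.
    return 0.0 if s == "s+" else max(Q[(s, 0)], Q[(s, 1)])

# Value iteration restricted to observed (s, a) pairs: unobserved pairs
# (e.g., a+ at non-goal states) stay at their initial value 0.
for _ in range(50):
    for (s, a, r, s_next) in transitions:
        Q[(s, a)] = r + gamma * V(s_next)

print(Q[(2, 1)])  # -> 1.0   reward for taking a+ inside the goal set
print(Q[(1, 0)])  # -> 0.9   value propagated one step back
print(Q[(0, 0)])  # -> ~0.81 ... and two steps back
```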
Footnote 2: For all \\(a\\neq a^{+}\\), \\(Q^{*}(x,a)<Q^{*}(x,a^{+})\\) when \\(\\gamma<1\\). If \\(\\gamma=1\\), the agent might also learn to travel to other goal states starting from \\(x\\) with some probability, which is also acceptable in CGO.

```
Input: Dynamics dataset \(D_{\text{dyn}}\), context-goal dataset \(D_{\text{goal}}\)
for each sample \((s,c)\sim D_{\text{goal}}\) do
    Create transition \((x,a^{+},1,x^{+})\), where \(x=(s,c)\) and \(x^{+}=(s^{+},c)\); add it to \(\bar{D}_{\text{goal}}\)
end for
for each \((s,a,s^{\prime})\sim D_{\text{dyn}}\) do
    for each \((\cdot,c)\sim D_{\text{goal}}\) do
        Create transition \((x,a,0,x^{\prime})\), where \(x=(s,c)\) and \(x^{\prime}=(s^{\prime},c)\); add it to \(\bar{D}_{\text{dyn}}\)
    end for
end for
Output: \(\bar{D}_{\text{dyn}}\) and \(\bar{D}_{\text{goal}}\)
```
**Algorithm 1** CODA for CGO

**Remark 4.3**.: _We do not need the policy to learn to perform \\(a^{+}\\) in practice, since \\(a^{+}\\) is only used for fictitious transitions that are already inside the goal set in the original MDP. (From the proof of Lemma B.3, we know taking \\(a^{+}\\) elsewhere is always strictly worse than taking actions in the original action space \\(\\mathcal{A}\\).) Therefore, we simply use the original action space for policy modeling and only use the fictitious transitions in value learning. We note that in practice Algorithm 1 can be implemented as a pre-processing step in the minibatch sampling of a deep offline RL algorithm (as opposed to computing the full \\(\\bar{D}_{\\text{dyn}}\\) and \\(\\bar{D}_{\\text{goal}}\\) once before learning)._
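To make Algorithm 1 concrete, here is a minimal Python sketch (the function and variable names are illustrative; as noted in Remark 4.3, the same logic is typically applied inside minibatch sampling rather than by materializing the full \\(\\bar{D}_{\\text{dyn}}\\)):

```
def coda_augment(D_dyn, D_goal, s_plus="s+", a_plus="a+"):
    """CODA (Algorithm 1): combine a dynamics dataset and a context-goal
    dataset into a fully labeled transition dataset (x, a, r, x') for
    the augmented MDP, where x = (s, c).

    D_dyn : iterable of (s, a, s') transition triples.
    D_goal: iterable of (c, s) context-goal pairs with s in G_c.
    """
    # Fictitious transitions: from a goal state s of context c, taking
    # a+ yields reward 1 and lands in the absorbing state s+.
    D_goal_bar = [((s, c), a_plus, 1.0, (s_plus, c)) for (c, s) in D_goal]

    # Real transitions: each dynamics triple is paired with each context
    # and labeled with reward 0 (the context never changes in transit).
    D_dyn_bar = [((s, c), a, 0.0, (s_next, c))
                 for (s, a, s_next) in D_dyn
                 for (c, _) in D_goal]
    return D_dyn_bar, D_goal_bar

# Toy usage: 1D states/actions, 2D contexts.
dyn_bar, goal_bar = coda_augment(
    D_dyn=[(0.0, 0.1, 0.5), (0.5, -0.2, 0.3)],
    D_goal=[((1.0, 2.0), 0.9)],
)
```

How samples from the two output sets are balanced during training is a design choice we revisit in Section 6.2.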
## 5 CGO is Learnable with Positive Data Only

In Section 4, we showed that a fully labeled dataset can be created in the augmented MDP without inducing extra approximation errors. But we still have no access to negative data, i.e., context and non-goal pairs. A natural question arises: _Can we learn to solve CGO problems with positive data only? What conditions are needed for CGO to be learnable with offline datasets?_ We show in theory that we do _not_ need negative data to solve CGO problems, by conducting a formal analysis of our method instantiated with PSPI [37] as an example base algorithm. We present the detailed algorithm, CODA+PSPI, in Appendix B.3. This algorithm uses function classes \\(\\mathcal{F}:\\mathcal{X}\\times\\mathcal{A}\\to\\mathbb{R}\\) and \\(\\mathcal{G}:\\mathcal{X}\\to\\mathbb{R}\\) to model value functions, and optimizes the policy over a policy class \\(\\Pi\\) based on absolute pessimism defined on initial states. We present our assumptions and the main theoretical result as follows.

**Assumption 5.1** (Realizability).: _We assume for any \\(\\pi\\in\\Pi\\), \\(Q^{\\pi}\\in\\mathcal{F}\\) and \\(R\\in\\mathcal{G}\\), where \\(\\mathcal{F},\\mathcal{G}\\) are the function classes for action-value and reward functions respectively._

**Assumption 5.2** (Completeness).: _We assume: for any \\(f\\in\\mathcal{F}\\), \\(g\\in\\mathcal{G}\\) and \\(\\pi\\in\\Pi\\), \\(\\max(g(x),f(x,\\pi))\\in\\mathcal{F}\\); and for any \\(f\\in\\mathcal{F}\\), \\(\\pi\\in\\Pi\\), \\(\\mathcal{T}^{\\pi}f(x,a)\\in\\mathcal{F}\\), where \\(\\mathcal{T}^{\\pi}\\) is the zero-reward Bellman backup operator with respect to \\(P(s^{\\prime}|s,a)\\): \\(\\mathcal{T}^{\\pi}f(x,a)=\\gamma\\mathbb{E}_{x^{\\prime}\\sim P(s^{\\prime}|s,a)\\mathbb{1}(c^{\\prime}=c)}[f(x^{\\prime},\\pi)]\\)._

These two assumptions state that the function classes \\(\\mathcal{F}\\) and \\(\\mathcal{G}\\) are expressive enough; they are standard in offline RL based on the Bellman equation [37]. For our main result, we define the required coverage notion below.

**Definition 5.3**.: _We define the generalized concentrability coefficients:_

\\[\\mathfrak{C}_{\\text{dyn}}(\\pi)\\coloneqq\\max_{f,f^{\\prime}\\in\\mathcal{F}}\\frac{\\|f-\\mathcal{T}^{\\pi}f^{\\prime}\\|_{\\rho^{\\pi}_{\\text{dyn}}}^{2}}{\\|f-\\mathcal{T}^{\\pi}f^{\\prime}\\|_{\\mu_{\\text{dyn}}}^{2}}\\quad\\text{ and }\\quad\\mathfrak{C}_{\\text{goal}}(\\pi)\\coloneqq\\max_{g\\in\\mathcal{G}}\\frac{\\|g-R\\|_{\\rho^{\\pi}_{\\text{goal}}}^{2}}{\\|g-R\\|_{\\mu_{\\text{goal}}}^{2}} \\tag{2}\\]

_where \\(\\|h\\|_{\\mu}^{2}\\coloneqq\\mathbb{E}_{x\\sim\\mu}[h(x)^{2}]\\), \\(\\rho^{\\pi}_{\\text{dyn}}(x,a)=\\mathbb{E}_{\\pi,P}\\left[\\sum_{t=0}^{T-1}\\gamma^{t}\\mathbb{1}(x_{t}=x,a_{t}=a)\\right]\\), \\(\\rho^{\\pi}_{\\text{goal}}(x)=\\mathbb{E}_{\\pi,P}\\left[\\gamma^{T}\\mathbb{1}(x_{T}=x)\\right]\\), and \\(T\\) is the first time the agent enters the goal set._

The concentrability coefficient is a generalized notion of density ratio: it describes how much the (unnormalized) distribution in the numerator is "covered" by that in the denominator, in terms of the generalization ability of the function approximators [37]. If \\(\\mathfrak{C}_{\\text{dyn}}(\\pi),\\mathfrak{C}_{\\text{goal}}(\\pi)\\) are finite given \\(\\mu_{\\text{goal}},\\mu_{\\text{dyn}},\\mathcal{F},\\mathcal{G}\\) and \\(\\pi\\), then we say \\(\\pi\\) is covered by the data distributions, and conceptually offline RL can learn a policy no worse than \\(\\pi\\). We now state our theoretical result, which is proven by a careful reformulation of the Bellman equation of the action-augmented MDP and by constructing augmented value function and policy classes in the analysis using the CGO structure (see Appendix B).

**Theorem 5.4**.: _Let \\(\\pi^{\\dagger}\\) denote the learned policy of CODA + PSPI with datasets \\(D_{\\text{dyn}}\\) and \\(D_{\\text{goal}}\\), using value function classes \\(\\mathcal{F}=\\{\\mathcal{X}\\times\\mathcal{A}\\to[0,1]\\}\\) and \\(\\mathcal{G}=\\{\\mathcal{X}\\to[0,1]\\}\\).
Under Assumptions 5.1 and 5.2, with probability \\(1-\\delta\\), it holds for any \\(\\pi\\in\\Pi\\) that_

\\[J(\\pi)-J(\\pi^{\\dagger})\\lesssim\\mathfrak{C}_{\\text{dyn}}(\\pi)\\left(\\sqrt{\\frac{\\log(|\\mathcal{F}||\\mathcal{G}||\\Pi|/\\delta)}{|D_{\\text{dyn}}|}}+\\sqrt{\\frac{\\log(|\\mathcal{F}||\\mathcal{G}||\\Pi|/\\delta)}{|D_{\\text{goal}}|}}\\right)+\\mathfrak{C}_{\\text{goal}}(\\pi)\\sqrt{\\frac{\\log(|\\mathcal{G}|/\\delta)}{|D_{\\text{goal}}|}}\\]

_where \\(\\mathfrak{C}_{\\text{dyn}}(\\pi)\\) and \\(\\mathfrak{C}_{\\text{goal}}(\\pi)\\) are the concentrability coefficients of Definition 5.34._

Footnote 4: We state a more general result for non-finite function classes in Theorem B.11 in the appendix.

**Interpretation.** We can interpret Theorem 5.4 as follows. The statistical errors in value function estimation decrease as we obtain more data from \\(\\mu_{\\text{goal}}\\) and \\(\\mu_{\\text{dyn}}\\); for any comparator \\(\\pi\\) with finite coefficients \\(\\mathfrak{C}_{\\text{dyn}}(\\pi),\\mathfrak{C}_{\\text{goal}}(\\pi)\\), the regret upper bound decreases accordingly. Take \\(\\pi=\\pi^{*}\\) as an example. For the coefficients \\(\\mathfrak{C}_{\\text{dyn}}(\\pi^{*}),\\mathfrak{C}_{\\text{goal}}(\\pi^{*})\\) to be finite, we need 1) the state-action distribution of the dynamics data to "cover" the trajectories generated by \\(\\pi^{*}\\), which includes the case of stitching, and 2) the support of \\(\\mu_{\\text{goal}}\\) to "cover" the goals \\(\\pi^{*}\\) would reach. We note that these conditions are _not_ stronger than the general requirements of offline RL: the "coverage" above is measured by the generalization ability of \\(f\\) and \\(g\\) respectively, as in Definition 5.3; e.g., if \\(f(x_{1})\\) and \\(f(x_{2})\\) are similar for \\(x_{1}\\neq x_{2}\\), then \\(x_{2}\\) is within the coverage of \\(\\mu\\) so long as \\(x_{1}\\) can be generated by \\(\\mu\\), in terms of the generalization ability of \\(f\\). Such a coverage condition is weaker than coverage conditions based on density ratios. Moreover, Theorem 5.4 applies simultaneously to all \\(\\pi\\in\\Pi\\), not just \\(\\pi^{*}\\). Therefore, as long as the above "coverage" conditions hold for some policy \\(\\pi\\) that can reach the goal set, the agent can learn to reach the goal set. Thus, we show that CODA with PSPI can provably solve CGO without additional non-goal samples, i.e., CGO is learnable with positive data only.

**Remark 5.5**.: _Here we only require function approximation assumptions stated in the original MDP, without relying on functions defined on the fictitious action or completeness assumptions based on the fictitious transition. As a result, our theoretical results are comparable with those of other approaches._

**Remark 5.6**.: _MAHALO [21] is a SOTA offline RL algorithm that can provably learn from unlabeled data. One version of MAHALO is realized on top of PSPI in theory; however, their theoretical result (Theorem D.1) requires a stronger version of concentrability to be small, roughly \\(\\max_{g\\in\\mathcal{G}}\\|g-R\\|_{\\rho^{\\pi}}^{2}/\\|g-R\\|_{\\mu}^{2}\\), in which the reward error must be controlled on the full occupancy of \\(\\pi\\) rather than only at goal-reaching states. In other words, it needs negative examples of (context, non-goal state) tuples for learning._

**Intuition for other base algorithms.** Notice that PSPI is just one instantiation.
Conceptually, the coverage conditions above also make sense for other pessimistic offline RL instantiations based on the Bellman equation (like IQL): the key ideas used in the above analysis are the regret relationship between the original MDP and the action-augmented MDP (Theorem 4.1), which is algorithm-agnostic, and the fact that pessimism together with the Bellman equation can effectively propagate information from the context-goal dataset (without the need for negative data). However, complete theoretical analyses of CODA for all different offline RL algorithms are beyond the scope of this paper.

## 6 Experiments

In this section, we present the experimental setup and results for CODA. Code is publicly available at: [https://github.com/yingfan-bot/coda](https://github.com/yingfan-bot/coda). For a comprehensive empirical study, we first introduce the diverse spectrum of practical CGO setups.

**Diverse spectrum of practical CGO problems.** The main challenge of the CGO problem compared with traditional goal-conditioned RL is the potential complexity of the context-goal relationship. Therefore, to probe the efficacy of different methods, we construct three levels of _increasing difficulty_ as shown in Figure 2: (a) has a complexity similar to a single-task problem, where the context does not play a significant role; (b) requires a context-dependent policy but has only finitely many contexts; (c) has infinitely many continuous contexts, requiring a context-dependent policy and the ability to generalize to contexts outside the offline dataset. We aim to answer the following questions: 1) Does our method work under the data assumptions in Section 3, with different levels of context-goal complexity? 2) Is there any empirical benefit from using CODA, compared with baseline methods including reward learning and goal prediction?

Figure 2: Illustration of the context-goal relationship with increasing complexity (each red boundary defines a goal set with its center location as context). (a) Contexts and goal sets are very similar, so the problem could be approximately solved by a context-agnostic policy. (b) Contexts are finite, and different contexts map to distinct goal sets, which requires context-dependent policies. (c) Contexts are continuous and infinite. The context-goal mapping is neither one-to-many nor many-to-one, creating a CGO problem with full complexity.

### 6.1 Environments and Datasets

**Dynamics dataset.** For all experiments, we use the original AntMaze-v2 datasets (3 different mazes and 6 offline datasets) of D4RL [6] as the dynamics datasets \\(D_{\\text{dyn}}\\), removing all rewards and terminals.

**Context-goal dataset.** We construct three levels of context-goal relationships as shown in Figure 2. For each setup, we first define the context set, then sample a fixed set of states from the offline trajectory dataset that satisfies the context-goal relationship, and then randomly _perturb_ the states so that goal examples cannot be directly matched to states in the trajectories given contexts. Notice that this context-goal relationship is only used for dataset construction and is not accessible to the learning algorithm.6 The specific context-goal relationships are discussed in Section 6.3, with construction and evaluation details in Appendix C.2.

Footnote 6: Also note that the state space in AntMaze includes not only the 2D location but also other robot state information (e.g., joint states); we define the context-goal relationship only on the 2D location and ignore the other information.
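A schematic sketch of this construction is given below (NumPy-based; the membership test `satisfies`, the context sampler, and the noise scale are illustrative assumptions rather than the exact procedure of Appendix C.2):

```
import numpy as np

rng = np.random.default_rng(0)

def build_context_goal_dataset(states, contexts, satisfies,
                               n_pairs, noise_scale=0.1):
    """Schematic construction of D_goal = {(c, s): s in G_c}: sample
    states from the offline trajectories that satisfy the context-goal
    relationship, then perturb them so goal examples do not exactly
    match any state appearing in the trajectory data."""
    pairs = []
    while len(pairs) < n_pairs:
        c = contexts[rng.integers(len(contexts))]
        s = states[rng.integers(len(states))]
        if satisfies(s, c):
            pairs.append((c, s + noise_scale * rng.standard_normal(s.shape)))
    return pairs

# Example: contexts are 2D centers and G_c is a ball of radius 1 around
# c, defined on the first two state coordinates only (cf. footnote 6).
states = rng.uniform(0.0, 10.0, size=(5000, 29))
centers = rng.uniform(0.0, 10.0, size=(8, 2))
in_ball = lambda s, c: np.linalg.norm(s[:2] - c) <= 1.0
D_goal = build_context_goal_dataset(states, centers, in_ball, n_pairs=100)
```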
### 6.2 Method and Baselines

For controlled experiments, we use IQL [18] as the common backbone offline RL algorithm for all methods, with the same set of hyperparameters. Our choice of IQL is motivated both by its benchmarked performance on several RL domains and by its structural similarity to PSPI (use of value/policy function classes along with pessimism). Please see Appendix C.1 for hyperparameters. We describe the compared algorithms below.

**CODA.** We apply CODA in Algorithm 1 with IQL as the offline RL algorithm to solve the augmented MDP defined in Section 4.1. More specifically, we set \\(a^{+}\\) to be an extra dimension in the action-space input of the action-value function, and model the policy with the original action space (see the sketch at the end of this subsection). Empirically, we found that equally balancing the samples from \\(\\bar{D}_{\\text{dyn}}\\) and \\(\\bar{D}_{\\text{goal}}\\) generates the best results7. We then apply IQL on this labeled dataset.

Footnote 7: We study the effect of this sampling ratio on CODA’s performance in Table 5 in Appendix C.1.

**Reward prediction.** This family of baselines uses a learned reward model to label the randomly sampled context-transition pairs during training, so we must pre-train a reward model on the context-goal dataset. We use PDS [14] for reward modeling, learning a _pessimistic_ reward function via an ensemble of models trained on the context-goal dataset. We then apply the reward model to label the transitions with contexts, run IQL on the labeled dataset, and obtain a context-dependent policy. Besides PDS, we also test naive reward prediction (RP, which follows the same setup as PDS but without ensembles) and UDS [42]+RP in Section 6.3 (see details in Appendix C.1). Additionally, we report results from training with the oracle reward (marked as "Oracle Reward"), where we provide the oracle reward for any queried context-goal pair, as a reference performance upper bound for reward prediction methods.

**Goal prediction.** We consider another GCRL-based baseline. Since the relationship between contexts and goals is unknown in CGO, we cannot directly apply traditional GCRL methods to CGO problems. Therefore, we adopt a workaround: we learn a conditional generative model as a goal predictor using classifier-free diffusion guidance [13], where contexts serve as the condition and the goal examples are used to train the generative model. We also learn a general goal-conditioned policy on the dynamics-only dataset using HER [1]+IQL. Given a test context, the goal predictor samples a goal, which is then passed as the condition to the policy.
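As referenced in the CODA paragraph above, a minimal sketch of encoding \\(a^{+}\\) as an extra input dimension of the Q-network is shown below (PyTorch; the architecture and all names are illustrative assumptions, not our exact implementation):

```
import torch
import torch.nn as nn

class AugmentedQ(nn.Module):
    """Q(x, a) on the augmented action space: a real action is encoded
    as (a, 0) and the fictitious action a+ as (0, ..., 0, 1). The policy
    itself is modeled on the original action space (cf. Remark 4.3)."""

    def __init__(self, dim_state, dim_context, dim_action, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_state + dim_context + dim_action + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, c, a, is_a_plus):
        # is_a_plus: (batch, 1) indicator of the fictitious action; the
        # real-action input is zeroed out when a+ is taken.
        a = a * (1.0 - is_a_plus)
        return self.net(torch.cat([s, c, a, is_a_plus], dim=-1))

# Toy usage: batch of 4, 29-dim states, 2-dim contexts, 8-dim actions.
q = AugmentedQ(29, 2, 8)
s, c, a = torch.zeros(4, 29), torch.zeros(4, 2), torch.zeros(4, 8)
flag = torch.tensor([[0.0], [0.0], [1.0], [1.0]])  # last two take a+
print(q(s, c, a, flag).shape)  # -> torch.Size([4, 1])
```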
### 6.3 Results

**Original AntMaze: Figure 2(a).** In the original AntMaze, 2D goal locations (contexts) are limited to a small area, as in Figure 2(a). To make it a CGO problem, we make the test context visible to the agent. This setting is approximately a single-task problem. CODA generally achieves better performance than the reward learning and goal prediction methods: comparing the normalized return in each AntMaze environment, our method consistently achieves equivalent or better performance than the baselines (Table 1).8 Moreover, the performance of Goal Prediction is rather poor, which mainly stems from the limited goal area providing too few goal examples to learn from.

Footnote 8: We find umaze is too easy: even if the reward labeling is bad, it still attains a relatively high reward, so we omit it in the other experiments. We also find UDS and RP are not very effective in our data setup, so we omit them in the other experiments as well.

**Four Rooms: Figure 2(b).** We partition the maze into four rooms, as in Figure 2(b), where the discrete room numbers (1, 2, 3, 4) serve as contexts and we uniformly select test contexts. A context-dependent policy is needed, but no generalization to unseen contexts is required in this setup. We show the normalized return (average success rate in percentage) in each modified Four Rooms environment for our method and the baselines in Table 2, where our method consistently outperforms the baselines.

**Random Cells: Figure 2(c).** We use a diverse distribution of contexts, as shown in Figure 2(c), where contexts are randomly sampled from non-wall states. For test contexts, we have two settings: 1) sampling from the training distribution, and 2) sampling from an area far away from the start states. Overall, CODA outperforms the baselines under the setup in Figure 2(c). We show the normalized return (average success rate in percentage) in each modified Random Cells environment in Table 3, which also demonstrates the generalization ability of our method in the context space. CODA also generalizes to a different test context distribution: testing with a distribution shift of the contexts in Table 4, CODA still achieves better overall results than the reward learning and goal prediction baselines.

**Reference to training with oracle reward.** Notice that training with the oracle reward is the skyline performance. From the results, training with the oracle reward does not generally improve much over CODA, though it generally outperforms PDS and Goal Prediction. This is mainly due to the sparsity of positive samples among the randomly sampled context-transition pairs. In contrast, CODA exploits these positive examples directly via our augmentation, which is another advantage of our method over reward prediction baselines.

**Evaluation of the Reward Model.** We also visualize the learned reward models from the reward learning baselines in Appendix C.3: PDS is consistently better at separating the positive and negative data than UDS and naive RP, but it can still fail to fully separate positive and negative examples. Intuitively, our method does not require reward learning thanks to the construction of the augmented MDP, which avoids the extra errors of reward prediction and leads to better performance.

### 6.4 Discussion and Limitations

Our experiments are limited to low-dimensional simulations. Nevertheless, the success of our method with diverse context-goal relationships serves as a first milestone showcasing its effectiveness, and we believe CODA would be useful in real-world settings (e.g., learning visual-language robot policies) for its simplicity and theoretical guarantees.
Potential scaling up by incorporating features from large pretrained models would be an exciting future direction, which could make our method generalizable to the real world.

\\begin{table} \\begin{tabular}{l|c c c c c|c} \\hline \\hline Env/Method & CODA (Ours) & PDS & Goal Prediction & RP & UDS+RP & Oracle Reward \\\\ \\hline umaze & **94.8\\(\\pm\\)1.3** & 93.0\\(\\pm\\)1.3 & 46.4\\(\\pm\\)6.0 & 50.5\\(\\pm\\)2.1 & 54.3\\(\\pm\\)6.3 & 94.4\\(\\pm\\)0.61 \\\\ umaze diverse & **72.8\\(\\pm\\)7.7** & 50.6\\(\\pm\\)7.8 & 42.8\\(\\pm\\)4.4 & **72.8\\(\\pm\\)2.6** & 71.5\\(\\pm\\)4.3 & 76.8\\(\\pm\\)5.44 \\\\ medium play & **75.8\\(\\pm\\)1.9** & 66.8\\(\\pm\\)4.9 & 43.8\\(\\pm\\)4.7 & 0.5\\(\\pm\\)0.3 & 0.3\\(\\pm\\)0.3 & 80.6\\(\\pm\\)1.56 \\\\ medium diverse & **84.5\\(\\pm\\)5.2** & 22.8\\(\\pm\\)2.4 & 28.6\\(\\pm\\)3.9 & 0.5\\(\\pm\\)0.5 & 0.8\\(\\pm\\)0.5 & 72.4\\(\\pm\\)4.26 \\\\ large play & **60.0\\(\\pm\\)7.6** & 39.6\\(\\pm\\)4.9 & 13.0\\(\\pm\\)0.4 & 0\\(\\pm\\)0 & 0\\(\\pm\\)0 & 41.2\\(\\pm\\)3.58 \\\\ large diverse & **36.8\\(\\pm\\)6.9** & 30.0\\(\\pm\\)5.3 & 12.6\\(\\pm\\)2.7 & 0\\(\\pm\\)0 & 0\\(\\pm\\)0 & 34.2\\(\\pm\\)2.59 \\\\ \\hline average & **70.8** & 50.5 & 31.2 & 20.7 & 21.2 & 66.6 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: Average success rate (%) in AntMaze-v2 from all environments.

\\begin{table} \\begin{tabular}{l|c c c|c} \\hline \\hline Env/Method & CODA (Ours) & PDS & Goal Prediction & Oracle Reward \\\\ \\hline medium-play & **78.7\\(\\pm\\)0.9** & 46.0\\(\\pm\\)4.47 & 59.3\\(\\pm\\)2.6 & 77.7\\(\\pm\\)2.0 \\\\ medium-diverse & **83.6\\(\\pm\\)1.9** & 51.3\\(\\pm\\)3.6 & 66.7\\(\\pm\\)2.4 & 87.4\\(\\pm\\)1.2 \\\\ large-play & **65.5\\(\\pm\\)2.5** & 13.9\\(\\pm\\)2.4 & 41.4\\(\\pm\\)3.6 & 67.2\\(\\pm\\)2.7 \\\\ large-diverse & **72.2\\(\\pm\\)2.9** & 11.1\\(\\pm\\)3.8 & 42.0\\(\\pm\\)3.0 & 69.6\\(\\pm\\)3.1 \\\\ \\hline average & **75.0** & 30.6 & 52.4 & 75.5 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: Average scores from Four Rooms with perturbation. The score for each run is the average success rate (%) of the other three rooms.

\\begin{table} \\begin{tabular}{l|c c c|c} \\hline \\hline Env/Method & CODA (Ours) & PDS & Goal Prediction & Oracle Reward \\\\ \\hline medium-play & **76.8\\(\\pm\\)6.1** & 52.0\\(\\pm\\)8.8 & 66.7\\(\\pm\\)7.2 & 71.9\\(\\pm\\)0.1 \\\\ medium-diverse & **78.2\\(\\pm\\)6.5** & 60.9\\(\\pm\\)11.3 & 69.7\\(\\pm\\)8.7 & 79.3\\(\\pm\\)6.1 \\\\ large-play & **57.6\\(\\pm\\)12.4** & 50.6\\(\\pm\\)6.4 & 42.4\\(\\pm\\)8.2 & 49.4\\(\\pm\\)9.3 \\\\ large-diverse & 54.7\\(\\pm\\)8.8 & **58.3\\(\\pm\\)9.2** & 44.2\\(\\pm\\)8.1 & 58.2\\(\\pm\\)3.4 \\\\ \\hline average & **66.8** & 55.5 & 55.8 & 64.7 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 3: Average scores from Random Cells. The score for each run is the average success rate (%) of random test contexts from the same training distribution.

## 7 Conclusion

We propose CODA for offline CGO problems, and prove that CODA can learn near-optimal policies without the need for negative labels under natural assumptions. We also validate the efficacy of CODA experimentally, finding that it outperforms reward-learning and goal prediction baselines across various levels of CGO complexity. We believe our method has the potential to generalize to real-world applications by further scaling up.
\\begin{table} \\begin{tabular}{l|c c c|c} \\hline \\hline Env/Method & CODA (Ours) & PDS & Goal Prediction & Oracle Reward \\\\ \\hline medium-play & 67.9\\(\\pm\\)8.2 & 50.1\\(\\pm\\)13.4 & **70.5\\(\\pm\\)1.9** & 67.2\\(\\pm\\)7.2 \\\\ medium-diverse & **72.5\\(\\pm\\)6.5** & 57.5\\(\\pm\\)14.8 & 63.0\\(\\pm\\)7.2 & 68.7\\(\\pm\\)7.9 \\\\ large-play & **60.2\\(\\pm\\)4.8** & 48.1\\(\\pm\\)8.0 & 44.3\\(\\pm\\)4.1 & 59.8\\(\\pm\\)4.4 \\\\ large-diverse & **58.0\\(\\pm\\)5.8** & 44.1\\(\\pm\\)9.9 & 55.4\\(\\pm\\)5.7 & 57.6\\(\\pm\\)7.6 \\\\ \\hline average & **64.7** & 49.9 & 58.3 & 63.3 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 4: Average scores from Random Cells with perturbation. The score for each run is the average success rate (%) of random test contexts with a distribution shift.

## References

* [1] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, P. Abbeel, and W. Zaremba. Hindsight experience replay. In _NeurIPS_, 2017.
* [2] A. Barreto, W. Dabney, R. Munos, J. J. Hunt, T. Schaul, H. van Hasselt, and D. Silver. Successor features for transfer in reinforcement learning. In _NeurIPS_, 2017.
* [3] Y. Chebotar, K. Hausman, Y. Lu, T. Xiao, D. Kalashnikov, J. Varley, A. Irpan, B. Eysenbach, R. C. Julian, C. Finn, et al. Actionable models: unsupervised offline reinforcement learning of robotic skills. In _ICML_, 2021.
* [4] C. Cheng, T. Xie, N. Jiang, and A. Agarwal. Adversarially trained actor critic for offline reinforcement learning. In _ICML_, 2022.
* [5] C. D'Eramo, D. Tateo, A. Bonarini, M. Restelli, and J. Peters. Sharing knowledge in multi-task deep reinforcement learning. In _ICLR_, 2020.
* [6] J. Fu, A. Kumar, O. Nachum, G. Tucker, and S. Levine. D4RL: datasets for deep data-driven reinforcement learning. _arXiv preprint arXiv:2004.07219_, 2020.
* [7] S. Fujimoto and S. Gu. A minimalist approach to offline reinforcement learning. In _NeurIPS_, 2021.
* [8] S. Fujimoto, D. Meger, and D. Precup. Off-policy deep reinforcement learning without exploration. In _ICML_, 2019.
* [9] M. Hahn, D. Singh Chaplot, S. Tulsiani, M. Mukadam, J. M. Rehg, and A. Gupta. No RL, no simulation: learning to navigate without navigating. In _NeurIPS_, 2021.
* [10] A. Hallak, D. Di Castro, and S. Mannor. Contextual Markov decision processes. _arXiv preprint arXiv:1502.02259_, 2015.
* [11] B. Han, C. Zheng, H. Chan, K. Paster, M. R. Zhang, and J. Ba. Learning domain invariant representations in goal-conditioned block MDPs. In _NeurIPS_, 2021.
* [12] M. Hessel, H. Soyer, L. Espeholt, W. Czarnecki, S. Schmitt, and H. Van Hasselt. Multi-task deep reinforcement learning with PopArt. In _AAAI_, 2019.
* [13] J. Ho and T. Salimans. Classifier-free diffusion guidance. _arXiv preprint arXiv:2207.12598_, 2022.
* [14] H. Hu, Y. Yang, Q. Zhao, and C. Zhang. The provable benefit of unsupervised data sharing for offline reinforcement learning. In _ICLR_, 2023.
* [15] Y. Jin, Z. Yang, and Z. Wang. Is pessimism provably efficient for offline RL? In _ICML_, 2021.
* [16] L. Pack Kaelbling. Learning to achieve goals. In _IJCAI_, 1993.
* [17] D. Kalashnikov, J. Varley, Y. Chebotar, B. Swanson, R. Jonschkowski, C. Finn, S. Levine, and K. Hausman. MT-Opt: continuous multi-task robotic reinforcement learning at scale. _arXiv preprint arXiv:2104.08212_, 2021.
* [18] I. Kostrikov, A. Nair, and S. Levine. Offline reinforcement learning with implicit Q-learning. In _ICLR_, 2021.
* [19] A. Kumar, A. Zhou, G. Tucker, and S. Levine. Conservative Q-learning for offline reinforcement learning. In _NeurIPS_, 2020.
* [20] A. C. Li, L. Pinto, and P. Abbeel. Generalized hindsight for reinforcement learning. In _NeurIPS_, 2020.
* [21] A. Li, B. Boots, and C. Cheng. MAHALO: unifying offline reinforcement learning and imitation learning from observations. In _ICML_, 2023.
* [22] C. Lynch, M. Khansari, T. Xiao, V. Kumar, J. Tompson, S. Levine, and P. Sermanet. Learning latent plans from play. In _CoRL_, 2020.
* [24] P. Mirowski, M. K. Grimes, M. Malinowski, K. M. Hermann, K. Anderson, D. Teplyashin, K. Simonyan, K. Kavukcuoglu, A. Zisserman, and R. Hadsell. Learning to navigate in cities without a map. In _NeurIPS_, 2018.
* [25] D. K. Misra, J. Sung, K. Lee, and A. Saxena. Tell me Dave: context-sensitive grounding of natural language to manipulation instructions. _International Journal of Robotics Research_, 35(1-3):281-300, 2016.
* [26] A. Nair, V. Pong, M. Dalal, S. Bahl, S. Lin, and S. Levine. Visual reinforcement learning with imagined goals. In _NeurIPS_, 2018.
* [27] A. Nair, S. Bahl, A. Khazatsky, V. Pong, G. Berseth, and S. Levine. Contextual imagined goals for self-supervised robotic learning. In _CoRL_, 2020.
* [28] S. Nair and C. Finn. Hierarchical foresight: self-supervised learning of long-horizon tasks via visual subgoal generation. In _ICLR_, 2019.
* [29] T. Schaul, D. Horgan, K. Gregor, and D. Silver. Universal value function approximators. In _ICML_, 2015.
* [30] A. Singh, A. Yu, J. Yang, J. Zhang, A. Kumar, and S. Levine. COG: connecting new skills to past experience with offline reinforcement learning. _arXiv preprint arXiv:2010.14500_, 2020.
* [31] S. Sodhani, A. Zhang, and J. Pineau. Multi-task reinforcement learning with context-based representations. In _ICML_, 2021.
* [32] P. Sun, H. Kretzschmar, X. Dotiwalla, A. Chouard, V. Patnaik, P. Tsui, J. Guo, Y. Zhou, Y. Chai, B. Caine, V. Vasudevan, W. Han, J. Ngiam, H. Zhao, A. Timofeev, S. Ettinger, M. Krivokon, A. Gao, A. Joshi, Y. Zhang, J. Shlens, Z. Chen, and D. Anguelov. Scalability in perception for autonomous driving: Waymo open dataset. In _CVPR_, 2020.
* [33] Y. W. Teh, V. Bapst, W. M. Czarnecki, J. Quan, J. Kirkpatrick, R. Hadsell, N. Heess, and R. Pascanu. Distral: robust multitask reinforcement learning. In _NeurIPS_, 2017.
* [34] H. R. Walke, K. Black, T. Z. Zhao, Q. Vuong, C. Zheng, P. Hansen-Estruch, A. W. He, V. Myers, M. J. Kim, M. Du, A. Lee, K. Fang, C. Finn, and S. Levine. BridgeData V2: a dataset for robot learning at scale. In _CoRL_, 2023.
* [35] B. Wilson, W. Qi, T. Agarwal, J. Lambert, J. Singh, S. Khandelwal, B. Pan, R. Kumar, A. Hartnett, J. K. Pontes, D. Ramanan, P. Carr, and J. Hays. Argoverse 2: next generation datasets for self-driving perception and forecasting. In _NeurIPS_, 2021.
* [36] Y. Wu, G. Tucker, and O. Nachum. Behavior regularized offline reinforcement learning. _arXiv preprint arXiv:1911.11361_, 2019.
* [37] T. Xie, C. Cheng, N. Jiang, P. Mineiro, and A. Agarwal. Bellman-consistent pessimism for offline reinforcement learning. In _NeurIPS_, 2021.
* [38] R. Yang, L. Yong, X. Ma, H. Hu, C. Zhang, and T. Zhang. What is essential for unseen goal generalization of offline goal-conditioned RL? In _ICML_, 2023.
* [39] A. Yu and R. Mooney. Using both demonstrations and language instructions to efficiently learn robotic tasks. In _ICLR_, 2023.
* [40] T. Yu, G. Thomas, L. Yu, S. Ermon, J. Zou, S. Levine, C. Finn, and T. Ma. MOPO: model-based offline policy optimization. In _NeurIPS_, 2020.
* [41] T. Yu, A. Kumar, Y. Chebotar, K. Hausman, S. Levine, and C. Finn. Conservative data sharing for multi-task offline reinforcement learning. In _NeurIPS_, 2021.
* [42] T. Yu, A. Kumar, Y. Chebotar, K. Hausman, C. Finn, and S. Levine. How to leverage unlabeled data in offline reinforcement learning. In _ICML_, 2022.
* [43] Z. Zhu, K. Lin, A. K. Jain, and J. Zhou. Transfer learning in deep reinforcement learning: a survey. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 2023.

## Appendix A Additional Related Work

**Data-sharing in RL.** Sharing information across multiple tasks is a promising approach to accelerate learning and to identify transferable features across tasks. In RL, both multi-task and transfer learning settings have been studied under varying assumptions on the shared properties and structures of different tasks [43, 33, 2, 5]. For data sharing in CGO, we adopt the contextual MDP formulation [10, 31], which enables knowledge transfer via high-level contextual cues. Prior work on offline RL has also shown the utility of sharing data across tasks: hindsight relabeling and manual skill grouping [17], inverse RL [20], sharing Q-value estimates [41, 30], and reward labeling [42, 14].

## Appendix B Theoretical Analysis

In this section, we provide a detailed analysis of the instantiation of CODA using PSPI [37]. We follow the same notation for the value functions, augmented MDP, and extended function classes as stated in Section 3 and Section 4 of the main text.

### B.1 Equivalence Relations between the Original and Augmented MDPs

We begin by showing that the optimal policy and any value function in the augmented MDP can be expressed using their analogues in the original MDP. With the augmented MDP defined as \\(\\overline{\\mathcal{M}}:=(\\bar{\\mathcal{X}},\\bar{\\mathcal{A}},\\bar{R},\\bar{P},\\gamma)\\) in Section 4.1, we first define the value function in the augmented MDP. For a policy \\(\\bar{\\pi}:\\bar{\\mathcal{X}}\\to\\Delta(\\bar{\\mathcal{A}})\\), we define the Q function for the augmented MDP as

\\[\\bar{Q}^{\\bar{\\pi}}(x,a)\\coloneqq\\mathbb{E}_{\\bar{\\pi},\\bar{P}}\\left[\\sum_{t=0}^{\\infty}\\gamma^{t}\\bar{R}(x_{t},a_{t})|x_{0}=x,a_{0}=a\\right]\\]

Notice that we do not have a reaching-time random variable \\(T\\) in this definition; instead the agent enters the absorbing state \\(s^{+}\\) after taking \\(a^{+}\\) in the augmented MDP. We similarly define \\(\\bar{V}^{\\bar{\\pi}}(x)\\coloneqq\\bar{Q}^{\\bar{\\pi}}(x,\\bar{\\pi})\\).

**Remark B.1**.: _Let \\(\\bar{Q}^{\\pi}_{R}\\) be the extension of \\(Q^{\\pi}\\) based on \\(R\\).
We have, for \\(x\\notin G\\), \\(\\bar{Q}^{\\pi}_{R}(x,a)=\\bar{Q}^{\\bar{\\pi}}(x,a)\\) for all \\(a\\in\\bar{\\mathcal{A}}\\), and for \\(x\\in G\\), \\(\\bar{Q}^{\\pi}_{R}(x,a)=\\bar{Q}^{\\bar{\\pi}}(x,a^{+})=1\\) for all \\(a\\in\\bar{\\mathcal{A}}\\)._

By the construction of the augmented MDP, the following is immediate.

**Lemma B.2**.: _Given \\(\\pi:\\mathcal{X}\\to\\Delta(\\mathcal{A})\\), let \\(\\bar{\\pi}\\) be its extension. For any \\(h:\\mathcal{X}\\times\\mathcal{A}\\to\\mathbb{R}\\), it holds that_

\\[\\mathbb{E}_{\\pi,P}\\left[\\sum_{t=0}^{T}\\gamma^{t}h(x_{t},a_{t})\\right]=\\mathbb{E}_{\\bar{\\pi},\\bar{P}}\\left[\\sum_{t=0}^{\\infty}\\gamma^{t}\\tilde{h}^{\\pi}(x_{t},a_{t})|x_{0}\\notin\\mathcal{X}^{+}\\right]\\]

_where \\(T\\) is the goal-reaching time (random variable) and we define \\(\\tilde{h}^{\\pi}(x,a)=h(x,a)\\) for \\(a\\in\\mathcal{A}\\) and \\(\\tilde{h}^{\\pi}(x,a^{+})=h(x,\\pi)\\)._

We can now relate the value functions between the two MDPs.

**Proposition B.3**.: _For a policy \\(\\pi:\\mathcal{X}\\to\\Delta(\\mathcal{A})\\), let \\(\\bar{\\pi}\\) be its extension (defined above). We have for all \\(x\\in\\mathcal{X}\\), \\(a\\in\\mathcal{A}\\),_

\\[Q^{\\pi}(x,a) \\geq\\bar{Q}^{\\bar{\\pi}}(x,a)\\]
\\[V^{\\pi}(x) =\\bar{V}^{\\bar{\\pi}}(x)\\]

_Conversely, for a policy \\(\\xi:\\bar{\\mathcal{X}}\\to\\Delta(\\bar{\\mathcal{A}})\\), define its restriction \\(\\underline{\\xi}\\) on \\(\\mathcal{X}\\) and \\(\\mathcal{A}\\) by moving the probability that \\(\\xi\\) places on \\(a^{+}\\) to the uniform distribution over \\(\\mathcal{A}\\). Then we have for all \\(x\\in\\mathcal{X}\\), \\(a\\in\\mathcal{A}\\),_

\\[Q^{\\underline{\\xi}}(x,a) \\geq\\bar{Q}^{\\xi}(x,a)\\]
\\[V^{\\underline{\\xi}}(x) \\geq\\bar{V}^{\\xi}(x)\\]

Proof.: The first direction follows from Lemma B.2. For the latter, whenever \\(\\xi\\) takes \\(a^{+}\\) at some \\(x\\notin G\\), it obtains \\(\\bar{V}^{\\xi}(x)=0\\) from that point on, whereas \\(V^{\\underline{\\xi}}(x)\\geq 0\\) since there is no negative reward in the original MDP. A telescoping argument then yields the second claim.

By this proposition, we know the extension of \\(\\pi^{*}\\) (i.e., \\(\\bar{\\pi}^{*}\\)) is also optimal for the augmented MDP, and \\(V^{*}(x)=\\bar{V}^{*}(x)\\) for \\(x\\in\\mathcal{X}\\). Furthermore, we have a reduction: we can solve for the optimal policy in the original MDP by solving the augmented MDP, since

\\[V^{*}(d_{0})-V^{\\underline{\\xi}}(d_{0})\\leq\\bar{V}^{*}(d_{0})-\\bar{V}^{\\xi}(d_{0})\\]

for all \\(\\xi:\\bar{\\mathcal{X}}\\to\\Delta(\\bar{\\mathcal{A}})\\). In particular,

\\[\\text{Regret}(\\pi)\\coloneqq V^{*}(d_{0})-V^{\\pi}(d_{0})=\\bar{V}^{*}(d_{0})-\\bar{V}^{\\bar{\\pi}}(d_{0})\\eqqcolon\\overline{\\text{Regret}}(\\bar{\\pi}) \\tag{3}\\]

Since the augmented MDP replaces the random reaching-time construction with an absorbing-state version, the Q function \\(\\bar{Q}^{\\bar{\\pi}}\\) of the extended policy \\(\\bar{\\pi}\\) satisfies the Bellman equation

\\[\\bar{Q}^{\\bar{\\pi}}(x,a)=\\bar{R}(x,a)+\\gamma\\mathbb{E}_{x^{\\prime}\\sim\\bar{P}(\\cdot|x,a)}[\\bar{Q}^{\\bar{\\pi}}(x^{\\prime},\\bar{\\pi})]\\eqqcolon\\bar{\\mathcal{T}}^{\\bar{\\pi}}\\bar{Q}^{\\bar{\\pi}}(x,a) \\tag{4}\\]

For \\(x\\in\\mathcal{X}\\) and \\(a\\in\\mathcal{A}\\), we show how the above equation can be rewritten in terms of \\(Q^{\\pi}\\) and \\(R\\).
**Proposition B.4**.: _For \\(x\\in\\mathcal{X}\\) and \\(a\\in\\mathcal{A}\\),_

\\[\\bar{Q}^{\\bar{\\pi}}(x,a)=0+\\gamma\\mathbb{E}_{x^{\\prime}\\sim\\bar{P}(\\cdot|x,a)}[\\max(R(x^{\\prime}),Q^{\\pi}(x^{\\prime},\\pi))]\\]

_For \\(a=a^{+}\\), \\(\\bar{Q}^{\\bar{\\pi}}(x,a^{+})=\\bar{R}(x,a^{+})=R(x)\\). For \\(x\\in\\mathcal{X}^{+}\\), \\(\\bar{Q}^{\\bar{\\pi}}(x,a)=0\\)._

Proof.: The proof follows from Lemma B.5 and the definition of \\(\\bar{P}\\).

**Lemma B.5**.: _For \\(x\\in\\mathcal{X}\\), \\(\\bar{Q}^{\\bar{\\pi}}(x,\\bar{\\pi})=\\max(R(x),Q^{\\pi}(x,\\pi))\\)_

Proof.: For \\(x\\in\\mathcal{X}\\),

\\[\\bar{Q}^{\\bar{\\pi}}(x,\\bar{\\pi})=\\begin{cases}\\bar{Q}^{\\bar{\\pi}}(x,a^{+}),&\\text{if }x\\in G\\\\ \\bar{Q}^{\\bar{\\pi}}(x,\\pi),&\\text{otherwise}\\end{cases}\\qquad\\text{(by the definition of }\\bar{\\pi}\\text{)}\\]
\\[=\\begin{cases}\\bar{Q}^{\\bar{\\pi}}(x,a^{+}),&\\text{if }x\\in G\\\\ Q^{\\pi}(x,\\pi),&\\text{otherwise}\\end{cases}\\qquad\\text{(by Proposition B.3)}\\]
\\[=\\begin{cases}\\bar{R}(x,a^{+}),&\\text{if }x\\in G\\\\ Q^{\\pi}(x,\\pi),&\\text{otherwise}\\end{cases}\\qquad\\text{(definition of the augmented MDP)}\\]
\\[=\\begin{cases}R(x),&\\text{if }x\\in G\\\\ Q^{\\pi}(x,\\pi),&\\text{otherwise}\\end{cases}=\\max(R(x),Q^{\\pi}(x,\\pi))\\]

where in the last step we use \\(R(x)=1\\) for \\(x\\in G\\), \\(R(x)=0\\) otherwise, and \\(Q^{\\pi}(x,\\pi)\\in[0,1]\\).

### B.2 Function Approximator Assumptions

In Theorem 5.4, we assume access to a policy class \\(\\Pi=\\{\\pi:\\mathcal{X}\\to\\Delta(\\mathcal{A})\\}\\). We also assume access to a function class \\(\\mathcal{F}=\\{f:\\mathcal{X}\\times\\mathcal{A}\\to[0,1]\\}\\) and a function class \\(\\mathcal{G}=\\{g:\\mathcal{X}\\to[0,1]\\}\\). We can think of them as approximators for the Q function and the reward function of the original MDP. For an action-value function \\(f:\\mathcal{X}\\times\\mathcal{A}\\to[0,1]\\), define its extension:

\\[\\bar{f}_{g}(x,a)=\\begin{cases}g(x),&a=a^{+}\\text{ and }x\\notin\\mathcal{X}^{+}\\\\ 0,&x\\in\\mathcal{X}^{+}\\\\ f(x,a),&\\text{otherwise}.\\end{cases} \\tag{5}\\]

The extension of \\(f\\) is based on a state value function \\(g:\\mathcal{X}\\to[0,1]\\), which determines the action value of \\(x\\) only at \\(a^{+}\\). One can also view \\(g(x)\\) as a goal indicator: after taking \\(a^{+}\\) the agent always transits to the zero-reward absorbing state \\(s^{+}\\), so \\(g(x)=\\bar{R}(x,a^{+})\\), which is the indicator of whether \\(s\\in G_{c}\\).
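The extension in equation 5 amounts to a simple case split; a minimal sketch under the same illustrative conventions as in Section 4 (`A_PLUS`/`S_PLUS` are assumed stand-ins for \\(a^{+}\\)/\\(s^{+}\\)):

```
def f_bar(f, g, x, a, A_PLUS="a+", S_PLUS="s+"):
    """Extension \bar{f}_g of a Q-function f via a reward model g
    (eq. 5): value 0 at the absorbing state, g(x) at the fictitious
    action, and f(x, a) everywhere else."""
    s, c = x
    if s == S_PLUS:
        return 0.0
    if a == A_PLUS:
        return g(x)
    return f(x, a)
```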
Recall the zero-reward Bellman backup operator \\(\\mathcal{T}^{\\pi}\\) with respect to \\(P(s^{\\prime}|s,a)\\) as defined in Assumption 5.2:

\\[\\mathcal{T}^{\\pi}f(x,a)\\coloneqq\\gamma\\mathbb{E}_{x^{\\prime}\\sim P_{0}(\\cdot|x,a)}[f(x^{\\prime},\\pi)]\\]

where \\(P_{0}(x^{\\prime}|x,a)\\coloneqq P(s^{\\prime}|s,a)\\mathbb{1}(c^{\\prime}=c)\\). Note this definition differs from the one with the absorbing state \\(s^{+}\\) in Section 3. Using this modified backup operator, we can show that the following realizability property holds for the augmented MDP:

**Proposition B.6** (Realizability).: _By Assumption 5.1 and Assumption 5.2, there are \\(f\\in\\mathcal{F}\\) and \\(g\\in\\mathcal{G}\\) such that \\(\\bar{Q}^{\\bar{\\pi}}=\\bar{f}_{g}\\)._

Proof.: By Assumption 5.2, there is \\(h\\in\\mathcal{F}\\) such that \\(h(x,\\pi)=\\max(R(x),Q^{\\pi}(x,\\pi))\\). By Proposition B.4, we have for \\(x\\in\\mathcal{X}\\) and \\(a\\neq a^{+}\\),

\\[\\bar{Q}^{\\bar{\\pi}}(x,a)=0+\\gamma\\mathbb{E}_{x^{\\prime}\\sim\\bar{P}(\\cdot|x,a)}[\\max(R(x^{\\prime}),Q^{\\pi}(x^{\\prime},\\pi))]=0+\\gamma\\mathbb{E}_{x^{\\prime}\\sim P_{0}(\\cdot|x,a)}[h(x^{\\prime},\\pi)]=\\mathcal{T}^{\\pi}h\\in\\mathcal{F}\\]

For \\(a=a^{+}\\), we have \\(\\bar{Q}^{\\bar{\\pi}}(x,a^{+})=\\bar{R}(x,a^{+})=R(x)\\in\\mathcal{G}\\). Finally, \\(\\bar{Q}^{\\bar{\\pi}}(x^{+},a)=0\\) for \\(x^{+}\\in\\mathcal{X}^{+}\\). Therefore, \\(\\bar{Q}^{\\bar{\\pi}}=\\bar{f}_{g}\\) for some \\(f\\in\\mathcal{F}\\) and \\(g\\in\\mathcal{G}\\).

### B.3 CODA+PSPI Algorithm

In this section, we describe the instantiation of PSPI with CODA in detail, along with the necessary notation. The main theoretical result and its proof are then given in Section B.4. As discussed in Section 5, our algorithm is based on a reduction which turns the offline CGO problem into a standard offline RL problem in the augmented MDP. To this end, we construct the augmented datasets \\(\\bar{D}_{\\text{dyn}}\\) and \\(\\bar{D}_{\\text{goal}}\\) of Algorithm 1 as follows:

\\[\\bar{D}_{\\text{dyn}} =\\{(x_{n},a_{n},r_{n},x_{n}^{\\prime})|r_{n}=0,x_{n}=(s_{i},c_{j}),x_{n}^{\\prime}=(s_{i}^{\\prime},c_{j}),a_{n}=a_{i},(s_{i},a_{i},s_{i}^{\\prime})\\in D_{\\text{dyn}},(\\cdot,c_{j})\\in D_{\\text{goal}}\\}\\]
\\[\\bar{D}_{\\text{goal}} =\\{(x_{n},a^{+},r_{n},x_{n}^{+})|r_{n}=1,x_{n}=(s_{n},c_{n}),x_{n}^{+}=(s^{+},c_{n}),(s_{n},c_{n})\\in D_{\\text{goal}}\\}\\]

With this construction, we have \\(\\bar{D}_{\\text{dyn}}\\sim\\mu_{\\text{dyn}}(s,a,s^{\\prime})\\mu_{\\text{goal}}(c)\\) and \\(\\bar{D}_{\\text{goal}}\\sim\\mu_{\\text{goal}}(c,s)\\mathbb{1}(a=a^{+})\\mathbb{1}(s^{\\prime}=s^{+})\\). We use the notation \\(\\bar{\\mu}_{\\text{dyn}}(x,a,x^{\\prime})=\\mu_{\\text{dyn}}(s,a,s^{\\prime})\\mu_{\\text{goal}}(c)\\) and \\(\\bar{\\mu}_{\\text{goal}}(x,a,x^{\\prime})=\\mu_{\\text{goal}}(c,s)\\mathbb{1}(a=a^{+})\\mathbb{1}(s^{\\prime}=s^{+})\\). We will also write \\(x_{ij}\\equiv(s_{i},c_{j})\\), \\(x_{ij}^{\\prime}\\equiv(s_{i}^{\\prime},c_{j})\\) in the above construction. These two datasets have the standard tuple format, so we can run offline RL on \\(\\bar{D}_{\\text{dyn}}\\bigcup\\bar{D}_{\\text{goal}}\\). Also, note that \\(|\\bar{D}_{\\text{dyn}}|=|D_{\\text{dyn}}||D_{\\text{goal}}|\\) and \\(|\\bar{D}_{\\text{goal}}|=|D_{\\text{goal}}|\\).

**PSPI.** We consider the information-theoretic version of PSPI [37], which can be summarized as follows: for an MDP \\((\\mathcal{X},\\mathcal{A},R,P,\\gamma)\\), given a tuple dataset \\(D=\\{(x,a,r,x^{\\prime})\\}\\), a policy class \\(\\Pi\\), and a value class \\(\\mathcal{F}\\), it finds the policy by solving the two-player game:

\\[\\max_{\\pi\\in\\Pi}\\min_{f\\in\\mathcal{F}}\\quad f(d_{0},\\pi)\\qquad\\text{s.t.}\\qquad\\ell(f,f;\\pi,D)-\\min_{f^{\\prime}\\in\\mathcal{F}}\\ell(f^{\\prime},f;\\pi,D)\\leq\\epsilon_{b} \\tag{6}\\]

where \\(f(d_{0},\\pi)=\\mathbb{E}_{x_{0}\\sim d_{0}}[f(x_{0},\\pi)]\\) and \\(\\ell(f,f^{\\prime};\\pi,D)\\coloneqq\\frac{1}{|D|}\\sum_{(x,a,r,x^{\\prime})\\in D}(f(x,a)-r-f^{\\prime}(x^{\\prime},\\pi))^{2}\\). The term \\(\\ell(f,f;\\pi,D)-\\min_{f^{\\prime}}\\ell(f^{\\prime},f;\\pi,D)\\) in the constraint is an empirical estimate of the Bellman error of \\(f\\) with respect to \\(\\pi\\) on the data distribution \\(\\mu\\), i.e., \\(\\mathbb{E}_{x,a\\sim\\mu}[(f(x,a)-\\mathcal{T}^{\\pi}f(x,a))^{2}]\\).
It constrains the Bellman error to be small, since \\(\\mathbb{E}_{x,a\\sim\\mu}[(Q^{\\pi}(x,a)-\\mathcal{T}^{\\pi}Q^{\\pi}(x,a))^{2}]=0\\).

**CODA+PSPI.** Below we show how to run PSPI to solve the augmented MDP with the offline dataset \\(\\bar{D}_{\\text{dyn}}\\bigcup\\bar{D}_{\\text{goal}}\\). To this end, we extend the policy class from \\(\\Pi\\) to \\(\\bar{\\Pi}\\), and the value class from \\(\\mathcal{F}\\) to \\(\\bar{\\mathcal{F}}_{\\mathcal{G}}\\) using the function class \\(\\mathcal{G}\\), based on the extensions defined in Section 4.1 and equation 5. A natural attempt is to implement equation 6 with the extended policy and value classes \\(\\bar{\\Pi}\\) and \\(\\bar{\\mathcal{F}}_{\\mathcal{G}}\\) and \\(\\bar{D}=\\bar{D}_{\\text{dyn}}\\bigcup\\bar{D}_{\\text{goal}}\\). This leads to the two-player game:

\\[\\max_{\\pi\\in\\Pi}\\min_{\\bar{f}_{g}\\in\\bar{\\mathcal{F}}_{\\mathcal{G}}}\\quad\\bar{f}_{g}(d_{0},\\bar{\\pi})\\qquad\\text{s.t.}\\qquad\\ell(\\bar{f}_{g},\\bar{f}_{g};\\bar{\\pi},\\bar{D})-\\min_{\\bar{f}^{\\prime}_{g^{\\prime}}\\in\\bar{\\mathcal{F}}_{\\mathcal{G}}}\\ell(\\bar{f}^{\\prime}_{g^{\\prime}},\\bar{f}_{g};\\bar{\\pi},\\bar{D})\\leq\\epsilon_{b} \\tag{7}\\]

However, equation 7 is not a well-defined algorithm, because its use of the extended policy \\(\\bar{\\pi}\\) in the constraint requires knowledge of \\(G\\), which is unknown to the agent. Fortunately, equation 7 can be slightly modified so that the implementation does not require knowing \\(G\\). Here we use a property (Proposition B.4) of the Bellman equation of the augmented MDP:

\\[\\bar{Q}^{\\bar{\\pi}}(x,a)=\\bar{R}(x,a)+\\gamma\\mathbb{E}_{x^{\\prime}\\sim\\bar{P}(\\cdot|x,a)}[\\bar{Q}^{\\bar{\\pi}}(x^{\\prime},\\bar{\\pi})]=0+\\gamma\\mathbb{E}_{x^{\\prime}\\sim\\bar{P}(\\cdot|x,a)}[\\max(R(x^{\\prime}),Q^{\\pi}(x^{\\prime},\\pi))]\\]

for \\(x\\in\\mathcal{X}\\) and \\(a\\neq a^{+}\\), and \\(\\bar{Q}^{\\bar{\\pi}}(x,a^{+})=1\\) for \\(x\\in G\\). We can rewrite the squared Bellman error on the two data distributions, \\(\\bar{\\mu}_{\\text{dyn}}\\) and \\(\\bar{\\mu}_{\\text{goal}}\\), using the Bellman backup defined on the augmented MDP (see eq. 4) as below:

\\[\\mathbb{E}_{\\bar{\\mu}_{\\text{dyn}}}[(\\bar{Q}^{\\bar{\\pi}}(x,a)-\\bar{\\mathcal{T}}^{\\bar{\\pi}}\\bar{Q}^{\\bar{\\pi}}(x,a))^{2}]=\\mathbb{E}_{\\bar{\\mu}_{\\text{dyn}}}[(\\bar{Q}^{\\bar{\\pi}}(x,a)-0-\\gamma\\mathbb{E}_{x^{\\prime}\\sim\\bar{P}(\\cdot|x,a)}[\\max(R(x^{\\prime}),Q^{\\pi}(x^{\\prime},\\pi))])^{2}]\\]
\\[\\mathbb{E}_{x,a\\sim\\bar{\\mu}_{\\text{goal}}}[(\\bar{Q}^{\\bar{\\pi}}(x,a)-\\bar{\\mathcal{T}}^{\\bar{\\pi}}\\bar{Q}^{\\bar{\\pi}}(x,a))^{2}]=\\mathbb{E}_{x,a\\sim\\bar{\\mu}_{\\text{goal}}}[(\\bar{Q}^{\\bar{\\pi}}(x,a^{+})-1)^{2}]\\]

We can construct an approximator \\(\\bar{f}_{g}(x,a)\\) for \\(\\bar{Q}^{\\bar{\\pi}}(x,a)\\). Substituting the estimator \\(\\bar{f}_{g}(x,a)\\) for \\(\\bar{Q}^{\\bar{\\pi}}(x,a)\\) in the squared Bellman errors above and approximating them by finite samples, we derive the empirical losses below.
\\[\\ell_{\\text{dyn}}(\\bar{f}_{g},\\bar{f}^{\\prime}_{g^{\\prime}};\\bar{\\pi})\\coloneqq\\frac{1}{|\\bar{D}_{\\text{dyn}}|}\\sum_{(x,a,r,x^{\\prime})\\in\\bar{D}_{\\text{dyn}}}(f(x,a)-\\gamma\\max(g^{\\prime}(x^{\\prime}),f^{\\prime}(x^{\\prime},\\pi)))^{2} \\tag{8}\\]
\\[\\ell_{\\text{goal}}(\\bar{f}_{g})\\coloneqq\\frac{1}{|\\bar{D}_{\\text{goal}}|}\\sum_{(x,a,r,x^{\\prime})\\in\\bar{D}_{\\text{goal}}}(g(x)-1)^{2} \\tag{9}\\]

where \\(\\bar{f}_{g}(x,a)=f(x,a)\\mathbb{1}(a\\neq a^{+})+g(x)\\mathbb{1}(a=a^{+})\\) for \\(x\\notin\\mathcal{X}^{+}\\). Using these losses, we define the two-player game of PSPI for the augmented MDP:

\\[\\max_{\\pi\\in\\Pi}\\min_{\\bar{f}_{g}\\in\\bar{\\mathcal{F}}_{\\mathcal{G}}}\\bar{f}_{g}(d_{0},\\bar{\\pi})\\quad\\text{s.t.}\\quad\\ell_{\\text{dyn}}(\\bar{f}_{g},\\bar{f}_{g};\\bar{\\pi})-\\min_{\\bar{f}^{\\prime}_{g^{\\prime}}\\in\\bar{\\mathcal{F}}_{\\mathcal{G}}}\\ell_{\\text{dyn}}(\\bar{f}^{\\prime}_{g^{\\prime}},\\bar{f}_{g};\\bar{\\pi})\\leq\\epsilon_{\\text{dyn}},\\quad\\ell_{\\text{goal}}(\\bar{f}_{g})\\leq 0 \\tag{10}\\]

Notice that \\(\\bar{f}_{g}(d_{0},\\bar{\\pi})=f(d_{0},\\pi)\\). Therefore, this problem can be solved using the samples without knowing \\(G\\).

### B.4 Analysis of CODA+PSPI

**Covering number.** We first define covering numbers on the function classes \\(\\mathcal{F}\\), \\(\\mathcal{G}\\), and \\(\\Pi\\)9. For \\(\\mathcal{F}\\) and \\(\\mathcal{G}\\), we use the \\(L_{\\infty}\\) metric, and write \\(\\mathcal{N}_{\\infty}(\\mathcal{F},\\epsilon)\\) and \\(\\mathcal{N}_{\\infty}(\\mathcal{G},\\epsilon)\\) for their \\(\\epsilon\\)-covering numbers. For \\(\\Pi\\), we use the \\(L_{\\infty}\\)-\\(L_{1}\\) metric, i.e., \\(\\|\\pi_{1}-\\pi_{2}\\|_{\\infty,1}\\coloneqq\\sup_{x\\in\\mathcal{X}}\\|\\pi_{1}(\\cdot|x)-\\pi_{2}(\\cdot|x)\\|_{1}\\), and write \\(\\mathcal{N}_{\\infty,1}(\\Pi,\\epsilon)\\) for its \\(\\epsilon\\)-covering number.

Footnote 9: For finite function classes, the resulting performance guarantee depends on \\(|\\mathcal{F}|,|\\mathcal{G}|\\) and \\(|\\Pi|\\) instead of the covering numbers, as stated in Theorem 5.4.

**High-probability events.** In CODA+PSPI (eq. 10), we choose the policy in the class \\(\\Pi\\) with the best _pessimistic_ value function estimate. To show this, we need two high-probability results (we defer their proofs to Section B.4.1). We will use the following notation for the expected values of the empirical losses:

\\[\\ell_{\\bar{\\mu}_{\\text{dyn}}}(\\bar{f}_{g},\\bar{f}^{\\prime}_{g^{\\prime}};\\bar{\\pi})\\coloneqq\\mathbb{E}_{(x,a,x^{\\prime})\\sim\\bar{\\mu}_{\\text{dyn}}}(f(x,a)-\\gamma\\max(g^{\\prime}(x^{\\prime}),f^{\\prime}(x^{\\prime},\\pi)))^{2}\\]
\\[\\ell_{\\bar{\\mu}_{\\text{goal}}}(\\bar{f}_{g})\\coloneqq\\mathbb{E}_{(x,a^{+},x^{+})\\sim\\bar{\\mu}_{\\text{goal}}}(g(x)-1)^{2}\\]

First, we show that for any policy \\(\\pi\\in\\Pi\\), the true value function \\(\\bar{Q}^{\\bar{\\pi}}\\) satisfies the two empirical constraints specified in equation 10.
**Lemma B.7**.: _With probability at least \\(1-\\delta\\), it holds for all \\(\\pi\\in\\Pi\\) that_

\\[\\ell_{\\text{dyn}}(\\bar{Q}^{\\bar{\\pi}},\\bar{Q}^{\\bar{\\pi}};\\bar{\\pi})-\\min_{\\bar{f}^{\\prime}_{g^{\\prime}}\\in\\bar{\\mathcal{F}}_{\\mathcal{G}}}\\ell_{\\text{dyn}}(\\bar{f}^{\\prime}_{g^{\\prime}},\\bar{Q}^{\\bar{\\pi}};\\bar{\\pi})\\leq O\\left(\\left(\\sqrt{\\frac{\\Box}{|D_{\\text{dyn}}|}}+\\sqrt{\\frac{\\Box}{|D_{\\text{goal}}|}}\\right)^{2}\\right)\\]
\\[\\ell_{\\text{goal}}(\\bar{Q}^{\\bar{\\pi}})\\leq 0\\]

_where10 \\(\\Box\\equiv\\log\\left(\\frac{\\mathcal{N}_{\\infty}(\\mathcal{F},1/(|D_{\\text{goal}}||D_{\\text{dyn}}|))\\,\\mathcal{N}_{\\infty}(\\mathcal{G},1/(|D_{\\text{goal}}||D_{\\text{dyn}}|))\\,\\mathcal{N}_{\\infty,1}(\\Pi,1/(|D_{\\text{goal}}||D_{\\text{dyn}}|))}{\\delta}\\right)\\)._

Footnote 10: Technically, we can remove \\(\\mathcal{N}_{\\infty}\\left(\\mathcal{G},\\frac{1}{|D_{\\text{dyn}}||D_{\\text{goal}}|}\\right)\\) from the upper bound, but we include it here for a cleaner presentation.

We use the notation \\(\\epsilon_{\\text{dyn}}:=\\left(\\sqrt{\\frac{\\Box}{|D_{\\text{dyn}}|}}+\\sqrt{\\frac{\\Box}{|D_{\\text{goal}}|}}\\right)^{2}\\) for the first upper bound in Lemma B.7. Next, we show that for every pair of value function \\(\\bar{f}_{g}\\in\\bar{\\mathcal{F}}_{\\mathcal{G}}\\) and policy \\(\\bar{\\pi}\\in\\bar{\\Pi}\\) satisfying the constraints in equation 10, the empirical estimates bound the population errors with high probability.

**Lemma B.8**.: _For all \\(f\\in\\mathcal{F},g\\in\\mathcal{G}\\) and \\(\\pi\\in\\Pi\\) satisfying_

\\[\\ell_{\\text{dyn}}(\\bar{f}_{g},\\bar{f}_{g};\\bar{\\pi})-\\min_{\\bar{f}^{\\prime}_{g^{\\prime}}\\in\\bar{\\mathcal{F}}_{\\mathcal{G}}}\\ell_{\\text{dyn}}(\\bar{f}^{\\prime}_{g^{\\prime}},\\bar{f}_{g};\\bar{\\pi})\\leq\\epsilon_{\\text{dyn}},\\qquad\\ell_{\\text{goal}}(\\bar{f}_{g})\\leq 0,\\]

_with probability at least \\(1-\\delta\\), we have:_

\\[\\left\\|\\bar{f}_{g}(x,a)-\\gamma\\mathbb{E}_{x^{\\prime}\\sim\\bar{P}(\\cdot|x,a)}\\left[\\max(g(x^{\\prime}),f(x^{\\prime},\\pi))\\right]\\right\\|_{\\bar{\\mu}_{\\text{dyn}}}\\leq O\\left(\\sqrt{\\epsilon_{\\text{dyn}}}\\right)\\]
\\[\\left\\|g(x)-1\\right\\|_{\\bar{\\mu}_{\\text{goal}}}\\leq O\\left(\\sqrt{\\frac{\\log\\frac{\\mathcal{N}_{\\infty}(\\mathcal{G},1/|D_{\\text{goal}}|)}{\\delta}}{|D_{\\text{goal}}|}}\\right)\\eqqcolon\\sqrt{\\epsilon_{\\text{goal}}}\\]

**Pessimistic estimate.** Our next step is to show that the solution of the constrained optimization problem in equation 10 is pessimistic and that the amount of pessimism is bounded.

**Lemma B.9**.: _Given \\(\\pi\\), let \\(\\bar{f}^{\\pi}_{g}\\) denote the minimizer in equation 10. With high probability, \\(\\bar{f}^{\\pi}_{g}(d_{0},\\bar{\\pi})\\leq Q^{\\pi}(d_{0},\\pi)\\)._

Proof.: By Lemma B.7, for any policy \\(\\pi\\in\\Pi\\), \\(\\bar{Q}^{\\pi}_{R}\\) satisfies the constraints in equation 10. Therefore, we have

\\[\\bar{f}^{\\pi}_{g}(d_{0},\\bar{\\pi})\\leq\\bar{Q}^{\\pi}_{R}(d_{0},\\bar{\\pi})=Q^{\\pi}(d_{0},\\pi).\\]

We now bound the amount of underestimation of the minimizer \\(\\bar{f}^{\\pi}_{g}\\).

**Lemma B.10**.: _Suppose \\(x_{0}\\sim d_{0}\\) is not in \\(G\\) almost surely.
For any \\(\\pi\\in\\Pi\\),_ \\[Q^{\\pi}(d_{0},\\pi)-\\bar{f}^{\\pi}_{g}(d_{0},\\bar{\\pi})\\] \\[\\leq\\mathbb{E}_{\\pi}\\left[\\sum_{t=0}^{T-1}\\gamma^{t}\\left(\\gamma \\max(g^{\\pi}(x_{t+1}),f^{\\pi}(x_{t+1},\\pi))-f^{\\pi}(x_{t},a_{t})\\right)+ \\gamma^{T}(R(x_{T})-g^{\\pi}(x_{T}))\\right]\\] _Note that in a trajectory \\(x_{T}\\in G\\) whereas \\(x_{t}\\notin G\\) for \\(t<T\\) by definition of \\(T\\)._Proof.: Let \\(\\bar{f}_{g}^{\\pi}=(f^{\\pi},g^{\\pi})\\) be the empirical minimizer. By the performance difference lemma, we can write \\[(1-\\gamma)Q^{\\pi}(d_{0},\\pi)-(1-\\gamma)\\bar{f}_{g}^{\\pi}(d_{0},\\bar{ \\pi}) =(1-\\gamma)\\bar{Q}^{\\bar{\\pi}}(d_{0},\\bar{\\pi})-(1-\\gamma)\\bar{f}_{g}^{ \\pi}(d_{0},\\bar{\\pi})\\] \\[=\\mathbb{E}_{\\bar{d}^{\\pi}}[\\bar{R}(x,a)+\\gamma\\bar{f}_{g}^{\\pi}( x^{\\prime},\\bar{\\pi})-\\bar{f}_{g}^{\\pi}(x,a)]\\] where, with an abuse of notation, we define \\(\\bar{d}^{\\pi}(x,a,x^{\\prime})\\coloneqq\\bar{d}^{\\pi}(x,a)\\bar{P}(x^{\\prime}|x,a)\\), and \\(\\bar{d}^{\\pi}(x,a)\\) is the average state-action distribution of \\(\\bar{\\pi}\\) in the augmented MDP. In the above expectation, for \\(x\\in G\\), we have \\(a=a^{+}\\) and \\(x^{+}=(s^{+},c)\\) after taking \\(a^{+}\\) at \\(x=(s,c)\\), which leads to \\[\\bar{R}(x,a)+\\gamma\\bar{f}_{g}^{\\pi}(x^{\\prime},\\bar{\\pi})-\\bar{f}_{g}^{\\pi}( x,a)=\\bar{R}(x,a^{+})+\\gamma\\bar{f}_{g}^{\\pi}(x^{+},\\bar{\\pi})-\\bar{f}_{g}^{ \\pi}(x,a^{+})=R(x)-g^{\\pi}(x)\\] For \\(x\\notin G\\) and \\(x\\notin\\mathcal{X}^{+}\\), we have \\(a\\neq a^{+}\\) and \\(x^{\\prime}\\notin\\mathcal{X}^{+}\\); therefore \\[\\bar{R}(x,a)+\\gamma\\bar{f}_{g}^{\\pi}(x^{\\prime},\\bar{\\pi})-\\bar{f }_{g}^{\\pi}(x,a) =R(x)+\\gamma\\bar{f}_{g}^{\\pi}(x^{\\prime},\\bar{\\pi})-f^{\\pi}(x,a)\\] \\[\\leq\\gamma\\max(g^{\\pi}(x^{\\prime}),f^{\\pi}(x^{\\prime},\\pi))-f^{\\pi }(x,a)\\] where the last step is because of the definition of \\(\\bar{f}_{g}^{\\pi}\\). For \\(x\\in\\mathcal{X}^{+}\\), we have \\(x^{\\prime}\\in\\mathcal{X}^{+}\\) and the reward is zero, so \\[\\bar{R}(x,a)+\\gamma\\bar{f}_{g}^{\\pi}(x^{\\prime},\\bar{\\pi})-\\bar{f}_{g}^{\\pi}( x,a)=0\\] Therefore, we can derive \\[(1-\\gamma)Q^{\\pi}(d_{0},\\pi)-(1-\\gamma)\\bar{f}_{g}^{\\pi}(d_{0}, \\bar{\\pi})\\] \\[\\leq\\mathbb{E}_{\\bar{d}^{\\pi}}[\\gamma\\max(g^{\\pi}(x^{\\prime}),f^{ \\pi}(x^{\\prime},\\pi))-f^{\\pi}(x,a)|x\\notin G,x\\notin\\mathcal{X}^{+}]+\\mathbb{E }_{\\bar{d}^{\\pi}}[R(x)-g^{\\pi}(x)|x\\in G]\\] Finally, applying Lemma B.2 gives the final upper bound. Main Result: Performance Bound. Let \\(\\pi^{\\dagger}\\) be the learned policy and let \\(\\bar{f}_{g}^{\\pi^{\\dagger}}\\) be the learned function approximators. For any comparator policy \\(\\pi\\), let \\(\\bar{f}_{g}^{\\pi}=(f^{\\pi},g^{\\pi})\\) be the estimator of \\(\\pi\\) on the data. We have: 
\\[V^{\\pi}(d_{0})-V^{\\pi^{\\dagger}}(d_{0})\\] \\[=Q^{\\pi}(d_{0},\\pi)-Q^{\\pi^{\\dagger}}(d_{0},\\pi^{\\dagger})\\] \\[=Q^{\\pi}(d_{0},\\pi)-\\bar{f}_{g}^{\\pi^{\\dagger}}(d_{0},\\bar{\\pi}^{ \\dagger})+\\bar{f}_{g}^{\\pi^{\\dagger}}(d_{0},\\bar{\\pi}^{\\dagger})-Q^{\\pi^{ \\dagger}}(d_{0},\\pi^{\\dagger})\\] \\[\\leq Q^{\\pi}(d_{0},\\pi)-\\bar{f}_{g}^{\\pi^{\\dagger}}(d_{0},\\bar{\\pi}^{\\dagger})\\] \\[\\leq Q^{\\pi}(d_{0},\\pi)-\\bar{f}_{g}^{\\pi}(d_{0},\\bar{\\pi})\\] \\[\\leq\\mathbb{E}_{\\pi,P}\\left[\\sum_{t=0}^{T-1}\\gamma^{t}(\\gamma \\max(g^{\\pi}(x_{t+1}),f^{\\pi}(x_{t+1},\\pi))-f^{\\pi}(x_{t},a_{t}))+\\gamma^{T}( R(x_{T})-g^{\\pi}(x_{T}))\\right]\\] \\[\\leq\\mathbb{E}_{\\pi,P}\\left[\\sum_{t=0}^{T-1}\\gamma^{t}|\\gamma \\max(g^{\\pi}(x_{t+1}),f^{\\pi}(x_{t+1},\\pi))-f^{\\pi}(x_{t},a_{t})|+\\gamma^{T}| R(x_{T})-g^{\\pi}(x_{T})|\\right]\\] \\[\\leq\\mathfrak{C}_{\\text{dyn}}(\\pi)\\mathbb{E}_{\\bar{\\mu}_{\\text{dyn}}}[ |\\gamma\\max(g^{\\pi}(x^{\\prime}),f^{\\pi}(x^{\\prime},\\pi))-f^{\\pi}(x,a)|]+ \\mathfrak{C}_{\\text{goal}}(\\pi)\\mathbb{E}_{\\bar{\\mu}_{\\text{goal}}}[|g^{\\pi}(x)-1|]\\] \\[\\lesssim\\mathfrak{C}_{\\text{dyn}}(\\pi)\\sqrt{\\epsilon_{\\text{dyn} }}+\\mathfrak{C}_{\\text{goal}}(\\pi)\\sqrt{\\epsilon_{\\text{goal}}}\\] where the first inequality uses the pessimism of \\(\\bar{f}_{g}^{\\pi^{\\dagger}}\\) (Lemma B.9), the second uses that \\(\\pi^{\\dagger}\\) maximizes the pessimistic estimate, the third uses Lemma B.10, and \\(\\mathfrak{C}_{\\text{dyn}}(\\pi)\\) and \\(\\mathfrak{C}_{\\text{goal}}(\\pi)\\) are the concentrability coefficients defined in Definition 5.3. **Theorem B.11**.: _Let \\(\\pi^{\\dagger}\\) denote the learned policy of CODA + PSPI with datasets \\(D_{\\text{dyn}}\\) and \\(D_{\\text{goal}}\\), using value function classes \\(\\mathcal{F}=\\{\\mathcal{X}\\times\\mathcal{A}\\to[0,1]\\}\\) and \\(\\mathcal{G}=\\{\\mathcal{X}\\to[0,1]\\}\\). Under the realizability and completeness assumptions stated in Assumption 5.1 and Assumption 5.2, respectively, with probability \\(1-\\delta\\), it holds, for any \\(\\pi\\in\\Pi\\),_ \\[J(\\pi)-J(\\pi^{\\dagger})\\lesssim\\mathfrak{C}_{\\text{dyn}}(\\pi)\\left(\\sqrt{\\frac{ \\square}{|D_{\\text{dyn}}|}}+\\sqrt{\\frac{\\square}{|D_{\\text{goal}}|}}\\right)+ \\mathfrak{C}_{\\text{goal}}(\\pi)\\sqrt{\\frac{\\log\\frac{\\mathcal{N}_{\\infty}( \\mathcal{G},1/|D_{\\text{goal}}|)}{\\delta}}{|D_{\\text{goal}}|}}\\] _where \\(\\square\\equiv\\log\\left(\\frac{\\mathcal{N}_{\\infty}(\\mathcal{F},1/|D_{\\text{goal} }||D_{\\text{dyn}}|)\\mathcal{N}_{\\infty}(\\mathcal{G},1/|D_{\\text{goal}}||D_{ \\text{dyn}}|)\\mathcal{N}_{\\infty,1}(\\Pi,1/|D_{\\text{goal}}||D_{\\text{dyn}}|)}{ \\delta}\\right)\\), and \\(\\mathfrak{C}_{\\text{dyn}}(\\pi)\\) and \\(\\mathfrak{C}_{\\text{goal}}(\\pi)\\) are concentrability coefficients which decrease as the data coverage increases._ #### B.4.1 Proofs of Lemmas B.7 and B.8 We first prove the following auxiliary lemma, which uses a concentration bound on the constructed datasets \\(\\bar{D}_{\\text{dyn}}\\) and \\(\\bar{D}_{\\text{goal}}\\); Lemmas B.7 and B.8 will then follow deterministically from this main auxiliary result. 
**Lemma B.12**.: _With probability at least \\(1-\\delta\\), for any \\(f,f_{1},f_{2}\\in\\mathcal{F}\\) and \\(g\\in\\mathcal{G}\\), we have:_ \\[\\ell_{\\bar{\\mu}_{\\text{dyn}}}(f_{1},\\bar{f}_{g},\\bar{\\pi})-\\ell_{ \\bar{\\mu}_{\\text{dyn}}}(f_{2},\\bar{f}_{g},\\bar{\\pi})-\\ell_{\\text{dyn}}(f_{1}, \\bar{f}_{g},\\bar{\\pi})+\\ell_{\\text{dyn}}(f_{2},\\bar{f}_{g},\\bar{\\pi})\\] \\[\\leq\\mathcal{O}\\left(\\|f_{1}-f_{2}\\|_{\\bar{\\mu}_{\\text{dyn}}} \\left(\\sqrt{\\frac{\\square}{|D_{\\text{goal}}|}}+\\sqrt{\\frac{\\square}{|D_{\\text {dyn}}|}}\\right)+\\frac{\\square}{\\sqrt{|D_{\\text{goal}}||D_{\\text{dyn}}|}}+ \\frac{\\square}{|D_{\\text{goal}}|}+\\frac{\\square}{|D_{\\text{dyn}}|}\\right)\\] _where \\(\\square\\equiv\\log\\left(\\frac{\\mathcal{N}_{\\infty}(\\mathcal{F},1/|D_{\\text{goal }}||D_{\\text{dyn}}|)\\mathcal{N}_{\\infty}(\\mathcal{G},1/|D_{\\text{goal}}||D_{ \\text{dyn}}|)\\mathcal{N}_{\\infty,1}(\\Pi,1/|D_{\\text{goal}}||D_{\\text{dyn}}|)} {\\delta}\\right)\\)._ Proof.: Our proof is similar to the proofs of the corresponding results in Xie et al. [37] (Lemma A.4) and Cheng et al. [4] (Lemma 10), but we derive the result for the product distribution \\(\\bar{\\mu}_{\\text{dyn}}=\\mu_{\\text{dyn}}\\times\\mu_{\\text{goal}}\\) and its empirical approximation using \\(\\bar{D}_{\\text{dyn}}\\). Throughout this proof, we omit the bar on \\(\\bar{\\pi}\\), since \\(\\ell_{\\text{dyn}}\\) does not use the extended definition of the policy \\(\\pi\\), and we write \\(M,N\\) for the dataset sizes \\(|D_{\\text{goal}}|,|D_{\\text{dyn}}|\\). For any observed context \\((c_{j},s_{j})\\in D_{\\text{goal}}\\), we define the following quantity: \\[\\ell_{\\mu_{\\text{dyn}}}^{j}(f,\\bar{f}^{\\prime}{}_{g^{\\prime}},\\pi)=\\mathbb{E}_ {(s,a,s^{\\prime})\\sim\\mu_{\\text{dyn}}}\\left[(f((s,c_{j}),a)-\\gamma\\max(g^{ \\prime}((s^{\\prime},c_{j})),f^{\\prime}((s^{\\prime},c_{j}),\\pi)))^{2}\\right]\\] For conciseness, we use the notation \\(x_{oj}\\) for \\((s,c_{j})\\) and \\(x^{\\prime}_{oj}\\) for \\((s^{\\prime},c_{j})\\), where \\((s,a,s^{\\prime})\\) is sampled from the dynamics distribution and \\(c_{j}\\in D_{\\text{goal}}\\). We first start with the following: \\[\\ell_{\\bar{\\mu}_{\\text{dyn}}}(f_{1},\\bar{f}_{g},\\pi)-\\ell_{\\bar{ \\mu}_{\\text{dyn}}}(f_{2},\\bar{f}_{g},\\pi)-\\ell_{\\text{dyn}}(f_{1},\\bar{f}_{g },\\pi)+\\ell_{\\text{dyn}}(f_{2},\\bar{f}_{g},\\pi)\\] \\[\\leq\\ell_{\\bar{\\mu}_{\\text{dyn}}}(f_{1},\\bar{f}_{g},\\pi)-\\ell_{ \\bar{\\mu}_{\\text{dyn}}}(f_{2},\\bar{f}_{g},\\pi)-\\frac{1}{M}\\sum_{j=1}^{M}\\ell _{\\mu_{\\text{dyn}}}^{j}(f_{1},\\bar{f}_{g},\\pi)+\\frac{1}{M}\\sum_{j=1}^{M}\\ell _{\\mu_{\\text{dyn}}}^{j}(f_{2},\\bar{f}_{g},\\pi) \\tag{11}\\] \\[\\quad+\\frac{1}{M}\\sum_{j=1}^{M}\\ell_{\\mu_{\\text{dyn}}}^{j}(f_{1},\\bar{f}_{g },\\pi)-\\frac{1}{M}\\sum_{j=1}^{M}\\ell_{\\mu_{\\text{dyn}}}^{j}(f_{2},\\bar{f}_{g},\\pi)-\\ell_ {\\text{dyn}}(f_{1},\\bar{f}_{g},\\pi)+\\ell_{\\text{dyn}}(f_{2},\\bar{f}_{g},\\pi) \\tag{12}\\] We will derive the final deviation bound by bounding each of these two empirical deviations in lines (11) and (12). 
First, we bound the term in line (11): \\[\\sum_{j=1}^{M}\\ell_{\\mu_{\\text{dyn}}}^{j}(f_{1},\\bar{f}_{g},\\pi)- \\sum_{j=1}^{M}\\ell_{\\mu_{\\text{dyn}}}^{j}(f_{2},\\bar{f}_{g},\\pi)\\] \\[=\\sum_{j=1}^{M}\\ell_{\\mu_{\\text{dyn}}}^{j}(f_{1},\\bar{f}_{g},\\pi) -\\ell_{\\mu_{\\text{dyn}}}^{j}(f_{2},\\bar{f}_{g},\\pi)\\] \\[=\\sum_{j=1}^{M}\\mathbb{E}_{\\mu_{\\text{dyn}}}\\big{[}(f_{1}(x_{oj},a )-\\gamma\\max(g(x^{\\prime}_{oj}),f(x^{\\prime}_{oj},\\pi)))^{2}-(f_{2}(x_{oj},a) -\\gamma\\max(g(x^{\\prime}_{oj}),f(x^{\\prime}_{oj},\\pi)))^{2}\\big{]}\\] \\[=\\sum_{j=1}^{M}\\mathbb{E}_{\\mu_{\\text{dyn}}}\\left[(f_{1}(x_{oj},a )-f_{2}(x_{oj},a))(f_{1}(x_{oj},a)+f_{2}(x_{oj},a)-2\\gamma\\max(g(x^{\\prime}_{oj }),f(x^{\\prime}_{oj},\\pi)))\\right]\\] \\[=\\sum_{j=1}^{M}\\mathbb{E}_{(s,a,\\cdot)\\sim\\mu_{\\text{dyn}}}\\left[( f_{1}(x_{oj},a)-f_{2}(x_{oj},a))(f_{1}(x_{oj},a)+f_{2}(x_{oj},a)-2(\\bar{\\mathcal{T}}^{ \\pi}\\bar{f}_{g})(x_{oj},a))\\right] \\tag{13}\\] \\[=\\sum_{j=1}^{M}\\mathbb{E}_{(s,a,\\cdot)\\sim\\mu_{\\text{dyn}}}\\left[( f_{1}(x_{oj},a)-\\bar{\\mathcal{T}}^{\\pi}\\bar{f}_{g}(x_{oj},a))^{2}-(f_{2}(x_{oj},a)- \\bar{\\mathcal{T}}^{\\pi}\\bar{f}_{g}(x_{oj},a))^{2}\\right]\\] Using a similar argument, we can show that: \\[\\ell_{\\bar{\\mu}_{\\text{dyn}}}(f_{1},\\bar{f}_{g},\\pi)-\\ell_{\\bar{ \\mu}_{\\text{dyn}}}(f_{2},\\bar{f}_{g},\\pi)\\] \\[=\\mathbb{E}_{\\bar{\\mu}_{\\text{dyn}}}\\left[(f_{1}((s,c),a)-\\bar{ \\mathcal{T}}^{\\pi}\\bar{f}_{g}((s,c),a))^{2}-(f_{2}((s,c),a)-\\bar{\\mathcal{T}}^{ \\pi}\\bar{f}_{g}((s,c),a))^{2}\\right] \\tag{14}\\]Let \\(\\mathcal{F}_{\\epsilon}\\), \\(\\mathcal{G}_{\\epsilon}\\) be \\(\\epsilon\\)-covers of \\(\\mathcal{F}\\) and \\(\\mathcal{G}\\), and \\(\\Pi_{\\epsilon}\\) be an \\(\\epsilon\\)-cover of \\(\\Pi\\), i.e., \\(\\exists\\tilde{f}_{1},\\tilde{f}_{2},\\tilde{f}\\in\\mathcal{F}_{\\epsilon}\\), \\(\\tilde{g}\\in\\mathcal{G}_{\\epsilon}\\) and \\(\\tilde{\\pi}\\in\\Pi_{\\epsilon}\\) such that \\(\\|f-\\tilde{f}\\|_{\\infty},\\|f_{1}-\\tilde{f}_{1}\\|_{\\infty},\\|f_{2}-\\tilde{f}_{ 2}\\|_{\\infty},\\|g-\\tilde{g}\\|_{\\infty}\\leq\\epsilon\\) and \\(\\|\\pi-\\tilde{\\pi}\\|_{\\infty,1}\\leq\\epsilon\\). 
Then, for any \\(f,f_{1},f_{2}\\in\\mathcal{F}\\), \\(g\\in\\mathcal{G}\\), \\(\\pi\\in\\Pi\\) and their corresponding \\(\\tilde{f},\\tilde{f}_{1},\\tilde{f}_{2}\\in\\mathcal{F}_{\\epsilon}\\), \\(\\tilde{g}\\in\\mathcal{G}_{\\epsilon}\\), \\(\\tilde{\\pi}\\in\\Pi_{\\epsilon}\\): \\[\\ell_{\\bar{\\mu}_{\\text{dyn}}}(\\tilde{f}_{1},\\bar{\\tilde{f}}_{ \\tilde{g}},\\tilde{\\pi})-\\ell_{\\bar{\\mu}_{\\text{dyn}}}(\\tilde{f}_{2},\\bar{ \\tilde{f}}_{\\tilde{g}},\\tilde{\\pi})-\\frac{1}{M}\\sum_{j=1}^{M}\\Big{(}\\ell^{j}_ {\\mu_{\\text{dyn}}}(\\tilde{f}_{1},\\bar{\\tilde{f}}_{\\tilde{g}},\\tilde{\\pi})- \\ell^{j}_{\\mu_{\\text{dyn}}}(\\tilde{f}_{2},\\bar{\\tilde{f}}_{\\tilde{g}},\\tilde{ \\pi})\\Big{)}\\] \\[=\\mathbb{E}_{\\bar{\\mu}_{\\text{dyn}}}\\left[(\\tilde{f}_{1}((s,c), a)-\\bar{\\mathcal{T}}^{\\tilde{\\pi}}\\bar{\\tilde{f}}_{\\tilde{g}}((s,c),a))^{2}-( \\tilde{f}_{2}((s,c),a)-\\bar{\\mathcal{T}}^{\\tilde{\\pi}}\\bar{\\tilde{f}}_{ \\tilde{g}}((s,c),a))^{2}\\right]\\] \\[\\quad-\\frac{1}{M}\\sum_{j=1}^{M}\\mathbb{E}_{(s,a,\\cdot)\\sim\\mu_{ \\text{dyn}}}\\left[(\\tilde{f}_{1}(x_{\\circ j},a)-\\tilde{f}_{2}(x_{\\circ j},a))( \\tilde{f}_{1}(x_{\\circ j},a)+\\tilde{f}_{2}(x_{\\circ j},a)-2(\\bar{\\mathcal{T}}^{\\tilde {\\pi}}\\bar{\\tilde{f}}_{\\tilde{g}})(x_{\\circ j},a))\\right]\\] \\[\\leq\\sqrt{\\frac{4\\mathbf{V}\\log\\left(\\frac{\\mathcal{N}_{\\infty} (\\mathcal{F},\\epsilon)\\mathcal{N}_{\\infty}(\\mathcal{G},\\epsilon)\\mathcal{N}_{ \\infty,1}(\\Pi,\\epsilon)}{\\delta}\\right)}{M}}+\\frac{2\\log\\left(\\frac{\\mathcal{N }_{\\infty}(\\mathcal{F},\\epsilon)\\mathcal{N}_{\\infty}(\\mathcal{G},\\epsilon) \\mathcal{N}_{\\infty,1}(\\Pi,\\epsilon)}{\\delta}\\right)}{3M}.\\] where the first equation follows from eqs. (13) and (14), and the last inequality follows from Bernstein's inequality with a union bound over the classes \\(\\mathcal{F}_{\\epsilon},\\mathcal{G}_{\\epsilon},\\Pi_{\\epsilon}\\), where \\(\\mathbf{V}\\) is the following variance term: \\[\\text{Var}_{c\\sim\\mu_{\\text{goal}}}\\left[\\mathbb{E}_{(s,a,\\cdot) \\sim\\mu_{\\text{dyn}}}\\left[(f_{1}((s,c),a)-f_{2}((s,c),a))(f_{1}((s,c),a)+f_{2 }((s,c),a)-2\\bar{\\mathcal{T}}^{\\pi}\\bar{f}_{g}((s,c),a))\\right]\\right]\\] \\[\\leq\\mathbb{E}_{c\\sim\\mu_{\\text{goal}}}\\left[\\mathbb{E}_{(s,a,\\cdot)\\sim\\mu_{ \\text{dyn}}}\\left[(f_{1}((s,c),a)-f_{2}((s,c),a))(f_{1}((s,c),a)+f_{2}((s,c),a)- 2\\bar{\\mathcal{T}}^{\\pi}\\bar{f}_{g}((s,c),a))\\right]^{2}\\right]\\] \\[\\leq 4\\mathbb{E}_{\\bar{\\mu}_{\\text{dyn}}}\\left[(f_{1}((s,c),a)-f _{2}((s,c),a))^{2}\\right]\\] where we used the fact that \\(f,g\\in[0,1]\\). 
Thus, with probability \\(1-\\delta\\), \\[\\ell_{\\bar{\\mu}_{\\text{dyn}}}(\\tilde{f}_{1},\\bar{\\tilde{f}}_{ \\tilde{g}},\\tilde{\\pi})-\\ell_{\\bar{\\mu}_{\\text{dyn}}}(\\tilde{f}_{2},\\bar{ \\tilde{f}}_{\\tilde{g}},\\tilde{\\pi})-\\frac{1}{M}\\sum_{j=1}^{M}\\Big{(}\\ell^{j}_{ \\mu_{\\text{dyn}}}(\\tilde{f}_{1},\\bar{\\tilde{f}}_{\\tilde{g}},\\tilde{\\pi})- \\ell^{j}_{\\mu_{\\text{dyn}}}(\\tilde{f}_{2},\\bar{\\tilde{f}}_{\\tilde{g}},\\tilde{ \\pi})\\Big{)}\\] \\[\\leq 2\\|\\tilde{f}_{1}-\\tilde{f}_{2}\\|_{\\bar{\\mu}_{\\text{dyn}}} \\sqrt{\\frac{\\log\\left(\\frac{\\mathcal{N}_{\\infty}(\\mathcal{F},\\epsilon)\\mathcal{N}_ {\\infty}(\\mathcal{G},\\epsilon)\\mathcal{N}_{\\infty,1}(\\Pi,\\epsilon)}{\\delta} \\right)}{M}}+\\frac{2\\log\\left(\\frac{\\mathcal{N}_{\\infty}(\\mathcal{F}, \\epsilon)\\mathcal{N}_{\\infty}(\\mathcal{G},\\epsilon)\\mathcal{N}_{\\infty,1}(\\Pi, \\epsilon)}{\\delta}\\right)}{3M}.\\] Using the properties of the set covers of \\(\\mathcal{F},\\mathcal{G},\\Pi\\), we can conclude that: \\[\\ell_{\\bar{\\mu}_{\\text{dyn}}}(f_{1},\\bar{f}_{g},\\pi)-\\ell_{ \\bar{\\mu}_{\\text{dyn}}}(f_{2},\\bar{f}_{g},\\pi)-\\frac{1}{M}\\sum_{j=1}^{M} \\Big{(}\\ell^{j}_{\\mu_{\\text{dyn}}}(f_{1},\\bar{f}_{g},\\pi)-\\ell^{j}_{\\mu_{ \\text{dyn}}}(f_{2},\\bar{f}_{g},\\pi)\\Big{)}\\] \\[\\lesssim\\|f_{1}-f_{2}\\|_{\\bar{\\mu}_{\\text{dyn}}}\\sqrt{\\frac{\\log \\left(\\frac{\\mathcal{N}_{\\infty}(\\mathcal{F},\\epsilon)\\mathcal{N}_{\\infty}( \\mathcal{G},\\epsilon)\\mathcal{N}_{\\infty,1}(\\Pi,\\epsilon)}{\\delta}\\right)}{M}}+ \\frac{\\log\\left(\\frac{\\mathcal{N}_{\\infty}(\\mathcal{F},\\epsilon)\\mathcal{N}_{\\infty }(\\mathcal{G},\\epsilon)\\mathcal{N}_{\\infty,1}(\\Pi,\\epsilon)}{\\delta}\\right)}{M}\\] \\[\\quad+\\epsilon\\sqrt{\\frac{\\log\\left(\\frac{\\mathcal{N}_{\\infty}( \\mathcal{F},\\epsilon)\\mathcal{N}_{\\infty}(\\mathcal{G},\\epsilon)\\mathcal{N}_{ \\infty,1}(\\Pi,\\epsilon)}{\\delta}\\right)}{M}}+\\epsilon. \\tag{15}\\] Next, we bound the second deviation term in line (12): \\[\\frac{1}{M}\\sum_{j=1}^{M}\\Big{(}\\ell^{j}_{\\mu_{\\text{dyn}}}\\big{(} f_{1},\\bar{f}_{g},\\pi)-\\ell^{j}_{\\mu_{\\text{dyn}}}(f_{2},\\bar{f}_{g},\\pi)\\Big{)}- \\big{(}\\ell_{\\text{dyn}}(f_{1},\\bar{f}_{g},\\pi)-\\ell_{\\text{dyn}}(f_{2},\\bar{ f}_{g},\\pi)\\big{)}\\] \\[=\\frac{1}{M}\\sum_{j=1}^{M}\\Bigg{[}\\mathbb{E}_{\\mu_{\\text{dyn}}} \\left[\\big{(}f_{1}(x_{\\circ j},a)-\\gamma\\max(g(x^{\\prime}_{\\circ j}),f(x^{ \\prime}_{\\circ j},\\pi))\\big{)}^{2}-\\big{(}f_{2}(x_{\\circ j},a)-\\gamma\\max(g(x ^{\\prime}_{\\circ j}),f(x^{\\prime}_{\\circ j},\\pi))\\big{)}^{2}\\right]\\] \\[\\quad-\\frac{1}{N}\\sum_{i=1}^{N}\\Big{[}\\big{(}f_{1}(x_{ij},a_{i})- \\gamma\\max(g(x^{\\prime}_{ij}),f(x^{\\prime}_{ij},\\pi))\\big{)}^{2}-\\big{(}f_{2}(x_ {ij},a_{i})-\\gamma\\max(g(x^{\\prime}_{ij}),f(x^{\\prime}_{ij},\\pi))\\big{)}^{2}\\Big{]} \\Bigg{]} \\tag{16}\\] Combining eqs. (15) and (18) with \\(\\epsilon=\\mathcal{O}(\\frac{1}{MN})\\), we get the final result. 
**Lemma B.13**.: _With probability at least \\(1-\\delta\\), for any \\(g,g_{1},g_{2}\\in\\mathcal{G}\\) and \\(f\\in\\mathcal{F}\\), we have:_ \\[\\ell_{\\bar{\\mu}_{\\text{goal}}}(\\bar{f}_{g_{1}})-\\ell_{\\bar{\\mu}_{ \\text{goal}}}(\\bar{f}_{g_{2}})-\\ell_{\\text{goal}}(\\bar{f}_{g_{1}})+\\ell_{ \\text{goal}}(\\bar{f}_{g_{2}})\\] \\[\\leq\\mathcal{O}\\left(\\|g_{1}-g_{2}\\|_{\\bar{\\mu}_{\\text{goal}}} \\sqrt{\\frac{\\log\\left(\\frac{\\mathcal{N}_{\\infty}(\\mathcal{G},1/|D_{\\text{goal }}|)}{\\delta}\\right)}{|D_{\\text{goal}}|}}+\\frac{\\log\\left(\\frac{\\mathcal{N}_{ \\infty}(\\mathcal{G},1/|D_{\\text{goal}}|)}{\\delta}\\right)}{|D_{\\text{goal}}|} \\right).\\] Proof.: This result can be proven using the same arguments as in Lemma B.12, with a covering argument over \\(\\mathcal{G}\\) only. Using these two main concentration results, we can now prove Lemmas B.7 and B.8. Proof of Lemma B.7.: Note \\(\\bar{Q}^{\\bar{\\pi}}=\\bar{f}_{g}\\) for some \\(f\\in\\mathcal{F}\\) and \\(g\\in\\mathcal{G}\\) (Proposition B.6) and \\[0 =\\mathbb{E}_{x,a\\sim\\bar{\\mu}_{\\text{dyn}}}[(\\bar{Q}^{\\bar{\\pi}}(x,a) -\\bar{\\mathcal{T}}^{\\bar{\\pi}}\\bar{Q}^{\\bar{\\pi}}(x,a))^{2}]\\] \\[=\\mathbb{E}_{x,a\\sim\\bar{\\mu}_{\\text{dyn}}}[(\\bar{Q}^{\\bar{\\pi}}(x,a) -0-\\gamma\\mathbb{E}_{x^{\\prime}\\sim\\bar{P}(\\cdot|x,a)}[\\phi(\\bar{Q}^{\\bar{\\pi} })(x^{\\prime},\\pi)])^{2}]\\] The lemma can now be proved by following a proof similar to that of Theorem A.1 of Xie et al. [37]. The key difference is the use of our concentration bounds in Lemmas B.12 and B.13 instead of Lemma A.4 in the proof of Xie et al. [37]. On the other hand, \\(\\ell_{\\text{goal}}(\\bar{f}_{g})=0\\) because the reward \\(R(x)\\) is deterministic, which yields the second inequality. Proof of Lemma B.8.: This result can again be proved using the same steps as in Lemma A.5 from Xie et al. [37], based on the concentration bounds in Lemmas B.12 and B.13. ## Appendix C Experimental Details ### Hyperparameters and Experimental Settings IQL. For IQL, we keep the hyperparameters \\(\\gamma=0.99\\), \\(\\tau=0.9\\), \\(\\beta=10.0\\), and \\(\\alpha=0.005\\) from [18], and tune the remaining hyperparameters on the antmaze-medium-play-v2 environment: batch size = 1024 from candidate choices {256, 512, 1024, 2046}, learning rate = \\(10^{-4}\\) from candidate choices {\\(5\\cdot 10^{-5},10^{-4},3\\cdot 10^{-4}\\)}, and a 3-layer MLP with ReLU activations and 256 hidden units for all networks. We use the same set of IQL hyperparameters for both our methods and all the baseline methods included in Section 6.2, and apply it to all environments. In the experiments, we follow the convention of the \\(-1/0\\) reward in the IQL implementation for Antmaze, which can be shown to be equivalent to the \\(0/1\\) reward in terms of ranking policies under the discounted MDP setting. Reward Prediction (RP). For naive reward prediction, we use the full context-goal dataset as positive data, and train a reward model with a 3-layer MLP and ReLU activations, learning rate = \\(10^{-4}\\), batch size = 1024, trained for 100 epochs to convergence. To label the transition dataset, we need an appropriate threshold for declaring a state a goal given a context. We use the 5th percentile of the reward distribution evaluated on the context-goal set as the threshold (if a predicted reward is larger than the threshold, the transition is labeled as terminal), chosen from candidate choices {0%, 5%, 10%}. Then we apply it to all environments. 
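To make the labeling step above concrete, the following is a minimal NumPy sketch of the percentile-based thresholding; the callable `reward_model` and the array names are hypothetical placeholders, not part of the released implementation:

```python
import numpy as np

# Assumed interface: reward_model(contexts, states) -> array of predicted rewards.
def label_transitions(reward_model, goal_contexts, goal_states,
                      trans_contexts, trans_states, percentile=5.0):
    # Threshold = 5th percentile of predictions on the context-goal set.
    goal_scores = reward_model(goal_contexts, goal_states)
    threshold = np.percentile(goal_scores, percentile)
    # A transition is labeled terminal (goal reached) if its score clears the threshold.
    scores = reward_model(trans_contexts, trans_states)
    return scores > threshold
```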
Another trick we apply for reward prediction: instead of having the model predict 0 on the context-goal dataset, we let it predict 1 and shift the prediction by \\(-1\\) during reward evaluation, which prevents the model from learning all-zero weights. Similar tricks are also used in the other reward-learning baselines. UDS+RP. We use the same structure and training procedure for the reward model as RP, except that we also randomly sample a minibatch of "negative" contextual transitions with the same batch size, constructed by randomly sampling combinations of a state from the trajectory-only dataset and a context from the context-goal dataset. To create a balanced distribution of positive and negative samples, we sample from each dataset with equal probability. For the threshold, we use the 5th percentile of the reward distribution evaluated on the context-goal set, chosen from candidate choices {0%, 5%, 10%} on the antmaze-medium-play-v2 environment. Then we apply it to all environments. PDS. We use the same structure and training procedure for the reward model as RP, except that we train an ensemble of \\(10\\) networks as in [14]. To select the threshold percentile and the pessimistic weight \\(k\\), we choose the 15th percentile of the reward distribution evaluated on the context-goal set from candidate choices {0%, 5%, 10%, 15%, 20%}, and \\(k=15\\) from candidate choices {5, 10, 15, 20}, both tuned on the antmaze-medium-play-v2 environment. Then we apply them to all environments. CODA (ours). We require no extra parameters other than the probability of sampling from the real versus the generated transitions. Intuitively, we should sample from both datasets with the same probability to create an overall balanced distribution. We ran additional experiments to study the effect of this sampling-ratio hyperparameter: the ratio of samples from the context-goal dataset \\(D_{\\text{goal}}\\) to total samples in each minibatch. Table 5 shows that CODA performs well as long as sampling from the two datasets is roughly balanced. Compute Resources. For all methods, each training run takes about 8h on an NVIDIA T4 GPU. ### Context-Goal Dataset Construction and Environment Evaluation Here we describe the context-goal datasets for the three levels of context-goal setup mentioned in Section 6 and explain how we evaluate in each setup. We also include our code implementation for reference. Original Antmaze. We extract the 2D locations from the states with terminal=True in the trajectory dataset as the contexts (in the original antmaze, it suffices to reach the \\(L_{2}\\) ball of radius 0.5 around the center); the contexts are distributed very closely, as visualized in Figure 2(a). The corresponding states serve as the goal examples, with Gaussian perturbations \\(N(0,0.05)\\) on the dimensions other than the 2D location. Four Rooms. For each maze map, we partition the maze into 4 rooms as in Figure 2(b) and use the room number as the context. To construct goal examples, we create a copy of all states in the trajectory dataset, perturb the states in the copy by \\(N(0,0.05)\\) on each dimension, and then randomly select the states (up to 20K) according to the room partition. Random Cells. For each maze map, we construct the set of non-wall 2D locations in the maze map and uniformly sample from it to obtain the training contexts. 
To construct the goal set for a given context, we randomly sample up to 20K states whose 2D locations lie within the \\(L_{2}\\) ball of radius \\(2\\) around the context. Figure 2(c) gives an intuitive visualization of the corresponding context-goal sets. For test distributions, we have two settings: 1) the same as the training distribution; 2) test contexts are drawn from a limited area that is far away from the starting point of the agent. Evaluation. We follow the conventional evaluation procedure in [18], where the success rate is normalized to 0-100 and evaluated over 100 trajectories. We report results with standard errors across 5 random seeds. The oracle condition we define for each context-goal setup is used to evaluate whether the agent has successfully reached the goal, and it also defines the termination of an episode. \\begin{table} \\begin{tabular}{l|c c c c c} \\hline \\hline Env/Ratio & 0.1 & 0.3 & 0.5 & 0.7 & 0.9 \\\\ \\hline umaze & 91.6\\(\\pm\\)1.3 & 92.4\\(\\pm\\)1.0 & 94.8\\(\\pm\\)1.3 & 86.4\\(\\pm\\)1.8 & 84.8\\(\\pm\\)3.0 \\\\ umaze diverse & 76.8\\(\\pm\\)1.9 & 79.2\\(\\pm\\)1.6 & 72.8\\(\\pm\\)7.7 & 76.6\\(\\pm\\)2.3 & 65.4\\(\\pm\\)8.8 \\\\ medium play & 82.3\\(\\pm\\)2.1 & 85.0\\(\\pm\\)1.8 & 75.8\\(\\pm\\)1.9 & 72.8\\(\\pm\\)1.3 & 76.6\\(\\pm\\)1.3 \\\\ medium diverse & 79.4\\(\\pm\\)1.6 & 76.6\\(\\pm\\)3.0 & 84.5\\(\\pm\\)5.2 & 75.6\\(\\pm\\)2.0 & 72.0\\(\\pm\\)3.5 \\\\ large play & 50.8\\(\\pm\\)2.0 & 45.2\\(\\pm\\)3.7 & 60.0\\(\\pm\\)7.6 & 43.6\\(\\pm\\)2.3 & 46.6\\(\\pm\\)2.3 \\\\ large diverse & 35.8\\(\\pm\\)5.7 & 37.4\\(\\pm\\)4.7 & 36.8\\(\\pm\\)6.9 & 34.4\\(\\pm\\)2.4 & 27.0\\(\\pm\\)2.1 \\\\ \\hline average & 69.5 & 68.9 & 70.8 & 64.9 & 62.1 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 5: Average success rate (%) in AntMaze-v2 from all environments, with different sampling ratios from the context-goal dataset. Figure 6: Reward model evaluation for the Four Rooms environment. Green dots are outliers. Figure 7: Reward evaluation for Random Cells environment (the test context distribution is the same as training). Green dots are outliers.
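As a concrete illustration of the Four Rooms construction described above, the following is a minimal NumPy sketch; the `states` array and the vectorized `room_of` helper (mapping 2D locations to room indices) are hypothetical assumptions, not the released code:

```python
import numpy as np

def four_rooms_goal_examples(states, room_of, num_rooms=4,
                             max_per_room=20_000, noise_std=0.05, seed=0):
    # states: (n, d) array of states from the trajectory-only dataset.
    rng = np.random.default_rng(seed)
    noisy = states + rng.normal(0.0, noise_std, size=states.shape)  # perturbed copy
    examples = {}
    for room in range(num_rooms):
        idx = np.flatnonzero(room_of(noisy[:, :2]) == room)  # partition by 2D location
        rng.shuffle(idx)
        examples[room] = noisy[idx[:max_per_room]]  # up to 20K goal examples per context
    return examples
```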
We present a novel method, Contextual Goal-Oriented Data Augmentation (CODA), which uses commonly available unlabeled trajectories and context-goal pairs to solve Contextual Goal-Oriented (CGO) problems. By carefully constructing an action-augmented MDP that is equivalent to the original MDP, CODA creates a fully labeled transition dataset under training contexts without additional approximation error. We conduct a novel theoretical analysis to demonstrate CODA's capability to solve CGO problems in the offline data setup. Empirical results also showcase the effectiveness of CODA, which outperforms other baseline methods across various context-goal relationships of the CGO problem. This approach offers a promising direction for solving CGO problems using offline datasets.
# Analysis of truncated orthogonal iteration for sparse eigenvector problems†

Footnote †: Submitted to the editors DATE. This work was funded by

Hexuan Liu and Aleksandr Aravkin, Department of Applied Mathematics, University of Washington, Seattle, WA ([email protected], [email protected]) ## 1 Introduction Sparse eigenvector problems arise in many applications where localized and structured eigenvectors are desired, such as sparse principal component analysis (sparse PCA), sparse dictionary learning and densest \\(k\\)-subgraph recovery. In sparse PCA, sparse loading vectors have better interpretability, since each principal component is a linear combination of only a few of the original features. The goal of sparse coding/dictionary learning is to represent the input signal as a sparse linear combination of the dictionary elements. The densest \\(k\\)-subgraph problem can also be formulated as a sparse eigenvector problem [25] to find a set of \\(k\\) vertices with maximum average degree in the subgraph induced by the set. To formalize the problem, we assume that the leading eigenvectors corresponding to the largest \\(m\\) eigenvalues of a positive semidefinite matrix \\(\\bar{A}\\) are sparse, i.e. \\[\\bar{P}=\\operatorname*{arg\\,max}_{P^{T}P=I_{m}}\\operatorname{Tr}(P^{T}\\bar{A}P ),\\quad\\bar{P}\\text{ is sparse.} \\tag{1}\\] Here, _sparse_ means that each column of \\(\\bar{P}\\) has many entries that are either exactly zero or sufficiently close to zero. In practice, \\(\\bar{A}\\) is often unknown and we are given a perturbed positive semidefinite matrix \\(A\\): \\(A=\\bar{A}+E\\). Our goal is to recover the true eigenvectors \\(\\bar{P}\\) from \\(A\\). A straightforward formulation for the sparse eigenvector problem given \\(A\\) is as follows: \\[\\bar{Q}=\\operatorname*{arg\\,max}_{Q^{T}Q=I_{m}}\\operatorname{Tr}(Q^{T}AQ), \\quad\\text{subject to }\\|q_{i}\\|_{0}\\leq k_{i},\\ i=1,\\dots,m, \\tag{2}\\] where \\(q_{i}\\) is the \\(i^{th}\\) column vector of \\(Q\\) and \\(\\|\\cdot\\|_{0}\\) denotes the \\(\\ell_{0}\\) "norm" which is the number of nonzeros in a vector. Finding a solution to (2) is NP-hard. Moreover, the solution for (2) is not a good approximation for (1) unless some additional assumptions are imposed on \\(E\\), e.g. when \\(E=\\sigma^{2}I\\). In this work we do not impose additional assumptions on \\(A\\) and do not aim to solve (2). We focus instead on the orthogonal iteration for solving standard eigenvalue problems, and present two algorithms. In the first algorithm, we relax the sparsity constraint, while in the second, we relax the orthogonality constraint. We analyze whether truncation at each iteration yields a better approximation to the true eigenvectors than those of the standard orthogonal iteration. Many existing algorithms for sparse eigenvector recovery focus on recovering a single eigenvector and then use a deflation scheme to generalize to multiple components [25, 4, 19, 13, 27]. The downside of this approach is that deflation adds extra perturbation error to the original problem, with estimates of later components accumulating errors from each deflation step. The deflation step itself can also be a computational bottleneck. When several leading eigenvalues are clustered, it is also difficult to identify the corresponding eigenvectors, and sometimes a subspace is preferred over individual eigenvectors. 
To avoid these issues, we use the orthogonal iteration, which is a block generalization of the power method and outputs an orthonormal basis of the subspace spanned by the leading eigenvectors. One of the most widely used applications of sparse eigenvectors is sparse PCA, where \\(A\\) is the empirical covariance matrix and \\(\\bar{A}\\) is the true covariance matrix. PCA [17, 7, 12] is one of the most widely used dimensionality reduction techniques. It is computed via an eigendecomposition of the sample covariance matrix and finds a sequence of orthogonal vectors that estimate the principal directions of the data variance. In high-dimensional settings, classical PCA suffers from inconsistency [11] and poor interpretability [2], and over the last two decades many algorithms and theoretical results for sparse PCA have been proposed to mitigate these issues; see e.g. the survey [28]. In this paper we analyze the problem (1) in the general sparse eigenvector setting and then focus on the sparse PCA application. Another line of relevant work focuses on finding the row-sparse principal subspace, assuming multiple eigenvectors share the same sparsity pattern (or "support set") [23, 24, 21]. This is equivalent to first selecting a sparse subset of features (an NP-hard problem [21]) and then applying PCA. A practical limitation of the row-sparse formulation is that we cannot always assume the same support set across all leading eigenvectors of interest. For example, in face recognition tasks [9], brain imaging [3], and natural language processing [26], the goal is often to find different localized and interpretable patterns in different eigenvectors. Here, we consider the general problem of column-sparse subspace estimation, without assuming a common support set. **Contributions.** We propose a general framework for estimating sparse eigenvectors and differentiate between the deflation scheme and the block scheme. We then develop two new algorithms based on the orthogonal iteration to obtain several leading eigenvectors with sparsity constraints. We provide a deterministic convergence analysis for methods within the block framework, without additional assumptions on the matrix \\(A\\), and extend the analysis to other sparse eigenvector algorithms. Finally, we demonstrate the accuracy and efficiency of the proposed algorithms by applying them to simulated and real-world datasets, including the pitprops, sea surface temperature, MNIST, and 20 newsgroup datasets. **Notation:** Let \\(\\mathbb{S}^{p}=\\{A\\in\\mathbb{R}^{p\\times p}|A=A^{T}\\}\\) denote the set of symmetric matrices. For any \\(A\\in\\mathbb{S}^{p}\\), we denote its eigenvalues by \\(\\lambda_{\\min}(A)=\\lambda_{p}(A)\\leq\\cdots\\leq\\lambda_{1}(A)=\\lambda_{\\max}(A)\\). We use \\(\\rho(A)\\) and \\(\\|A\\|_{2}\\) to denote the spectral norm of \\(A\\), which is \\(\\max\\{|\\lambda_{\\max}(A)|,|\\lambda_{\\min}(A)|\\}\\). For vectors, \\(\\|\\cdot\\|_{2}\\) or \\(\\|\\cdot\\|\\) will denote the 2-norm, while \\(\\|\\cdot\\|_{0}\\) denotes the \\(\\ell_{0}\\) "norm" which is the number of nonzeros in a vector. Let \\(\\lambda_{\\max}(A,k)=\\max_{x\\in\\mathbb{R}^{p}}x^{T}Ax\\), subject to \\(\\|x\\|=1\\), \\(\\|x\\|_{0}\\leq k\\). Define \\(\\rho(E,k):=\\sqrt{\\lambda_{\\max}(E^{T}E,k)}\\). We use \\(\\|A\\|_{F}\\) to denote the Frobenius norm of \\(A\\). 
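Since \\(\\lambda_{\\max}(A,k)\\) involves a search over all size-\\(k\\) supports, it is combinatorial in general; for small \\(p\\) it can be computed by brute force. The following NumPy sketch is for illustration only (it is not part of the proposed algorithms) and uses the fact that, for symmetric \\(A\\), the maximum over unit vectors supported on a set \\(S\\) equals the largest eigenvalue of the principal submatrix \\(A_{SS}\\):

```python
import numpy as np
from itertools import combinations

def lambda_max_sparse(A, k):
    # Brute-force lambda_max(A, k): enumerate all k-subsets of coordinates
    # and take the largest top eigenvalue of the corresponding submatrix.
    p = A.shape[0]
    best = -np.inf
    for S in combinations(range(p), k):
        idx = np.array(S)
        best = max(best, np.linalg.eigvalsh(A[np.ix_(idx, idx)])[-1])
    return best  # cost grows as C(p, k); feasible only for small p

def rho_sparse(E, k):
    # rho(E, k) = sqrt(lambda_max(E^T E, k)) as defined in the notation above.
    return np.sqrt(lambda_max_sparse(E.T @ E, k))
```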
## 2 Methodology Many methods for finding an approximate solution to (2) use a power iteration-like scheme, either using deflation to obtain the leading vectors sequentially [4, 25], or using block approaches to compute several vectors at once [14, 16, 5]. The deflation approach works best if the eigenvectors of interest correspond to eigenvalues that are all separated by large gaps compared to the remaining eigenvalues. If several leading eigenvectors are desired and their corresponding eigenvalues are clustered, the block approach is preferable since it recovers a principal subspace instead of identifying individual eigenvectors separately. For the single vector case, power iteration-based methods iterate the following steps:
1. Update the current vector: \\(\\tilde{v}_{t+1}=Av_{t}\\).
2. Truncate or threshold the vector based on some penalty, usually \\(\\ell_{1}\\) or \\(\\ell_{0}\\).
3. Re-normalize the vector: \\(v_{t+1}=\\frac{\\tilde{v}_{t+1}}{\\|\\tilde{v}_{t+1}\\|}\\).
To recover several eigenvectors at once, one natural extension of the truncated power method is the truncated orthogonal iteration, see e.g. ITSPCA [16]. However, it cannot enforce orthogonality and sparsity at the same time. Performing an orthogonalization step (by computing a QR factorization or a singular value decomposition) is crucial to the stability of the algorithm, but destroys the sparsity pattern. On the other hand, performing truncation afterwards gives a sparse solution, but loses orthogonality. We propose the following framework based on the orthogonal iteration, where a post-processing step may be used to enforce sparsity.
1. Update the current vectors: \\(Q^{\\prime}_{t+1}=AQ_{t}\\).
2. Truncate or threshold (usually process each column of \\(Q^{\\prime}_{t+1}\\) separately).
3. Re-orthogonalize: QR: \\(Q_{t+1}=\\mathbf{qr}(Q^{\\prime}_{t+1})\\). SVD: \\(U,S,V=\\mathbf{svd}(Q^{\\prime}_{t+1})\\), \\(Q_{t+1}=UV^{T}\\).
4. (Optional) Post-processing: in each iteration, truncate or threshold \\(Q_{t+1}\\).
We propose two variations of the Truncated Orthogonal Iteration in this framework. The first approach, formalized in Algorithm 2.1, is similar to the ITSPCA algorithm [16], but with key differences in implementation and analysis. First, we replace the thresholding step with truncation, as discussed in Section 5. Second, we give a deterministic numerical analysis in Theorem 3.2, while for ITSPCA, [16, 1] established statistical convergence analyses under the spiked covariance model [10] and did not analyze the numerical convergence of the algorithm. Third, we use a different initialization scheme, i.e. warm initialization as discussed in Section 6, as opposed to the "diagonal thresholding" initialization [11], which also relies on the spiked covariance model and requires extra parameters. Our second approach, formalized in Algorithm 4.1, uses a greedy approach to get sparse vectors after each iteration of Algorithm 2.1. ```
Input: Symmetric positive semidefinite matrix \\(A\\in\\mathbb{S}^{p}\\), initial vectors \\(Q_{0}\\in\\mathbb{R}^{p\\times m}\\)
Output: An orthogonal (but possibly dense) matrix \\(Q_{t}\\)
Parameters: Cardinalities for each column vector \\(K=[k_{1},\\cdots,k_{m}]\\)
repeat
  Compute \\(P_{t}=AQ_{t-1}\\). Denote the \\(i^{th}\\) column of \\(P_{t}\\) as \\(p_{i}\\).
  for \\(i=1,\\cdots,m\\) do
    Let \\(F_{i}=\\text{supp}(p_{i},k_{i})\\) be the indices of \\(p_{i}\\) with the largest \\(k_{i}\\) absolute values.
    Compute \\(\\hat{p}_{i}=\\text{Truncate}(p_{i},F_{i})\\). 
    \\(\\hat{P}_{t}[:,i]=\\hat{p}_{i}\\).
  endfor
  Reorthogonalize \\(Q_{t}=\\mathbf{qr}(\\hat{P}_{t})\\).
  \\(t\\gets t+1\\)
until Convergence
```
**Algorithm 2.1** Truncated Orthogonal Iteration (TOrth) **Complexity Analysis.** For sparse PCA problems, suppose that we are given a data matrix \\(X\\in\\mathbb{R}^{n\\times p}\\). The covariance matrix is calculated by \\(\\Sigma=X^{T}X\\). In high-dimensional settings, \\(p\\gg n\\geq m\\). In each iteration, matrix-matrix multiplication is \\(\\mathcal{O}(npm)\\). Sorting \\(m\\) vectors of length \\(p\\) in order to identify the largest entries is \\(\\mathcal{O}(mp\\log p)\\). Since the dimension of matrix \\(\\hat{P}_{t}\\) is \\(p\\times m\\), QR factorization is \\(\\mathcal{O}(pm^{2})\\). Compared to the single vector case, QR factorization is more expensive than normalizing \\(m\\) single vectors (requiring \\(\\mathcal{O}(mp)\\) operations), but the deflation step can be avoided (saving \\(\\mathcal{O}(np)\\) operations). ## 3 Analysis In this section, we analyze what happens in each iteration of Algorithm 2.1 when the matrix is perturbed and a truncation step is performed. ### Preliminaries We use the standard \\(\\sin\\Theta\\) definition [20, 22] to measure the distance between subspaces: **Definition 3.1**: _Let \\(\\mathcal{X}\\) and \\(\\mathcal{Y}\\) be two \\(m\\)-dimensional subspaces of \\(\\mathbb{R}^{p}\\). Let the columns of \\(X\\) form an orthonormal basis for \\(\\mathcal{X}\\) and the columns of \\(Y\\) form an orthonormal basis for \\(\\mathcal{Y}\\). We use \\(\\|\\sin\\Theta(\\mathcal{X},\\mathcal{Y})\\|_{F}\\) to measure the distance between \\(\\mathcal{X}\\) and \\(\\mathcal{Y}\\), where_ \\[\\Theta(\\mathcal{X},\\mathcal{Y})=\\text{diag}(\\theta_{1}(\\mathcal{X},\\mathcal{Y }),\\ldots,\\theta_{m}(\\mathcal{X},\\mathcal{Y})). \\tag{1}\\] _Here, \\(\\theta_{j}(\\mathcal{X},\\mathcal{Y})\\)'s denote the canonical angles between \\(\\mathcal{X}\\) and \\(\\mathcal{Y}\\) [20, p. 43], which is defined as_ \\[0\\leq\\theta_{j}(\\mathcal{X},\\mathcal{Y})\\triangleq\\arccos\\sigma_{j}\\leq\\frac{ \\pi}{2}\\quad\\text{for }1\\leq j\\leq m, \\tag{2}\\] _where \\(\\sigma_{j}\\)'s are the singular values of \\(X^{T}Y\\). Note that this definition is independent of which orthonormal bases \\(X\\) and \\(Y\\) are chosen for the spaces \\(\\mathcal{X}\\) and \\(\\mathcal{Y}\\)._ For the ease of notation, we use \\(\\Theta(X,Y)=\\Theta(\\mathcal{X},\\mathcal{Y})\\), where \\(X\\), \\(Y\\) are the orthonormal bases for the subspaces \\(\\mathcal{X}\\), \\(\\mathcal{Y}\\), respectively. It has been shown [20, 22] that the following relations hold: \\[\\|\\cos\\Theta(X,Y)\\|_{ui} =\\|X^{T}Y\\|_{ui}, \\tag{3}\\] \\[\\|\\sin\\Theta(X,Y)\\|_{ui} =\\|{X^{\\perp}}^{T}Y\\|_{ui}=\\|X^{T}Y^{\\perp}\\|_{ui},\\] (4) \\[\\|\\sin\\Theta(X,Y)\\|_{2} =\\|XX^{T}-YY^{T}\\|_{2},\\] (5) \\[\\|\\sin\\Theta(X,Y)\\|_{F} =\\frac{1}{\\sqrt{2}}\\|XX^{T}-YY^{T}\\|_{F}=\\sqrt{m-\\|X^{T}Y\\|_{F}^{ 2}}. \\tag{6}\\] where \\(\\|\\cdot\\|_{ui}\\) denotes any unitary invariant norm such as the 2-norm and the Frobenius norm. \\(X^{\\perp}\\) denotes the orthogonal complement of \\(X\\). Throughout the paper, we use the following well-known properties of matrix norms: \\[\\|A\\|_{2}\\leq\\|A\\|_{F}\\leq\\text{rank}(A)\\|A\\|_{2}, \\tag{7}\\] \\[\\|AB\\|_{2}\\leq\\|A\\|_{2}\\|B\\|_{2},\\] (8) \\[\\|AB\\|_{F}\\leq\\|A\\|_{F}\\|B\\|_{2}. \\tag{9}\\] ### Convergence Analysis We now establish our main result and key consequences. 
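As a brief numerical aside, Definition 3.1 and identity (3.6) can be checked directly: the canonical angles come from the singular values of \\(X^{T}Y\\). The sketch below uses random orthonormal bases, which are an illustrative assumption only:

```python
import numpy as np

rng = np.random.default_rng(0)
p, m = 50, 3
# Random orthonormal bases X, Y for two m-dimensional subspaces of R^p.
X, _ = np.linalg.qr(rng.standard_normal((p, m)))
Y, _ = np.linalg.qr(rng.standard_normal((p, m)))

sigma = np.linalg.svd(X.T @ Y, compute_uv=False)   # cosines of canonical angles
sin2 = np.clip(1.0 - sigma**2, 0.0, None)          # guard against roundoff above 1
sin_theta_F = np.sqrt(sin2.sum())                  # ||sin Theta(X, Y)||_F
# Identity (3.6): ||sin Theta||_F^2 = m - ||X^T Y||_F^2
assert np.isclose(sin_theta_F, np.sqrt(m - np.linalg.norm(X.T @ Y, 'fro')**2))
```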
**Theorem 3.2**: _Let \\(P\\) be the matrix of eigenvectors corresponding to the \\(m\\) largest eigenvalues of \\(\\bar{A}\\), with \\(\\lambda_{1}(\\bar{A})\\geq\\lambda_{2}(\\bar{A})\\geq\\cdots\\geq\\lambda_{m}(\\bar{A}) >\\lambda_{m+1}(\\bar{A})>0\\). Let \\(A=\\bar{A}+E\\). Assume \\(\\lambda_{1}(\\bar{A})=1\\). Define \\(\\gamma:=\\frac{\\lambda_{m+1}}{\\lambda_{m}}<1\\). Let \\(Q_{t}\\) be the matrix obtained at iteration \\(t\\) by Algorithm 2.1. Then_ \\[\\|P^{T}Q_{t}\\|_{2}^{2}\\geq\\frac{\\|P^{T}Q_{t-1}\\|_{F}^{2}}{(1-\\gamma^{2})\\|P^{T} Q_{t-1}\\|_{F}^{2}+m\\gamma^{2}}-\\delta_{E}-\\delta_{\\text{Truncate}}, \\tag{3.10}\\] _where_ \\[\\delta_{E}=\\frac{4\\rho(E,K)}{\\lambda_{m}^{2}(1-\\|\\sin\\Theta(P,Q_{t-1})\\|_{2}^{ 2})},\\quad\\delta_{\\text{Truncate}}=2m\\sqrt{\\frac{\\min\\{\\bar{k}_{\\max},p-k_{\\min }\\}}{p}}.\\] \\[\\rho(E,K)=\\max_{Q^{T}Q=I_{m}}\\|EQ\\|_{2}\\text{ subject to }\\|q_{i}\\|_{0}\\leq k_{i},\\] \\[\\bar{k}_{\\max}=\\max_{i}\\{\\bar{k}_{i}\\}=\\max_{i}\\{\\|p_{i}\\|_{0}\\},\\ k_{\\min}= \\min_{i}\\{k_{i}\\}.\\] _Assume that \\(\\|P^{T}Q_{t}\\|_{F}^{2}=c\\|P^{T}Q_{t}\\|_{2}^{2}\\), where \\(c\\in[1,m]\\). Then we have:_ \\[\\|\\sin\\Theta(P,Q_{t})\\|_{F}^{2}\\leq\\frac{\\gamma^{2}\\|\\sin\\Theta(P,Q_{t-1})\\|_{ F}^{2}+\\frac{m-c}{m}\\|P^{T}Q_{t-1}\\|_{F}^{2}}{1-(1-\\gamma^{2})\\|\\sin\\Theta(P,Q_{t-1})\\|_{ 2}^{2}}+c\\delta_{E}+c\\delta_{\\text{Truncate}}. \\tag{3.11}\\] When \\(c\\approx m\\), we have: \\[\\|\\sin\\Theta(P,Q_{t})\\|_{F}^{2}\\lessapprox\\frac{\\gamma^{2}\\|\\sin\\Theta(P,Q_{ t-1})\\|_{F}^{2}}{1-(1-\\gamma^{2})\\|\\sin\\Theta(P,Q_{t-1})\\|_{2}^{2}}+m\\delta_{E}+m \\delta_{\\text{Truncate}}. \\tag{3.12}\\] For \\(m=1\\), the inequality (3.12) holds exactly and we can derive a uniform convergence bound for \\(\\|\\sin\\Theta(P,Q_{t})\\|_{F}^{2}\\). For \\(m>1\\), as \\(Q_{t}\\to P\\), \\(c\\approx m\\) and \\(\\|\\sin\\Theta(P,Q_{t})\\|_{F}^{2}\\) converges at the asymptotic rate \\(\\gamma=\\frac{\\lambda_{m+1}}{\\lambda_{m}}\\). _Remark 3.3_.: A natural question that arises is whether truncating the vector only at the last step would give a better result. Besides computational concerns, truncating at each step helps to reduce the perturbation error, which is proportional to \\(\\rho(E,k)\\). At each step, if we do not truncate, i.e. \\(k=p\\), then there is no truncation error, but \\(\\rho(E,k)=\\rho(E)\\) can be large. On the other hand, if we truncate to \\(k\\) nonzeros with \\(k\\approx\\bar{k}\\ll p\\), then the truncation error could potentially be large but \\(\\rho(E,k)\\approx\\rho(E,\\bar{k})\\ll\\rho(E)\\). We recommend keeping \\(k\\) close to \\(p\\) in the first few iterations to avoid truncating the true nonzeros. At later steps, when the nonzero indices of \\(x_{t}\\) include the nonzero indices of \\(\\bar{x}\\), it is safe to truncate to a smaller \\(k\\) without much truncation error and the perturbation error is kept low at the same time. To prove Theorem 3.2, we need the following lemmas: Lemma 3.4 measures the progress made by each standard orthogonal iteration, Lemma 3.5 accounts for the perturbation error, and Lemma 3.7 analyzes the truncation step. We first measure the progress made by the standard orthogonal iteration without any truncation or perturbation. 
In [22] it has been shown that the distance between the \\(t^{th}\\) updated matrix \\(Q_{t}\\in\\mathbb{R}^{p\\times m}\\) and the matrix of first \\(m\\) eigenvectors \\(P\\) converges at a rate \\(\\gamma=|\\lambda_{m+1}/\\lambda_{m}|\\): \\[\\|\\sin\\Theta(Q_{t},P)\\|_{2}\\leq\\gamma^{t}\\frac{\\|\\sin\\Theta(Q_{0},P)\\|_{2}}{ \\sqrt{1-\\|\\sin\\Theta(Q_{0},P)\\|_{2}^{2}}}. \\tag{3.13}\\] When \\(m=1\\), define \\(\\theta_{t}\\in[0,\\pi/2]\\) by \\(\\cos(\\theta_{t})=|p^{T}q_{t}|\\) and this reduces to \\[\\sin\\theta_{t}\\leq\\gamma^{t}\\tan\\theta_{0}. \\tag{3.14}\\]An equivalent bound measured in the Frobenius norm can be derived from [22] (the proof can be found in the Appendix): \\[\\|\\sin\\Theta(Q_{t},P)\\|_{F}\\leq\\gamma^{t}\\frac{\\|\\sin\\Theta(Q_{0},P)\\|_{F}}{\\sqrt {1-\\|\\sin\\Theta(Q_{0},P)\\|_{2}^{2}}}. \\tag{3.15}\\] A one-step bound can also be derived from [22]: \\[\\|\\sin\\Theta(Q_{t},P)\\|_{F}\\leq\\gamma\\frac{\\|\\sin\\Theta(Q_{t-1},P)\\|_{F}}{ \\sqrt{1-\\|\\sin\\Theta(Q_{t-1},P)\\|_{2}^{2}}}. \\tag{3.16}\\] We provide the following lemma for a similar approximation of the distance update in each iteration: **Lemma 3.4**: _Let \\(P\\) be the matrix of eigenvectors corresponding to the largest \\(m\\) (in absolute value) eigenvalues of a symmetric matrix \\(\\bar{A}\\), and \\(\\Lambda_{m}=\\mathrm{diag}(\\lambda_{1},\\cdots,\\lambda_{m})\\), and let \\(\\gamma=|\\lambda_{m+1}/\\lambda_{m}|\\). Given any \\(Q_{t-1}\\in\\mathbb{R}^{p\\times m}\\) such that \\(Q_{t-1}^{T}Q_{t-1}=I\\), let \\(Q_{t}\\) be the orthogonal matrix obtained by QR factorization of \\(\\bar{A}Q_{t-1}\\), i.e. \\(Q_{t}R_{t}=\\bar{A}Q_{t-1}\\), then_ \\[\\|P^{T}Q_{t}\\|_{2}^{2}\\geq\\frac{\\|P^{T}Q_{t-1}\\|_{F}^{2}}{(1-\\gamma^{2})\\|P^{ T}Q_{t-1}\\|_{F}^{2}+m\\gamma^{2}}. \\tag{3.17}\\] _Assume that \\(\\|P^{T}Q_{t}\\|_{F}^{2}=c\\|P^{T}Q_{t}\\|_{2}^{2}\\), where \\(c\\in[1,m]\\). Then we have:_ \\[\\|\\sin\\Theta(P,Q_{t})\\|_{F}^{2}\\leq\\frac{\\gamma^{2}\\|\\sin\\Theta(P,Q_{t-1})\\|_{ F}^{2}+\\frac{m-c}{m}\\|P^{T}Q_{t-1}\\|_{F}^{2}}{1-(1-\\gamma^{2})\\|\\sin\\Theta(P,Q_{t-1}) \\|_{2}^{2}}. \\tag{3.18}\\] Proof.: We can decompose \\(Q_{t-1}\\) as \\(Q_{t-1}=PX+P^{\\perp}Y\\), where \\(P^{\\perp}\\) is the orthogonal complement of \\(P\\) and its columns are the eigenvectors of \\(\\bar{A}\\) corresponding to the eigenvalues \\(\\lambda_{m+1},\\cdots,\\lambda_{p}\\), i.e. \\(\\bar{A}P=P\\Lambda_{m}\\), \\(\\bar{A}P^{\\perp}=P^{\\perp}\\Lambda^{\\prime}\\), where \\(\\Lambda^{\\prime}=\\mathrm{diag}(\\lambda_{m+1},\\cdots,\\lambda_{p})\\). We have the following equations: \\[Q_{t-1}^{T}Q_{t-1}=X^{T}X+Y^{T}Y=I\\Rightarrow\\|X\\|_{F}^{2}+\\|Y\\|_ {F}^{2}=m, \\tag{3.19}\\] \\[\\|P^{T}Q_{t}R_{t}\\|_{F}^{2}=\\|P^{T}\\bar{A}Q_{t-1}\\|_{F}^{2}=\\|P^{ T}\\bar{A}(PX+P^{\\perp}Y)\\|_{F}^{2}=\\|\\Lambda_{m}X\\|_{F}^{2}\\geq\\lambda_{m}^{2} \\|X\\|_{F}^{2}. 
\\tag{3.20}\\] Since \\(\\|Q_{t}R_{t}\\|_{F}=\\|R_{t}\\|_{F}=\\|\\bar{A}Q_{t-1}\\|_{F}\\), and \\[\\|\\bar{A}Q_{t-1}\\|_{F}^{2}=\\|P\\Lambda_{m}X+P^{\\perp}\\Lambda^{\\prime}Y\\|_{F}^{2 }=\\|\\Lambda_{m}X\\|_{F}^{2}+\\|\\Lambda^{\\prime}Y\\|_{F}^{2}, \\tag{3.21}\\] We have \\[\\|P^{T}Q_{t}\\|_{2}^{2} \\geq\\frac{\\|P^{T}Q_{t}R_{t}\\|_{F}^{2}}{\\|R_{t}\\|_{F}^{2}}=\\frac{ \\|P^{T}Q_{t}R_{t}\\|_{F}^{2}}{\\|\\bar{A}Q_{t-1}\\|_{F}^{2}}\\ \\text{by (3.9)} \\tag{3.22}\\] \\[\\geq\\frac{\\|\\Lambda_{m}X\\|_{F}^{2}}{\\|\\Lambda_{m}X\\|_{F}^{2}+\\| \\Lambda^{\\prime}Y\\|_{F}^{2}}\\ \\text{by (3.20) and (3.21)}\\] (3.23) \\[\\geq\\frac{\\lambda_{m}^{2}\\|X\\|_{F}^{2}}{\\lambda_{m}^{2}\\|X\\|_{F}^{ 2}+\\lambda_{m+1}^{2}\\|Y\\|_{F}^{2}}\\ \\text{by (3.20)}\\] (3.24) \\[=\\frac{\\lambda_{m}^{2}\\|X\\|_{F}^{2}}{\\lambda_{m}^{2}\\|X\\|_{F}^{ 2}+\\lambda_{m+1}^{2}(m-\\|X\\|_{F}^{2})}\\ \\text{by (3.19)}\\] (3.25) \\[=\\frac{\\|P^{T}Q_{t-1}\\|_{F}^{2}}{(1-\\gamma^{2})\\|P^{T}Q_{t-1}\\|_ {F}^{2}+m\\gamma^{2}}. \\tag{3.26}\\]Assume that \\(\\|P^{T}Q_{t}\\|_{F}^{2}=c\\|P^{T}Q_{t}\\|_{2}^{2}\\), where \\(c\\in[1,m]\\), then we have \\[\\|\\sin\\Theta(P,Q_{t})\\|_{F}^{2} =m-\\|P^{T}Q_{t}\\|_{F}^{2}=m-c\\|P^{T}Q_{t}\\|_{2}^{2} \\tag{3.27}\\] \\[\\leq\\frac{m\\gamma^{2}\\|\\sin\\Theta(P,Q_{t-1})\\|_{F}^{2}+(m-c)\\|P^{ T}Q_{t-1}\\|_{F}^{2}}{m-(1-\\gamma^{2})\\|\\sin\\Theta(P,Q_{t-1})\\|_{F}^{2}}\\text{ by (3.26)}\\] (3.28) \\[\\leq\\frac{\\gamma^{2}\\|\\sin\\Theta(P,Q_{t-1})\\|_{F}^{2}+\\frac{m-c}{ m}\\|P^{T}Q_{t-1}\\|_{F}^{2}}{1-(1-\\gamma^{2})\\|\\sin\\Theta(P,Q_{t-1})\\|_{2}^{2}}\\text{ by (3.7)}. \\tag{3.29}\\] When \\(c\\approx m\\), \\[\\|\\sin\\Theta(P,Q_{t})\\|_{F}^{2}\\lessapprox\\frac{\\gamma^{2}\\|\\sin\\Theta(P,Q_{ t-1})\\|_{F}^{2}}{1-(1-\\gamma^{2})\\|\\sin\\Theta(P,Q_{t-1})\\|_{2}^{2}}. \\tag{3.30}\\] Lemma 3.4 measures the progress made by one step of the orthogonal iteration, and the QR factorization can be replaced by other factorization methods, as long as the updated matrix \\(Q\\) is orthogonal. **Lemma 3.5**: _Suppose that \\(A=\\bar{A}+E\\), where \\(A\\) and \\(\\bar{A}\\) are symmetric positive semidefinite matrices. Suppose that \\(P\\) is the matrix of eigenvectors of \\(\\bar{A}\\) corresponding to the largest \\(m\\) eigenvalues. Assume \\(\\lambda_{1}(\\bar{A})=1\\). Let \\(Q\\in\\mathbb{R}^{p\\times m}\\) be any sparse matrix such that \\(Q^{T}Q=I\\), \\(\\|q_{i}\\|_{0}\\leq k_{i}\\). Then_ \\[\\frac{\\|P^{T}AQ\\|_{F}^{2}}{\\|AQ\\|_{F}^{2}}\\geq\\frac{\\|P^{T}\\bar{A}Q\\|_{F}^{2}} {\\|\\bar{A}Q\\|_{F}^{2}}-\\frac{4m\\rho(E,K)}{\\|\\bar{A}Q\\|_{F}^{2}}, \\tag{3.31}\\] _where \\(\\rho(E,K)=\\max_{Q^{T}Q=I_{m}}\\|EQ\\|_{2}\\) subject to \\(\\|q_{i}\\|_{0}\\leq k_{i},\\ K=[k_{1},\\cdots,k_{m}]\\). 
Proof: Since \\(A=\\bar{A}+E\\), using the triangle inequality for norms, we have \\[\\frac{\\|P^{T}AQ\\|_{F}}{\\|AQ\\|_{F}} =\\frac{\\|P^{T}(\\bar{A}+E)Q\\|_{F}}{\\|(\\bar{A}+E)Q\\|_{F}}\\geq\\frac{ \\|P^{T}\\bar{A}Q\\|_{F}-\\|P^{T}EQ\\|_{F}}{\\|\\bar{A}Q\\|_{F}+\\|EQ\\|_{F}} \\tag{3.32}\\] \\[\\geq\\frac{\\|P^{T}\\bar{A}Q\\|_{F}-\\|P^{T}EQ\\|_{F}-\\|EQ\\|_{F}}{\\|\\bar {A}Q\\|_{F}}\\] (3.33) \\[\\geq\\frac{\\|P^{T}\\bar{A}Q\\|_{F}-\\sqrt{m}(\\|P^{T}EQ\\|_{2}+\\|EQ\\|_{2 })}{\\|\\bar{A}Q\\|_{F}}\\] (3.34) \\[\\geq\\frac{\\|P^{T}\\bar{A}Q\\|_{F}-2\\sqrt{m}\\rho(E,K)}{\\|\\bar{A}Q\\|_ {F}}\\] (3.35) \\[\\Rightarrow\\frac{\\|P^{T}AQ\\|_{F}^{2}}{\\|AQ\\|_{F}^{2}} \\geq\\frac{\\|P^{T}\\bar{A}Q\\|_{F}^{2}-4\\sqrt{m}\\|P^{T}\\bar{A}Q\\|_{F} \\rho(E,K)+4m\\rho(E,K)^{2}}{\\|\\bar{A}Q\\|_{F}^{2}}\\] (3.36) \\[\\geq\\frac{\\|P^{T}\\bar{A}Q\\|_{F}^{2}}{\\|\\bar{A}Q\\|_{F}^{2}}-\\frac{ 4m\\rho(E,K)}{\\|\\bar{A}Q\\|_{F}^{2}}. \\tag{3.37}\\] \\({}_{\\Box}\\) **Remark 3.6**: _In Algorithm 2.1, \\(Q_{t}R_{t}=\\operatorname{Truncate}(AQ_{t-1})\\), and in theory \\(Q_{t}\\) can be dense. If the columns of \\(\\operatorname{Truncate}(AQ_{t-1})\\) mostly have nonzeros in nonoverlapping sets of rows, then the columns of this matrix will be almost orthogonal, and \\(Q_{t}\\) will be similarly sparse. The first column of \\(Q_{t}\\) will definitely have the same sparsity as the first column of \\(\\operatorname{Truncate}(AQ_{t-1})\\), but later columns are orthogonalized against more vectors and so may become denser. To measure the loss incurred during truncation, we establish the lemma below. **Lemma 3.7**: _Consider a unit vector \\(\\bar{x}\\in\\mathbb{R}^{P}\\) with support set \\(supp(\\bar{x})=\\bar{F}\\), and \\(\\bar{k}=|\\bar{F}|\\). Consider a vector \\(y\\) whose \\(k\\) largest absolute values have indices in the set \\(F\\). Then_ \\[\\frac{|\\text{Truncate}(y,F)^{T}\\bar{x}|}{\\|\\text{Truncate}(y,F)\\|}\\geq\\frac{|y ^{T}\\bar{x}|}{\\|y\\|}-\\sqrt{\\frac{\\min\\{\\bar{k},p-k\\}}{p}}. \\tag{3.38}\\] Proof.: Let \\(F_{1}=\\bar{F}\\backslash F\\), \\(F_{2}=\\bar{F}\\cap F\\), \\(F_{3}=F\\backslash\\bar{F}\\), then \\[\\bar{x}=\\bar{x}_{F_{1}}+\\bar{x}_{F_{2}},\\text{ Truncate}(y,F)=y_{F_{2}}+y_{F_{3}}.\\] Let \\(\\bar{\\alpha}=\\|\\bar{x}_{F_{1}}\\|\\leq 1\\), \\(\\alpha=\\|y_{F_{1}}\\|\\), \\(k_{1}=|F_{1}|\\). Note that if \\(\\bar{F}\\subseteq F\\), then \\(F_{1}=\\emptyset\\) and \\(k_{1}=0\\), \\(\\bar{\\alpha}=0\\). If \\(k_{1}\\neq 0\\) we have: \\[\\frac{\\alpha^{2}}{k_{1}}=\\frac{\\sum_{i\\in F_{1}}y_{i}^{2}}{|F_{1}|}\\leq\\frac{ \\|y\\|^{2}}{p}, \\tag{3.39}\\] since \\(y_{F_{1}}\\) contains the \\(k_{1}\\) smallest entries in \\(y\\). We know \\(k_{1}=|\\bar{F}\\backslash F|\\leq\\min\\{\\bar{k},p-k\\}\\), therefore \\[\\alpha\\leq\\sqrt{\\frac{k_{1}}{p}}\\|y\\|\\leq\\min\\{\\sqrt{\\frac{\\bar{k }}{p}},\\sqrt{\\frac{p-k}{p}}\\}\\|y\\|, \\tag{3.40}\\] \\[|\\text{Truncate}(y,F)^{T}\\bar{x}|\\geq|y^{T}\\bar{x}|-\\bar{\\alpha} \\alpha\\geq|y^{T}\\bar{x}|-\\alpha\\] (3.41) \\[\\Rightarrow|\\text{Truncate}(y,F)^{T}\\bar{x}|\\geq|y^{T}\\bar{x}|- \\min\\{\\sqrt{\\frac{\\bar{k}}{p}},\\sqrt{\\frac{p-k}{p}}\\}\\|y\\|. \\tag{3.42}\\] **Truncation error of the matrix product \\(P^{T}Q\\).** We denote each column of \\(Q\\) by \\(q_{i}\\) and each column of \\(P\\) by \\(p_{j}\\). Let \\(|F_{i}|=k_{i}\\) and \\(\\|p_{j}\\|_{0}=\\bar{k}_{j}\\). 
Then the truncation error of the matrix product \\(P^{T}Q\\) is given by: \\[(\\text{Truncate}(q_{i},F_{i})^{T}p_{j})^{2} \\geq(q_{i}^{T}p_{j})^{2}-2\\sqrt{\\frac{\\min\\{\\bar{k}_{j},p-k_{i} \\}}{p}}\\|q_{i}\\|^{2}, \\tag{3.43}\\] \\[\\sum_{i,j}(\\text{Truncate}(q_{i},F_{i})^{T}p_{j})^{2} \\geq\\sum_{i,j}(q_{i}^{T}p_{j})^{2}-2\\sum_{i,j}\\sqrt{\\frac{\\min\\{ \\bar{k}_{j},p-k_{i}\\}}{p}}\\|q_{i}\\|^{2}\\] (3.44) \\[\\Rightarrow\\frac{\\|\\text{Truncate}(Q)^{T}P\\|_{F}^{2}}{\\|\\text{ Truncate}(Q)\\|_{F}^{2}}\\geq\\frac{\\|Q^{T}P\\|_{F}^{2}}{\\|Q\\|_{F}^{2}}-2m\\sqrt{ \\frac{\\min\\{\\bar{k}_{\\max},p-k_{\\min}\\}}{p}}, \\tag{3.45}\\] where \\(\\bar{k}_{\\max}=\\max_{j}\\{\\bar{k}_{j}\\}\\) and \\(k_{\\min}=\\min_{i}\\{k_{i}\\}\\). _Remark 3.8_: This bound is not tight. Assume that: \\(\\bar{F}_{i}\\subseteq F_{i}^{(t)},\\ \\forall i\\), then there is no truncation error and \\[\\frac{\\|\\text{Truncate}(Q)^{T}P\\|_{F}^{2}}{\\|\\text{Truncate}(Q)\\|_{F}^{2}} \\geq\\frac{\\|Q^{T}P\\|_{F}^{2}}{\\|Q\\|_{F}^{2}}. \\tag{3.46}\\]Putting everything together, we can now prove Theorem 3.2: Proof.: Based on Algorithm 2.1, \\(Q_{t}\\) is obtained by doing **qr** factorization on \\(\\text{Truncate}(AQ_{t-1})\\), i.e. \\[Q_{t}R_{t}=\\text{Truncate}(AQ_{t-1}). \\tag{3.47}\\] \\[\\|P^{T}Q_{t}\\|_{2}^{2} \\geq\\frac{\\|P^{T}\\text{Truncate}(AQ_{t-1})\\|_{F}^{2}}{\\|R_{t}\\|_{ F}^{2}}=\\frac{\\|P^{T}\\text{Truncate}(AQ_{t-1})\\|_{F}^{2}}{\\|\\text{Truncate}(AQ_{t-1})\\|_{F}^{2}}\\text { by (3.47)} \\tag{3.48}\\] \\[\\geq\\frac{\\|P^{T}AQ_{t-1}\\|_{F}^{2}}{\\|AQ_{t-1}\\|_{F}^{2}}-\\delta _{\\text{Truncate}}\\text{ by (3.45)}\\] (3.49) \\[\\geq\\frac{\\|P^{T}\\bar{A}Q_{t-1}\\|_{F}^{2}}{\\|AQ_{t-1}\\|_{F}^{2}}- \\frac{4m\\rho(E,K)}{\\|AQ_{t-1}\\|_{F}^{2}}-\\delta_{\\text{Truncate}}\\text{ by Lemma 3.5}\\] (3.50) \\[\\geq\\frac{\\|P^{T}\\bar{A}Q_{t-1}\\|_{F}^{2}}{\\|\\bar{A}Q_{t-1}\\|_{F} ^{2}}-\\frac{4\\rho(E,K)}{\\lambda_{m}^{2}(1-\\|\\sin\\Theta(P,Q_{t-1})\\|_{2}^{2})}- \\delta_{\\text{Truncate}}\\] (3.51) \\[\\geq\\frac{\\|P^{T}Q_{t-1}\\|_{F}^{2}}{(1-\\gamma^{2})\\|P^{T}Q_{t-1}\\| _{F}^{2}+m\\gamma^{2}}-\\delta_{E}-\\delta_{\\text{Truncate}}\\text{ by Lemma 3.4} \\tag{3.52}\\] The inequality (3.51) holds due to the fact that: \\[\\frac{m}{\\|\\bar{A}Q_{t-1}\\|_{F}^{2}}\\leq\\|(\\bar{A}Q_{t-1})^{\\dagger}\\|_{2}^{2}\\leq \\frac{1}{\\lambda_{m}^{2}(1-\\|\\sin\\Theta(P,Q_{t-1})\\|_{2}^{2})}.\\] Assume that \\(\\|P^{T}Q_{t}\\|_{F}^{2}=c\\|P^{T}Q_{t}\\|_{2}^{2}\\), where \\(c\\in[1,m]\\). 
Then we have: \\[\\|\\sin\\Theta(P,Q_{t})\\|_{F}^{2} =m-\\|P^{T}Q_{t}\\|_{F}^{2}=m-c\\|P^{T}Q_{t}\\|_{2}^{2} \\tag{3.53}\\] \\[\\leq m-c\\frac{\\|P^{T}Q_{t-1}\\|_{F}^{2}}{(1-\\gamma^{2})\\|P^{T}Q_{t -1}\\|_{F}^{2}+m\\gamma^{2}}+c\\delta_{E}+c\\delta_{\\text{Truncate}}\\] (3.54) \\[=\\frac{m\\gamma^{2}\\|\\sin\\Theta(P,Q_{t-1})\\|_{F}^{2}+(m-c)\\|P^{T}Q_ {t-1}\\|_{F}^{2}}{m-(1-\\gamma^{2})\\|\\sin\\Theta(P,Q_{t-1})\\|_{F}^{2}}+c\\delta_{ E}+c\\delta_{\\text{Truncate}}\\] (3.55) \\[\\leq\\frac{\\gamma^{2}\\|\\sin\\Theta(P,Q_{t-1})\\|_{F}^{2}+\\frac{m-c} {m}\\|P^{T}Q_{t-1}\\|_{F}^{2}}{1-(1-\\gamma^{2})\\|\\sin\\Theta(P,Q_{t-1})\\|_{2}^{2} }+c\\delta_{E}+c\\delta_{\\text{Truncate}} \\tag{3.56}\\] When \\(\\|\\sin\\Theta(P,Q_{t-1})\\|_{F}^{2}\\geq\\frac{m-c}{1-\\gamma^{2}}\\), \\[\\frac{m\\gamma^{2}\\|\\sin\\Theta(P,Q_{t-1})\\|_{F}^{2}+(m-c)\\|P^{T}Q_{t-1}\\|_{F}^{ 2}}{m-(1-\\gamma^{2})\\|\\sin\\Theta(P,Q_{t-1})\\|_{F}^{2}}\\leq\\|\\sin\\Theta(P,Q_{t -1})\\|_{F}^{2}. \\tag{3.57}\\] When \\(c\\approx m\\), \\[\\|\\sin\\Theta(P,Q_{t})\\|_{F}^{2}\\lessapprox\\frac{\\gamma^{2}\\|\\sin\\Theta(P,Q_{ t-1})\\|_{F}^{2}}{1-(1-\\gamma^{2})\\|\\sin\\Theta(P,Q_{t-1})\\|_{2}^{2}}+m\\delta_{E}+m \\delta_{\\text{Truncate}}. \\tag{3.58}\\] We have proved that the orthogonal projector \\(\\mathcal{Q}^{(t)}=Q_{t}Q_{t}^{T}\\) onto \\(\\text{span}\\{Q_{t}\\}\\) converges to the true orthogonal projector \\(\\mathcal{P}=PP^{T}\\) onto \\(\\text{span}\\{P\\}\\). We now show that when the sequence of projectors converges, each column vector of \\(Q_{t}\\) also converges to the corresponding column vector of \\(P\\). Let \\([Q]_{i}\\) be the first \\(i\\) columns of \\(Q\\). Let \\(\\mathcal{Q}_{i}^{(t)}:=[Q_{t}]_{i}[Q_{t}]_{i}^{T}\\) be the orthogonal projector onto \\(\\text{span}\\{[Q_{t}]_{i}\\}\\) and \\(\\mathcal{P}_{i}\\) the orthogonal projector onto \\(\\text{span}\\{[P]_{i}\\}\\). Then we have the following corollary: **Corollary 3.9**: _If the sequence of orthogonal projectors \\(\\{\\mathcal{Q}_{i}^{(t)}\\}_{t\\geq 1}\\) converges to \\(\\mathcal{P}_{i}\\) for all \\(i\\), then each column vector of \\(Q_{t}\\) converges to the corresponding column vector of \\(P\\)._ We can express the \\(i+1^{th}\\) column vector of \\(Q_{t}\\) as \\[q_{i+1}^{(t)}=(\\mathcal{Q}_{i+1}^{(t)}-\\mathcal{Q}_{i}^{(t)})q_{i+1}^{(t)}\\] \\[=[(\\mathcal{Q}_{i+1}^{(t)}-\\mathcal{P}_{i+1})+(\\mathcal{P}_{i+1}-\\mathcal{P}_{ i})+(\\mathcal{P}_{i}-\\mathcal{Q}_{i}^{(t)})]q_{i+1}^{(t)}\\] \\[=[(\\mathcal{Q}_{i+1}^{(t)}-\\mathcal{P}_{i+1})+(\\mathcal{P}_{i}-\\mathcal{Q}_{i }^{(t)})]q_{i+1}^{(t)}+p_{i+1}p_{i+1}^{T}q_{i+1}^{(t)}\\] \\[\\Rightarrow 1-|p_{i+1}^{T}q_{i+1}^{(t)}|\\leq\\|(\\mathcal{Q}_{i+1}^{(t)}- \\mathcal{P}_{i+1})+(\\mathcal{P}_{i}-\\mathcal{Q}_{i}^{(t)})\\|_{2}.\\] Since the projector \\(\\mathcal{Q}_{i+1}^{(t)}\\) converges to \\(\\mathcal{P}_{i+1}\\), and \\(\\mathcal{Q}_{i}^{(t)}\\) converges to \\(\\mathcal{P}_{i}\\), we have \\(q_{i+1}^{(t)}\\) converges to \\(p_{i+1}\\). ## 4 Truncated Orthogonal Iteration with Strict Sparsity Constraint In real scenarios, final output vectors are often required to be sparse for better interpretation. Sparsity constraints are not enforced in TOrth or in ITSPCA [16], since by doing QR decomposition, the \\(m^{th}\\) vector needs to be orthogonalized against all previously obtained vectors \\(v_{1},\\cdots,v_{m-1}\\). If the previous vectors are partially overlapping, the \\(m^{th}\\) vector is likely to be dense. 
Therefore we also present Algorithm 4, which performs an additional truncation step on the matrix obtained by QR, which we denote as \\(Q\\). Specifically, we solve the following problem:

\\[Q_{\\text{truncate}}=\\arg\\min_{\\hat{Q}}\\|Q-\\hat{Q}\\|_{F}^{2},\\quad\\text{subject to: }\\|\\hat{q}_{i}\\|_{0}\\leq k_{i},\\ \\|\\hat{q}_{i}\\|_{2}=1.\\]

```
Algorithm 4: Truncated Orthogonal Iteration with Post Truncation (TOrthT)

Input:  symmetric positive semidefinite matrix A ∈ S^p,
        initial matrix Q_0 ∈ R^{p×m}, cardinality vector K ∈ R^m
Output: a sparse and near-orthogonal matrix Q_t

repeat
    Compute P_t = A·Q_{t-1}. Denote the i-th column of P_t as p_i.
    for i = 1, ..., m do
        Let F_i = supp(p_i, k_i), the indices of p_i with the largest k_i absolute values.
        Compute p̂_i = Truncate(p_i, F_i) and set P̂_t[:, i] = p̂_i.
    end for
    Reorthogonalize: Q̃_t = qr(P̂_t). Denote the i-th column of Q̃_t as q̃_i.
    for i = 1, ..., m do
        Let F_i = supp(q̃_i, k_i), the indices of q̃_i with the largest k_i absolute values.
        Compute q̂_i = Truncate(q̃_i, F_i), set q_i = q̂_i / ||q̂_i|| and Q_t[:, i] = q_i.
    end for
    t ← t + 1
until convergence
```

To test the impact of this post-truncation step, we construct a toy example where \\(\\bar{A}=V\\Lambda V^{T}\\), \\(A=\\bar{A}+E\\), \\(\\rho(\\bar{A})=1\\), and \\(\\rho(E)\\approx 0.21\\). We apply Algorithm 4 to three simple scenarios: a) the leading eigenvectors that we are trying to recover have completely overlapping nonzero indices; b) the leading eigenvectors have partially overlapping nonzero indices; and c) the indices are non-overlapping. We keep \\(K=[p,p,p]=[100,100,100]\\) in the first \\(20\\) iterations, \\(K=[50,50,50]\\) in iterations \\(21\\)-\\(40\\), \\(K=[25,25,25]\\) in iterations \\(41\\)-\\(60\\), and \\(K=[10,10,10]\\) in iterations \\(61\\)-\\(80\\). In all three cases, \\(\\|Q_{\\text{truncate}}-Q\\|_{F}^{2}\\) remains relatively low (\\(\\approx\\)1e-4) when the algorithm converges, as shown in Figure 1. For the non-overlapping case, the algorithm is able to recover the true support set before truncation. The orthogonality loss of the final output matrix \\(Q_{t}\\) is also calculated: \\(\\|I-Q_{t}^{T}Q_{t}\\|_{F}^{2}=\\)1.57e-4 in case a), 1.17e-4 in case b), and 0 in case c). The post-truncation step does not have a significant impact on the quality or orthogonality of the obtained vectors, especially when the true eigenvectors have non-overlapping support sets.

## 5 Relations to Existing Algorithms

When estimating a single sparse eigenvector, Theorem 3.2 can be applied directly to the Truncated Power Method (TPower) [25] with \\(m=1\\). In this case, we have the following corollary:

**Corollary 5.1**: _Let \\(p\\) denote the eigenvector corresponding to the largest eigenvalue of \\(\\bar{A}\\), and let \\(A=\\bar{A}+E\\). Define \\(\\gamma:=\\frac{\\lambda_{2}}{\\lambda_{1}}<1\\). Let \\(q_{t}\\) be the vector obtained at iteration \\(t\\) by the Truncated Power Method._
_Then_

\\[|\\sin\\angle(p,q_{t})|^{2}\\leq\\frac{\\gamma^{2}|\\sin\\angle(p,q_{t-1})|^{2}}{1-(1-\\gamma^{2})|\\sin\\angle(p,q_{t-1})|^{2}}+\\delta_{E}+\\delta_{\\text{Truncate}},\\]

_and a uniform bound is given by_

\\[|\\sin\\angle(p,q_{t})|\\leq\\mu^{t}|\\sin\\angle(p,q_{0})|+\\sqrt{\\delta_{E}+\\delta_{\\text{Truncate}}}\\frac{1-\\mu^{t}}{1-\\mu},\\]

_where_

\\[\\mu=\\frac{\\gamma}{\\sqrt{1-(1-\\gamma^{2})|\\sin\\angle(p,q_{0})|^{2}}}\\leq 1,\\ \\delta_{E}=\\frac{4\\rho(E,k)}{\\lambda_{1}^{2}(1-|\\sin\\angle(p,q_{0})|^{2})},\\ \\delta_{\\text{Truncate}}=2\\sqrt{\\frac{\\min\\{\\bar{k},p-k\\}}{p}}.\\]

Figure 1: (a), (b) and (c) correspond to the three different cases of support sets of the leading eigenvectors. We plot \\(\\|Q_{\\text{truncate}}-Q_{t}\\|_{F}^{2}\\) vs. iteration number \\(t\\) to measure the difference between the output matrix before and after the truncation step. The truncation level starts at \\(p\\) (nothing is truncated) and is decreased at iterations \\(21\\), \\(41\\), \\(61\\) and \\(81\\).

The Iterative Thresholding Sparse PCA (ITSPCA) algorithm proposed by [16] also fits into the framework introduced in Section 2 and uses a thresholding step. The thresholding step is performed through a user-specified function \\(\\eta\\) which satisfies:

\\[|\\eta(y_{i},t)-y_{i}|\\leq t,\\ \\eta(y_{i},t)\\mathbf{1}_{|y_{i}|<t}=0\\ \\forall y_{i}\\in y,\\ t>0. \\tag{10}\\]

Common thresholding techniques include soft and hard thresholding, as well as a range of operators in between; see e.g. SCAD [6]. The analysis of Theorem 3.2 can be applied to ITSPCA if we substitute the truncation lemma with the following lemma:

**Lemma**: _Consider a unit vector \\(\\bar{x}\\in\\mathbb{R}^{p}\\) with support set \\(\\text{supp}(\\bar{x})=\\bar{F}\\), and \\(\\bar{k}=|\\bar{F}|\\). Consider a vector \\(y\\) that is thresholded by the user-specified thresholding function \\(\\eta\\), where \\(\\eta\\) satisfies (10). Then_

\\[\\frac{\\text{Thresh}(y,t)^{T}\\bar{x}}{\\|\\text{Thresh}(y,t)\\|_{2}}\\geq\\frac{|y^{T}\\bar{x}|}{\\|y\\|_{2}}-\\frac{t\\sqrt{\\bar{k}}}{\\|y\\|_{2}}. \\tag{11}\\]

The analysis in [16] under the spiked covariance model is also applicable to our Algorithm 2.1, since we can transform our sparsity constraint, an upper bound on the \\(\\ell_{0}\\) norm, into an upper bound on the \\(\\ell_{1}\\) norm:

\\[\\|q_{j}\\|_{0}\\leq k_{j},\\ \\|q_{j}\\|_{2}=1\\Rightarrow\\|q_{j}\\|_{1}\\leq\\sqrt{k_{j}}.\\]

## 6 Experimental results

In this section we present several numerical experiments to demonstrate the efficiency and accuracy of the proposed algorithms. Because the Truncated Orthogonal Iteration is a direct extension of the Truncated Power Method to the recovery of multiple components, we first apply both methods to a simulated dataset with known sparse eigenvectors and report their run-time and distance to the true eigenvectors. We see that the Truncated Orthogonal Iteration is less sensitive to random initialization than TPower. We then use the PitProps dataset, a standard benchmark, to evaluate the performance of the proposed algorithms and compare with the state of the art. We also apply our algorithm to a sea surface temperature dataset for the recovery of sparse patterns, and to the MNIST dataset for the classification of handwritten digits. We refer to Algorithm 2.1 as TOrth and Algorithm 4 as TOrthT.

**Initialization.** We use the same warm initialization strategy as specified in [25], i.e. starting with a larger \\(k\\) and using the output as the initialization for a smaller \\(k\\).
Specifically, for each column vector we run the algorithm with cardinalities \\(\\{8k_{i},4k_{i},2k_{i},k_{i}\\}\\) sequentially, and use the original dimension \\(p\\) whenever a multiple of \\(k_{i}\\) exceeds \\(p\\).

**Choosing the Cardinality Parameter \\(k\\).** Unless the cardinality parameter \\(k\\) is pre-specified, we start with a large cardinality constraint and gradually tighten it. Once the subspace converges for \\(k\\), we proceed to the next truncation level with cardinality parameter \\(k/2\\). If \\(k\\) is too small, \\(\\|\\sin\\Theta(Q_{t},Q_{t-1})\\|_{F}^{2}\\) is likely to exhibit a large jump, as illustrated in Figure 2.

**Convergence Criterion.** Since we are interested in recovering not only the eigenspace but also the exact eigenvectors, the convergence criterion for the proposed algorithms is based on the size of \\(\\|Q_{t}-Q_{t-1}\\|_{2}\\). Unless specified, the default threshold is set to \\(10^{-12}\\), and the algorithm is forced to terminate after a maximum of 200 iterations.

### Comparisons on a Simulated Dataset where \\(\\bar{A}\\) is known

We first consider a simulated example where we know the true matrix \\(\\bar{A}\\) with sparse eigenvectors and we add a small perturbation matrix \\(E\\) to it. Our goal is to recover the true sparse eigenvectors of \\(\\bar{A}\\) from the perturbed matrix \\(A=\\bar{A}+E\\). Specifically, we use a \\(1000\\times 1000\\) positive semidefinite matrix \\(\\bar{A}=V\\Sigma V^{T}\\) with the first three columns of \\(V\\) being sparse orthonormal vectors; the remaining columns of \\(V\\) are randomly generated orthonormal vectors such that \\(V^{T}V=I\\). We consider the following three cases for the support sets:

* Case I: the support sets for the first three eigenvectors are identical. Specifically, for \\(i=1,2,3\\), the first 10 entries of \\(v_{i}\\) are nonzero and the rest of the entries are zero.
* Case II: the support sets partially overlap.
* Case III: the support sets are completely non-overlapping.

For all three cases, the eigenvalues are set to \\(\\lambda_{1}=1\\), \\(\\lambda_{2}=0.9\\), \\(\\lambda_{3}=0.8\\), and \\(\\lambda_{j}=0.1\\) for \\(j=4,\\cdots,1000\\), with \\(\\rho(E)\\approx 0.22\\). For each case, we use a fixed \\(\\bar{A}\\) and run 1000 randomly initialized trials for each algorithm. We denote the true eigenvectors of \\(\\bar{A}\\) as \\(v_{1},\\ v_{2},\\ v_{3}\\) and the recovered eigenvectors as \\(u_{1},\\ u_{2},\\ u_{3}\\). For each algorithm, we record the inner products of the true and the recovered eigenvectors averaged over _all_ trials. We also record the success rate, where a trial is counted as a success if all three inner products are greater than 0.99. The recovery rate is measured by the recovery of the full support set, i.e. a trial is counted as a successful recovery if the recovered eigenvector has all nonzeros at the correct indices; otherwise it is counted as a failure. The results show that TOrth achieves the best success rate in all three cases, and that TOrthT achieves a success rate comparable to TOrth with a better sparsity recovery rate than TPower. We also observe that in TPower, the second and third recovered vectors \\(u_{2},\\ u_{3}\\) have worse quality, in terms of distance to the true eigenvector, than the first recovered vector \\(u_{1}\\). We explain this phenomenon with an error analysis of the deflation scheme. Suppose that \\(u_{1}=v_{1}+z\\), where \\(z\\) is the difference between the true and the recovered eigenvector.
Without loss of generality, we compute the matrix after one deflation:

\\[(I-u_{1}u_{1}^{T})(\\bar{A}+E)(I-u_{1}u_{1}^{T})=(I-v_{1}v_{1}^{T})\\bar{A}(I-v_{1}v_{1}^{T})+C\\bar{A}C+B\\bar{A}C+C\\bar{A}B+CEC+BEB+BEC+CEB=:(I-v_{1}v_{1}^{T})\\bar{A}(I-v_{1}v_{1}^{T})+E_{1},\\]

where \\(B=I-v_{1}v_{1}^{T}\\) and \\(C=zz^{T}-v_{1}z^{T}-zv_{1}^{T}\\). When \\(u_{1}\\) converges to \\(v_{1}\\), \\(z\\) is negligible and \\(C\\approx\\mathbf{0}\\). But when \\(u_{1}\\) diverges, the error accumulates and the perturbation \\(\\rho(E_{1})\\) of the deflated matrix can be even larger than \\(\\rho(\\bar{A})\\), making it harder for \\(u_{2}\\) to converge.

Fig. 2: Effect of decreasing the truncation level on the distance between consecutive updates \\((Q_{t},Q_{t-1})\\) and on the distance between the updated matrix and the true eigenvectors \\((Q_{t},V)\\). We use the same toy example as described in Section 4, with matrix dimension \\(p=100\\) and true sparsity level \\(\\bar{k}=10\\). Left: the truncation level starts at \\(k=100\\) and decreases at iterations 21 (\\(k=50\\)), 41 (\\(k=25\\)), and 61 (\\(k=10\\)). Right (over-truncation): iterations 1-80 are the same as in the Left; at iteration 81, the truncation level further decreases to \\(k=5\\).

### PitProps Dataset

The PitProps dataset [8] is one of the classical examples of PCA interpretability and a benchmark for evaluating the performance of sparse PCA algorithms. The dataset contains 180 observations of props of Corsican pine from East Anglia and 13 variables corresponding to the physical properties of the props. The first six principal components obtained by standard PCA account for 87% of the total variance. We compute the first six sparse loadings of the data and compare the results with other sparse PCA algorithms. Since the outputs of sparse PCA algorithms are not guaranteed to be uncorrelated, we measure their performance by the proportion of adjusted explained variance (Prop. of AdjVar.) and the cumulative percentage of explained variance (CPEV), as explained below; a short computational sketch of both criteria follows their definitions.

* **Prop. of AdjVar.** Suppose \\(X\\) is the data matrix and \\(V\\) are the obtained sparse loadings. The adjusted variance [27] of the first \\(m\\) principal components \\(Y=XV\\) is computed by:
\\[\\text{AdjVar}(V)=\\sum_{j=1}^{m}R_{jj}^{2}\\]
where \\(R\\) is the upper-triangular matrix obtained by QR factorization of \\(Y\\), used to de-correlate the components. If given the covariance matrix \\(A=X^{T}X\\), \\(R\\) can be obtained by the Cholesky factorization \\(R=\\mathbf{chol}(V^{T}AV)\\).
* **CPEV.** To account for the non-orthogonality of the loading matrix, CPEV was proposed in [19] and uses the projection of \\(X\\) onto the \\(m\\)-dimensional subspace spanned by the loading vectors \\(V\\):
\\[X_{m}=XV(V^{T}V)^{-1}V^{T}\\]
The CPEV is then computed as \\(\\text{Trace}(X_{m}^{T}X_{m})/\\text{Trace}(X^{T}X)\\).
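Both criteria are straightforward to compute from a data matrix and a set of loadings. The following minimal numpy sketch is our own illustration of the two definitions above; the function names `adjusted_variance` and `cpev` are ours, not from the original work:

```python
import numpy as np

def adjusted_variance(X, V):
    """Adjusted variance of the components Y = XV (Zou et al. [27]):
    QR-decorrelate the possibly correlated sparse components, then
    sum the squared diagonal of the upper-triangular factor R."""
    Y = X @ V                       # n x m component scores
    R = np.linalg.qr(Y, mode='r')   # upper-triangular factor of Y
    return np.sum(np.diag(R) ** 2)

def cpev(X, V):
    """Cumulative percentage of explained variance (Shen & Huang [19]):
    project X onto span{V} and compare trace(X_m^T X_m) to trace(X^T X)."""
    Xm = X @ V @ np.linalg.solve(V.T @ V, V.T)  # X V (V^T V)^{-1} V^T
    return np.trace(Xm.T @ Xm) / np.trace(X.T @ X)
```

Dividing `adjusted_variance(X, V)` by the total variance `np.trace(X.T @ X)` presumably yields the proportion of adjusted explained variance reported in the tables.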
\\begin{table}
\\begin{tabular}{|l|l|c|c|c|c|c|}
\\hline
 & Algorithms & \\(|v_{1}^{T}u_{1}|\\) & \\(|v_{2}^{T}u_{2}|\\) & \\(|v_{3}^{T}u_{3}|\\) & Success Rate & Recovery Rate \\\\
\\hline
Case I: & Standard & 0.9912 & 0.9888 & 0.9869 & 0 & 0 \\\\
Completely & TPower & 0.9885 & 0.9501 & 0.7881 & 78.7\\% & 39.2\\% \\\\
overlap & TOrth & 0.9955 & 0.9910 & 0.9872 & **98.8\\%** & 50.0\\% \\\\
 & TOrthT & 0.9935 & 0.9910 & 0.9872 & **98.8\\%** & **51.9\\%** \\\\
\\hline
Case II: & Standard & 0.9916 & 0.9896 & 0.9863 & 0 & 0 \\\\
Partially & TPower & 0.9131 & 0.8252 & 0.8141 & 75.8\\% & 75.8\\% \\\\
overlap & TOrth & 0.9161 & 0.8962 & 0.9600 & **89.2\\%** & 0 \\\\
 & TOrthT & 0.8971 & 0.8712 & 0.9540 & 86.8\\% & **86.8\\%** \\\\
\\hline
Case III: & Standard & 0.9915 & 0.9892 & 0.9878 & 0 & 0 \\\\
non-overlap & TPower & 0.8690 & 0.7550 & 0.7569 & 66.8\\% & 66.8\\% \\\\
 & TOrth & 0.8470 & 0.8020 & 0.9069 & **79.4\\%** & **79.4\\%** \\\\
 & TOrthT & 0.8560 & 0.7960 & 0.9030 & 79.0\\% & 79.0\\% \\\\
\\hline
\\end{tabular}
\\end{table}

Table 1: Results on Simulated Data

We report the results obtained by TOrthT (Algorithm 4) and compare with the results obtained by TPower [25], GPower [14], PathPCA [4], rSVD [19] and SPCA [27] in Table 2. Overall, TOrthT performs on par with the other algorithms.

### Denoising of Synthetic Signals

In this experiment we follow a setting similar to the denoising experiment from [9] and generate signals from the following noisy linear model:

\\[u_{1}V^{1}+u_{2}V^{2}+u_{3}V^{3}+\\epsilon\\in\\mathbb{R}^{400}\\]

where \\(V=[V^{1},V^{2},V^{3}]\\in\\mathbb{R}^{400\\times 3}\\) are sparse, structured dictionary elements arranged on a \\(20\\times 20\\)-dimensional grid with non-overlapping supports, as shown in the first row of Figure 3. Each dictionary element has a structured sparsity consisting of a \\(10\\times 10\\) nonzero block. The components of the noise vector \\(\\epsilon\\) are independent and identically distributed samples from a normal distribution. The linear coefficients \\([u_{1},u_{2},u_{3}]\\) are generated from a normal distribution:

\\[[u_{1},u_{2},u_{3}]\\sim\\mathcal{N}\\Bigg{(}\\mathbf{0},\\left[\\begin{array}{ccc}1&0&0.5\\\\ 0&1&0.5\\\\ 0.5&0.5&1\\end{array}\\right]\\Bigg{)}.\\]

We generate \\(n=250\\) signals according to the noisy linear model, and decompose the data matrix to obtain the first three dictionary elements using standard PCA, standard PCA with post-truncation (truncating only at the end), and TOrth. The sparsity level is set to the true sparsity, i.e. \\(K=[100,100,100]\\). The results are shown in Figure 3. We observe that standard PCA and simple truncation are not able to recover the original dictionaries, while TOrth finds the structured sparsity in all three elements.

Figure 3: Original signals and signals recovered by PCA, PCA with simple truncation, and TOrth.
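To make the setup concrete, the generative model for this experiment can be sampled in a few lines. The sketch below is our own illustration: the positions of the three \\(10\\times 10\\) blocks and the noise scale `sigma_noise` are assumptions, since the text specifies only that the elements are non-overlapping \\(10\\times 10\\) blocks on a \\(20\\times 20\\) grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three non-overlapping 10x10 blocks on a 20x20 grid (positions assumed).
grid = np.zeros((3, 20, 20))
grid[0, :10, :10] = 1.0
grid[1, :10, 10:] = 1.0
grid[2, 10:, :10] = 1.0
V = grid.reshape(3, -1).T / 10.0   # 400 x 3 dictionary, unit-norm columns

# Correlated coefficients [u1, u2, u3] with the covariance given above.
cov_u = np.array([[1.0, 0.0, 0.5],
                  [0.0, 1.0, 0.5],
                  [0.5, 0.5, 1.0]])
U = rng.multivariate_normal(np.zeros(3), cov_u, size=250)    # 250 x 3

sigma_noise = 0.1                  # assumed; not specified in the text
Z = U @ V.T + sigma_noise * rng.standard_normal((250, 400))  # 250 signals
A = Z.T @ Z / 250                  # covariance passed to TOrth, K = [100]*3
```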
### Sea Surface Temperature Example

The Sea Surface Temperature (SST) dataset records the weekly means of satellite ocean temperature data over \\(360\\times 180=64{,}800\\) grid points from 1990 to the present [18]. The El Niño-Southern Oscillation (ENSO) is defined as any sustained temperature anomaly above the running mean temperature with a duration of 9 to 24 months. The canonical El Niño is associated with a narrow band of warm water off coastal Peru, as recovered by TOrth and shown in Figure 4 (right). Standard PCA, as shown in Figure 4 (left), is unable to separate this band from a global weather pattern across the Pacific and Atlantic.

Figure 4: The fourth mode recovered by standard PCA (Left) and by TOrth (Right).

\\begin{table}
\\begin{tabular}{|c|c|c|c|c|}
\\hline
Algorithms & Input Parameters & Output Card. & Prop. of AdjVar. & CPEV \\\\
\\hline
TOrthT & \\(K=[7,2,4,3,5,4]\\) & 25 & 0.7956 & 0.8487 \\\\
 & \\(K=[6,2,1,2,1,1]\\) & 13 & 0.7009 & 0.7528 \\\\
\\hline
TPower & \\(K=[7,2,4,3,5,4]\\) & 25 & 0.7913 & 0.8377 \\\\
 & \\(K=[6,2,1,2,1,1]\\) & 13 & 0.7003 & 0.7585 \\\\
\\hline
GPower\\({}_{\\ell_{1}}\\) & \\(\\gamma=0.22\\); see [14] & 25 & 0.8083 & 0.8279 \\\\
 & \\(\\gamma=0.40\\) & 13 & 0.7331 & 0.7599 \\\\
\\hline
PathPCA & \\(K=[7,2,4,3,5,4]\\) & 25 & 0.8003 & 0.8438 \\\\
 & \\(K=[6,2,1,2,1,1]\\) & 13 & 0.7202 & 0.7700 \\\\
\\hline
rSVD\\({}_{\\ell_{1}}\\) & see Shen and Huang [19] & 25 & 0.8025 & 0.8450 \\\\
\\hline
SPCA & see Zou et al. [27] & 18 & 0.7575 & 0.8022 \\\\
\\hline
\\end{tabular}
\\end{table}

Table 2: Results on PitProps

We use this example to demonstrate the computational efficiency of our algorithm. The SST data dimension is \\(1455\\times 64800\\), where \\(n=1455\\) is the number of temporal snapshots and \\(p=64800\\) is the number of spatial grid points in each snapshot. We compute the fourth mode, which is associated with the canonical El Niño, so we set \\(m=4\\) and compare with other block (sparse) PCA algorithms. The time complexity and actual running time are recorded in Table 3. Truncated SVD serves as a baseline: it computes all singular vectors and singular values and is more expensive than algorithms that only compute the leading vectors. The other approaches, although comparable to TOrth in time complexity, require longer running times since the polar decomposition via SVD [14] is more expensive than the QR decomposition. The methods we compare to (GPower [14], Variable Projection SPCA [5]) are already among the top performers in terms of computational speed. For example, the elastic net SPCA algorithm [27] requires \\(O(np^{2}+p^{3})\\) and takes at least 10\\(\\times\\) the running time of Variable Projection SPCA [5] on the SST dataset.

### Classification Example on the MNIST dataset

We apply Algorithm 2.1 (TOrth) to the MNIST handwritten digit dataset and compare the classification performance with that of standard PCA. The MNIST dataset has 70,000 samples; each sample is a 2D image with \\(28\\times 28\\) pixels, yielding 784 features. We first use the default train-test split, where 60,000 samples are used as the training set and 10,000 samples as the test set. Applying a k-nearest neighbor (KNN) classifier to the original data gives a prediction error of 3%, where the number of nearest neighbors is chosen to be 3 by cross-validation. We apply standard PCA and TOrth to reduce the dimension from 784 to various subspace dimensions \\(p\\); a sketch of this evaluation pipeline is given below.
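The sketch below uses scikit-learn for the PCA baseline; swapping the PCA projection for the sparse loadings \\(V\\) produced by TOrth (projecting with `X @ V`) gives the TOrth rows of Table 4. The function and variable names are ours, and the data-loading step is omitted:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def knn_error_after_pca(X_train, y_train, X_test, y_test, p=60, k_nn=3):
    """Project MNIST onto the top-p PCA loadings, then classify with
    a k_nn-nearest-neighbor classifier; returns the prediction error."""
    pca = PCA(n_components=p).fit(X_train)
    knn = KNeighborsClassifier(n_neighbors=k_nn)
    knn.fit(pca.transform(X_train), y_train)
    return 1.0 - knn.score(pca.transform(X_test), y_test)
```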
With PCA, applying KNN to the projected test data achieves the lowest prediction error of 2.47% when \\(p=60\\), after which the performance of KNN starts to decline due to the curse of dimensionality. Fixing \\(p=60\\), results of TOrth with different \\(k\\) are shown in Table 4. We observe that by using only \\(k=20\\) nonzeros in each of the 60 loading vectors, TOrth is able to achieve a prediction error comparable to KNN applied to the raw data in its full dimension. Figure 5 displays the top 30 loading vectors obtained by PCA and TOrth; the loading vectors obtained by TOrth capture more local features and include fragments/strokes of digits.

### 20 newsgroup dataset

We take a sample of the 20 newsgroup dataset [15], with binary occurrence data for 100 keywords across 16,242 postings. The postings come from 4 general topics: computer, recreation, science and talk. We apply sparse PCA using TOrthT with sparsity level \\(k=10\\), and standard PCA, on the centered data. Table 5 shows the 10 nonzero entries of the coefficient vector for each of the first two PCs. The keywords associated with the first PC are more relevant to religion and politics, while the keywords associated with the second PC are more relevant to computers. We also project the data onto the 2D subspace spanned by PC1 and PC2, as shown in Figure 6. We observe that with the loading vectors obtained from TOrthT, the projections of the "computer"-themed data are dense in PC2 and sparse in PC1, while the "talk"-themed data projections are dense in PC1 and sparse in PC2. In contrast, projections onto the standard PCs are clustered, are dense in both directions, and lack physical interpretation.

\\begin{table}
\\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\\hline
k & 10 & 20 & 40 & 80 & 160 & 320 & 640 & 784 (\\(p\\)) \\\\
\\hline
Prediction error (\\%) & 4.26 & 2.95 & 2.78 & 2.64 & 2.63 & 2.49 & 2.48 & 2.47 \\\\
\\hline
\\end{tabular}
\\end{table}

Table 4: Prediction error on MNIST

\\begin{table}
\\begin{tabular}{|c|c|c|c|c|}
\\hline
 & SVD & VarProj & GPower\\({}_{m}\\) & TOrth \\\\
\\hline
\\(n>p\\) & \\(O(np^{2})\\) & \\(O(np^{2})+kO(mpn+m^{2}p)\\) & \\(kO(mpn+m^{2}p)\\) & \\(kO(mpn+m^{2}p)\\) \\\\
\\hline
\\(n<p\\) & \\(O(pn^{2})\\) & \\(O(pn^{2})+kO(mpn+m^{2}p)\\) & \\(kO(mpn+m^{2}p)\\) & \\(kO(mpn+m^{2}p)\\) \\\\
\\hline
Running Time (s) & 31.30 & 44.79 & 17.19 & 5.96 \\\\
\\hline
\\end{tabular}
\\end{table}

Table 3: Comparisons on Complexity and Running Time

\\begin{table}
\\begin{tabular}{|c|c c c c c|}
\\hline
PC1 & Question & Fact & Problem & Course & Case \\\\
 & World & God & Number & Human & Government \\\\
\\hline
PC2 & Help & Email & Problem & System & Windows \\\\
 & Program & University & Computer & Software & Files \\\\
\\hline
\\end{tabular}
\\end{table}

Table 5: Nonzero entries in the coefficient vectors obtained by TOrthT

## 7 Conclusions

In this paper, we have proposed two algorithms based on the orthogonal iteration to recover sparse eigenvectors, and established convergence analyses. Our algorithms can be easily implemented and work efficiently on a wide range of data science applications, while achieving comparable or superior performance relative to existing algorithms. Compared to its single-vector counterpart, the block scheme is more robust to random initialization and achieves better accuracy and sparse recovery rates. There are still many open problems in this area of research. One challenge for the block approach is how to maximally preserve orthogonality while achieving sparsity.
Another open problem is choosing a proper initialization strategy that leads to better and faster convergence. Lastly, different truncation schemes, including probabilistic rather than deterministic approaches, can be explored to preserve the true support set and avoid truncation error in the future.

**Acknowledgments.**

Figure 5: Top 30 loading vectors of the MNIST training set. Left: loading vectors obtained by standard PCA. Right: loading vectors obtained by Algorithm 2.1 (TOrth), with \\(k_{i}=20,\\forall i\\).

Figure 6: Projection of the "computer" and "talk"-themed data onto PC1 and PC2. (a) and (b): projections onto PC directions obtained by PCA. (c) and (d): projections onto PC directions obtained by TOrthT.

**Appendix.** We give the proof of (3.15) and (3.16), which is based on Theorem 8.1.10 of [22].

**Theorem 7.1**: _Let \\(P\\) be the matrix of eigenvectors corresponding to the \\(m\\)-largest eigenvalues of \\(\\bar{A}\\). Assume \\(\\lambda_{m}>\\lambda_{m+1}\\). Define \\(\\gamma:=\\frac{\\lambda_{m+1}}{\\lambda_{m}}\\). Then the matrices \\(Q_{t}\\) generated by the standard orthogonal iteration satisfy_

\\[\\|\\sin\\Theta(P,Q_{t})\\|_{F}\\leq\\gamma\\frac{\\|\\sin\\Theta(P,Q_{t-1})\\|_{F}}{\\sqrt{1-\\|\\sin\\Theta(P,Q_{t-1})\\|_{2}^{2}}},\\]

_assuming \\(\\|\\sin\\Theta(P,Q_{t-1})\\|_{2}<1\\)._

In the standard orthogonal iteration,

\\[\\bar{A}Q_{t-1}=Q_{t}R_{t}. \\tag{7.1}\\]

We can decompose \\(Q_{t}\\) as \\(Q_{t}=PX_{t}+P^{\\perp}Y_{t}\\), where \\(P^{\\perp}\\) is the orthogonal complement of \\(P\\) and its columns are the eigenvectors of \\(\\bar{A}\\) corresponding to the eigenvalues \\(\\lambda_{m+1},\\cdots,\\lambda_{p}\\), i.e. \\(\\bar{A}P=P\\Lambda_{m}\\) and \\(\\bar{A}P^{\\perp}=P^{\\perp}\\Lambda^{\\prime}\\), where \\(\\Lambda^{\\prime}=\\text{diag}(\\lambda_{m+1},\\cdots,\\lambda_{p})\\). We have the following equations:

\\[X_{t}=P^{T}Q_{t},\\quad Y_{t}={P^{\\perp}}^{T}Q_{t}\\Leftrightarrow\\left[\\begin{array}{c}X_{t}\\\\ Y_{t}\\end{array}\\right]=\\left[\\begin{array}{cc}P&P^{\\perp}\\end{array}\\right]^{T}Q_{t}. \\tag{7.2}\\]

By applying the thin CS decomposition [22] to (7.2) we have

\\[1=\\sigma_{\\min}(X_{t})^{2}+\\sigma_{\\max}(Y_{t})^{2}. \\tag{7.3}\\]

Substituting the spectral decomposition of \\(\\bar{A}\\) into (7.1),

\\[\\left[\\begin{array}{cc}P&P^{\\perp}\\end{array}\\right]\\left[\\begin{array}{cc}\\Lambda_{m}&\\\\ &\\Lambda^{\\prime}\\end{array}\\right]\\left[\\begin{array}{c}P^{T}\\\\ {P^{\\perp}}^{T}\\end{array}\\right]Q_{t-1}=Q_{t}R_{t}, \\tag{7.4}\\]

\\[\\left[\\begin{array}{cc}\\Lambda_{m}&\\\\ &\\Lambda^{\\prime}\\end{array}\\right]\\left[\\begin{array}{c}P^{T}\\\\ {P^{\\perp}}^{T}\\end{array}\\right]Q_{t-1}=\\left[\\begin{array}{c}P^{T}\\\\ {P^{\\perp}}^{T}\\end{array}\\right]Q_{t}R_{t},\\quad\\left[\\begin{array}{cc}\\Lambda_{m}&\\\\ &\\Lambda^{\\prime}\\end{array}\\right]\\left[\\begin{array}{c}X_{t-1}\\\\ Y_{t-1}\\end{array}\\right]=\\left[\\begin{array}{c}X_{t}\\\\ Y_{t}\\end{array}\\right]R_{t}\\]

\\[\\Rightarrow\\Lambda_{m}X_{t-1}=X_{t}R_{t},\\quad\\Lambda^{\\prime}Y_{t-1}=Y_{t}R_{t}. \\tag{7.5}\\]
Assuming \\(R_{t}\\) and \\(X_{t-1}\\) are nonsingular, we obtain the following equations from (7.5):

\\[Y_{t}=\\Lambda^{\\prime}Y_{t-1}R_{t}^{-1}, \\tag{7.6}\\]

\\[R_{t}^{-1}=(\\Lambda_{m}X_{t-1})^{-1}X_{t}=X_{t-1}^{-1}\\Lambda_{m}^{-1}X_{t}, \\tag{7.7}\\]

\\[\\|R_{t}^{-1}\\|_{2}\\leq\\|X_{t-1}^{-1}\\|_{2}\\|\\Lambda_{m}^{-1}\\|_{2}\\|X_{t}\\|_{2}\\leq\\frac{1}{\\sqrt{1-\\|Y_{t-1}\\|_{2}^{2}}}\\frac{1}{\\lambda_{m}}. \\tag{7.8}\\]

The last inequality in (7.8) comes from (7.3) and the fact that

\\[\\|X_{t-1}^{-1}\\|_{2}\\leq\\frac{1}{\\sigma_{\\min}(X_{t-1})}=\\frac{1}{\\sqrt{1-\\sigma_{\\max}(Y_{t-1})^{2}}}.\\]

Based on (7.6) and (7.8), we arrive at the bound given in (3.16):

\\[\\|\\sin\\Theta(P,Q_{t})\\|_{F}=\\|Y_{t}\\|_{F}\\leq\\|\\Lambda^{\\prime}\\|_{2}\\|Y_{t-1}\\|_{F}\\|R_{t}^{-1}\\|_{2} \\tag{7.9}\\]

\\[\\leq\\lambda_{m+1}\\cdot\\|Y_{t-1}\\|_{F}\\cdot\\frac{1}{\\sqrt{1-\\|Y_{t-1}\\|_{2}^{2}}}\\cdot\\frac{1}{\\lambda_{m}} \\tag{7.10}\\]

\\[=\\gamma\\frac{\\|\\sin\\Theta(P,Q_{t-1})\\|_{F}}{\\sqrt{1-\\|\\sin\\Theta(P,Q_{t-1})\\|_{2}^{2}}}, \\tag{7.11}\\]

where the last equality uses \\(\\|Y_{t-1}\\|_{F}=\\|\\sin\\Theta(P,Q_{t-1})\\|_{F}\\) and \\(\\gamma=\\lambda_{m+1}/\\lambda_{m}\\).

## References

* [1] T. Cai, D. Kim, X. Song, and Y. Wang, _Optimal sparse eigenspace and low-rank density matrix estimation for quantum systems_, Journal of Statistical Planning and Inference, 213 (2020), pp. 50-71.
* [2] A. d'Aspremont, L. E. Ghaoui, M. I. Jordan, and G. R. Lanckriet, _A direct formulation for sparse pca using semidefinite programming_, in Advances in Neural Information Processing Systems, 2005, pp. 41-48.
* [3] A. de Pierrefeu, T. Lofstedt, F. Hadj-Selem, M. Dubois, R. Jardri, T. Fovet, P. Ciuciu, V. Frouin, and E. Duchesnay, _Structured sparse principal components analysis with the TV-elastic net penalty_, IEEE Transactions on Medical Imaging, 37 (2017), pp. 396-407.
* [4] A. d'Aspremont, F. Bach, and L. E. Ghaoui, _Optimal solutions for sparse principal component analysis_, Journal of Machine Learning Research, 9 (2008), pp. 1269-1294.
* [5] N. B. Erichson, P. Zheng, K. Manohar, S. L. Brunton, J. N. Kutz, and A. Y. Aravkin, _Sparse principal component analysis via variable projection_, SIAM Journal on Applied Mathematics, 80 (2020), pp. 977-1002.
* [6] J. Fan and R. Li, _Variable selection via nonconcave penalized likelihood and its oracle properties_, Journal of the American Statistical Association, 96 (2001), pp. 1348-1360.
* [7] H. Hotelling, _Analysis of a complex of statistical variables into principal components_, Journal of Educational Psychology, 24 (1933), p. 417.
* [8] J. Jeffers, _Two case studies in the application of principal component analysis_, Journal of the Royal Statistical Society: Series C (Applied Statistics), 16 (1967), pp. 225-236.
* [9] R. Jenatton, G. Obozinski, and F. Bach, _Structured sparse principal component analysis_, in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010, pp. 366-373.
* [10] I. M. Johnstone, _On the distribution of the largest eigenvalue in principal components analysis_, Annals of Statistics, (2001), pp. 295-327.
* [11] I. M. Johnstone and A. Y. Lu, _On consistency and sparsity for principal components analysis in high dimensions_, Journal of the American Statistical Association, 104 (2009), pp. 682-693.
* [12] I. Jolliffe, _Principal Component Analysis_, Springer Verlag, 1986.
* [13] I. T. Jolliffe, _Rotation of ill-defined principal components_, Journal of the Royal Statistical Society: Series C (Applied Statistics), 38 (1989), pp. 139-147.
* [14] M. Journee, Y. Nesterov, P. Richtarik, and R. Sepulchre, _Generalized power method for sparse principal component analysis_, Journal of Machine Learning Research, 11 (2010), pp. 517-553.
* [15] K. Lang, _Newsweeder: Learning to filter netnews_, in Machine Learning Proceedings 1995, Elsevier, 1995, pp. 331-339.
* [16] Z. Ma et al., _Sparse principal component analysis and iterative thresholding_, The Annals of Statistics, 41 (2013), pp. 772-801.
* [17] K. Pearson, _LIII. On lines and planes of closest fit to systems of points in space_, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2 (1901), pp. 559-572.
* [18] R. W. Reynolds, N. A. Rayner, T. M. Smith, D. C. Stokes, and W. Wang, _An improved in situ and satellite SST analysis for climate_, Journal of Climate, 15 (2002), pp. 1609-1625.
* [19] H. Shen and J. Z. Huang, _Sparse principal component analysis via regularized low rank matrix approximation_, Journal of Multivariate Analysis, 99 (2008), pp. 1015-1034.
* [20] G. W. Stewart, _Matrix Perturbation Theory_, 1990.
* [21] L. Tian, F. Nie, R. Wang, and X. Li, _Learning feature sparse principal subspace_, Advances in Neural Information Processing Systems, 33 (2020).
* [22] C. F. Van Loan and G. H. Golub, _Matrix Computations_, Johns Hopkins University Press, Baltimore, 1983.
* [23] V. Q. Vu, J. Lei, et al., _Minimax sparse principal subspace estimation in high dimensions_, The Annals of Statistics, 41 (2013), pp. 2905-2947.
* [24] G. Wang and S. Dey, _Upper bounds for model-free row-sparse principal component analysis_, in International Conference on Machine Learning, PMLR, 2020, pp. 9868-9875.
* [25] X.-T. Yuan and T. Zhang, _Truncated power method for sparse eigenvalue problems_, Journal of Machine Learning Research, 14 (2013), pp. 899-925.
* [26] Y. Zhang, A. d'Aspremont, and L. El Ghaoui, _Sparse pca: Convex relaxations, algorithms and applications_, in Handbook on Semidefinite, Conic and Polynomial Optimization, Springer, 2012, pp. 915-940.
* [27] H. Zou, T. Hastie, and R. Tibshirani, _Sparse principal component analysis_, Journal of Computational and Graphical Statistics, 15 (2006), pp. 265-286.
* [28] H. Zou and L. Xue, _A selective overview of sparse principal component analysis_, Proceedings of the IEEE, 106 (2018), pp. 1311-1320.
A wide range of problems in computational science and engineering require the estimation of sparse eigenvectors for high-dimensional systems. Here, we propose two variants of the Truncated Orthogonal Iteration that compute multiple leading eigenvectors with sparsity constraints simultaneously. We establish numerical convergence results for the proposed algorithms using a perturbation framework, and extend our analysis to other existing alternatives for sparse eigenvector estimation. We then apply our algorithms to solve the sparse principal component analysis problem for a wide range of test datasets, from simple simulations to real-world datasets including MNIST, sea surface temperature and 20 newsgroups. In all these cases, we show that the new methods achieve state of the art results quickly and with minimal parameter tuning.

Keywords: Orthogonal Iteration, Eigenvalue problem, Sparsity, Sparse PCA. MSC codes: 15A18, 65F15.
Investigating Low Data, Confidence Aware Image Prediction on Smooth Repetitive Videos using Gaussian Processes

Nikhil U. Shinde\\({}^{1}\\), Xiao Liang\\({}^{1}\\), Florian Richter\\({}^{1}\\), Sylvia Herbert\\({}^{1}\\), and Michael C. Yip\\({}^{1}\\)

\\({}^{1}\\) Nikhil U. Shinde, Xiao Liang, Florian Richter, Sylvia Herbert and Michael C. Yip are with the University of California San Diego {nshinde, x5liang, frichter, shehert, yip}@ucsd.edu

## I Introduction

Predicting the future states of an environment is key to enabling smart decision-making. As humans, we use predictions to inform daily decisions, ranging from navigating through a crowded crosswalk of pedestrians to using weather forecasts to plan out our week. In robotics, we use such predictive models not only to model robots and generate complex control strategies, but also to enable robots to model and better interact with their environment. Though perfectly predicting such diverse phenomena requires underlying state information and large, complex trained models, humans are often able to make predictions in completely novel environments using imperfect models informed solely by visual input. In this paper, we focus on the task of predicting the future in smooth video sequences given only a very limited number of initial frames for context and training.

With cameras becoming one of the most prevalent sensing modalities, future video prediction is a well-researched topic. Several works, like [1] and [2], directly predict future frames of video in the image space. Other works, like [3], even use these predictions for robotic control. However, most of these works use complex parametric models, such as neural networks, that require large datasets to train. These works demonstrate high predictive accuracy but fail to provide good confidence metrics around their predictions.

In this work we take an initial step towards investigating confidence-aware video prediction from low amounts of data using Gaussian processes (GPs). In particular, we look at the problem of predicting the next \\(t\\) frames of a video from the previous \\(n\\) consecutive video frames, with very limited priors and training data. Because low training data — which may arise from constraints such as cost, regulations, or entering an unseen scenario — makes this problem difficult, we focus our investigation on predicting smooth videos with highly repetitive but still complex motion. We focus our evaluation on predicting videos of fluid viscosity flows, in which a large amount of dynamic information can be gained from just a few frames. We choose to focus on fluids because the repetitive dynamics and patterns observed in fluids are observed in, and used to model, many complex real-world phenomena such as the flow of pedestrians [4, 5, 6] and weather prediction [7]. We showcase our method by demonstrating predictions on real-world examples of crowd flow (Figure 1) and videos of satellite weather patterns (Figure 5). Additionally, due to our very limited training distribution, we expect our models to have limited predictive fidelity. To know when we can trust our predictions, we propagate uncertainty through our predictions over time and give confidence metrics on the prediction of future frames.

Fig. 1: Two real-world pedestrian flow prediction results over time generated by our method trained with only 10 images. Our method predicts future pedestrian motion.
Larger variance appears at regions of moving pedestrians, roughly agreeing with regions of error.

Our use of GPs for prediction enables high-quality predictions near our easily expandable training distributions, while also providing variance estimates as interpretable metrics of the confidence of our predictions. In this paper we present the following novel contributions:

1. A framework for confidence-aware prediction from low data on smooth videos using Gaussian processes.
2. Evaluation of our framework and comparisons on fluid viscosity prediction.
3. Examples of our framework on real-world data through predictions on the flow of pedestrians and satellite weather patterns.

## II Related Works

Solutions for image sequence prediction problems often rely heavily on large datasets. There are several available video datasets, such as the KITTI [8], CamVid [9] and Caltech Pedestrian [10] datasets, for prevalent problems like autonomous driving. Some video datasets, such as the robot pushing dataset [3], provide video data influenced by external controls for tasks like robot manipulation. To solve the problem settings captured by the aforementioned datasets, researchers train large parametric neural networks.

Most state-of-the-art methods in video prediction build on a few baseline neural network architectures: convolutional, recurrent and generative models. Convolutional neural networks, which rely on learning 2D convolutional kernels, enabled a breakthrough in problems in the image domain [11]. They have also been extended to problems in video through 3D convolutions [12, 13]. Recurrent neural networks (RNNs) [14] and long short-term memory (LSTM) networks [15] have more principled architectures for handling the time dependencies that come with sequences of images. They have been leveraged by works such as [16, 17] and [18]. Generative adversarial networks directly model the generative distribution of the predicted images [19]. To handle uncertainty, some researchers have turned to variational autoencoders [20] and ensemble networks [21]. Most approaches employ combinations of these architectures to achieve state-of-the-art results. Works like [1] and [2] use these architectures to directly predict the pixels in future images. We take inspiration from this direct prediction approach, along with generative and convolutional approaches, and design our method to directly generate distributions over output pixels while iterating over the image in a kernel-like fashion.

There is also a large body of work on predicting and simulating fluids. The motion of incompressible fluids is governed by the Navier-Stokes equations, a set of partial differential equations (PDEs). Modern techniques such as [22, 23, 24] and [25] use neural networks to learn to solve complex PDEs from data. We compare our method against the FNO2D-time and FNO3D networks proposed in [25]. All the methods discussed above focus on a prediction problem where large representative datasets are readily available, while failing to provide confidence metrics around their predictions. In this paper, we focus on the case when the available training data is limited to only a few frames, which makes understanding predictive confidence even more important. In these low-data scenarios, the above methods often fail to converge to useful solutions. We choose Gaussian process regression as the core predictive component of our method.
Gaussian processes have been used across various robotic and machine learning applications to estimate highly nonlinear functions with probabilistic confidence [26]. These non-parametric models are regularly used to estimate functions online with very little data. [27] use GPs to model complex robot-object interactions online. The authors of [28] use a GP-enhanced reinforcement learning model to learn blimp dynamics online; this model improves the state predictions of the traditional ODE modeling approach while also giving a useful uncertainty estimate. SOLAR-GP builds upon such system identification approaches and uses localized sparse GP models to learn robot dynamics online to improve teleoperation [29]. PILCO improves the system identification approach further by learning a probabilistic dynamics model [30]. They propagate prediction uncertainty through time to facilitate long-term planning and improve policy search methods for reinforcement learning with very little data collection. GPs' predictive uncertainty measure has also been widely used by the safety community. Safe IML uses GPs to estimate an environment's safety function online [31]. They leverage the uncertainty outputted by the GP to provide safety guarantees and inform intelligent, risk-aware exploration that does not compromise the robot's safety. In this work, we use GPs for low-data, confidence-aware prediction of future images from image sequences.

## III Background: Gaussian Processes

The core predictive component of our method uses a single-output GP regression model. A GP models a function \\(f\\) using training data \\((X,f(X))\\), where \\(X=[x_{0},x_{1},\\ldots,x_{n-1}]\\in\\mathbb{R}^{n\\times D}\\) are the training inputs and \\(f(X)=[f(x_{0}),\\ldots,f(x_{n-1})]\\in\\mathbb{R}^{n\\times 1}\\) are the training outputs. Given test inputs \\(X^{\\prime}\\in\\mathbb{R}^{m\\times D}\\), we want to find the outputs \\(f(X^{\\prime})\\). Let \\(X_{A}\\in\\mathbb{R}^{(m+n)\\times D}\\) refer to all the train and test inputs and \\(f(X_{A})\\) be the corresponding outputs. A GP relies on the assumption that all the outputs are jointly characterized by a multivariate Gaussian distribution \\(f(X_{A})\\sim\\mathcal{N}(\\mu(X_{A}),\\Sigma_{X_{A}X_{A}})\\). We assume the mean \\(\\mu(X_{A})=0\\), and the covariance matrix is characterized by a kernel function \\(k(x,y)\\) such that \\(\\Sigma_{X_{A},X_{A}}[u,v]=k(X_{A}[u],X_{A}[v])\\). To solve for the distribution of the test outputs, \\(p(f(X^{\\prime}))\\sim\\mathcal{N}(\\mu(X^{\\prime}),\\Sigma_{X^{\\prime}X^{\\prime}})\\), we use the conditional distribution of a multivariate Gaussian, \\(p(f(X^{\\prime})|X,f(X),X^{\\prime})\\), to get:

\\[\\mu(X^{\\prime})=k(X^{\\prime},X)[k(X,X)+\\sigma_{n}^{2}I]^{-1}f(X) \\tag{1}\\]

\\[\\Sigma_{X^{\\prime}X^{\\prime}}=k(X^{\\prime},X^{\\prime})-k(X^{\\prime},X)[k(X,X)+\\sigma_{n}^{2}I]^{-1}k(X,X^{\\prime}) \\tag{2}\\]

Here \\(k(X^{\\prime},X)[u,v]=k(X^{\\prime}[u],X[v])\\), \\(k(X,X)\\in\\mathbb{R}^{n\\times n}\\), \\(k(X^{\\prime},X)=k(X,X^{\\prime})^{T}\\in\\mathbb{R}^{m\\times n}\\), \\(\\sigma_{n}^{2}\\) is the noise variance, and \\(I\\in\\mathbb{R}^{n\\times n}\\) is the identity matrix. To train the GP regression model we optimize the noise variance \\(\\sigma_{n}^{2}\\) and the kernel parameters to maximize the log-likelihood of the training data.
We use the radial basis function (RBF) kernel:

\\[k(x,y)=\\alpha^{2}\\exp(-\\frac{(x-y)^{T}\\Lambda^{-1}(x-y)}{2}) \\tag{3}\\]

The kernel parameters \\(\\alpha\\) and \\(\\Lambda\\) are optimized during training. For outputs of dimension \\(O>1\\), we train a GP for each dimension \\(a\\in[0,O-1]\\). Each model has a kernel \\(k_{a}(.,.)\\), trained by optimizing its parameters \\(\\sigma_{n,a},\\alpha_{a}\\) and \\(\\Lambda_{a}\\).

## IV Methods

### _Problem Statement and Prediction Framework_

We define an image sequence as a sequence of frames \\([z_{0},z_{1}\\dots z_{t_{0}-1},z_{t_{0}},\\dots z_{t-1},z_{t}\\dots]\\), where \\(z_{i}\\in\\mathbb{R}^{H\\times W}\\) denotes the \\(i\\)-th frame in the sequence. Given initial training frames \\([z_{0},\\dots z_{t_{0}-1}]\\), our objective is to predict the frames \\([z_{t_{0}},\\dots z_{t},\\dots]\\). As additional frames \\([z_{t_{0}},\\dots z_{t^{\\prime}}]\\) become available, they may be incorporated into the model's training data to improve the accuracy of future predictions.

Figure 2 provides an overview of our method. We train a model to use recurring motion patterns to understand scene dynamics and predict future images. Our model learns and predicts on square image patches of dimension \\((p,p)\\). Predicting at a smaller scale enables us to better use our limited training data and to extract smaller repeating patterns that are more likely to recur across space and time. Our method predicts one frame at a time given the \\(3\\) most recent seen or predicted frames. We use \\(3\\) input frames to capture second-order dynamics. The GP regression models predict per-pixel distributions in future images. These are combined to form a random variable image. The predicted image is incorporated into the next set of inputs, and the process is repeated. Our method must therefore handle a combination of random and known inputs while propagating probability distributions through time.

### _Training_

To construct our model, we first create a training dataset from the frames \\([z_{0},\\dots,z_{t_{0}-1}]\\). We divide the images into sets of \\(4\\) sequential images \\([z_{i},z_{i+1},z_{i+2},z_{i+3}],i\\in[0,t_{0}-4]\\). To create a data point we take \\(p\\)-dimensional square patches corresponding to the same pixel locations from each image. \\(z_{i}[k:k+p,l:l+p]\\in\\mathbb{R}^{p\\times p}\\) denotes a \\(p\\) by \\(p\\) patch in image \\(z_{i}\\) starting at pixel \\((k,l)\\). A training input, \\(x_{j}\\in\\mathbb{R}^{3p^{2}}\\), is created by flattening and concatenating the patches from the first \\(3\\) images. The corresponding training output, \\(f(x_{j})\\in\\mathbb{R}^{(p-2b)^{2}}\\), is created by flattening the corresponding patch from the \\(4\\)th image \\(z_{i+3}\\), cropped with a patch boundary term \\(b\\): \\(z_{i+3}[k+b:k+p-b,l+b:l+p-b]\\in\\mathbb{R}^{(p-2b)\\times(p-2b)}\\). When \\(b>0\\), we do not predict the outer edges of the patch, where predictions may suffer due to contributions from the scene outside our input. Within each set of \\(4\\) sequential images, we select data points with a stride of \\(s\\) pixels. In this paper, we use a "wrapping" approach to handle patches that extend beyond the edge of an image. This approach assumes the frame \\(z_{i}\\) captures a periodic environment where \\(z[H+i,W+j]=z[i,j]\\) and \\(z[-i,-j]=z[H-i,W-j]\\). Approaches like padding frames or skipping incomplete patches are possible, but not further explored in this paper.
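A minimal numpy sketch of this dataset construction is given below, with wrapping realized through modulo indexing on the torus; the function name and layout are our own illustration, not the authors' code:

```python
import numpy as np

def extract_patches(frames, p, b, s):
    """Build GP training data from consecutive frames z_0..z_{t0-1}.

    Inputs are three stacked p x p patches (flattened to length 3p^2);
    outputs are the centre (p-2b) x (p-2b) patch of the fourth frame.
    Wrapping is implemented with modulo indexing on the torus.
    """
    H, W = frames[0].shape
    offs = np.arange(p)
    X, Y = [], []
    for i in range(len(frames) - 3):
        for k in range(0, H, s):
            for l in range(0, W, s):
                r = (k + offs) % H          # wrapped row indices
                c = (l + offs) % W          # wrapped column indices
                patches = [frames[i + d][np.ix_(r, c)] for d in range(3)]
                X.append(np.concatenate([q.ravel() for q in patches]))
                out = frames[i + 3][np.ix_(r[b:p - b], c[b:p - b])]
                Y.append(out.ravel())
    return np.asarray(X), np.asarray(Y)
```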
We repeat this procedure for every set of images to create the training dataset with \\(n\\) data points: \\((X,f(X))=(x_{j},f(x_{j})),j\\in[0,n-1]\\), with \\(X\\in\\mathbb{R}^{n\\times 3p^{2}}\\) and \\(f(X)\\in\\mathbb{R}^{n\\times(p-2b)^{2}}\\). We create a GP regression model for each of the \\(O=(p-2b)^{2}\\) output dimensions. Each model is trained by optimizing its noise, \\(\\sigma_{n,a}\\), and kernel parameters, \\(\\alpha_{a}\\) and \\(\\Lambda_{a}\\) in \\(k_{a}(.,.)\\), to maximize the log-likelihood of the training data for output dimension \\(a\\in[0,(p-2b)^{2}-1]\\). To predict future images, each GP model outputs a mean and variance corresponding to a pixel in the output patch. The predicted image \\(z_{i}\\) is represented by a mean and variance image pair \\((M_{i},V_{i})\\). Each pixel in \\(M_{i}\\) and \\(V_{i}\\) corresponds to the mean and variance, respectively, of the predicted Gaussian random variable at that pixel location.

Fig. 2: Overview of the proposed method. We begin by pre-processing the initial video frames \\([z_{0},\\dots,z_{t_{0}-1}]\\) to form the dataset used to train a GP regression model. During test time, three sequential input images are processed into test inputs. Our trained model predicts output distributions from these test inputs. These distributions are then combined to form a predictive distribution of the image at the next time step. The prediction is then incorporated into the next set of input images to recursively roll out a sequence of image probabilities.

### _Prediction_

Once trained, we can use our model to roll out predictions for any \\(T\\) time steps into the future, starting from \\(3\\) known, consecutive input images \\([z_{i},z_{i+1},z_{i+2}]\\). We use a recursive method to predict multiple frames into the future. Our model takes the \\(3\\) most recently seen and predicted frames and uses
As a result, the predicted images and their sub-patches can be flattened, concatenated, and represented as one multivariate Gaussian random variable. We use the patch-based selection method described in Section IV-B, separately, on the sets of consecutive mean and variance images to generate the mean and variance input vectors, \\(x^{{}^{\\prime}}_{j,\\mu}\\in\\mathbb{R}^{3p^{2}}\\) and \\(x^{{}^{\\prime}}_{j,\\sigma}\\in\\mathbb{R}^{3p^{2}}\\) respectively. These vectors specify the multivariate gaussian distribution of the input \\(x^{{}^{\\prime}}_{j}\\sim\\mathcal{N}(x^{{}^{\\prime}}_{j,\\mu},\\Sigma_{x^{{}^{ \\prime}}_{j,\\sigma}})\\). To construct \\(\\Sigma_{x^{{}^{\\prime}}_{j,\\sigma}}\\), we use our independence assumptions such that the input covariance is a diagonal matrix with the vector of variances, \\(x^{{}^{\\prime}}_{j,\\sigma}\\), along the diagonal. We adjust our stride to generate an input to predict every pixel in the future image. We discuss our method in the context of predicting a single output dimension \\(a\\in[0,(p-2b)^{2}]\\) from a single input \\(x^{{}^{\\prime}}_{j}\\). Our model predicts the random variable \\(f(x^{{}^{\\prime}}_{j})[a]\\). As in standard GP Regression, we are solving for the distribution of \\(p(f(x^{{}^{\\prime}}_{j})[a])\\). We solve for \\(p(f(x^{{}^{\\prime}}_{j})[a])\\) by marginalizing \\(p(f(x^{{}^{\\prime}}_{j})[a]|X,f(X),x^{{}^{\\prime}}_{j})\\) over the input image distributions. \\[p(f(x^{{}^{\\prime}}_{j})[a])=\\int_{-\\infty}^{\\infty}p(f(x^{{}^{\\prime}}_{j})[a ]|x^{{}^{\\prime}}_{j},X,f(X)[:,a])p(x^{{}^{\\prime}}_{j})dx^{{}^{\\prime}}_{j} \\tag{4}\\] Solving this integral is analytically intractable. We approximate the posterior distribution from equation 4 to be Gaussian. Having the outputs form a multivariate Gaussian, like the inputs, enables recursive prediction. To solve for \\(p(f(x^{{}^{\\prime}}_{j})[a])\\) we take advantage of this assumption and use moment matching in a method akin to [30]. Our method is distinguished from [30] in its use of multiple past states as inputs, prediction on images, and incorporation of known input states. Moment matching enables us to directly solve for the mean \\(\\mu(x^{{}^{\\prime}}_{j})[a]\\) and variance \\(\\Sigma(x^{{}^{\\prime}}_{j})[a,a]\\) of the outputted Gaussian distribution. This gives us the following formula to predict the mean of an output pixel from all random inputs: \\[\\mu(x^{{}^{\\prime}}_{j})[a]=d_{a}^{T}\\beta_{a} \\tag{5}\\] \\[d_{a}[i]=\\alpha_{a}^{2}(|\\Sigma_{x^{{}^{\\prime}}_{j,\\sigma}}\\Lambda _{a}^{-1}+I|)^{-\\frac{1}{2}}e^{-\\frac{1}{2}v_{i}^{T}(\\Sigma_{x^{{}^{\\prime}}_{ j,\\sigma}}+\\Lambda_{a})^{-1}v_{i}}\\] \\[\\beta_{a}=[k_{a}(X,X)+\\sigma_{n}^{n}I]^{-1}f(X)[:,a]\\] Here \\(d_{a}\\in\\mathbb{R}^{n}\\). \\(v_{i}=x^{{}^{\\prime}}_{j,\\mu}-x_{i}\\) where \\(x_{i}\\) is the \\(i\\)th training input. To predict variance from random inputs we use: \\[\\Sigma(x^{{}^{\\prime}}_{j})[a,a]=\\alpha_{a}^{2}-\\text{Tr}\\left((k _{a}(X,X)+\\sigma_{n,a}^{2}I)^{-1}Q_{aa}\\right)+ \\tag{6}\\] \\[\\beta_{a}^{T}Q_{aa}\\beta_{a}-\\mu(x^{{}^{\\prime}}_{j})[a]\\] \\[Q_{aa}[i,k]=k_{a}(x_{i},x^{{}^{\\prime}}_{j,\\mu})k_{a}(x_{k},x^{{} ^{\\prime}}_{j,\\mu})|R|^{-\\frac{1}{2}}e^{\\frac{1}{2}x_{i}^{T}R^{-1}\\Sigma_{x^{{}^ {\\prime}}_{j,\\sigma}}z_{ik}} \\tag{7}\\] Here \\(Q_{aa}\\in\\mathbb{R}^{n\\times n}\\), \\(R=\\Sigma_{x^{{}^{\\prime}}_{j,\\sigma}}2\\Lambda_{a}^{-1}+I\\in\\mathbb{R}^{n \\times n}\\) and \\(z_{ik}=\\Lambda_{a}^{-1}v_{i}+\\Lambda_{a}^{-1}v_{k}\\). 
These equations are used on all the test inputs to predict the mean and variance of every pixel in \\(z_{i+3}\\). The predicted image \\(z_{i+3}\\) is stored as a mean and variance image tuple, \\((M_{i+3},V_{i+3})\\). To continue the predictive rollout, we incorporate the latest prediction into a new set of input images \\([z_{i+1},z_{i+2},z_{i+3}]\\) to predict \\(z_{i+4}\\). The mean images \\([M_{i},\\dots,M_{i+3},\\dots]\\) act as our predictions, while the variance images \\([V_{i},\\dots,V_{i+3},\\dots]\\) act as a confidence measure on our predictions.

In the first \\(3\\) predictions, some or all of the input images are known, observed images. Predictions with these inputs are special cases of the general formulation with all random variable inputs. To solve this case, we still consider all components of our input as random variables. We use our independence assumptions to disentangle the predictive method into parts that interact solely with the input dimensions contributed by the known observed images, \\(x^{\\prime K}_{j}\\), and those contributed by the predicted random variable images, \\(x^{\\prime R}_{j}\\). We treat the predicted random variable component of the input as a multivariate Gaussian, \\(x^{\\prime R}_{j}\\sim\\mathcal{N}(x^{\\prime R}_{j,\\mu},\\Sigma_{x^{\\prime R}_{j,\\sigma}})\\), and use a Dirac delta, \\(\\delta(x-x^{\\prime K}_{j,\\mu})\\), to describe the distribution at the observed components. We split the RBF kernel function \\(k_{a}(x,y)=k_{a}^{K}(x^{K},y^{K})k_{a}^{R}(x^{R},y^{R})\\) into components that interact solely with the known, \\(k_{a}^{K}\\), and random, \\(k_{a}^{R}\\), dimensions of the inputs. These known and random kernels have parameters \\(\\alpha_{a,K},\\Lambda_{a,K}\\) and \\(\\alpha_{a,R},\\Lambda_{a,R}\\), respectively. We use moment matching with these representations to solve the intractable integral from equation 4 for the distributions of future predictions. The delta distribution at the known pixels allows us to integrate out certain contributions from the known components during the moment matching steps. This gives the following equation for the mean from these hybrid inputs:

\\[\\mu(x^{\\prime}_{j})[a]=d_{a,h}^{T}\\beta_{a} \\tag{8}\\]

\\[d_{a,h}[i]=k_{a}^{K}(x_{i}^{K},x^{\\prime K}_{j,\\mu})\\cdot\\alpha_{a,R}^{2}(|\\Sigma_{x^{\\prime R}_{j,\\sigma}}\\Lambda_{a,R}^{-1}+I|)^{-\\frac{1}{2}}e^{-\\frac{1}{2}v_{i,R}^{T}(\\Sigma_{x^{\\prime R}_{j,\\sigma}}+\\Lambda_{a,R})^{-1}v_{i,R}}\\]
To predict the variance from the hybrid inputs we use: \\[\\Sigma(x_{j}^{{}^{\\prime}})[a,a]=\\alpha_{a}^{2}-\\text{Tr}\\left((k_{a}(X,X)+\\sigma_{n,a}^{2}I)^{-1}Q_{aa,h}\\right)+\\beta_{a}^{T}Q_{aa,h}\\beta_{a}-\\mu(x_{j}^{{}^{\\prime}})[a]^{2} \\tag{9}\\] \\[Q_{aa,h}=Q_{aa,K}\\cdot Q_{aa,R}\\] \\[Q_{aa,K}=k_{a}^{K}(x_{j,\\mu}^{{}^{\\prime}K},X^{K})^{T}k_{a}^{K}(x_{j,\\mu}^{{}^{\\prime}K},X^{K})\\in\\mathbb{R}^{n\\times n}\\] \\[Q_{aa,R}[i,k]=\\frac{1}{\\sqrt{|R_{R}|}}\\,k_{a}^{R}(x_{i}^{R},x_{j,\\mu}^{{}^{\\prime}R})k_{a}^{R}(x_{k}^{R},x_{j,\\mu}^{{}^{\\prime}R})\\,e^{\\frac{1}{2}z_{ik,R}^{T}R_{R}^{-1}\\Sigma_{x_{j,\\sigma}^{{}^{\\prime}R}}z_{ik,R}}\\] The \\(\\cdot\\) operator represents elementwise multiplication. \\(X^{K}\\) are the dimensions associated with the known inputs across all training inputs. \\(R_{R}=2\\Sigma_{x_{j,\\sigma}^{{}^{\\prime}R}}\\Lambda_{a,R}^{-1}+I\\), and \\(z_{ik,R}=\\Lambda_{a,R}^{-1}v_{i,R}+\\Lambda_{a,R}^{-1}v_{k,R}\\). Additionally, \\(\\alpha_{a},\\sigma_{n,a}\\) are parameters of the original (unsplit) kernel function \\(k_{a}\\). Together we use equation 5, equation 6, equation 8 and equation 9 to predict the mean and confidence bounds on future images.

## V Experiments and Results

We test our methods by predicting the vorticity of an incompressible fluid in a unit torus environment. Our data is computed using the 2D Navier Stokes equations. We generate our data with traditional PDE solvers using the code and approach detailed in [25]. The fluid simulation generates image sequences whose pixels change smoothly over both space and time. This environment is well suited to the RBF kernel. The dynamics of the toroidal environment wrap around the square image frame. This enables us to utilize the "wrapping" approach when creating edge patches. Each image pixel is a float centered at 0. We directly predict future pixels using our 0-mean GP regression model. As shown in Figure 3(a), each image \\(z_{t}\\in\\mathbb{R}^{H\\times W}\\) in the image sequence represents the vorticity of the fluid at time \\(t\\). We fix our fluid viscosity at \\(1e-3\\), time step at \\(1e-4\\) seconds and image resolution at \\(H=W=32\\). Our experiments use input patch dimension \\(p=15\\) with patch boundary \\(b=7\\), resulting in \\((1,1)\\) output patches. We use a single GP regression model to predict this single output dimension. We generate training inputs using a stride of \\(s=2\\) and test inputs using a test stride of \\(s=1\\). Each experiment predicts \\(15\\) frames into the future. The training images and initial input images are specified for each experiment. We use two metrics: relative error [25] (\\(RE(z,\\tilde{z})\\)) and mean standard deviations off (\\(StdE(z,\\tilde{z_{\\mu}},\\tilde{z_{\\sigma}})\\)) to analyze the performance of our model. The relative error is given by \\[RE(z,\\tilde{z})=\\frac{||z-\\tilde{z}||_{2}}{||z||_{2}} \\tag{10}\\] where \\(z\\in\\mathbb{R}^{H\\times W}\\) is the ground truth image, \\(\\tilde{z}\\in\\mathbb{R}^{H\\times W}\\) is the predicted image, and \\(||.||_{2}\\) is the 2-norm. This normalizes the error with the magnitude of the original image.

Fig. 3: Forward Prediction Experiment: Our model, trained using frames \\([z_{0},\\dots,z_{9}]\\), is used to predict frames \\([z_{10},\\dots,z_{24}]\\) of a 2D Navier Stokes simulation. (a) Ground truth, predicted mean, l1-error, and variance images. (b-c) Graphs of the prediction's relative error (RE) and mean standard deviations off (StdE). This shows our model's ability to accurately predict dynamic scenes. Error and variance increase over time. Before \\(t=19\\), the variance effectively flags the erroneous regions; at later timesteps the variation in the variance becomes less informative, and the base variance increases as the model becomes more uncertain overall. The accuracy of the model's confidence in its own predictions naturally oscillates, as seen in the mean std off plot in Figure 3(c): the metric decreases when the model's predicted variance grows faster than the true error, and increases in the inverse case. The growth of the predicted variance eventually dominates the true error, causing mean stds off to decrease, which is preferable as it ensures a conservative estimate whose predicted bounds capture the ground truth.
Mean standard deviations off is given by \\[StdE(z,\\tilde{z_{\\mu}},\\tilde{z_{\\sigma}})=\\frac{1}{H\\cdot W}\\sum_{i=0}^{H-1}\\sum_{j=0}^{W-1}\\left(\\frac{|z[i,j]-\\tilde{z_{\\mu}}[i,j]|}{\\sqrt{\\tilde{z_{\\sigma}}[i,j]}}\\right), \\tag{11}\\] where \\(z\\in\\mathbb{R}^{H\\times W}\\) is the ground truth image, \\(\\tilde{z_{\\mu}},\\tilde{z_{\\sigma}}\\in\\mathbb{R}^{H\\times W}\\) are the predicted mean and variance images, and \\(|.|\\) is the absolute value function. This metric provides the average absolute number of standard deviations between our predicted mean image and the ground truth.

**Forward Prediction Experiment:** We train our model using the first \\(t_{0}=10\\) frames of a video \\([z_{0},\\dots,z_{9}]\\). This model is used to predict the next \\(15\\) frames \\([z_{10},\\dots,z_{24}]\\), from input images \\([z_{7},z_{8},z_{9}]\\). The results of this experiment are shown in Figure 3. Our model's predicted mean images track the complex dynamics of the ground truth with very little training data. Figure 3(a) indicates that our model predictions are relatively accurate. Although the error increases over time, as shown by the error images and Figure 3(b), our method's uncertainty also increases, as shown by the variance images. Note that our model's predicted variance is more trustworthy at earlier prediction time points (e.g. \\(t\\leq 19\\)), agreeing with the error image. However, relative spatial variations become ineffective when predicting far into the future. A graph of \\(StdE\\) in Figure 3(c) shows the oscillation in the accuracy of the model's confidence in its predictions. \\(StdE\\) decreases when the model's predicted variance grows faster than the true error; a lower predicted variance but larger true error results in this metric increasing. In Figure 3(c) we can see that the growth of the predicted variance eventually dominates the true error, causing the \\(StdE\\) to decrease. This behavior is preferable, as a lower \\(StdE\\) ensures our predictions provide a conservative estimate and our predicted bounds capture the ground truth.

**Predictive Comparison Experiment:** We compare our method's performance to the FNO-2D-time and FNO-3D neural network methods in [25]. All methods are trained using a similarly low number of training images. We also contrast with a non-parametric K Nearest Neighbors approach. We set \\(k=20\\), use the same input and output pre-processing, and use the same optimized RBF kernel-based similarity metric to provide a fair comparison.
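For reference, the two metrics defined in equations 10 and 11 reduce to a few lines of numpy; this sketch is our own, mirroring the formulas directly.

```python
import numpy as np

def relative_error(z, z_hat):
    """Relative error, Eq. 10: ||z - z_hat||_2 / ||z||_2 over the whole image."""
    return np.linalg.norm(z - z_hat) / np.linalg.norm(z)

def mean_std_off(z, z_mu, z_var):
    """Mean standard deviations off, Eq. 11: average of |z - z_mu| / sqrt(z_var)."""
    return np.mean(np.abs(z - z_mu) / np.sqrt(z_var))
```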
The results are in Figure 4(a). Both the neural networks and the KNN fail to learn the complex dynamics from the few available training frames, and simply output noise or sequences of identical frames. Figure 4(b) shows the relative error averaged over prediction experiments on 100 different Navier Stokes simulations.

**Real World Experiments**: We evaluate the proposed method on the task of predicting images of pedestrians using a public dataset [32]. The dataset contains videos of pedestrians and vehicles taken from an overhead camera. We create grey-scale images representing pedestrians' positions by dilating and smoothing their pixel coordinates on images. Figure 1 shows the prediction results on the grey-scale images. The proposed approach can predict pedestrian motion trends. At the end of the prediction, variance is higher in regions of larger pedestrian motion. Higher-variance regions overlap with regions of larger error. In addition, our method is evaluated on a real satellite video of Hurricane Ian [33]. An example hurricane image and related results are shown in Figure 5. We crop out patches of the video and convert them to grey-scale. Our method can capture interesting dynamics such as translation and expansion of features. As expected, it fails at predicting trends that are not present in the training images, such as the emergence of new features. The predicted variance is larger in regions that have high intensity variation, indicating that our method becomes more uncertain when the dynamics are complex.

Fig. 4: Predictive Comparison Experiment: This figure compares the predictions on 2D Navier Stokes simulations between our method, a non-parametric KNN-based method, and the neural network-based methods FNO-2D-time and FNO-3D, trained on similarly low data. (a) Snapshots from predictions on a single test sequence. (b) Relative error vs the predictive time step averaged across 100 prediction tests.

## VI Conclusion and Future Discussion

In this paper we provided a novel method using non-parametric GP regression models for confidence-aware prediction of future images in an image sequence from very little training data. We evaluated our method on predictions in a 2D Navier Stokes simulation environment. These experiments demonstrated our method's ability to confidently capture the ground truth image sequence within our predicted image distribution for complex dynamic environments. We showcased our approach on real-world environments by predicting pedestrian traffic flows as well as satellite weather phenomena. This demonstrates our method's applicability to real-world tasks, especially those where collecting a large representative dataset may be difficult due to cost or regulatory constraints. This work is an initial step in using GPs to predict images with interpretable confidence metrics. More research is needed on using GPs for image prediction to improve their ability to learn complex visual dynamics, enhance their predictive accuracy and tighten their predicted confidence bounds. We also seek to explore avenues to combine the benefits of our approach, such as the ability to cater to newly acquired small batches of data and to provide interpretable confidence metrics, with the accuracy and sharpness benefits that come from big data and large parametric model driven approaches.
Finally we hope to use the predictions from these approaches and their confidence metrics to better guide online decision making for autonomous robotic systems.

## References

* D. P. Kingma and M. Welling, "Auto-encoding variational bayes," 2014.
* M. Talebizadeh and A. Moridnejad, "Uncertainty analysis for the forecast of lake level fluctuations using ensembles of ANN and ANFIS models," _Expert Systems with Applications_, vol. 38, no. 4, pp. 4126-4135, 2011.
* M. Raissi, P. Perdikaris, and G. E. Karniadakis, "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations," _Journal of Computational Physics_, vol. 378, pp. 686-707, 2019.
* [23] C. M. Jiang, S. Esmaeilzadeh, K. Azizzadenesheli, K. Kashinath, M. Mustafa, H. A. Tchelepi, P. Marcus, Prabhat, and A. Anandkumar, "Meshfreeflownet: A physics-constrained deep continuous space-time super-resolution framework," 2020.
* [24] D. Greenfeld, M. Galun, R. Kimmel, I. Yavneh, and R. Basri, "Learning to optimize multigrid pde solvers," 2019.
* [25] Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, and A. Anandkumar, "Fourier neural operator for parametric partial differential equations," 2021.
* [26] C. E. Rasmussen and C. K. I. Williams, _Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning)_. The MIT Press, 2005.
* [27] N. U. Shinde, J. Johnson, S. Herbert, and M. C. Yip, "Object-centric representations for interactive online learning with non-parametric methods," 2023.
* [28] J. Ko, D. J. Klein, D. Fox, and D.
Haehnel, "Gaussian processes and reinforcement learning for identification and control of an autonomous blimp," in _Proceedings 2007 IEEE International Conference on Robotics and Automation_, 2007, pp. 742-747.
* [29] B. Wilcox and M. C. Yip, "Solar-gp: Sparse online locally adaptive regression using gaussian processes for bayesian robot model learning and control," _IEEE Robotics and Automation Letters_, vol. 5, no. 2, pp. 2832-2839, 2020.
* [30] M. P. Deisenroth and C. E. Rasmussen, "Pilco: A model-based and data-efficient approach to policy search," in _Proceedings of the 28th International Conference on International Conference on Machine Learning_, ser. ICML'11. Madison, WI, USA: Omnipress, 2011, pp. 465-472.
* [31] M. Turchetta, F. Berkenkamp, and A. Krause, "Safe exploration for interactive machine learning," 2019.
* [32] D. Yang, L. Li, K. Redmill, and Umit Ozguner, "Top-view trajectories: A pedestrian dataset of vehicle-crowd interaction from controlled experiments and crowded campus," 2019.
* [33] Denver7, "Tracking Hurricane Ian's explosive growth through satellite imagery," Sept. 2022. [Online]. Available: [https://www.youtube.com/watch?v=Fw8VWSn9Lps](https://www.youtube.com/watch?v=Fw8VWSn9Lps)
* [34] S. Vinga, "Convolution integrals of normal distribution functions," Jan. 2004.

## Appendix

### _The First 3 Predictions in a Sequence of Predictions_

The first three predictions of a rollout involve predicting from a combination of known and predicted random variable sets of input images. We treat these predictions as special cases of the more general case presented in Section IV-C. Each input \\(x^{{}^{\\prime}}_{j}\\) is composed of \\(3p^{2}\\) independent random variables, with \\(p^{2}\\) random variables contributed from each input image. We separate the input \\(x^{{}^{\\prime}}_{j}\\) along the dimensions of the input that are known and random, to handle the different components separately. \\(x^{{}^{\\prime}}_{j,Rn}\\) and \\(x^{{}^{\\prime}}_{j,Kn}\\) are random variables that denote the random and known components of the input respectively. \\(X_{Kn},X_{Rn}\\) and \\(x_{i,Kn},x_{i,Rn}\\) reference the corresponding known and random dimensions in all the training inputs and a single training input respectively. The kernel functions \\(k_{a}\\) and the probability distribution over the input \\(p(x^{{}^{\\prime}}_{j})\\) are the only forms of interaction with the inputs while predicting the output distribution \\(p(f(x^{{}^{\\prime}}_{j})[a])\\). The structure of these functions and our independence assumptions allow us to cleanly split up our inputs into known and random components. We use the Radial Basis Function (equation 3) as our kernel function \\(k_{a}(x,y)\\). In this function the inputs interact with one another along the same dimension, allowing us to re-write the kernel as: \\[k_{a}(x,y)=\\alpha_{a}^{2}\\exp\\left(-\\frac{(x-y)^{T}\\Lambda_{a}^{-1}(x-y)}{2}\\right)=\\alpha_{a,Kn}^{2}\\exp\\left(-\\frac{(x_{Kn}-y_{Kn})^{T}\\Lambda_{a,Kn}^{-1}(x_{Kn}-y_{Kn})}{2}\\right)\\cdot\\alpha_{a,Rn}^{2}\\exp\\left(-\\frac{(x_{Rn}-y_{Rn})^{T}\\Lambda_{a,Rn}^{-1}(x_{Rn}-y_{Rn})}{2}\\right)=k_{a,Kn}(x_{Kn},y_{Kn})\\cdot k_{a,Rn}(x_{Rn},y_{Rn}) \\tag{12}\\] The subscripts on the inputs correspond to their known and random dimensions. \\(k_{a,Kn}\\) and \\(k_{a,Rn}\\) are the kernel functions that act on the known and random dimensions respectively. Each is parameterized by its own set of kernel parameters, \\(\\alpha_{a,Kn},\\Lambda_{a,Kn}\\) and \\(\\alpha_{a,Rn},\\Lambda_{a,Rn}\\) respectively, with \\(\\alpha_{a,Kn}\\cdot\\alpha_{a,Rn}=\\alpha_{a}\\). \\(\\Lambda_{a,Kn}\\) and \\(\\Lambda_{a,Rn}\\) are the diagonal sub-blocks of \\(\\Lambda_{a}\\), a large diagonal matrix, taken along the known and random dimensions.
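The factorization in equation 12 is easy to verify numerically. In this sketch of ours, the first `Kn` dimensions play the role of the known components, and the split of \\(\\alpha_{a}\\) between the two factors is an arbitrary choice so long as the product recovers \\(\\alpha_{a}\\).

```python
import numpy as np

rng = np.random.default_rng(0)
D, Kn = 6, 2                            # input dims; the first Kn are "known"
alpha = 1.7
ell = rng.uniform(0.5, 2.0, D)          # Lambda_a = diag(ell**2)
x, y = rng.normal(size=D), rng.normal(size=D)

def rbf(u, v, a, l):
    return a**2 * np.exp(-0.5 * np.sum((u - v) ** 2 / l**2))

full = rbf(x, y, alpha, ell)
a_Kn, a_Rn = 1.0, alpha                 # any split with a_Kn * a_Rn == alpha works
known = rbf(x[:Kn], y[:Kn], a_Kn, ell[:Kn])
rand = rbf(x[Kn:], y[Kn:], a_Rn, ell[Kn:])
print(np.isclose(full, known * rand))   # True: k_a = k_{a,Kn} * k_{a,Rn}
```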
Using our assumption that all predicted pixels within and across images are independent, we separate the probability distribution: \\[p(x^{{}^{\\prime}}_{j})=p(x^{{}^{\\prime}}_{j,Kn})\\cdot p(x^{{}^{\\prime}}_{j,Rn}) \\tag{13}\\] \\(p(x^{{}^{\\prime}}_{j,Kn})\\) is modeled as a Dirac delta at the observed pixel values, while \\(p(x^{{}^{\\prime}}_{j,Rn})\\) is the density of the multivariate Gaussian random variable defined in Section IV-C, \\(x^{{}^{\\prime}}_{j,Rn}\\sim\\mathcal{N}(x^{{}^{\\prime}}_{j,Rn,\\mu},\\Sigma_{x^{{}^{\\prime}}_{j,Rn,\\sigma}})\\).

_Mean Prediction Derivation:_ We begin by walking through the derivation of the mean \\(\\mu(x_{j}^{{}^{\\prime}})[a]\\) of the output distribution \\(p(f(x_{j}^{{}^{\\prime}})[a])\\) presented in equation 5. For the following derivations we simplify the notation from \\(p(f(x_{j}^{{}^{\\prime}})[a]|x_{j}^{{}^{\\prime}},X,f(X)[:,a])\\) to \\(p(f(x_{j}^{{}^{\\prime}})[a]|x_{j}^{{}^{\\prime}})\\) as \\(X\\) and \\(f(X)[:,a]\\) are known quantities. We begin our moment matching based derivation by taking the mean of the intractable integral presented in equation 4. \\[\\mu(x_{j}^{{}^{\\prime}})[a]=E_{f}\\left[\\int_{-\\infty}^{\\infty}p(f(x_{j}^{{}^{\\prime}})[a]|x_{j}^{{}^{\\prime}})p(x_{j}^{{}^{\\prime}})dx_{j}^{{}^{\\prime}}\\right]=E_{f,x_{j}^{{}^{\\prime}}}[p(f(x_{j}^{{}^{\\prime}})[a]|x_{j}^{{}^{\\prime}})]=E_{x_{j}^{{}^{\\prime}}}[E_{f}[p(f(x_{j}^{{}^{\\prime}})[a]|x_{j}^{{}^{\\prime}})]] \\tag{16}\\] \\(E_{f}[p(f(x_{j}^{{}^{\\prime}})[a]|x_{j}^{{}^{\\prime}})]\\) is the analytical form of the mean during Gaussian Process Regression from equation 1. Substituting this formula into the above equations we get: \\[\\mu(x_{j}^{{}^{\\prime}})[a]=E_{x_{j}^{{}^{\\prime}}}[k_{a}(x_{j}^{{}^{\\prime}},X)[k_{a}(X,X)+\\sigma_{n,a}^{2}I]^{-1}f(X)[:,a]] \\tag{17}\\] We denote \\(\\beta_{a}\\in\\mathbb{R}^{n}\\) to be \\([k_{a}(X,X)+\\sigma_{n,a}^{2}I]^{-1}f(X)[:,a]\\), and \\(d_{a}\\in\\mathbb{R}^{n}\\) to be \\(E_{x_{j}^{{}^{\\prime}}}[k_{a}(x_{j}^{{}^{\\prime}},X)]\\). \\[d_{a}[i]=\\int_{-\\infty}^{\\infty}k_{a}(x_{j}^{{}^{\\prime}},x_{i})p(x_{j}^{{}^{\\prime}})dx_{j}^{{}^{\\prime}} \\tag{18}\\] \\[\\mu(x_{j}^{{}^{\\prime}})[a]=d_{a}^{T}\\beta_{a}=\\beta_{a}^{T}d_{a}\\in\\mathbb{R} \\tag{19}\\] Expanding \\(k_{a}\\) to the RBF kernel equations and \\(p(x_{j}^{{}^{\\prime}})\\) to the multivariate Gaussian pdf, we solve for \\(d_{a}\\) using [34]. This gives us the mean prediction equations listed in 5 and restated below in equation 20: \\[\\mu(x_{j}^{{}^{\\prime}})[a]=d_{a}^{T}\\beta_{a} \\tag{20}\\] \\[d_{a}[i]=\\frac{\\alpha_{a}^{2}}{\\sqrt{|\\Sigma_{x_{j,\\sigma}^{{}^{\\prime}}}\\Lambda_{a}^{-1}+I|}}e^{-\\frac{1}{2}v_{i}^{T}(\\Sigma_{x_{j,\\sigma}^{{}^{\\prime}}}+\\Lambda_{a})^{-1}v_{i}}\\] \\[\\beta_{a}=[k_{a}(X,X)+\\sigma_{n,a}^{2}I]^{-1}f(X)[:,a]\\]

_Variance Prediction Derivation:_ We now walk through the derivation of the predicted variance \\(\\Sigma(x_{j}^{{}^{\\prime}})[a,a]\\in\\mathbb{R}\\). Let \\(\\Sigma(x_{j}^{{}^{\\prime}})\\in\\mathbb{R}^{(p-2b)^{2}\\times(p-2b)^{2}}\\) be the covariance matrix of the predicted output, where \\(\\Sigma(x_{j}^{{}^{\\prime}})[a,a]\\in\\mathbb{R}\\) is the variance of output \\(f(x_{j}^{{}^{\\prime}})[a]\\).
Due to our independence assumptions between outputted pixel distributions, we assert that the covariance between outputs \\(f(x_{j}^{{}^{\\prime}})[a]\\) and \\(f(x_{j}^{{}^{\\prime}})[b]\\), representing different output dimensions, is \\(\\Sigma(x_{j}^{{}^{\\prime}})[a,b]=0,\\ \\forall a\\neq b\\). \\[\\Sigma(x_{j}^{{}^{\\prime}})=E_{x_{j}^{{}^{\\prime}}}\\left[\\left(f(x_{j}^{{}^{\\prime}})-\\mu(x_{j}^{{}^{\\prime}})\\right)^{T}\\left(f(x_{j}^{{}^{\\prime}})-\\mu(x_{j}^{{}^{\\prime}})\\right)\\right] \\tag{21}\\] This is simplified using the law of total variance. \\[\\Sigma(x_{j}^{{}^{\\prime}})[a,a]=E_{x_{j}^{{}^{\\prime}}}\\left[var_{f}(f(x_{j}^{{}^{\\prime}})[a]|x_{j}^{{}^{\\prime}})\\right]+E_{f,x_{j}^{{}^{\\prime}}}\\left[f(x_{j}^{{}^{\\prime}})[a]f(x_{j}^{{}^{\\prime}})[a]\\right]-\\mu(x_{j}^{{}^{\\prime}})[a]^{2} \\tag{22}\\] \\[E_{f,x_{j}^{{}^{\\prime}}}\\left[f(x_{j}^{{}^{\\prime}})[a]f(x_{j}^{{}^{\\prime}})[a]\\right]=\\int_{-\\infty}^{\\infty}E_{f}\\left[f(x_{j}^{{}^{\\prime}})[a]|x_{j}^{{}^{\\prime}}\\right]E_{f}\\left[f(x_{j}^{{}^{\\prime}})[a]|x_{j}^{{}^{\\prime}}\\right]p(x_{j}^{{}^{\\prime}})dx_{j}^{{}^{\\prime}} \\tag{23}\\] \\(E_{f}\\left[f(x_{j}^{{}^{\\prime}})[a]|x_{j}^{{}^{\\prime}}\\right]\\) is the mean output of standard Gaussian Process Regression from equation 1. Substituting this into the above equations we have: \\[E_{f,x_{j}^{{}^{\\prime}}}\\left[f(x_{j}^{{}^{\\prime}})[a]f(x_{j}^{{}^{\\prime}})[a]\\right]=\\int_{-\\infty}^{\\infty}\\beta_{a}^{T}k_{a}(x_{j}^{{}^{\\prime}},X)^{T}k_{a}(x_{j}^{{}^{\\prime}},X)\\beta_{a}p(x_{j}^{{}^{\\prime}})dx_{j}^{{}^{\\prime}}=\\beta_{a}^{T}\\int_{-\\infty}^{\\infty}k_{a}(x_{j}^{{}^{\\prime}},X)^{T}k_{a}(x_{j}^{{}^{\\prime}},X)p(x_{j}^{{}^{\\prime}})dx_{j}^{{}^{\\prime}}\\beta_{a} \\tag{24}\\] We define \\(Q_{aa}=\\int_{-\\infty}^{\\infty}k_{a}(x_{j}^{{}^{\\prime}},X)^{T}k_{a}(x_{j}^{{}^{\\prime}},X)p(x_{j}^{{}^{\\prime}})dx_{j}^{{}^{\\prime}}\\in\\mathbb{R}^{n\\times n}\\). \\(\\beta_{a}\\) is defined in the above sections. This gives us: \\[E_{f,x_{j}^{{}^{\\prime}}}\\left[f(x_{j}^{{}^{\\prime}})[a]f(x_{j}^{{}^{\\prime}})[a]\\right]=\\beta_{a}^{T}Q_{aa}\\beta_{a} \\tag{25}\\] \\[Q_{aa}[i,k]=\\frac{k_{a}(x_{i},x_{j,\\mu}^{{}^{\\prime}})k_{a}(x_{k},x_{j,\\mu}^{{}^{\\prime}})}{\\sqrt{|R|}}e^{\\frac{1}{2}z_{ik}^{T}R^{-1}\\Sigma_{x_{j,\\sigma}^{{}^{\\prime}}}z_{ik}}\\] with \\(R\\) and \\(z_{ik}\\) as defined in equation 7. Combining equation 25 with the expectation of the conditional GP variance in equation 22 yields the variance formula stated in equation 6.
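The two Gaussian integrals behind \\(d_{a}\\) and \\(Q_{aa}\\) admit the closed forms above. As a sanity check, this sketch of ours compares them against Monte Carlo estimates for a small random instance; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4                                     # input dimension (3p^2 in the paper)
alpha = 1.3
ell = rng.uniform(0.5, 2.0, D)            # Lambda_a = diag(ell**2)
lam = ell**2
mu = rng.normal(size=D)                   # x'_{j,mu}
var = rng.uniform(0.1, 0.5, D)            # diagonal of Sigma_{x'_{j,sigma}}
xi, xk = rng.normal(size=D), rng.normal(size=D)   # two training inputs

def k(u, v):
    return alpha**2 * np.exp(-0.5 * np.sum((u - v) ** 2 / lam))

# Closed form for d_a[i] (Eq. 20)
v_i, v_k = mu - xi, mu - xk
d_cf = alpha**2 * np.prod(var / lam + 1.0) ** -0.5 \
    * np.exp(-0.5 * np.sum(v_i**2 / (var + lam)))

# Closed form for Q_aa[i,k] (Eq. 25), using the diagonal forms of R and z_ik
R_diag = 2.0 * var / lam + 1.0            # R = 2 Sigma Lambda^-1 + I
z = v_i / lam + v_k / lam                 # z_ik
q_cf = k(xi, mu) * k(xk, mu) / np.sqrt(np.prod(R_diag)) \
    * np.exp(0.5 * np.sum(z**2 * var / R_diag))

# Monte Carlo estimates of the same expectations over x' ~ N(mu, diag(var))
xs = mu + np.sqrt(var) * rng.normal(size=(400_000, D))
k_i = alpha**2 * np.exp(-0.5 * np.sum((xs - xi) ** 2 / lam, axis=1))
k_k = alpha**2 * np.exp(-0.5 * np.sum((xs - xk) ** 2 / lam, axis=1))
print(d_cf, k_i.mean())            # the pairs agree to MC accuracy
print(q_cf, (k_i * k_k).mean())
```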
_Mean Prediction Derivation (Hybrid Inputs):_ In this section we discuss the derivation of the mean prediction \\(\\mu(x^{{}^{\\prime}}_{j})[a]\\) of the output distribution \\(p(f(x^{{}^{\\prime}}_{j})[a])\\) when using a combination of known and random input images, for the first \\(3\\) predictions of a rollout. We show the derivation for equation 14. Since this is a special case of prediction from all random inputs, we begin from the derivation of the mean prediction equations for all random inputs in Section B, namely equations 17, 18 and 19. In this derivation we use the split kernel and probability density functions in equations 12 and 13 to deal with the hybrid, random and known, nature of the inputs. The joint probability distribution of all known pixels can be substituted with a delta function at the known values: \\(\\delta(x-x^{{}^{\\prime}}_{j,Kn,\\mu})\\). This allows us to integrate out certain contributions from the known components. \\(\\beta_{a}\\), being a constant, remains unchanged, and we re-derive \\(d_{a}\\) as \\(d_{a,hybrid}\\). \\[d_{a,hybrid}[i]=E_{x^{{}^{\\prime}}_{j}}[k_{a}(x^{{}^{\\prime}}_{j},X[i])]=E_{x^{{}^{\\prime}}_{j,Rn},x^{{}^{\\prime}}_{j,Kn}}[k_{a,Rn}(x^{{}^{\\prime}}_{j,Rn},X_{Rn}[i])k_{a,Kn}(x^{{}^{\\prime}}_{j,Kn},X_{Kn}[i])]=E_{x^{{}^{\\prime}}_{j,Rn}}[k_{a,Rn}(x^{{}^{\\prime}}_{j,Rn},X_{Rn}[i])]E_{x^{{}^{\\prime}}_{j,Kn}}[k_{a,Kn}(x^{{}^{\\prime}}_{j,Kn},X_{Kn}[i])]=\\int_{-\\infty}^{\\infty}k_{a,Kn}(x^{{}^{\\prime}}_{j,Kn},X_{Kn}[i])\\delta(x^{{}^{\\prime}}_{j,Kn,\\mu}-x^{{}^{\\prime}}_{j,Kn})dx^{{}^{\\prime}}_{j,Kn}\\int_{-\\infty}^{\\infty}k_{a,Rn}(x^{{}^{\\prime}}_{j,Rn},X_{Rn}[i])p(x^{{}^{\\prime}}_{j,Rn})dx^{{}^{\\prime}}_{j,Rn} \\tag{29}\\] Solving this yields the mean prediction for the third rollout given in equation 14 and restated below as equation 30: \\[\\mu(x^{{}^{\\prime}}_{j})[a]=d^{T}_{a,hybrid}\\beta_{a} \\tag{30}\\] \\[d_{a,hybrid}[i]=k_{a,Kn}(x_{i,Kn},x^{{}^{\\prime}}_{j,Kn,\\mu})\\cdot\\frac{\\alpha^{2}_{a,Rn}}{\\sqrt{|\\Sigma_{x^{{}^{\\prime}}_{j,Rn,\\sigma}}\\Lambda^{-1}_{a,Rn}+I|}}e^{-\\frac{1}{2}v^{T}_{i,Rn}\\left(\\Sigma_{x^{{}^{\\prime}}_{j,Rn,\\sigma}}+\\Lambda_{a,Rn}\\right)^{-1}v_{i,Rn}}\\]

_Variance Prediction Derivation (Hybrid Inputs):_ Here we discuss the derivation of the variance \\(\\Sigma(x^{{}^{\\prime}}_{j})[a,a]\\in\\mathbb{R}\\) in equation 15 from hybrid inputs. We follow the method outlined in the derivation for all random inputs, using the split kernel and probability density functions in equations 12 and 13 to deal separately with the random and known components of the inputs. With this we arrive at an identical formulation to the case with all random inputs, where \\(Q_{aa,hybrid}\\) is used in place of \\(Q_{aa}\\). \\[Q_{aa,hybrid}=E_{x^{{}^{\\prime}}_{j}}[k_{a}(x^{{}^{\\prime}}_{j},X)^{T}k_{a}(x^{{}^{\\prime}}_{j},X)]=E_{x^{{}^{\\prime}}_{j,Rn}}[k_{a,Rn}(x^{{}^{\\prime}}_{j,Rn},X_{Rn})^{T}k_{a,Rn}(x^{{}^{\\prime}}_{j,Rn},X_{Rn})]\\cdot E_{x^{{}^{\\prime}}_{j,Kn}}[k_{a,Kn}(x^{{}^{\\prime}}_{j,Kn},X_{Kn})^{T}k_{a,Kn}(x^{{}^{\\prime}}_{j,Kn},X_{Kn})]=\\int_{-\\infty}^{\\infty}k_{a,Kn}(x^{{}^{\\prime}}_{j,Kn},X_{Kn})^{T}k_{a,Kn}(x^{{}^{\\prime}}_{j,Kn},X_{Kn})\\delta(x^{{}^{\\prime}}_{j,Kn}-x^{{}^{\\prime}}_{j,Kn,\\mu})dx^{{}^{\\prime}}_{j,Kn}\\cdot\\int_{-\\infty}^{\\infty}k_{a,Rn}(x^{{}^{\\prime}}_{j,Rn},X_{Rn})^{T}k_{a,Rn}(x^{{}^{\\prime}}_{j,Rn},X_{Rn})p(x^{{}^{\\prime}}_{j,Rn})dx^{{}^{\\prime}}_{j,Rn} \\tag{31}\\] Here \\(\\cdot\\) denotes the elementwise multiplication operator. The integrals with the multivariate Gaussian pdfs result in the same solution as elaborated in the random variance derivation.
Solving this yields the variance prediction given in equation 15 and restated below as equation 32: \\[\\Sigma(x^{{}^{\\prime}}_{j})[a,a]=\\alpha^{2}_{a}-\\text{Tr}\\left((k_{a}(X,X)+\\sigma^{2}_{n,a}I)^{-1}Q_{aa,hybrid}\\right)+\\beta^{T}_{a}Q_{aa,hybrid}\\beta_{a}-\\mu(x^{{}^{\\prime}}_{j})[a]^{2} \\tag{32}\\] \\[Q_{aa,hybrid}=Q_{aa,Kn}\\cdot Q_{aa,Rn}\\] \\[Q_{aa,Kn}=k_{a,Kn}(x^{{}^{\\prime}}_{j,Kn,\\mu},X_{Kn})^{T}k_{a,Kn}(x^{{}^{\\prime}}_{j,Kn,\\mu},X_{Kn})\\in\\mathbb{R}^{n\\times n}\\] \\[Q_{aa,Rn}[i,k]=\\frac{1}{\\sqrt{|R_{Rn}|}}\\,k_{a,Rn}(x_{i,Rn},x^{{}^{\\prime}}_{j,Rn,\\mu})k_{a,Rn}(x_{k,Rn},x^{{}^{\\prime}}_{j,Rn,\\mu})\\,e^{\\frac{1}{2}z^{T}_{ik,Rn}R_{Rn}^{-1}\\Sigma_{x^{{}^{\\prime}}_{j,Rn,\\sigma}}z_{ik,Rn}}\\]

_Rollout Discussion:_ In this section we discuss the composition of our inputs and additional details of each step in our predictive rollout. In a predictive rollout we predict \\(T\\) time steps into the future starting from \\(3\\) known, consecutive input images \\([z_{i},z_{i+1},z_{i+2}]\\). With each prediction we predict a single time step into the future before incorporating our prediction into our next set of inputs. We continue this process until we have predicted the desired number of time steps.

_First Step:_ The first step of the predictive rollout predicts \\(z_{i+3}\\) from input images \\([z_{i},z_{i+1},z_{i+2}]\\). For this first prediction all the input images are known quantities. As a result, for each test input \\(x^{{}^{\\prime}}_{j},\\ j\\in[0,m-1]\\), the entire test input is known, \\(x^{{}^{\\prime}}_{j}=x^{{}^{\\prime}}_{j,Kn}\\), and \\(x^{{}^{\\prime}}_{j,Rn}\\) does not exist. \\(x^{{}^{\\prime}}_{j,Kn,\\mu}\\in\\mathbb{R}^{3p^{2}}\\) is formed in a manner identical to the training inputs. When plugging these inputs into the hybrid mean prediction equation 14 and variance prediction equation 15, we remove the random components of the equations, giving us: \\[d_{a,hybrid}[i]=k_{a,Kn}(x_{i,Kn},x^{{}^{\\prime}}_{j,Kn,\\mu}) \\tag{33}\\] \\[Q_{aa,hybrid}=Q_{aa,Kn} \\tag{34}\\] Substituting these back into equations 14 and 15, we get the formulas to predict a single output dimension for a single test input for the first prediction. These equations are equivalent to the basic Gaussian Process Regression equations for mean and variance prediction. Using these formulas we predict the distribution for every pixel in \\(z_{i+3}\\) from the \\(m\\) test inputs. This gives us the final predicted image \\(z_{i+3}\\), which is stored as a mean, variance image tuple \\((M_{i+3},V_{i+3})\\).

_Second Step:_ The second step of the predictive rollout predicts \\(z_{i+4}\\) from input images \\([z_{i+1},z_{i+2},z_{i+3}]\\). \\(z_{i+3}\\) is a random variable, from the first prediction, represented by the mean and variance image tuple \\((M_{i+3},V_{i+3})\\). \\(z_{i+1}\\) and \\(z_{i+2}\\) are known. When constructing each test input, \\(x^{{}^{\\prime}}_{j,Kn,\\mu}\\) is constructed from flattened and concatenated patches of \\([z_{i+1},z_{i+2}]\\). \\(x^{{}^{\\prime}}_{j,\\mu}\\) is constructed from flattened patches of \\(M_{i+3}\\) and \\(x^{{}^{\\prime}}_{j,\\sigma}\\) is constructed from flattened patches of \\(V_{i+3}\\) to create the random input \\(x^{{}^{\\prime}}_{j,Rn}\\). These components together form a single test input \\(x^{{}^{\\prime}}_{j}\\). We plug these inputs into the hybrid mean prediction equation 14 and variance prediction equation 15 to compute the model's output distribution for a single output dimension. We repeat this to predict the output distributions for each pixel in the future image \\(z_{i+4}\\), which is stored as a mean and variance image tuple \\((M_{i+4},V_{i+4})\\).

_Third Step:_ The third step of the predictive rollout predicts \\(z_{i+5}\\) from input images \\([z_{i+2},z_{i+3},z_{i+4}]\\). \\(z_{i+3}\\) and \\(z_{i+4}\\) are random variables, from the first and second predictions, represented by the mean and variance image tuples \\((M_{i+3},V_{i+3}),(M_{i+4},V_{i+4})\\). \\(z_{i+2}\\) is known. When constructing each test input, \\(x^{{}^{\\prime}}_{j,Kn,\\mu}\\) is constructed from flattened patches of \\([z_{i+2}]\\). \\(x^{{}^{\\prime}}_{j,\\mu}\\) is constructed from flattened, concatenated patches of \\(M_{i+3},M_{i+4}\\) and \\(x^{{}^{\\prime}}_{j,\\sigma}\\) is constructed from flattened patches of \\(V_{i+3},V_{i+4}\\) to create the random input \\(x^{{}^{\\prime}}_{j,Rn}\\). These components together form a single test input \\(x^{{}^{\\prime}}_{j}\\). We plug these inputs into the hybrid mean prediction equation 14 and variance prediction equation 15 to compute the model's output distribution for a single output dimension. We repeat this to predict the output distributions for each pixel in the future image \\(z_{i+5}\\), which is stored as a mean and variance image tuple \\((M_{i+5},V_{i+5})\\).

_Fourth Step and Onwards:_ The fourth step of the predictive rollout predicts \\(z_{i+6}\\) from input images \\([z_{i+3},z_{i+4},z_{i+5}]\\). For this and all subsequent predictions, all the input images are random variables outputted by our model. To predict the future image we utilize the approach detailed in Section IV-C on prediction with all random inputs.
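The four step types above share a single loop structure. A minimal sketch of ours, assuming a hypothetical `predict_image` routine that implements the hybrid equations and treats zero-variance inputs as known:

```python
import numpy as np

def rollout(z0, z1, z2, predict_image, T):
    """Roll the model forward T steps from three observed frames.

    predict_image(means, variances) is assumed to return the (mean, variance)
    image pair for the next frame; observed frames carry zero variance, so the
    first three iterations reduce to the known/hybrid special cases above.
    """
    M = [z0, z1, z2]                    # mean images
    V = [np.zeros_like(z0)] * 3         # observed frames are certain
    for _ in range(T):
        m, v = predict_image(M[-3:], V[-3:])   # one-step prediction
        M.append(m)                            # M_{t+1}: the prediction
        V.append(v)                            # V_{t+1}: its confidence
    return M[3:], V[3:]
```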
### _Training Input Creation Graphic_

Figure 6 graphically demonstrates the process of creating a training datapoint from patches of \\(4\\) consecutive video frames.

### _Additional Experiments and Details_

#### IV-E1 Predictive Comparison Experiment

_Additional Details:_ In this section we highlight additional details on the methodology used to generate the results for the 'Predictive Comparison' experiment in Section V. To compare all three methods, we predict frames \\([z_{10},\\ldots,z_{24}]\\) given frames \\([z_{0},\\ldots,z_{9}]\\). For our method, we train our model using \\([z_{0},\\ldots,z_{9}]\\) and begin our prediction with input images \\([z_{7},z_{8},z_{9}]\\), identical to the approach outlined in the 'Forward Prediction' experiment. The FNO-2d-time model convolves across the two spatial dimensions to predict a single future image with a recurrent structure in time. This model uses a rollout method to predict longer video sequences: the model predicts one future frame at a time and incorporates its last prediction into its next input. We continue this until we have predicted the desired number of frames. The model is structured to take an input of three images and predict a single future image. We train this method in a manner similar to ours. The training data is created using the first \\(10\\) frames \\([z_{0},\\ldots,z_{9}]\\). These frames are separated into data points of \\(4\\) consecutive images \\([z_{i},\\ldots,z_{i+3}],\\ \\forall i\\in[0,6]\\), where the first three images form the input and the fourth is the output. The model is trained over 20 epochs.
The trained model is then used to predict \\(15\\) frames \\([z_{10},\\ldots,z_{24}]\\) of the same sequence. The FNO-3d model is a neural network that convolves in space and time to directly output several frames from a set of input frames. We train this model using the first \\(25\\) frames from two unique sequences, generated using the same simulation parameters. The first \\(10\\) images are used as the inputs, with the remaining \\(15\\) serving as the training outputs. Once trained over 20 epochs, this model is given a set of \\(10\\) consecutive frames \\([z_{0},\\ldots,z_{9}]\\) to predict the next \\(15\\): \\([z_{10},\\ldots,z_{24}]\\). The results of this comparison are shown in Figure 4.

#### IV-E2 Average Result Metrics

We showcase the average relative error and average mean standard deviations off of our model's predictions evaluated on \\(100\\) separate video sequences in Figure 8. For each video sequence our model is trained using the first \\(10\\) frames \\([z_{0},\\ldots,z_{9}]\\) and used to predict the next \\(15\\) frames, \\([z_{10},\\ldots,z_{24}]\\). We use the parameters discussed in the 'Forward Prediction' experiment.

#### IV-E3 Sequential Prediction Experiment

In this experiment we examine the benefits of incorporating recent data into our model. We incrementally train models with the first \\(5\\), \\(10\\) and \\(15\\) images of a video sequence. Each of these models is used to predict \\(15\\) frames into the future, starting from its last training image. In Figure 7 we can visually see the improvement in prediction accuracy as we update our models. In the relative error graph (Figure 7(c)) we also show the results of starting predictive rollouts for the lower-data models, trained with \\(5\\) and \\(10\\) images, later in the sequence. This provides a fair evaluation to compare the impact of adding recent data, by mitigating the added error compounded as a result of the predictive rollouts. The graph shows a large improvement in accuracy as we incorporate data into our model.

#### IV-E4 Additional Visual Results

Figures 9 and 10 show additional results for predictions on different Navier Stokes simulations using the 'Forward Prediction' experiment detailed in Section V.

Figure 6: Graphic to visualize the process of generating a training data point from a sequence of \\(4\\) consecutive video frames. Each grey figure represents a labelled video frame. The red squares represent the patches that are sampled to create the input-output pair. In the fourth image the green pixel represents the output patch after cropping the patch with patch border \\(b\\). The patches from the first \\(3\\) images are flattened and concatenated to form the input. The patch from the fourth image is flattened and used as the output. In this case the output patch is a single pixel.

Figure 7: Sequential Prediction Experiment: Our model, trained using frames \\([z_{0},\\dots,z_{t_{0}-1}]\\), is used to predict frames \\([z_{t_{0}},\\dots,z_{t_{0}+15}]\\) of a 2D Navier Stokes simulation. Prediction results for \\(t_{0}=5,10,15\\) are shown along the rows of (b), respectively. Figure 7(a) displays ground truth images. Figure 7(b) shows the predicted mean images. Figure 7(c) displays a graph of the relative error of the predicted means. In Figure 7(c) we show additional error results where we start predictive rollouts with the models trained with \\(5\\), \\(10\\) images from later time steps.
This provides a fair experiment to show the predictive improvement resulting from incorporating recent data into our model.

Figure 8: Averaged Metrics: Figure 8(a) shows the average relative error metrics using our method over 100 results on separate video sequences. Figure 8(b) shows the average mean standard deviations off the predicted image is from the ground truth, using the mean and variance generated by our method. This result is also averaged over predictions on 100 different video sequences. To generate the above results our model was trained on frames \\([z_{0},\\dots,z_{9}]\\) of each sequence and used to predict the next \\(15\\) frames, \\([z_{10},\\dots,z_{24}]\\).

Figure 9: Forward Prediction Experiment: Additional Experiment 1: Our model, trained using frames \\([z_{0},\\dots,z_{9}]\\), is used to predict frames \\([z_{10},\\dots,z_{24}]\\) of a 2D Navier Stokes simulation. Figure 9(a) shows the ground truth, predicted mean and variance images. Figure 9(b) and Figure 9(c) show graphs of the prediction's relative error and mean standard deviations off.

Figure 10: Forward Prediction Experiment: Additional Experiment 2: Our model, trained using frames \\([z_{0},\\dots,z_{9}]\\), is used to predict frames \\([z_{10},\\dots,z_{24}]\\) of a 2D Navier Stokes simulation. Figure 10(a) shows the ground truth, predicted mean and variance images. Figure 10(b) and Figure 10(c) show graphs of the prediction's relative error and mean standard deviations off.
The ability to predict future states is crucial to informed decision-making while interacting with dynamic environments. With cameras providing a prevalent and information-rich sensing modality, the problem of predicting future states from image sequences has garnered a lot of attention. Current state-of-the-art methods typically train large parametric models for their predictions. Though often accurate, these models frequently fail to provide interpretable confidence metrics around their predictions. Additionally, these methods rely on the availability of large training datasets to converge to useful solutions. In this paper, we focus on the problem of predicting future images of an image sequence, with interpretable confidence bounds, from very little training data. To approach this problem, we use non-parametric models to take a probabilistic approach to image prediction. We generate probability distributions over sequentially predicted images and propagate uncertainty through time to generate a confidence metric for our predictions. Gaussian Processes are used for their data efficiency and ability to readily incorporate new training data online. Our method's predictions are evaluated on a smooth fluid simulation environment. We showcase the capabilities of our approach on real-world data by predicting pedestrian flows and weather patterns from satellite imagery.
# A 3D-2D-convolution Neural Network Model for Hyperspectral Image Classification

Jiaxin Cao\\({}^{1}\\) and Xiaoyan Li\\({}^{1}\\)

\\({}^{1}\\) School of Computer Science, China University of Geosciences, Wuhan 430074, China

## 1 Introduction

Hyperspectral image classification is a research hotspot in remote sensing image interpretation and remains highly challenging. Its purpose is to assign an accurate label to each pixel in the image and then divide the image into areas with different ground-object semantics [1]. Currently, convolutional neural networks have been successfully applied to hyperspectral image classification tasks [3-5]. In hyperspectral image (HSI) classification, the convolutional neural network acts as an "information distiller": as the network deepens, it gradually extracts high-level abstract semantic features. In this process, hyperspectral images with a huge amount of data are transformed, irrelevant information is filtered out, and useful information is enlarged and refined [7]. Prior to deep learning, traditional methods mostly used linear feature extraction, such as linear discriminant analysis [8], principal component analysis [9] and independent component analysis [10], followed by a shallow classifier [11, 12, 13] to complete classification. These methods rely on manually designed features; for complex and diverse hyperspectral data, it is difficult to find a universal feature extraction method along such a route. Convolutional neural networks, which can learn features from HSI autonomously, provide a good solution for feature extraction. HSI classification models based on 1D-CNN [14] or 2D-CNN [15] can achieve considerable classification results by automatically extracting features from hyperspectral images, but at the cost of some spatial or spectral information loss. Chen et al. [18] constructed a 3D-CNN model composed of 3D convolutional layers and 3D pooling layers, improving classification performance by means of deep exploration into spatial-spectral features. Deeper networks enable deeper and more robust features, but the network structure needs careful design to prevent a steep rise in the number of parameters. Lee et al. [19] made good use of residual connections in spectral feature learning and built a deeper network (Res-2D-CNN) with which deeper and more abstract features could be extracted. Liu et al. [31] introduced residual connections to 3D-CNN and built Res-3D-CNN, which is aimed at enhancing spatial-spectral feature learning. Zhong et al. [20] focused on the raw hyperspectral data without dimensionality reduction and built SSRN (spectral-spatial residual network). They introduced residual connections into the whole network and separated the deep feature learning procedure into independent spatial feature learning and spectral feature learning. More discriminative features were learned by SSRN, and the separated feature learning pattern has had a significant impact on subsequent hyperspectral classification research. Recently, dense connections have attracted more attention from hyperspectral researchers [32]. Dense connection reduces the network parameters through a small convolution kernel number and realizes efficient feature reuse through feature map concatenation, both of which alleviate the problem of model overfitting. Hu et al.
[20] proposed squeeze-and-excitation networks and introduced the attention mechanism to image classification networks, winning the 2017 ImageNet Large Scale Visual Recognition Competition. Recently, the attention mechanism [21] has been applied to the construction of HSI classification models. The attention mechanism is a resource allocation scheme through which limited computing resources are used to process more important information. Therefore, an attention mechanism module can effectively enhance the expression ability of the model without excessively increasing complexity. Wang et al. [22] constructed a spatial-spectral squeeze-and-excitation (SSSE) module to automatically learn the weights of different spectral bands and different neighborhood pixels, emphasizing meaningful features and suppressing unnecessary ones, so that classification accuracy is improved effectively. Li et al. [23] added an attention module (squeeze-and-excitation block) after each dense connection module used for shallow and middle feature extraction to emphasize effective features in the spectral bands, which were then fed to further deep feature extraction. The attention mechanism in HSI classification models is used for finding more discriminative feature patterns in the spectral or spatial dimension. Based on 3D-2D-CNN and the densely connected module, SE-HybridSN realizes more efficient feature reuse and feature fusion. Moreover, the attention mechanism is introduced to refine the extracted spectral and spatial features in a targeted manner. With fewer parameters, SE-HybridSN achieves better classification performance on the Indian Pines, Salinas and University of Pavia datasets.

## 2 Method

### SE-HybridSN Model

Hyperspectral image classification is to assign a specific label to every pixel in hyperspectral images. Convolutional neural network based hyperspectral classification models take small image patches as input. Every hyperspectral image patch is composed of the spectral vectors within a predefined range, and its land-use type is determined by its center pixel. The hyperspectral patch can be denoted as \\(\\boldsymbol{P}_{L\\times W\\times S}\\), where \\(L\\times W\\) represents the spatial dimension and \\(S\\) represents the number of spectral bands. In our proposed model, the input data is processed by principal component analysis (PCA) in the spectral dimension, which greatly reduces the redundancy within hyperspectral data. Figure 1 shows the network structure of the proposed SE-HybridSN. SE-HybridSN is based on the 3D-2D-CNN feature extraction pattern and is composed of 6 convolutional layers. We introduce the channel attention method after every 3D convolutional layer to refine the extracted spatial-spectral features.

### Convolutional Layers Used in Model

In 2D-CNN, the input data are convolved with 2D kernels. The convolution happens by computing the sum of the dot product between input data and kernel. The kernel is strided over the input data to cover the full spatial dimension. The convolved features are passed through the activation function to introduce non-linearity into the model.
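The difference between the two kernel types, formalized in the equations that follow, is easiest to see in tensor shapes. A small PyTorch sketch of our own; the layer sizes are arbitrary illustrations, not the paper's configuration:

```python
import torch
import torch.nn as nn

# A hyperspectral patch batch after PCA: (batch, 1, bands, height, width).
x = torch.randn(4, 1, 30, 25, 25)

# 3D convolution: the kernel also spans the spectral axis (here 7 bands), so
# each output voxel mixes a local spatial and spectral neighborhood.
conv3d = nn.Conv3d(1, 8, kernel_size=(7, 3, 3))
y3 = conv3d(x)                      # -> (4, 8, 24, 23, 23)

# 2D convolution: the spectral axis is folded into input channels, so each
# kernel covers the full spectrum at once and loses the band-local structure
# that the 3D kernel preserves.
conv2d = nn.Conv2d(30, 8, kernel_size=3)
y2 = conv2d(x.squeeze(1))           # -> (4, 8, 23, 23)
```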
In 2D convolution, the activation value at spatial position \\((x,y)\\) in the \\(j^{th}\\) feature map of the \\(i^{th}\\) layer, denoted as \\(\\nu_{i,j}^{x,y}\\), is generated using the following equation, \\[\\nu_{i,j}^{x,y}=\\varphi(b_{i,j}+\\sum_{\\tau=1}^{d_{i-1}}\\sum_{\\rho=-\\gamma}^{\\gamma}\\sum_{\\sigma=-\\delta}^{\\delta}W_{i,j,\\tau}^{\\sigma,\\rho}\\times\\nu_{i-1,\\tau}^{x+\\sigma,y+\\rho}) \\tag{1}\\] The 3D convolution is done by convolving a 3D kernel with the 3D data. In the proposed model for HSI data, the feature maps of a convolution layer are generated using the 3D kernel over multiple contiguous bands in the input layer; this captures the spectral information. In 3D convolution, the activation value at position \\((x,y,z)\\) in the \\(j^{th}\\) feature map of the \\(i^{th}\\) layer, denoted as \\(\\nu_{i,j}^{x,y,z}\\), is generated as follows, \\[\\nu_{i,j}^{x,y,z}=\\varphi(b_{i,j}+\\sum_{\\tau=1}^{d_{i-1}}\\sum_{\\lambda=-\\eta}^{\\eta}\\sum_{\\rho=-\\gamma}^{\\gamma}\\sum_{\\sigma=-\\delta}^{\\delta}\\boldsymbol{W}_{i,j,\\tau}^{\\sigma,\\rho,\\lambda}\\times\\nu_{i-1,\\tau}^{x+\\sigma,y+\\rho,z+\\lambda}) \\tag{2}\\] where \\(2\\eta+1\\) is the depth of the kernel along the spectral dimension and the other parameters are the same as in (Eqn. 1).

### Attention Mechanism

Currently, the attention mechanism has been successfully applied to the area of computer vision based on convolutional neural networks. The attention mechanism can be used to readjust feature maps generated by some layers of a neural network, which makes it possible to emphasize specific channel or spatial features. The attention mechanism can be roughly divided into spatial attention and channel attention. In our proposed model, channel attention is introduced to refactor and refine the spatial-spectral features extracted by every convolutional layer in the dense block.

Figure 1: The network structure of the proposed SE-HybridSN.

**Channel Attention Module** As mentioned above, the feature map extracted by a single 3D convolution kernel is modeled as a 3D cube, which can learn detailed features and correlation information across spectral bands of hyperspectral data to some extent. Take the \\(3\\times 3\\times 3\\) convolutional kernel as an example: the same parameters are used for single-channel 3D hyperspectral data, and each convolution operation covers three spectral bands. As the band span of the spectral features characterized by a single convolutional layer is fixed, spectral feature mining is limited to some extent. Therefore, in order to further refine the extracted spatial-spectral features, feature maps of all channels are concatenated in the spectral dimension to form a 3D tensor. The reshaped 3D tensor has a large channel number, equal to the original channel number times the original spectral band number. Then channel attention is introduced to assign a specific weight to each channel. Figure 2 is a schematic diagram of the channel attention mechanism used in this article. Let the dimension of the feature map generated by 3D convolutional layers be \\(\\mathrm{B}\\times\\mathrm{L}\\times\\mathrm{W}\\times\\mathrm{C}\\times\\mathrm{N}\\), where \\(\\mathrm{B}\\) represents batch size, \\(\\mathrm{L}\\times\\mathrm{W}\\) represents the spatial dimension of the feature map, \\(\\mathrm{C}\\) represents the spectral dimension and \\(\\mathrm{N}\\) represents the number of convolution kernels. In our proposed method, the 5D feature map is reshaped into a 4D tensor of dimension \\(\\mathrm{B}\\times\\mathrm{L}\\times\\mathrm{W}\\times\\mathrm{(CN)}\\), where \\(\\mathrm{CN}\\) is the new channel number.
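A sketch of this reshape-and-attend step in PyTorch. The squeeze-and-excitation structure follows the usual two-layer gating design; the reduction ratio `r=8` and all tensor sizes are our assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class ChannelSE(nn.Module):
    """Squeeze-and-excitation over the merged (C*N) channel axis (a sketch)."""
    def __init__(self, channels, r=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r), nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels), nn.Sigmoid())

    def forward(self, x):                 # x: (B, CN, L, W)
        w = x.mean(dim=(2, 3))            # squeeze: global average pool -> (B, CN)
        w = self.fc(w)                    # excitation: per-channel weights in (0, 1)
        return x * w[:, :, None, None]    # reweight each feature map

# Refinement after a 3D conv layer: merge the spectral and kernel axes, attend.
feat = torch.randn(2, 8, 25, 25, 24)      # (B, N, L, W, C) from a Conv3d block
b, n, l, w, c = feat.shape
merged = feat.permute(0, 1, 4, 2, 3).reshape(b, n * c, l, w)   # (B, CN, L, W)
refined = ChannelSE(n * c)(merged)
```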
## 3 Experiments

### Dataset

We have used three publicly available hyperspectral image datasets, namely Indian Pines, University of Pavia and Salinas Scene. The Indian Pines (IP) dataset has images with \\(145\\times 145\\) spatial dimension and 224 spectral bands in the wavelength range of 400 to 2500 nm, out of which 24 spectral bands covering the region of water absorption have been discarded. The ground truth available is designated into 16 classes of vegetation. The University of Pavia (UP) dataset consists of \\(610\\times 340\\) spatial dimension pixels with 103 spectral bands in the wavelength range of 430 to 860 nm. The ground truth is divided into 9 urban land-cover classes. The Salinas Scene (SA) dataset contains images with \\(512\\times 217\\) spatial dimension and 224 spectral bands in the wavelength range of 360 to 2500 nm. The 20 water-absorbing spectral bands have been discarded. In total 16 classes are present in this dataset. Figures 3-5 show the distribution of the ground-object classes for Indian Pines, University of Pavia and Salinas.

Figure 3: False-color image and color coding for Indian Pines

Figure 4: False-color image and color coding for the University of Pavia

In our experiments, for each image we divided all the pixels into three parts: training set, test set and validation set. The training and validation proportions were 5% (Indian Pines), 1% (University of Pavia) and 1% (Salinas) respectively, and the remaining pixels served as the test set. The sample distribution of the three datasets for each class of ground object is shown in Tables 1-3.

### Experimental Results

Three indices were used to measure the accuracy of models, namely, overall accuracy (OA), average accuracy (AA) and the Kappa coefficient (Kappa). OA represents the proportion of samples that were correctly classified by the model. AA stands for the average of the per-class accuracies over all land-cover classes. Kappa is an accuracy measure based on the confusion matrix, which represents the percentage of errors reduced by the classification versus a completely random classification. In order to avoid fluctuations caused by accidental factors as far as possible, we conducted 20 consecutive experiments. Tables 4-6 show the average indices and standard deviations of each model on the three datasets. Figures 6-8 show the ground truths and the classification results of each model for the three datasets. We can tell from the data and the predicted maps that the classification result of SE-HybridSN was more detailed and accurate on Indian Pines, Salinas and University of Pavia.
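All three indices can be computed from a single confusion matrix; the following is a minimal numpy sketch of ours, not code from the paper.

```python
import numpy as np

def oa_aa_kappa(conf):
    """OA, AA and the Kappa coefficient from a confusion matrix where
    conf[i, j] = number of class-i samples predicted as class j."""
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total                          # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))       # mean per-class accuracy
    pe = np.sum(conf.sum(axis=0) * conf.sum(axis=1)) / total**2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa
```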
Among the contrast models, the OA of 2D-CNN on the three datasets was lower than that of the other contrast models, indicating that the 2D-CNN model is not suitable for small-sample hyperspectral classification. Secondly, the classification result of Res-3D-CNN was higher than that of Res-2D-CNN, indicating that the 3D-CNN model could explore the spatial-spectral features of training samples more effectively. R-HybridSN was superior to HybridSN on Indian Pines and University of Pavia, and the two models had a higher classification accuracy than 3D-CNN. To a certain extent, this proves that, compared with models that use the 3D convolution kernel or the 2D convolution kernel alone, the 3D-2D-CNN model is more suitable for classification under the condition of small samples. Among the three 3D-2D-CNN models, SE-HybridSN achieved the highest classification accuracies on the three datasets. For example, the OA of SE-HybridSN was 2.63% and 3.66% higher than HybridSN and SSRN respectively on Indian Pines.

\\begin{table} \\begin{tabular}{c c c c c c} \\hline \\hline Number & Class & Training & Validation & Testing & Total \\\\ \\hline 1 & Brocoli\\_green\\_weeds\\_1 & 20 & 20 & 1969 & 2009 \\\\ 2 & Brocoli\\_green\\_weeds\\_2 & 37 & 37 & 3652 & 3726 \\\\ 3 & Fallow & 20 & 20 & 1936 & 1976 \\\\ 4 & Fallow\\_rough\\_plow & 14 & 14 & 1366 & 1394 \\\\ 5 & Fallow\\_smooth & 27 & 27 & 2624 & 2678 \\\\ 6 & Stubble & 40 & 39 & 3880 & 3959 \\\\ 7 & Celery & 36 & 36 & 3507 & 3579 \\\\ 8 & Grapes\\_untrained & 112 & 113 & 11046 & 11271 \\\\ 9 & Soil\\_vinyard\\_develop & 62 & 62 & 6079 & 6203 \\\\ 10 & Corn\\_senesced\\_green\\_weeds & 33 & 33 & 3212 & 3278 \\\\ 11 & Lettuce\\_romaine\\_4wk & 11 & 10 & 1047 & 1068 \\\\ 12 & Lettuce\\_romaine\\_5wk & 19 & 20 & 1888 & 1927 \\\\ 13 & Lettuce\\_romaine\\_6wk & 9 & 9 & 898 & 916 \\\\ 14 & Lettuce\\_romaine\\_7wk & 10 & 11 & 1049 & 1070 \\\\ 15 & Vinyard\\_untrained & 72 & 73 & 7123 & 7268 \\\\ 16 & Vinyard\\_vertical\\_trellis & 18 & 18 & 1771 & 1807 \\\\ & Total & 541 & 541 & 53047 & 54129 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 3: Number of training, validation and testing samples of the Salinas dataset

Figure 6: The classification maps of Indian Pines. (**a**) Ground truth. (**b–f**) Predicted classification maps for 2D-CNN, 3D-CNN, SSRN, HybridSN and SE-HybridSN respectively.

Figure 7: The classification maps of the University of Pavia. (**a**) Ground truth. (**b–f**) Predicted classification maps for 2D-CNN, 3D-CNN, SSRN, HybridSN and SE-HybridSN respectively.
\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
Number & Training Samples & 2D-CNN & 3D-CNN & SSRN & HybridSN & SE-HybridSN \\
\hline
1 & 3 & 12.07 & 27.07 & 79.23 & 61.10 & 45.73 \\
2 & 72 & 78.46 & 83.45 & 88.60 & 92.20 & 95.44 \\
3 & 41 & 60.00 & 75.37 & 85.81 & 96.48 & 97.41 \\
4 & 12 & 42.84 & 56.06 & 70.53 & 77.11 & 93.17 \\
5 & 24 & 81.87 & 92.90 & 93.11 & 94.30 & 96.71 \\
6 & 37 & 92.30 & 96.50 & 96.43 & 97.27 & 99.30 \\
7 & 2 & 27.40 & 67.80 & 82.36 & 89.00 & 98.60 \\
8 & 24 & 99.44 & 98.27 & 98.51 & 97.97 & 99.00 \\
9 & 1 & 53.61 & 60.28 & 68.90 & 83.89 & 64.44 \\
10 & 48 & 74.42 & 83.22 & 87.72 & 95.18 & 96.01 \\
11 & 122 & 82.74 & 89.38 & 91.42 & 97.78 & 98.31 \\
12 & 30 & 57.36 & 63.55 & 90.04 & 86.25 & 91.95 \\
13 & 10 & 84.19 & 88.43 & 91.00 & 89.00 & 98.70 \\
14 & 63 & 92.57 & 97.89 & 97.96 & 98.23 & 99.43 \\
15 & 20 & 64.65 & 81.57 & 82.57 & 83.04 & 90.94 \\
16 & 4 & 81.85 & 92.98 & 88.51 & 85.42 & 96.13 \\
\hline
 & KAPPA & 0.754 ± 0.03 & 0.84 ± 0.025 & 0.89 ± 0.012 & 0.935 ± 0.008 & 0.963 ± 0.005 \\
 & OA (\%) & 78.48 ± 3.58 & 86.04 ± 2.19 & 93.10 ± 0.42 & 94.31 ± 0.65 & 96.76 ± 0.44 \\
 & AA (\%) & 64.74 ± 3.21 & 78.42 ± 2.87 & 88.58 ± 0.15 & 89.01 ± 1.23 & 91.39 ± 2.09 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Classification results in Indian Pines

Figure 8: The classification maps of Salinas. (a) Ground truth. (b-f) Predicted classification maps for 2D-CNN, 3D-CNN, SSRN, HybridSN and SE-HybridSN, respectively.

\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
Number & Training Samples & 2D-CNN & 3D-CNN & SSRN & HybridSN & SE-HybridSN \\
\hline
1 & 20 & 59.97 & 97.63 & 97.50 & 99.99 & 99.99 \\
2 & 37 & 99.48 & 99.82 & 100 & 100.00 & 100 \\
3 & 20 & 60.01 & 92.35 & 99.23 & 99.48 & 99.54 \\
4 & 14 & 98.27 & 98.87 & 98.00 & 98.11 & 99.13 \\
5 & 27 & 94.80 & 96.85 & 97.11 & 99.08 & 98.92 \\
6 & 40 & 99.89 & 99.98 & 98.43 & 99.57 & 99.93 \\
7 & 36 & 97.21 & 98.80 & 97.36 & 99.46 & 99.70 \\
8 & 112 & 83.53 & 87.19 & 98.51 & 99.13 & 98.44 \\
9 & 62 & 99.26 & 99.55 & 99.90 & 99.97 & 100 \\
10 & 33 & 84.95 & 93.58 & 97.72 & 98.70 & 97.84 \\
11 & 11 & 90.00 & 91.44 & 97.42 & 98.34 & 99.04 \\
12 & 19 & 97.15 & 99.26 & 98.04 & 99.68 & 99.89 \\
13 & 9 & 95.74 & 97.74 & 97.70 & 97.21 & 94.93 \\
14 & 10 & 94.84 & 98.29 & 98.42 & 97.58 & 93.51 \\
15 & 72 & 72.57 & 78.62 & 97.18 & 98.45 & 96.84 \\
16 & 18 & 91.12 & 87.12 & 91.11 & 99.85 & 99.45 \\
\hline
 & KAPPA & 0.862 ± 0.02 & 0.918 ± 0.015 & 0.942 ± 0.005 & 0.992 ± 0.003 & 0.986 ± 0.003 \\
 & OA (\%) & 87.61 ± 1.38 & 92.58 ± 1.19 & 96.10 ± 0.50 & 99.25 ± 0.65 & 98.76 ± 0.24 \\
 & AA (\%) & 88.66 ± 2.21 & 94.82 ± 1.87 & 97.01 ± 0.63 & 99.09 ± 0.49 & 98.57 ± 0.42 \\
\hline \hline
\end{tabular}
\end{table}
Table 6: Classification results in Salinas

\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
Number & Training Samples & 2D-CNN & 3D-CNN & SSRN & HybridSN & SE-HybridSN \\
\hline
1 & 66 & 91.32 & 92.83 & 94.72 & 91.78 & 96.79 \\
2 & 186 & 97.50 & 96.54 & 97.15 & 99.77 & 99.74 \\
3 & 21 & 68.51 & 70.08 & 82.73 & 92.24 & 91.44 \\
4 & 30 & 95.09 & 95.99 & 96.82 & 91.11 & 94 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Classification results in the University of Pavia

We further compared the experimental results of the three 3D-2D-CNN based models and drew the following conclusions. First, the classification accuracy of SE-HybridSN was relatively balanced on the three datasets, which further demonstrates the strong feature-extraction ability of the dense block and the necessity of the channel attention module. Second, SE-HybridSN had uneven classification accuracy across the different datasets: using a similar amount of training samples for the three datasets, the classification results on Salinas were far better than those on Indian Pines, so the generalization ability of SE-HybridSN needs further analysis. Third, compared with the other two 3D-2D-CNN models, SE-HybridSN showed a tremendous improvement on small-sample classes, such as Stone-steel Towers in Indian Pines and Shadows in the University of Pavia. However, the classification accuracy of SE-HybridSN on some ground objects, such as Oats and Alfalfa in Indian Pines and Lettuce_romaine_7wk in Salinas, was still lower than that of HybridSN, which needs further study.

## 4 Conclusion

In this paper, to realize efficient extraction and refinement of spatial-spectral features in "small sample" hyperspectral image classification, we proposed the SE-HybridSN model from the perspective of network optimization. Based on the 3D-2D-CNN model, multi-feature reuse was realized by a dense block. In addition, the 3D convolutions and the 2D convolutions were each equipped with channel attention, so the spatial-spectral features were further refined. We conducted a series of experiments on three open datasets: Indian Pines, Salinas and the University of Pavia. The experimental results show that the SE-HybridSN model had a better classification effect than all the comparison models. However, the accuracy improvement brought by network optimization alone was limited, so other strategies should be combined to further improve the classification accuracy.
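The dense block referred to above can be sketched as follows. This is our minimal illustration with assumed channel sizes, not the exact SE-HybridSN block: each layer consumes the concatenation of all preceding feature maps, which is what enables the multi-feature reuse described in the conclusion.

```python
import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    """DenseNet-style feature reuse with 3D convolutions: each layer receives
    the concatenation of the input and all earlier outputs, so shallow
    features remain available to deeper layers."""
    def __init__(self, in_ch=8, growth=8, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv3d(in_ch + i * growth, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True))
            for i in range(n_layers))

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)   # channels: in_ch + n_layers * growth

x = torch.randn(2, 8, 16, 9, 9)          # (B, C, spectral, H, W)
print(DenseBlock3D()(x).shape)            # torch.Size([2, 32, 16, 9, 9])
```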
In recent years, with the emergence of new technologies, deep learning has been widely used for hyperspectral image classification. The convolutional neural network (CNN) is one of the most frequently used deep learning methods for visual data processing, and its use for hyperspectral image classification is also visible in recent works. However, the classification effect is not satisfactory when only limited training samples are available. Focusing on "small sample" hyperspectral classification, we propose a novel 3D-2D convolutional neural network model named SE-HybridSN. In the SE-HybridSN model, a dense block is used to reuse shallow features, aiming to better exploit hierarchical spatial-spectral features. Subsequent depthwise separable convolutional layers are used to discriminate the spatial information. Further refinement of the spatial-spectral features is realized by a channel attention mechanism, applied after every 3D convolutional layer and every 2D convolutional layer. The experimental results indicate that our proposed model can learn more discriminative spatial-spectral features using very little training data. On Indian Pines, Salinas and the University of Pavia, SE-HybridSN uses only 5%, 1% and 1% of the labeled data for training, respectively, and obtains a very satisfactory performance.

Keywords: Hyperspectral Image Classification, Convolutional Neural Networks, Deep Learning, Spatial-Spectral, Channel Attention
Greater than the sum of its parts: optical remote sensing and sediment core data provide a holistic perspective on glacial processes

Henry Jacob Miller Gage and Carolyn Hope Eyles

School of Earth, Environment, and Society, McMaster University, 1280 Main Street West, Hamilton, Ontario L8S

25 May 2023

sediment cores, which offer insight into a variety of glacial processes which shift the timing, type, and amount of sediment that is delivered to and deposited in the proglacial basin (Bakke and Paasche, 2011). Paleoenvironmental data obtained from sediment cores are useful for answering the questions 'What?' and 'When?'. For example: What was the sediment provenance and depositional environment? When was the sediment produced, transported, and deposited? What are the seasonal or annual dynamics of glacial processes? This is especially important for comparing phenomena over long timescales when there are shifts in frequency or magnitude that produce changes in sedimentological characteristics. One of the advantages of employing sediment cores is that they provide a wide range of sedimentological data, such as grain size, sediment lithology, sorting, lamination, and structural characteristics, as well as geochemical data, each of which offers information about separate glacial processes (Bakke and Paasche, 2011). Hence, a single sediment core that records variations in sediment input over time can potentially give a comprehensive picture of locally changing conditions in the depositional environment (but see Van Wyk de Vries and others, 2023). Paleoenvironmental records obtained from sediment cores have been used for a variety of applications, including the study of meltwater dynamics (Striberger and others, 2012), subglacial and proglacial weathering (Takano and others, 2015), glacial retreat (Avery and others, 2019), and water contamination (Zhu and others, 2020). Paleoenvironmental analysis of terrestrial landforms and sediment exposures can also provide valuable information about past environments and depositional processes, although such analyses are often limited by the interpretability, length, and temporal scale of the available record (Benn and Owen, 2002; Barr and Lovell, 2014).

There is a growing awareness that modern geoscience necessitates an interdisciplinary approach which permits geoscientists to solve new scientific problems. To this end, remote sensing techniques have been increasingly combined with in situ field measurements to estimate glacier characteristics, particularly mass balance (e.g. Clark, 1997; Rivera and others, 2002; Negi and others, 2012; Karušs and others, 2022). Outcrop logging and geomorphological mapping have also been combined successfully with remote sensing data to extend the spatial scale of observations and inform understanding of the relationship between glacial processes and landforms (Boulton and others, 1999; Chandler and others, 2016, 2018; Davies and others, 2017; Lovell and others, 2018; Ewertowski and Tomczyk, 2020; Storrar and others, 2020; Boston and others, 2023). In turn, remote sensing imagery can be used for landsystem analysis to provide important geomorphological insights, and has been combined with stratigraphic logging and numerical dating to investigate landform evolution during successive glacial advances/retreats (Boulton and Clark, 1990; Livingstone and others, 2010; Gribenski and others, 2016).
A significant opportunity that has not been fully exploited to date is the integration of optical remote sensing data (namely multispectral imagery) and paleoenvironmental data obtained from sediment cores. While practitioners of glacial geomorphology and of remote sensing have employed integrated methods since the early development (i.e. the 1970s and 1980s) of satellite-based imagery, these methods were largely restricted to the analysis of data from glacial landforms or sediment sections (Sugden, 1978; Punkari, 1980). This continues to be the case despite the opportunities that exist (e.g. Straneo and others, 2019) to combine optical remote sensing datasets with data from sediment cores, which are widely used to study glacial processes and past environmental conditions. As the quality and accessibility of remotely sensed data have improved, integration of these specific datasets will provide a more holistic picture of glacial sedimentary processes across a variety of temporal and spatial scales.

### The case for combining optical remote sensing and sediment core data

Optical remote sensing and paleoenvironmental data derived from sediment cores have complementary characteristics (Table 1). This means that a multi-method approach may address the conceptual and technical limitations of each data source (e.g. proxy interpretation, sensor calibration) while augmenting the spatiotemporal extent of the combined dataset. An integrated approach offers the opportunity to link past and present analogs to answer a broader range of questions on contemporary glacial processes and long-term glacial activity. Combining optical remote sensing and sediment core data improves the scale, scope, and interpretability of these datasets.

### Scale

Sediments deposited over decades to tens of thousands of years can be dated with relatively high precision to provide annual (or in some cases, sub-annual) data (e.g. Stansell and others, 2013). However, the data which can be gleaned from these measurements are inherently restricted by uncertainty in the dating of the core and the rate of sedimentation (Croudace and others, 2019; Lisiecki and others, 2022). Remote sensing imagery is temporally precise, with a high frequency of collection, so it can be used to analyze glacial processes at more continuous intervals with high temporal certainty. Thus, coarse-resolution processes interpreted from sediment cores can be confirmed over short timescales using remote sensing. Likewise, paleoenvironmental data obtained from sediment cores offer a much longer temporal extent than satellite-based remote sensing, for which data have only been available since the mid-to-late twentieth century (Figure 1).

In terms of spatial scale, optical remote sensing imagery is extensive and can provide comprehensive coverage over an entire glacier and proglacial basin. Sediment cores can be difficult and costly to extract, which may limit the quantity of data that can be obtained from a site of interest (Bakke and Paasche, 2011). The inaccessibility of many glacial environments, and the fact that lengthy sediment records are typically restricted to proglacial lakes, also place constraints on the spatial extent of paleoenvironmental datasets derived from sediment cores alone (Bakke and Paasche, 2011).
Obtaining data from spaceborne sensors alleviates the challenges of intensive field data collection and can provide coverage over areas that otherwise cannot be assessed (Bishop and others, 2000; Bhardwaj and others, 2016).

\begin{table}
\begin{tabular}{l l l}
\hline \hline
Characteristics & Optical remote sensing data & Sediment core data \\
\hline
Spatial extent & Discrete to continuous, spatially extensive & Discrete, spatially restricted \\
Temporal extent & Short (days to decades), high precision & Long (decades to epochs), poor precision \\
Conventional applications & Contemporary glacial processes & Paleo-glacial processes \\
Interpretation & May require ground-truthing and calibration & Reliant upon assumptions about proxy-process extrapolation \\
Data availability & On-demand & Months to years to collect and process \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Complementary characteristics of remotely sensed and sediment core data

### Interpretation

Interpreting paleoenvironmental data requires practitioners to make assumptions about how sediments are related to glacial processes in situ. For example, geochemical sediment proxies have nuanced interpretations which can shift based on the depositional environment (Croudace and Rothwell, 2015). Sediments preserve an indirect record of environmental conditions at the time of their deposition, which means that even well-supported interpretations of geochemical sediment proxies draw inherently speculative conclusions about why and how glacial processes have occurred (Bakke and Paasche, 2011). In this respect, remote sensing is especially valuable because it can supply information about the context in which sediments have been deposited (e.g. to illustrate the environmental conditions responsible for varve deposition in proglacial lakes; Zolitschka and others, 2015; Van Wyk De Vries and others, 2022). Information about contemporary processes observed in remotely sensed imagery can be used to verify whether historical phenomena interpreted from paleoenvironmental records continue to occur.

One of the drawbacks of utilizing remote sensing data is that they often require ground-truthing to verify the precision of imagery interpretations. For instance, when using spectral indices to delineate glacier extent, seasonal snow, water bodies, or clouds may be incorrectly identified because they have similar index values to glacial ice (Holobâcă and others, 2021; Racoviteanu and others, 2022). This is because optical remote sensors only measure reflectance within discrete wavelength bands which may not be precise enough to discriminate surfaces with similar spectral properties (Bhardwaj and others, 2016). Remote sensing data also require calibration to eliminate the effects of atmospheric and ground-based scattering which impact bottom-of-atmosphere reflectance measurements (Hall and others, 1990; Burns and Nolin, 2014; Jawak and others, 2022). Ultimately, these limitations mean that field data are frequently required to verify the validity of remote sensing measurements (Jones, 1983; Brandelik and others, 1998).
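As a concrete illustration of the delineation problem described above, consider the normalized difference snow index (NDSI), one of the spectral indices commonly used to map snow and ice. The sketch below is ours, not from any of the cited studies; it assumes Sentinel-2 band 3 (green) and band 11 (SWIR) arrays already resampled to a common grid, and a commonly used fixed threshold of about 0.4. Water and some clouds can exceed the threshold just as glacial ice does, which is precisely why field verification is needed.

```python
import numpy as np

def ndsi(green, swir, eps=1e-6):
    """Normalized Difference Snow Index: (green - SWIR) / (green + SWIR)."""
    green = green.astype(np.float64)
    swir = swir.astype(np.float64)
    return (green - swir) / (green + swir + eps)

# Thresholding illustrates the misclassification risk: surfaces with
# spectrally similar band ratios (water, some clouds) also pass the test.
ice_mask = ndsi(np.random.rand(4, 4), np.random.rand(4, 4)) > 0.4
```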
While sediment core records collected from lacustrine or marine environments do not provide information about specific ground cover spectral indices or atmospheric corrections, they _can_ be used to supplement remote sensing data by providing confirmation that changes interpreted in optical imagery have indeed occurred in the glacial environment, such as shifts in the composition of glacierized and nonglacierized surfaces (e.g. Gage and others, 2022a, 2022b).

### Scope

A promising outcome of combining datasets with complementary spatial and temporal extents is the ability to extend analyses from local to regional and global scales. Local-scale observations can be upscaled by linking glacial processes or landforms that are well understood from sediment cores to remote sensing signatures (e.g. spectral indices) which can be identified more broadly across the landscape (Bamber and Rivera, 2007; Jennings and others, 2022). An advantage of this approach is that by aligning sediment core data at the local scale with remotely sensed imagery signatures it is possible to infer sedimentological changes that might be occurring across the broader landscape.

Figure 1: Temporal extent and frequency of optical remote sensing and sediment core data. Each point represents a category of information that can be extracted from remotely sensed or sediment data. Positions are represented conceptually; there is variability in both the extent and frequency of these data sources based on the methodology employed.

The integration of sedimentological core analysis and optical remote sensing may also broaden the conceptual scope of research to numerous questions in glacial geology which are difficult to answer using one of these techniques alone. Investigations into the temporal variability of processes linking glacier advance/retreat and sediment and landform development are particularly well suited to this approach. For instance, examining relationships between changing ice velocity or ice recession rates and sediment characteristics can be achieved by comparing glacial motion in imagery timeseries to sediment core records. Other topics relating to sediment production, such as the effect of changing ice thickness, margin configuration, and seasonal/annual changes in the sediment load associated with meltwater discharge, could also be investigated by integrating datasets.

### Combining optical remote sensing and paleoenvironmental research

There are several examples in the literature which pioneered an approach integrating optical remote sensing and paleoenvironmental data obtained from sediment cores (also see Schiefer and Gilbert, 2008; Dowdeswell and others, 2015; Flink and others, 2018; Noormets and others, 2021; Piret and others, 2021). Lopez-Moreno and others (2017) studied the recent development of proglacial lakes in Peru's Cordillera Blanca using short lake sediment cores. X-ray fluorescence and grain size analysis allowed the identification of two distinct sedimentary units consistent with deposition in a high- and low-energy environment, respectively. Multispectral Landsat imagery was then used to link these changes to shifts in glacial extent, which indicated that the adjacent glacier partially covered the lake in 1975 and retreated rapidly during the mid-1980s and 1990s due to warm and dry El Niño conditions.
Similarly, preliminary work from Gage and others (2022a, 2022b) using X-ray fluorescence on three sediment cores from Lake Shallap in the Cordillera Blanca demonstrates that sediment influx (Zr/Ti) and iron enrichment (Fe/Ti) proxies increased between the early 20th century and the present. The authors examined spectral indices of iron (ferrous iron index) and glacier cover (NDSI) over pairs of Sentinel-2 images and determined that areas of high glacial retreat were spatially associated with areas of ferrous iron enrichment, particularly during warm El Niño years. This suggests that climate change has accelerated glacial retreat and exposed iron-rich bedrock which is now contaminating meltwater.

Van Wyk de Vries and others (2022) examined the sediment dynamics of a proglacial lake in the Southern Patagonian Icefield by assessing the composition of 47 sediment cores. Rhythmic alternation between coarse-grained, calcium-enriched dark laminae and fine-grained, calcium-depleted light laminae would suggest seasonal deposition according to meltwater flux dynamics, with a higher summer sediment input and a lower winter input. However, assessment of Sentinel-2 multispectral imagery demonstrated that relative suspended sediment concentration was higher in the winter than in the summer. This dissuaded the authors from the notion that sediment dynamics were driven solely by the seasonal cycle of sediment delivery and revealed that seasonality in lake mixing contributed to the deposition of annual varves in the sediment.

Andresen and others (2012) collected optical imagery and three sediment cores from Sermilik Fjord to reconstruct a record of calving activity at the Helheim Glacier in Greenland over the past 120 years. The authors used satellite and aerial imagery to confirm that sand deposition in the sediment cores, a proxy for ice-rafted debris, is concurrent with glacial retreat. These data indicate that glacial retreat in the imagery intensifies during 5-10-year calving episodes and that the Helheim Glacier is highly responsive to atmosphere-ocean variability over short timescales.

Campagne and others (2015) performed micro-paleontological and geochemical analyses on a sediment core to reconstruct ocean conditions in the Mertz Glacier polynya. The authors observed an approximately 70-year periodicity between conditions with abundant open-water diatoms, high Ti, and large grain size, and conditions with high sea-ice indicators, low Ti, and low grain size. To investigate the potential cause of this cyclicity, Giles (2017) employed a complementary remote sensing approach using a sequence of images and determined that glacial advances cause the glacier tongue to calve periodically, which controls the development of the polynya.

### Considerations and limitations

For those wishing to adopt integrated methodologies, optical remote sensing techniques are low-hanging fruit: optical imagery is financially and technologically accessible and does not require intensive field data collection. Appropriate study sites are those with good optical imagery coverage in locations where sediment cores can reasonably be obtained. Integrative methodologies should also aim to examine time periods which lend themselves to combining optical imagery and sediment core data, such as the late Holocene (particularly the Anthropocene), for which there is remote sensing coverage and paleo-processes can be documented. Weaknesses or inadequacies of the specific datasets being examined should be clearly identified.
For remote sensing datasets, this might include uncertainty in assessing a multispectral index which behaves similarly for multiple surface cover types, or the limited temporal extent of the available imagery. Challenges with sediment core datasets could include difficulties interpreting a particular geochemical sediment proxy, establishing accurate sedimentation rates, or dealing with discontinuities which disrupt interpretations of the paleoclimatic record. These challenges should be weighed while designing the methodology so that complementary analyses of optical imagery and sediment core data can minimize the identified uncertainties.

We propose that the integration of optical remote sensing and sediment core data will achieve a more holistic understanding of past and current environmental conditions than the examination of each dataset individually. However, these combined datasets are not a panacea capable of explaining all glaciological processes, nor can they be successfully integrated in all cases. Linking optical imagery and sediment core data may prove difficult when the paleoenvironmental record available from sediments is much lengthier (e.g. thousands of years) than the remote sensing record, or when both datasets do not have coverage of the same study area. Obtaining sediment cores remains logistically and financially challenging, particularly for lengthy records in hostile field environments. Moreover, limitations to the precision of analytical tools for sediment cores, such as X-ray fluorescence scanners, may restrict the temporal resolution of paleoenvironmental records such that events occurring over the remote sensing record cannot be observed in both datasets. Numerical dating methods also limit the precision with which imagery timeseries and sediment core records can be aligned. A conceptual challenge of this approach may be to correctly interpret remote sensing and sediment core data in unison, because the combined information they provide (e.g. multispectral indices, geochemical proxies) has multiple, context-dependent interpretations. The ease with which integration can be achieved will rely on correctly identifying sources of uncertainty, such as sensor viewing characteristics and calibration in optical imagery and dating uncertainty in sediment cores.

Further advances in remote sensing technology will alleviate some of these challenges. The continued development of imaging platforms will provide a much wider array of spectral bands with precise interpretations, improve data accessibility, and enhance temporal and spatial resolution to allow alignment of sediment core and remote sensing datasets. The application of hyperspectral imagery is especially promising as it can be used to identify lithological characteristics more precisely (Ting-ting and Fei, 2012). Increasing the precision of numerical dating and paleoenvironmental analyses used in sedimentological investigations (e.g. the resolution of X-ray fluorescence scanners, to resolve events occurring within the extent of imagery timeseries) will also enhance our ability to align multiple datasets. Numerical and conceptual models may prove valuable in bridging the gap between remote sensing and sediment core data, as each provides insight about different aspects of the same overall process (e.g. Dowdeswell and others, 2015; Giles, 2017).

## Conclusion

Optical remote sensing and sedimentological core analyses used to interpret paleoenvironmental conditions are complementary in nature.
When examined alone, sedimentary records derived from cores require careful interpretation and are temporally restricted in their resolution, which may make it difficult to connect past and contemporary processes. In contrast, remote sensing approaches used to determine environmental conditions and/or changes in contemporary glacial processes have limited temporal scope and often require additional ground-truthed data. Hence, given the rapid increase in the availability of open-access optical imagery, there is a timely opportunity to employ integrated methods to fill the knowledge gaps that exist when remotely sensed data or sedimentologic records are examined in isolation. The examples we provide, and the growing body of literature combining remote sensing with other traditional glaciological and sedimentological methods, suggest that such an approach is a feasible way to improve our understanding of glacial environments and their response to climate change. These datasets are best integrated in scenarios which: (i) investigate glacial processes that operated in the recent past (decades to centuries); (ii) employ novel paleoenvironmental sediment proxies, or those with conflicting interpretations, which benefit from validation with remotely sensed data; (iii) are constrained by the limited availability of sediment core or remote sensing data; (iv) examine processes which vary or operate across different temporal and spatial scales; and/or (v) have been studied locally and require contextualization to draw conclusions about processes operating regionally or globally. While recent research has begun to combine these approaches (e.g. Lopez-Moreno and others, 2017; Gage and others, 2022a, 2022b; Van Wyk De Vries and others, 2022), there remain many opportunities to integrate remotely sensed data and paleoenvironmental data obtained from sedimentary cores. Future work should seek to combine these data from the outset to enhance the breadth and quality of information available to investigate the record of current and past processes in glacial environments.

## Acknowledgements

This research was funded by a Global Science Initiative Grant from the Faculty of Science, McMaster University. We thank reviewers Harold Lovell and Max Van Wyk de Vries and Chief Editor Hester Jiskoot for their insightful comments on the manuscript.

## References

* Andresen CS and 10 others (2012) Rapid response of Helheim Glacier in Greenland to climate variability over the past century. _Nature Geoscience_ 5(1), 37-41. doi: 10.1038/ngeo1349
* Avery RS and others (2019)
* Croudace IW, Löwemark L, Tjallingii R and Zolitschka B (2019) Current perspectives on the capabilities of high resolution XRF core scanners. _Quaternary International_ 514, 5-15. doi: 10.1016/j.quaint.2019.04.002
* Croudace IW and Rothwell RG (eds) (2015) _Micro-XRF Studies of Sediment Cores: Applications of a Non-Destructive Tool for the Environmental Sciences_. Dordrecht, Netherlands: Springer. doi: 10.1007/978-94-017-9849-5
* Davies BJ and 7 others (2017) Ice-dammed lateral lake and epishelf lake insights into Holocene dynamics of Marguerite Trough Ice Stream and George VI Ice Shelf, Alexander Island, Antarctic Peninsula. _Quaternary Science Reviews_ 177, 189-219. doi: 10.1016/j.quascirev.2017.10.016
* Dowdeswell JA and 6 others (2015) Sediment-rich meltwater plumes and ice-proximal fans at the margins of modern and ancient tidewater glaciers: observations and modelling. _Sedimentology_ 62(6), 1665-1692. doi: 10.1111/sed.12198
* Ewertowski MW and Tomczyk AM (2020) Reconstruction of temporarily stabilized ice-cored moraines in front of polythermal glaciers: gravitational mass movements as the most important geomorphological agents for the redistribution of sediments (a case study from Ebbabreen and Ragnarbreen, Svalbard). _Geomorphology_ 350, 106952. doi: 10.1016/j.geomorph.2019.106952
* Fahnestock M and 5 others (2016) Rapid large-area mapping of ice flow using Landsat 8. _Remote Sensing of Environment_ 185, 84-94. doi: 10.1016/j.rse.2015.11.023
* Flink AE, Hill P, Noormets R and Kirchner N (2018) Holocene glacial evolution of Mohnbukta in eastern Spitsbergen. _Boreas_ 47(2), 390-409. doi: 10.1111/bor.12277
* Foga S, Stearns LA and van der Veen CJ (2014) Application of satellite remote sensing techniques to quantify terminus and ice mélange behavior at Helheim Glacier, East Greenland. _Marine Technology Society Journal_ 48(5), 81-91. doi: 10.4031/MTS.48.5.3
* Gage HJ, Narro Perez RA, Eyles C and Davila Roller L (2022a) Climate change-induced retreat drives water contamination from tropical glaciers. _AGU Fall Meeting Abstracts_ 2022, C45A-03
* Gage HJM, Narro Perez RA, Eyles CH and Davila Roller LR (2022b) Proglacial lake sediment records of contemporary climate change in Lake Shallap, Cordillera Blanca, Peru. _GSA Connects_ 2022
* Garg PK, Shukla A and Jasrotia AS (2017) Influence of topography on glacier changes in the central Himalaya, India. _Global and Planetary Change_ 159, 196-212. doi: 10.1016/j.gloplacha.2017.07.007
* Giles AB (2017) The Mertz Glacier Tongue, East Antarctica: changes in the past 100 years and its cyclic nature - past, present and future. _Remote Sensing of Environment_ 191, 30-37. doi: 10.1016/j.rse.2017.01.003
* Gribenski N and 12 others (2016) Complex patterns of glacier advances during the late glacial in the Chagan Uzun Valley, Russian Altai. _Quaternary Science Reviews_ 149, 288-305. doi: 10.1016/j.quascirev.2016.07.032
* Gu C, Li S, Liu M, Hu K and Wang P (2023) Monitoring glacier lake outburst flood (GLOF) of Lake Merzbacher using dense Chinese high-resolution satellite images. _Remote Sensing_ 15(7), 1941. doi: 10.3390/rs15071941
* Hall DK, Bindschadler RA, Foster JL, Chang ATC and Siddalingaiah H (1990) Comparison of in situ and satellite-derived reflectances of Forbindels Glacier, Greenland. _International Journal of Remote Sensing_ 11(3), 493-504. doi: 10.1080/01431169008955035
* Holobâcă I-H and 8 others (2021) Multi-sensor remote sensing to map glacier debris cover in the Greater Caucasus, Georgia. _Journal of Glaciology_ 67(264), 685-696. doi: 10.1017/jog.2021.47
* Jain SK and MH KA (2019) Glacier and glacial lake classification for change detection studies using satellite data: a case study from Baspa basin, western Himalaya. _Geocarto International_ 34(4), 391-414. doi: 10.1080/10106049.2017.1404145
* Jawak SD, Wahlhede SF, Luis AJ and Balakrishna K (2022) Effect of image-processing routines on geographic object-based image analysis for mapping glacier surface facies from Svalbard and the Himalayas. _Remote Sensing_ 14(17), 4403. doi: 10.3390/rs14174403
* Jennings SA, Hambrey MJ, Moorman BJ, Holt TO and Glasser NF (2022) Upscaling ground-based structural glaciological investigations via satellite remote sensing to larger-scale ice masses: Bylot Island, Canadian Arctic. _Earth Surface Processes and Landforms_ 47(8), 2130-2150. doi: 10.1002/esp.5367
* Jones EB (1983) _Snowpack Ground-Truth Manual_. NAS 1.26:170584. Washington, D.C., USA: NASA. Available at https://ntrs.nasa.gov/citations/19840003501
* Karušs J, Lamsters K, Ješkins J, Sobota I and Džeriņš P (2022) UAV and GPR data integration in glacier geometry reconstruction: a case study from Irenebreen, Svalbard. _Remote Sensing_ 14(3), 456. doi: 10.3390/rs14030456
* Kaushik S, Joshi PK and Singh T (2019) Development of glacier mapping in Indian Himalaya: a review of approaches. _International Journal of Remote Sensing_ 40(17), 6607-6634. doi: 10.1080/01431161.2019.158211
* Lisiecki LE, Jones AM, Rand D, Lee T and Lawrence CE (2022) Comparing age model techniques for the last glacial cycle: a case study of ten Iberian Margin sediment cores. _Quaternary Science Reviews_ 287, 107559. doi: 10.1016/j.quascirev.2022.107559
* Livingstone SJ, Evans DJA, Ó Cofaigh C and Hopkins J (2010) The Brampton kame belt and Pennine escarpment meltwater channel system (Cumbria, UK): morphology, sedimentology and formation. _Proceedings of the Geologists' Association_ 121(4), 423-443. doi: 10.1016/j.pgeola.2009.10.005
* Lopez-Moreno JI and 9 others (2017) Hydrological and depositional processes associated with recent glacier recession in Yanamarey catchment, Cordillera Blanca (Peru). _Science of The Total Environment_ 579, 272-282. doi: 10.1016/j.scitotenv.2016.11.107
* Lovell H and 5 others (2018) Multiple Late Holocene surges of a High-Arctic tidewater glacier system in Svalbard. _Quaternary Science Reviews_ 201, 162-185. doi: 10.1016/j.quascirev.2018.10.024
* Negi HS, Thakur NK, Ganju A and Snehmani (2012) Monitoring of Gangotri glacier using remote sensing and ground observations. _Journal of Earth System Science_ 121(4), 855-866. doi: 10.1007/s12040-012-0199-1
* Noormets R, Flink A and Kirchner N (2021) Glacial dynamics and deglaciation history of Hambergbukta reconstructed from submarine landforms and sediment cores, SE Spitsbergen, Svalbard. _Boreas_ 50(1), 29-50. doi: 10.1111/bor.12488
* Piret L and 6 others (2021) High-resolution fjord sediment record of a receding glacier with growing intermediate proglacial lake (Steffen Fjord, Chilean Patagonia). _Earth Surface Processes and Landforms_ 46(1), 239-251. doi: 10.1002/esp.5015
* Punkari M (1980) The ice lobes of the Scandinavian ice sheet during the deglaciation in Finland. _Boreas_ 9(4), 307-310. doi: 10.1111/j.1502-3885.1980.tb00710.x
* Racoviteanu AE and 5 others (2022) Debris-covered glacier systems and associated glacial lake outburst flood hazards: challenges and prospects. _Journal of the Geological Society_ 179(3), jgs2021-084. doi: 10.1144/jgs2021-084
* Rivera A, Acuña C, Casassa G and Bown F (2002) Use of remotely sensed and field data to estimate the contribution of Chilean glaciers to eustatic sea-level rise. _Annals of Glaciology_ 34, 367-372. doi: 10.3189/172756402781817734
* Sam L, Bhardwaj A, Singh S and Kumar R (2016) Remote sensing flow velocity of debris-covered glaciers using Landsat 8 data. _Progress in Physical Geography: Earth and Environment_ 40(2), 305-321. doi: 10.1177/030135539849
* Schiefer E and Gilbert R (2008) Proglacial sediment trapping in recently formed Silt Lake, upper Lillooet Valley, Coast Mountains, British Columbia. _Earth Surface Processes and Landforms_ 33(10), 1542-1556. doi: 10.1002/esp.1625
* Schiefer E, Menounos B and Slaymaker O (2007) Extreme sediment delivery events recorded in the contemporary sediment record of a montane lake, southern Coast Mountains, British Columbia. _Canadian Journal of Earth Sciences_ 43(12)
* Striberger J and others (2012) The sediments of Lake Lögurinn - a unique proxy record of Holocene glacial meltwater variability in eastern Iceland. _Quaternary Science Reviews_ 38, 76-86. doi: 10.1016/j.quascirev.2012.02.001
* Sugden D (1978) Glacial erosion by the Laurentide ice sheet. _Journal of Glaciology_ 20(83), 367-391. doi: 10.3189/S0022143000013915
* Takano Y, Kojima H, Takeda E, Yokoyama Y and Fukui M (2015) Biogeochemistry and limnology in Antarctic subglacial weathering: molecular evidence of the linkage between subglacial silica input and primary producers in a perennially ice-covered lake. _Progress in Earth and Planetary Science_ 2(1), 86. doi: 10.1186/s40645-015-0036-7
* Telling J, Glennie C, Fountain A and Finnegan D (2017) Analyzing glacier surface motion using LiDAR data. _Remote Sensing_ 9(3), 283. doi: 10.3390/rs9030283
* Ting-ting Z and Fei L (2012) Application of hyperspectral remote sensing in mineral identification and mapping. _Proceedings of 2012 2nd International Conference on Computer Science and Network Technology_, 103-106. doi: 10.1109/ICCSNT.2012.6525900
* Vale AB, Arnold NS, Rees WG and Lea JM (2021) Remote detection of surge-related glacier terminus change across High Mountain Asia. _Remote Sensing_ 13(7), 1309. doi: 10.3390/rs13071309
* Van Wyk De Vries M and 7 others (2022) Physical limnology and sediment dynamics of Lago Argentino, the world's largest ice-contact lake. _Journal of Geophysical Research: Earth Surface_ 127(3), 22. doi: 10.1029/2022JF00698
* Van Wyk de Vries M, Ito E, Shapley M, Romero M and Brigone G (2023) Investigating paleoclimate and current climatic controls at Lago Argentino using sediment pixel time series. _Journal of Paleolimnology_ 70(4), 311-330. doi: 10.1007/s10933-023-00296-7
* Wufu A and 7 others (2021) Changes in glacial meltwater runoff and its response to climate change in the Tianshan region detected using unmanned aerial vehicles (UAVs) and satellite remote sensing. _Water_ 13(13), 1753. doi: 10.3390/w13131753
* Zhao X and 5 others (2020) Spatiotemporal variability of glacier changes and their controlling factors in the Kanchenjunga region, Himalaya based on multi-source remote sensing data from 1975 to 2015. _Science of The Total Environment_ 745, 140995. doi: 10.1016/j.scitotenv.2020.140995
* Zhu T and 5 others (2020) Accumulation of pollutants in proglacial lake sediments: impacts of glacial meltwater and anthropogenic activities. _Environmental Science & Technology_ 54(13), 7901-7910. doi: 10.1021/acs.est.0c01849
* Zhu D, Tian L, Wang J, Wang Y and Cui J (2014) Rapid glacier retreat in the Naimona'Nyi region, western Himalayas, between 2003 and 2013. _Journal of Applied Remote Sensing_ 8(10), 083508. doi: 10.1117/
In this letter we make the case that closer integration of sediment core and passive optical remote sensing data would provide new insights into past and contemporary glacio-sedimentary processes. Sediment cores are frequently used to study past glacial processes and environments as they contain a lengthy geochemical and sedimentological record of changing conditions. In contrast, optical remote sensing imagery is used extensively to examine contemporary glacial processes, including meltwater dynamics, glacial retreat, calving, and ice accumulation. While paleoenvironmental data from sediment cores and optical remote sensing imagery are rarely used in tandem, they are complementary. Sediment core records are spatially discrete, providing long-term paleoenvironmental proxy data which require assumptions about environment-sediment linkages. Optical imagery offers precise, spatially extensive data to visualize contemporary processes but is often limited in its temporal extent. We suggest that methodologies which integrate optical remote sensing with sediment core data allow direct observation of processes inferred from sedimentological analysis and achieve a more holistic perspective on glacial processes. This integration addresses the limitations of both data sources and can achieve a stronger understanding of glacier dynamics by expanding the spatiotemporal extent of data, reducing the uncertainty of interpretations, and broadening local analyses to regional and global scales.
# A hybrid deep convolutional neural network for accurate land cover classification

Naftaly Wambugu, Yiping Chen, Zhenlong Xiao, Mingqiang Wei, Saifullah Aminu Bello, Jose Marcato Junior and Jonathan Li

Department of Geography & Environmental Management and Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada; Department of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China

## 1 Introduction

Earth observation imagery plays an essential role in developing accurate and timely thematic maps for land cover. It provides a precise understanding of anthropogenic processes on the Earth's surface that is consistent and spatially continuous across a range of spatial resolutions and time scales (Xiang et al., 2019). Thematic maps are mainly derived from the classification of RS images, which is effectively achieved through computer-aided analysis. Advances in satellite and aerial RS sensor technology over the years have produced increasingly massive, accessible, and affordable imagery with high spatial and temporal resolutions. In addition, improved image quality and quantity, coupled with massive, accessible, and affordable computation power through graphics processing units (GPUs) and parallel computing platforms, have led to superior computer algorithms. This, in turn, has inspired improvements in image analysis tasks such as scene understanding, object detection and segmentation, and pixel-level image classification (Xu et al., 2019).

Pixel-level image classification is a vital process in land cover mapping that assigns every pixel of an image to a predefined class label, where same-labeled pixels possess similar characteristics. VHRS image classification has various applications, such as mapping land use and land cover (LULC) (Weigand et al., 2020), vegetation classification (Flood et al., 2019), tracking watercourses and water bodies (Mishra et al., 2020; Pereira et al., 2019), and urban ecology monitoring and understanding (Alshehhi and Marpu, 2021; Li et al., 2018), among others. Correct and updated information about land cover is essential for classifying, planning, predicting, tracking, and formulating ways to use the Earth's resources better and in the greater interest of humanity (Huang et al., 2019; Ojha et al., 2019; Yin et al., 2014). Solving land cover classification can help to overcome many obstacles in urban planning, environmental engineering, and natural landscape monitoring, among other applications (Huang et al., 2018). Despite the great opportunities that land cover classification offers, classifying VHRS imagery poses a significant challenge due to the images' heterogeneity, multi-object class imbalance, and varied distribution. In addition, effective classification of VHRS imagery demands massive computation power and superior, robust algorithms with greater accuracy.
Traditional methods of collecting and classifying land cover data are human-dependent, less efficient, and demanding in time and cost (Sang et al., 2020). In this work, a novel cascaded residual dilated network (CRD-Net) is proposed to handle the intricate task of land cover classification using VHRS images. Our proposed framework is validated on the ISPRS Potsdam and Vaihingen land cover datasets, attaining competitive classification results on the two datasets without any post-processing strategy. The contributions of our work are outlined as follows:

1. We propose a hybrid network for land cover classification using VHRS images, which uses spatial attention blocks to improve feature learning by focusing on essential learnable features, coupled with an intermediary loss function that enhances the network training procedure.
2. The proposed cascaded residual module enlarges the receptive field, thus improving the network's multi-scale inference, and elevates contextual information without falling into the gridding problem.
3. Extensive experiments on the ISPRS 2D Potsdam semantic labeling dataset and the Vaihingen dataset show that our proposed CRD-Net outperforms previous methods for land cover classification.

Our work is presented as follows: Section 2 highlights the related work, indicating our novel contribution. Section 3 presents the study areas and dataset descriptions, and the methods are described in Section 4. Experiments and discussions are presented in Section 5. Finally, conclusions are highlighted in Section 6.

## 2 Related work

Recent image processing tasks have witnessed great advances. This advancement comes from larger image datasets and more available computation power through high-powered GPUs, which have continuously facilitated the training of more powerful DCNNs for image analysis tasks (Cheng et al., 2017; Guo et al., 2017). Image classification often relies on a larger receptive field's (RF) ability to obtain high-level features: larger receptive fields ought to capture long-range semantic features to classify large objects correctly. Additionally, the fine spatial details of low-level features are critical for optimal pixel-level classification.

Several works have exploited the semantic features extracted by DCNNs extensively. FCN (Sherrah, 2016) proposed a fully convolutional network without pooling layers for dense labeling of high-resolution aerial images, thus avoiding deconvolution or interpolation operations to overcome spatial information loss. SegNet (Badrinarayanan et al., 2017) utilized an encoder-decoder network for pixel-wise classification, using pooling indices in the encoder phase and up-sampling in the decoder, while PSPNet (Zhao et al., 2017) employs feature pyramid pooling to achieve context aggregation by enlarging the kernel size. In contrast, DeepLab (Chen et al., 2018) introduced atrous spatial pyramid pooling (ASPP), which exploits parallel dilated convolutions with varying dilation rates to probe multiscale image features. DeepLabv3+ (Chen et al., 2018) extends previous DeepLab versions by proposing depthwise separable convolutions in the ASPP and decoder modules, resulting in a more efficient and robust encoder-decoder network. Whereas these methods yield good results in various tasks, they may not adaptively capture all the valid features necessary for pixel-level classification of VHRS images (Liu et al., 2019).
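To illustrate why dilated convolutions are attractive here, the following snippet (a generic sketch, not CRD-Net's actual configuration) stacks 3×3 convolutions with dilation rates 1, 2 and 4: the combined receptive field grows to 15×15 while the feature map keeps its full spatial resolution, and mixing the rates, rather than repeating a single rate, helps avoid the gridding problem noted in the contributions above.

```python
import torch
import torch.nn as nn

# Three 3x3 convolutions with dilation rates 1, 2, 4. With padding equal to
# the dilation rate, each layer preserves the spatial size, so context is
# aggregated without pooling or strided downsampling.
cascade = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, dilation=1, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, dilation=2, padding=2), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, dilation=4, padding=4), nn.ReLU(),
)

x = torch.randn(1, 64, 128, 128)
print(cascade(x).shape)  # torch.Size([1, 64, 128, 128]) -- resolution kept
```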
In land cover classification tasks, RefineNet (Lin et al., 2017) used a multi-path refinement approach to explicitly exploit information from the downsampling phase and recover the original image resolution using long-range skip connections. This overcomes repeated downsampling operations, such as pooling or strided convolutions, which reduce the initial image's spatial resolution. GFRNet (Islam et al., 2017) proposed memory gates between the layers to handle multiscale contexts and optimize the selection criteria for pixels forwarded through the DCNN, while SCAttNet (Li et al., 2021) employed spatial and channel attention mechanisms to refine features adaptively using a lightweight network. Whereas gates can filter unnecessary features from passing through the network, they can slow down the network and may not handle complex VHRS data satisfactorily. In other works, generative adversarial networks (GANs) and conditional random fields (CRFs) were combined to refine classification maps for hyperspectral image classification (Zhong et al., 2020).

Dilated convolution (DC) (Yu and Koltun, 2016) has become a core approach in many multi-class segmentation tasks due to its power in multi-contextual feature aggregation without loss of spatial information. As a result, DC has been explored in various image analysis and classification tasks (Duarte et al., 2018; Hamaguchi et al., 2018; Zhou et al., 2018). Also, the notable success of attention mechanisms in natural language processing has greatly inspired their broader adoption for image analysis tasks. For example, Wang et al. (2017) obtained more discriminative image feature representations by stacking attention modules to form attention-aware features for image classification, while Zhao et al. (2018) employed a bi-directional information propagation path to aggregate long-range contextual information using a point-wise spatial attention mechanism, which helped fuse global and local information to better understand complex natural scenes.

Following the intuition of Liu et al. (2021), we propose a hybrid architecture that progressively learns more discriminative features while integrating complementary features at each network stage. Moreover, inspired by Lee et al. (2015), we employ an intermediary loss at the intermediate layers of the encoder sub-network to improve the training process and promote deeper supervision. The proposed architecture exploits the power of the attention mechanism for precise feature learning and extensive information flow between the nested dilated layers to fully exploit multi-scale contexts.

## 3 Study areas

In this work, the ISPRS 2D Semantic Labeling Contest - Potsdam and ISPRS Vaihingen datasets from the International Society for Photogrammetry and Remote Sensing (ISPRS) (Rottensteiner et al., 2012) are used. These standard benchmark datasets comprise aerial images over the urban area of Potsdam city and the Vaihingen region in Germany. The Vaihingen dataset covers a city with many detached buildings and smaller villages with several detached multi-story buildings, while the Potsdam dataset covers a historical city whose building blocks are vast and dense, with narrow streets. Each dataset contains the six most common labeled land cover classes. The six categories are defined as impervious surfaces, buildings, low vegetation, trees, cars, and clutter, with white, blue, cyan, green, yellow, and red color codes, respectively.
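For training, RGB-encoded ground-truth tiles must typically be converted to integer class masks. The sketch below is our illustration of that step; the RGB triplets follow the color-class correspondence stated above and the conventional ISPRS label encoding, but they should be checked against the dataset documentation.

```python
import numpy as np

# RGB color code -> class index, following the class/color pairing above.
ISPRS_CLASSES = {
    (255, 255, 255): 0,  # impervious surfaces (white)
    (0, 0, 255):     1,  # building            (blue)
    (0, 255, 255):   2,  # low vegetation      (cyan)
    (0, 255, 0):     3,  # tree                (green)
    (255, 255, 0):   4,  # car                 (yellow)
    (255, 0, 0):     5,  # clutter             (red)
}

def rgb_to_index(label_rgb):
    """Convert an (H, W, 3) RGB ground-truth tile to an (H, W) index mask."""
    mask = np.zeros(label_rgb.shape[:2], dtype=np.uint8)
    for rgb, idx in ISPRS_CLASSES.items():
        mask[np.all(label_rgb == rgb, axis=-1)] = idx
    return mask
```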
The clutter class includes some water bodies and other objects incoherent to the task. Fig. 1 shows the locations of the study areas of the two datasets.

Figure 1: Study areas of the Potsdam dataset and the Vaihingen dataset (Rottensteiner et al., 2012).

### Potsdam dataset

The Potsdam land cover dataset was developed to enhance automated delineation of urban objects from RS data. This dataset contains very high-resolution, heterogeneous objects, making the classification task quite challenging. The dataset focuses on elaborate 2D per-pixel labeling over multiple classes and seeks to support scientific methods and superior models working towards fully automated 2D object recognition and image classification. The Potsdam dataset contains 38 tiles of 6,000 px \(\times\) 6,000 px with 5 cm ground resolution. We used 14 of these images as test images and used only the RGB bands in our experiments.

### Vaihingen dataset

The ISPRS 2D Semantic Labeling Challenge provides the Vaihingen dataset for image classification and 2D labeling. The dataset contains 33 image tiles of 2,494 px \(\times\) 2,064 px with 9 cm ground resolution; 16 of the 33 tiles have been labeled. Only the near-infrared, red, and green (IRRG) bands were used in our experiments.

## 4 Proposed method

In this work, a hybrid network called CRD-Net is proposed, which comprises the following components: a) an encoder-decoder sub-network, with dual spatial attention blocks that guide the network to focus on essential features, to recover the spatial details lost through down-sampling operations; b) an intermediary loss function connected to the spatial attention blocks for improved feature learning; and c) the CRD module, to attain a better multi-scale contextual response and information flow between the layers. Each component of the proposed network is discussed in the following sections, and the pipeline of the CRD-Net architecture is presented in Fig. 2.

Figure 2: The proposed Cascaded Residual Dilated Network architecture.

### Encoder-decoder with a dual spatial attention mechanism

The encoder-decoder paradigm is common in image classification networks owing to its ability to probe image features and harness the required high-level discriminative information (Zhou et al., 2018). In addition, the decoder subnet's ability to recover spatial details lost through pooling or strided convolution operations in the encoder subnet makes it preferred for most semantic segmentation tasks (Chen et al., 2018). In general, the encoder subnet successively shrinks the feature maps and exploits high-level semantic details as the network gets deeper, while the decoder subnet recovers the crucial spatial details lost in the encoder segment of the network. Based on this idea, our architecture uses as backbone an encoder built from a pretrained ResNet-101 network with five residual blocks, marked ENC1 to ENC5, which works as an effective feature extractor. We use a pretrained network since utilizing pretrained parameters and weights can reduce the massive training data required to train deep networks from scratch and facilitate faster model convergence (Briedhle et al., 2021). We employ two spatial attention blocks, named Satt1 and Satt2, in our network architecture to generate spatial feature maps from the spatial relationships of the features, as shown in Fig. 2.
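A minimal sketch of a dot-product spatial attention block, matching the f(x) (key), g(x) (query), h(x) (value) formulation detailed below; the 1/8 channel reduction and the learnable residual scale gamma are common design choices assumed here, not confirmed CRD-Net details:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.f = nn.Conv2d(ch, ch // 8, 1)          # key
        self.g = nn.Conv2d(ch, ch // 8, 1)          # query
        self.h = nn.Conv2d(ch, ch, 1)               # value
        self.gamma = nn.Parameter(torch.zeros(1))   # residual scale

    def forward(self, x):
        b, c, hgt, wid = x.shape
        q = self.g(x).flatten(2).transpose(1, 2)    # B x N x C'
        k = self.f(x).flatten(2)                    # B x C' x N
        v = self.h(x).flatten(2)                    # B x C  x N
        # N x N weight map over all spatial positions (O(N^2) memory).
        attn = F.softmax(q @ k, dim=-1)
        out = (v @ attn.transpose(1, 2)).view(b, c, hgt, wid)
        return self.gamma * out + x                 # attended feature map
```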
Fig. 3 shows the structure of the spatial attention block, which aims at learning a weight map representing the relative importance of activations across the spatial dimensions. The convolutional feature maps from ENC2 and ENC5 are branched into three copies representing the key, value, and query, as illustrated in Fig. 3. The three copies are represented as f(x), h(x), and g(x), respectively, forming the attention block named Satt1. We then apply dot-product attention to generate the resultant attention feature maps. Equally, the convolutional feature maps from ENC4 and ENC5 are branched into three copies f(x), h(x), and g(x), representing the key, value, and query in the second spatial attention block, named Satt2. The two spatial attention blocks are connected to two intermediary losses, labeled InterL1 and InterL2, through outputs S1O1 and S2O1. The up-sampled output of Satt1 is concatenated with the output of Satt2 and then connected to the CRD module in a cascaded fashion. This helps the model focus selectively on discriminative features and ignore redundant, less important information (Vaswani et al., 2017). Besides, selectively weighting the channels of the feature maps can significantly improve feature learning in residual modules and has shown significant improvement in semantic segmentation tasks (Zhong et al., 2020). Our network extensively exploits residual connections in the encoder-decoder subnetwork and the CRD module, owing to the success of deep residual networks in image classification for both spectral and spatial data (Zhong et al., 2018).

Figure 3: Spatial attention block used in the CRD-Net.

### Deep supervision with intermediary loss

Loss functions quantify, through backpropagation, how far the classification prediction deviates from the ground truth. Most deep ConvNets apply the loss function at the output layer, from which the loss is propagated backward to earlier layers. However, single supervision at the output layer may not adequately learn complex features in the hidden layers, resulting in classification errors (Liu et al., 2020). To better evaluate the loss in earlier layers and supervise the network, a loss objective is introduced at the intermediate layers of the deep neural network to make the learning process of the hidden layers more transparent and direct (Lee et al., 2015). This follows the intuition that a discriminative classifier working as a proxy can learn highly discriminative features from hidden layers of the network and can better guide the weight updates of the middle layers. Besides, an intermediary loss function introduced at intermediate layers can significantly improve layer-level supervision in deep networks compared to relying only on backpropagation from the output layer (Muhammad et al., 2018). Following Zhao et al. (2017), two intermediary losses, InterL1 and InterL2, are introduced at the outputs of the spatial attention blocks Satt1 and Satt2 to effect direct supervision in the intermediate layers, as shown in Fig. 2. The learning process is thus decomposed: the two intermediary losses act on the intermediate layers, optimizing the network training process. We derive a multi-task loss function \(L_{\text{Total}}\) by combining the weighted losses as defined in Equation (1):

\[L_{\text{Total}}=\sum_{i=1}^{n}\lambda_{i}L_{i} \tag{1}\]

where \(L_{i}\) is the loss and \(\lambda_{i}\) is the weight associated with task \(i\).
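As a concrete reading of Eq. (1), a sketch with three tasks: the output-layer loss and the two attention-branch losses weighted as in Eq. (2) below. Fixed illustrative weights stand in for the dynamic weight adjustment the paper adopts, and each branch's logits are assumed already upsampled to the label resolution:

```python
import torch.nn as nn

ce = nn.CrossEntropyLoss()  # the paper uses weighted CE with MFB, Eq. (3)

def total_loss(main_logits, satt1_logits, satt2_logits, target,
               alpha=1.0, beta=0.4, gamma=0.4):
    main = ce(main_logits, target)      # MainLoss: output layer
    inter1 = ce(satt1_logits, target)   # InterL1: Satt1 branch
    inter2 = ce(satt2_logits, target)   # InterL2: Satt2 branch
    return alpha * main + beta * inter1 + gamma * inter2
```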
Backpropagation seeks convergence at the lowest loss value; different loss weights can be assigned to different tasks, with each task having a significant influence on the network training (Chennupati et al., 2019). Besides, adaptive tuning of the task weights can help optimize the learning process. To achieve this, we employ dynamic weight adjustment (Guo et al., 2018) to update the weights \(\alpha\), \(\beta\), and \(\gamma\). The total loss in our network is defined by

\[Total\ Loss=(\alpha\times\text{MainLoss})+(\beta\times\text{InterL1})+(\gamma\times\text{InterL2}) \tag{2}\]

where \(\alpha\), \(\beta\), and \(\gamma\) are the respective weights, and MainLoss, InterL1, and InterL2 are the loss values of the output layer, the Satt1 spatial block, and the Satt2 spatial block, respectively. By feeding the output of the attention mechanism to the intermediate loss functions, the proposed network can learn features more precisely, guided by the intermediary loss that enhances deeper supervision at the intermediate layers of the network. We scale the weights \(\alpha\), \(\beta\), and \(\gamma\) of MainLoss and the intermediary losses InterL1 and InterL2, respectively, to evaluate their influence on the network's training process, as highlighted in the ablation studies section. The three losses cumulatively contribute to the final prediction. The weighted cross-entropy (WCE) loss (Martinez and Stiefelhagen, 2018) sums the pixel losses in a given mini-batch. In datasets with a high variation of pixels per class in the training set, class balancing is applied, where the loss is weighted differently based on the true class: weights for classes with fewer pixels are elevated, while weights for classes with more pixels are diminished. We use the weighted cross-entropy loss with median frequency balancing (MFB) (Eigen and Fergus, 2015). The WCE loss is defined by

\[Loss=-\frac{1}{N}\sum_{i=1}^{N}W_{i}\,\hat{p}_{i}\,\log\left(\frac{e^{p_{i}}}{\sum_{j=1}^{N}e^{p_{j}}}\right) \tag{3}\]

where \(N\) is the number of classes, \(W_{i}\) denotes the weight of class \(i\), \(p_{i}\) represents the prediction, and \(\hat{p}_{i}\) is the ground-truth distribution of class \(i\).

### Cascaded residual dilated module

We propose the CRD module to progressively enlarge the RF in a scalable manner, helping our network capture multi-scale contextual information and enhance elaborate information flow without increasing the network complexity or falling into the gridding problem. Dilated convolution helps aggregate multiscale contextual details without sacrificing image resolution. The CRD module consists of dilated convolutional layers with no pooling or subsampling, and it gradually expands the RF without loss of spatial resolution or coverage. A dilated kernel with rate \(r>1\) enlarges the RF without raising the number of parameters or the computation requirements; different rates can be set to adjust the receptive field range. The receptive field of a standard dilated convolution is obtained as defined in Equation (4):

\[F=(r-1)(k-1)+k \tag{4}\]

where \(r\) denotes the dilation rate, \(k\) represents the kernel size, and \(F\) represents the receptive field.
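Eq. (4) can be applied layer by layer to check how the receptive field grows when 3x3 dilated layers are cascaded; a quick sketch (the rates (1, 2, 4, 8) are placeholders, since \(r_{1}\) to \(r_{4}\) are not fixed in the text):

```python
def layer_rf(k, r):
    return (r - 1) * (k - 1) + k          # Eq. (4)

def cascaded_rf(k, rates):
    rf = 1
    for r in rates:
        rf += layer_rf(k, r) - 1          # stacking adds (F_i - 1) per layer
    return rf

print([layer_rf(3, r) for r in (1, 2, 4, 8)])   # [3, 5, 9, 17]
print(cascaded_rf(3, (1, 2, 4, 8)))             # 31
```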
For a standard convolution operation with a \(k\times k\) kernel, the stride \(S\) can take the following cases: \(S>1\) implies a down-sampling operation; \(S=1\) maintains the resolution of the feature map (given adequate padding); and \(0<S<1\) implies up-sampling, which increases the feature map size. Enlarging a \(k\times k\) kernel to a kernel of \(k+(k-1)(r-1)\), with \(r\) representing the dilation rate, allows flexible aggregation of multiscale contextual details from the receptive field while maintaining the same resolution. Although dilation presents a great benefit in expanding the receptive field, it can generate holes called gridding artifacts (Chen et al., 2018; Yu and Koltun, 2016), where neighboring output units are computed from separate input sets, resulting in different actual RFs. This implies that some kernel responses do not act on some regions of the receptive field, causing variability in the kernel responses. To remedy the gridding problem, the proposed CRD module progressively concatenates the residual connections with the resultant and previous feature maps of the cascaded dilated layers. The resulting improved information flow between the dilated convolutional layers ensures that all kernel responses are obtained from the full receptive field, thus overcoming the gridding problem. Moreover, since classification of VHRS images requires a descriptor with sufficient short-, medium-, and long-range semantics, the residual connections enhance information flow and boost significant features between the dilated layers, following the intuition of Wang et al. (2019). The proposed CRD module is illustrated in Fig. 4. Each layer receives the feature map from the two concatenated spatial attention blocks as input and performs a dilated convolution operation with rates \(r_{1}\), \(r_{2}\), \(r_{3}\), and \(r_{4}\). Through residual connections, the resultant and previous feature maps of every dilated layer are combined. By gradually increasing the dilation rate across the cascaded layers, the network robustly achieves an effective full receptive field: lower dilation rates capture fine details and the spatial dimensions of small objects, while larger dilation rates capture the features of larger objects, resulting in a robust feature descriptor. The hierarchical fusion of all layers, from smaller to larger dilation rates, lets the pixels of each dilated convolution layer participate in probing the multiscale features before concatenation. This ensures information sharing between the layers, thus overcoming the gridding problem. The CRD module is designed to enlarge the receptive field with fewer parameters based on dilated convolution. The final feature map is generated from all features after aggregating the receptive fields of each layer, effectively capturing the multi-scale contextual information. All the feature vectors from the different dilated layers are concatenated in the global pooling layer before being input to the decoder unit.

Figure 4: Illustration of a 3 \(\times\) 3 CRD module with dilation rates \(r_{1}\), \(r_{2}\), \(r_{3}\), and \(r_{4}\) and the resultant nested kernel.
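One reading of the description above, as a sketch: each dilated layer consumes the concatenation of the module input and all previous layer outputs, so kernel responses cover the full receptive field. Channel sizes and rates are placeholders, not confirmed CRD-Net hyperparameters:

```python
import torch
import torch.nn as nn

class CRDModule(nn.Module):
    def __init__(self, ch, rates=(1, 2, 4, 8)):
        super().__init__()
        self.layers = nn.ModuleList()
        for i, r in enumerate(rates):
            # Layer i sees the input plus all previous layer outputs.
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch * (i + 1), ch, 3, padding=r, dilation=r),
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True),
            ))
        # Hierarchical fusion of the input and every dilated layer output.
        self.fuse = nn.Conv2d(ch * (len(rates) + 1), ch, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))  # residual reuse
        return self.fuse(torch.cat(feats, dim=1))
```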
### Quantitative assessment measures

We use precision (PRE), recall (REC), intersection over union (IoU), pixel accuracy (PA), and the mean F1-score (mF1) to evaluate the performance of our proposed method:

\[PRE=\frac{TP}{TP+FP} \tag{5}\]

\[REC=\frac{TP}{TP+FN} \tag{6}\]

### 5.4 Ablation studies

We present the classification results of our proposed architecture under different network configurations in Fig. 7.

#### 5.4.1 Evaluating the impact of the spatial attention module

To better understand the impact of the spatial attention mechanism on the CRD-Net architecture, we carried out ablation experiments without the spatial attention block. The results show that the OA drops by 0.27% after removing the spatial attention block from the network. Besides, the influence of spatial attention in focusing on specific regions is validated by the visualized classification results, where cars in the marked regions are correctly resolved as detached objects in the CRD-Net results but appear visibly connected in the results of the model without spatial attention. This demonstrates the ability of spatial attention to delineate regions, especially in high-resolution images in their natural setting, where objects of dissimilar classes may share similar features or where intra-class variation is present.

#### 5.4.2 Evaluating the impact of deep supervision and intermediary loss

The effect of training without the intermediate loss is investigated. We notice that the intermediary loss influences network performance: training the network with the weighted intermediate losses improves the classification results by 0.16%. The intermediate loss functions introduced at the middle layers help guide the training process, resulting in improved accuracy. Using relative weights to compute the total loss improved network performance, since the main loss contributes more to the final prediction; when the network is trained with equal weights for all losses, the accuracy drops by 0.11%. We observe that integrating deep supervision at the intermediary layers can improve gradient flow, reduce the vanishing gradient problem, and improve network convergence.

#### 5.4.3 Evaluating the impact of the CRD module

Ablation experiments without the CRD module show a drop in OA of 0.43%, while incorporating the CRD module improves the OA in all classes. In addition, the CRD module demonstrably improves the visual classification results by extracting pixel-level information effectively. By harnessing the capability of dilated convolution to enlarge the RF and gather rich multi-contextual information, the proposed network can delineate small objects precisely and capture large objects.

Figure 5: Classification results for test images of Potsdam RGB data tiles 4.15, 2.12, and 6.15, respectively. (a) input image tile, (b) ground truth, and (c) prediction.

#### 5.4.4 Combining the different components to form a hybrid network

Our proposed framework comprises components that collectively form a hybrid network to handle the intricate task of land cover mapping.
Since RS data contains many objects of different sizes, whose features and settings are critical for land cover classification, classifying RS images becomes more challenging when some key objects are invisible or suppressed due to their size, shadow, or occlusion by surrounding objects, or where the background suppresses the objects of interest. Besides, most RS data contains superfluous objects, which can affect the accurate classification of land cover classes. Since spatial information is indispensable for correct pixel-level classification, the proposed hybrid exploits the power of the encoder-decoder to recover the spatial details lost to down-sampling operations. Furthermore, the attention mechanism helps the network focus on key discriminative regions in the images by according such areas higher weights while suppressing redundant and less important regions such as backgrounds. Additionally, by utilizing intermediary loss functions, the model improves the learning process: the intermediate losses guide backpropagation by quantifying how badly the network performs at the intermediary layers, as opposed to using a single loss function at the end of the network. Finally, since rich multi-scale contextual representation plays an essential role in the correct classification of objects of varied sizes, as in VHRS images, we employ cascaded dilated convolutions to enlarge the RF and obtain multi-contextual details without loss of spatial information. The results in Table 5 show that combining different components for the complex task of land cover classification can improve classification results. The hybrid network can progressively learn more discriminative features at each stage as complementary features are integrated, thus harnessing the benefits of each component. However, model accuracy is greatly affected by the shadows cast by elevated objects such as buildings, trees, and vegetation, which cause great difficulty in VHR image classification tasks. The classification accuracy is compromised if object shadows are not detected and delineated during the classification process. Shadow detection, alignment, and correction for land cover classification using VHRS aerial imagery remains an area of interest requiring further attention to mitigate shadow-prone errors.

Table 3: Comparison of OA and per-class mF1 (in %) between CRD-Net and other published methods on the ISPRS Potsdam land cover dataset.

| Models | OA | Road Surface | Buildings | LowVeg | Trees | Cars | mF1 |
|---|---|---|---|---|---|---|---|
| SegNet (Badrinarayanan et al., 2017) | 82.9 | 86.2 | 88.3 | 77.2 | 80.0 | 54.2 | 77.0 |
| DeepLabV3+ (Chen et al., 2018) | 85.2 | 88.5 | 90.0 | 80.2 | 80.1 | 68.9 | 81.5 |
| RefineNet (Lin et al., 2017) | 83.4 | 87.3 | 86.9 | 78.3 | 79.6 | 75.9 | 81.6 |
| SCAttNet V2 (Li et al., 2021) | 85.5 | 91.2 | 90.3 | 80.0 | 80.3 | 70.5 | 82.1 |
| DST_2 (Sherrah, 2016) | 90.3 | 92.5 | 96.4 | 86.7 | 88.0 | 94.7 | 91.8 |
| **Ours** | **90.7** | **92.9** | **96.7** | **87.4** | **88.6** | **94.8** | **92.1** |

Figure 6: Classification results for test images of Vaihingen IRRG image tiles 2, 27, and 38, respectively. (a) input image tile, (b) ground truth, and (c) prediction.
Although the proposed hybrid network attained competitive classification results, obtaining optimal results by blending several components requires further attention on how best to integrate the sub-components.

## 6 Conclusion

In this work, a hybrid network named CRD-Net is presented to tackle the challenging task of land cover classification with VHRS images. The proposed architecture harnesses short-range, mid-range, and long-range semantic information at different stages of the network while preserving the spatial details, generating a robust feature descriptor. The attention mechanism with intermediary losses in the encoder subnet assists in refined feature learning and deep supervision of the intermediary layers. Moreover, the network harnesses rich global multi-contextual information using the CRD module without falling into the gridding problem caused by dilation. Future experiments are necessary to validate the proposed framework for land cover mapping on other RS datasets and related tasks.

Table 4: Comparison of OA and per-class mF1 (in %) between CRD-Net and other published methods on the ISPRS Vaihingen land cover dataset.

| Models | OA | Road Surface | Buildings | LowVeg | Trees | Cars | mF1 |
|---|---|---|---|---|---|---|---|
| SegNet (Badrinarayanan et al., 2017) | 80.3 | 81.1 | 86.4 | 78.0 | 73.9 | 85.7 | 81.0 |
| DeepLabV3+ (Chen et al., 2018) | 86.8 | 89.3 | 92.8 | 83.4 | 78.4 | 88.2 | 86.4 |
| RefineNet (Lin et al., 2017) | 84.4 | 87.6 | 88.5 | 81.9 | 79.1 | 87.9 | 85.0 |
| G-FRNet (Islam et al., 2017) | 86.8 | 89.2 | 92.7 | 82.8 | 79.0 | 86.3 | 86.0 |
| DCM (Liu et al., 2021) | 90.4 | 92.7 | 95.3 | 83.3 | 89.4 | 88.3 | 89.7 |
| **Ours** | **90.5** | **92.7** | **95.4** | **83.4** | **89.6** | **88.7** | **90.0** |

Table 5: Comparison of OA and per-class mF1 (in %) of CRD-Net under different configurations on the ISPRS Vaihingen land cover dataset.

| Method | OA | Road Surface | Buildings | LowVeg | Trees | Cars | mF1 |
|---|---|---|---|---|---|---|---|
| CRD-Net | 90.5 | 92.7 | 95.4 | 83.4 | 89.6 | 88.7 | 90.0 |
| Without intermediary loss | 90.4 | 92.8 | 95.0 | 83.3 | 89.7 | 88.2 | 89.8 |
| Without spatial attention | 90.2 | 92.4 | 95.0 | 83.3 | 89.4 | 86.8 | 89.3 |
| Without CRD | 90.1 | 92.3 | 94.9 | 82.7 | 89.3 | 88.3 | 89.7 |
| CRD-Net with ResNet-50 | 90.2 | 92.4 | 94.8 | 83.2 | 89.6 | 87.5 | 89.5 |

Figure 7: Classification results for test images of Vaihingen IRRG image tiles 4 and 25, respectively, using different network configurations.

## Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

## Acknowledgement

The authors express profound gratitude to ISPRS for providing the benchmark datasets used in this research work.

## Funding

This work was supported by the National Natural Science Foundation of China under grant 41871380 and the Natural Sciences and Engineering Research Council of Canada under grant 50503-10284. The first author acknowledges the China Scholarship Council (CSC) for supporting his Ph.D.
study at the School of Informatics, Xiamen University. Finally, the authors acknowledge CAPES PrInt (grant number 88881.311850/2018-01). ## References * Alshehhi and Marup (2021) Alshehhi, R., Marup, P.R., 2021. Extraction of urban multi-class from high-resolution images using pyramid generative adversarial networks. Int. J. Appl. Earth Obs. Geolr. 102, 102379 [https://doi.org/10.1016/j.jsp.2021.10237y](https://doi.org/10.1016/j.jsp.2021.10237y) * Badrinarayanan et al. (2017) Badrinarayanan, V., Kendall, A., Cipolla, R., 2017. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39, 2481245. [https://doi.org/10.1109/TML.2016.246415](https://doi.org/10.1109/TML.2016.246415) * A dual-CNN approach for combined classification of tree species and standing dead trees from remote sensing data. Int. J. Appl. Earth Obs. Geolr. 98, 102292 [https://doi.org/10.1016/j.2021.02292](https://doi.org/10.1016/j.2021.02292) * Dunkey et al. (2020) Dunkey, A., Ijolovkov, V.I., Ryvedchenko, E., Parinov, A., Drukhinhin, M., Kalinin, A.A., 2020. Alluminations: Fast and Flexible Image Augments. Inf. 11, 125. * Chen et al. (2018) Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, AL., 2018. DeepLink: Semantic Image Segmentation with Deep Convolutional Nets, Arous Convolution, and Fully Connected Tree, IEEE Trans. Pattern Anal. Mach. Intell. 40 (4), 834-848. [https://doi.org/10.1109/TML.2017.2099184](https://doi.org/10.1109/TML.2017.2099184) * Chen et al. (2018) Chen, L.-C., Zhu, V., Papandreou, G., Schroff, F., Adam, H., 2018. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proc. of RCVE, [https://arxiv.org/abs/1802.02611](https://arxiv.org/abs/1802.02611) * Cheng et al. (2017) Cheng, G., Han, J., Lu, X., 2017. Remote Sensing Image Scene Classification: Benchmark and State of the Art. In Proc. of IEEE 105 (10), 1865-1883. [https://doi.org/10.1109/JPROC.2017.2675998](https://doi.org/10.1109/JPROC.2017.2675998) * Chenmuti et al. (2019) Chenmuti, S., Sita, G., Vossegann, S., Ivanakhshov, S., 2019. AuxNet: Auxiliary Tasks Enhanced Semantic Segmentation for Automated Driving. International Conference on Computer Vision Theory and Applications 645-652. [https://doi.org/10.5220/0007841065052](https://doi.org/10.5220/0007841065052) * Duarte et al. (2018) Duarte, D., Nes, F., Ferrie, N., Vossegann, G., 2018. Satellite Image Classification Of Building Dangangs Using Airborne And Satellite Image Samples in A Deep Learning Approach. In ISPRS and Statistics of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2. ISPRS, pp. 89-96. [https://doi.org/10.5194/isprasamals-17-29-2018](https://doi.org/10.5194/isprasamals-17-29-2018) * Eigen et al. (2019) Eigen, D., Fergus, R., Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-scale Convolutional Architecture. [https://doi.org/10.1109/ICCV.2019.3504](https://doi.org/10.1109/ICCV.2019.3504) * Flood et al. (2019) Flood, N., Watson, F., Collet, L., 2019. Using a U-net convolutional neural network to map woody vegetation extent from high resolution satellite imagery across Queensland, Australia, Int. J. Appl. Earth Obs. Geolr. 82, 108197 [https://doi.org/10.1016/j.jgg.2019.101897](https://doi.org/10.1016/j.jgg.2019.101897) * Guo et al. (2018) Guo, M., Huque, A., Huang, A., Yeung, S., Li, F.F., 2018. Dynamic Task Prioritization for Multitask Learning. In Proc. of ECCV. Springer, pp. 282-299. 
[https://doi.org/10.1007/j.778-303-0220-07/17](https://doi.org/10.1007/j.778-303-0220-07/17) * Guo et al. (2017) Guo, Y., Liu, Y., Georgiou, T., Lev, M.S., 2017. A review of semantic segmentation using deep neural networks. Int. J. Multimedia Inf. Retrieval 7, 87-93. [https://doi.org/10.1007/j.7735-017-014-x](https://doi.org/10.1007/j.7735-017-014-x) * Himabadi et al. (2018) Himabadi, R., Fujita, A., Nemiu, K., Zhaimiemi, T., Hikosaka, S., 2018. Effective Use of Dilated Convolutional for Segmenting Small Object Instances in Remote Sensing Imagery. In Proc. of WACV 1442-1450. [https://doi.org/10.1109/WACV.2018.00162](https://doi.org/10.1109/WACV.2018.00162) * Huang et al. (2018) Huang, B., Zhao, B., Song, Y., 2018. Urban band-use mapping using a deep convolutional neural network with high spatial resolution multispectral remote sensing imagery. Remote Sens. Environ. 214, 73-86. [https://doi.org/10.1016/j.jmev.2018.04.050](https://doi.org/10.1016/j.jmev.2018.04.050) * Huang et al. (2019) Huang, Z., Dumitru, C.O., Pan, Z., Lei, B., Datcu, M., 2019. Can a Deep Network Understand the Landover Across Somers?. In: Proc. of GRANS, pp. 8947-9850. [https://doi.org/10.1109/GRANS.2019.8899000](https://doi.org/10.1109/GRANS.2019.8899000) * Islam et al. (2017) Islam, M., Rochan, M., Bruce, N., Wang, Y., 2017. Gated Feedback Refinement Network for Dense Image Labeling. In Proc. of CVPR 4877-4885. [https://doi.org/10.1109/CVPR.2017.518](https://doi.org/10.1109/CVPR.2017.518) * Li et al. (2020) Li, B., Su, W., Wu, H., Li, R., Zhang, W., Qin, W., Zhang, S., Wei, J., 2020. Further Exploring Convolutional Neural Networks? Potential for Land-Use Scene Scene. IEEE Geolr. Remote Sens. Lett. 17, 1687-1691. [https://doi.org/10.1109/GRANS.2020.2952660](https://doi.org/10.1109/GRANS.2020.2952660) * Kingma and Ba (2014) Kingma, D., Ba, J., 2014. Adam: A Method for Stochastic Optimization. International Conference on Learning Representations. In Proc. of ICLR. [https://arxiv.org/abs/1412.6980](https://arxiv.org/abs/1412.6980) * Lee et al. (2015) Lee, C.-Y., Xie, S., Gallagher, P.W., Zhang, Z., Tu, Z., 2015. Deeply-Supervised Nets. In: In Proc. of PMLR, pp. 562-570. [http://proceedings.mlr.press/v58/v58.net15a.pdf](http://proceedings.mlr.press/v58/v58.net15a.pdf) * Li et al. (2021) Li, H., Qiu, W., Chen, L., Mei, X., Hong, L., Tao, C., 2021. ICMNet: Semantic Segmentation Network With Spatial and Channel Attention Mechanism for High-Resolution Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 905-909. [https://doi.org/10.1109/GRANS.2020.2988924](https://doi.org/10.1109/GRANS.2020.2988924) * Li et al. (2018) Li, Z., Zhou, C., Yang, X., Chen, X., Meng, F., Lu, C., Pan, T., Qi, W., 2018. Urban landscape extraction and analysis of the meg meg meg-depth of China's coastal regions using high-resolution satellite imagery: A case of Shanghai, China. Int. J. Appl. Earth Obs. Geolr. 124, 10-150. [https://doi.org/10.1109/GRANS.2018.03.002](https://doi.org/10.1109/GRANS.2018.03.002) * Lin et al. (2017) Lin, G., Milan, A., Shen, C., Feid, L., 2017. RefineNet: Multi-path Refinement Networks for High-Resolution Semantic Segmentation. In Proc. of CVPR, 1925-1934. [https://doi.org/10.1109/CVPR.2017.549](https://doi.org/10.1109/CVPR.2017.549) * Liu et al. (2020) Liu, O., Fu, M., Jiang, H., Gong, X., 2020. Densely Dilated Spatial Pooling Convolutional Network using benign loss functions for imbalanced volumetric prostate segmentation DRPs network with benign loss for prostate segmentation. Current Biometri. 15, 17589-1769. 
[https://doi.org/10.1217/15/893515660200127145](https://doi.org/10.1217/15/893515660200127145) * Liu et al. (2021) Liu, Q., Kampfferer, M., Jensen, R., Salberg, A.-B. Dense Dilated Convolutions Merging Network for Semantic Mapping of Remote Sensing Images. [https://doi.org/10.1109/GRANS.2021.88990046](https://doi.org/10.1109/GRANS.2021.88990046) * Liu et al. (2021) Liu, X., Chen, Y., Wei, W., Wang, C., Gonelves, W., Junior, J., Li, J., 2021. Building Instance Extraction Method Based on Improved Hybrid Task Cascade. IEEE Geol. Remote Sens. Lett. 19, 1-5. [https://doi.org/10.1109/GRANS.2021.3069004](https://doi.org/10.1109/GRANS.2021.3069004) * Martinez and Stieilhagen (2018) Martinez, M., Stieilhagen, B., 2018. Taming the Cross Entropy Loss. In Proc. of OCR. [https://doi.org/10.1007/grns.303-1029-243](https://doi.org/10.1007/grns.303-1029-243) * Mishra et al. (2020) Mishra, V., Liang, A., Shen, H., Erentigen, E.A., Markert, K.N., 2020. Evaluating the performance of high-resolution satellite imagery in detecting ephement water bodies over west Africa. Int. J. Appl. Earth Obs. Geolr. 93, 102218 [https://doi.org/10.1016/j.jgg.2020.102218](https://doi.org/10.1016/j.jgg.2020.102218) * Muhammad et al. (2018) Muhammad, U., Wang, W., Maidi, A., 2018. Feature Fusion With Deep Supervision for Remote-Sensitive Image Case Classification. In Proc. of ICTAI, pp. 249-253. [https://doi.org/10.1109/ICTAI.2018.00046](https://doi.org/10.1109/ICTAI.2018.00046) * Ohja et al. (2019) Ohja, S.K., Chala, K., Vemwani, M., Kavaghi, N.S.V., Kumar, B.L.W., 2019. Land Use Prediction on Satellite Images using Deep Neural Nets. In: In Proc. of ICCS, pp. 999-1003. [https://doi.org/10.1109/ICAI.2019.9065698](https://doi.org/10.1109/ICAI.2019.9065698) * Pereira et al. (2019) Pereira, P.A.J., Costa, C.A., Poerster, S., Bronskiy, A., de Araajo, J.C., 2019. Estimation of suspended sediment concentration in an intermittent river using multi-temporal high-resolution satellite imagery. In: J. Appl. Earth Obs. Geolr. 79, 153-161. [https://doi.org/10.1109/ICAI.1929.02.09](https://doi.org/10.1109/ICAI.1929.02.09) * Reddi et al. (2018) Reddi, SFusion Contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 12, 1709-1724. [https://doi.org/10.1109/S7TA8.2019.29111313](https://doi.org/10.1109/S7TA8.2019.29111313). * Yin et al. (2014) Yin, H., Hingmacher, D., Kennedy, R.E., Sulla-Maehne, D., Hostert, P., 2014, Mapping Annual Land Use and Land Cover Changes Using MODIS Time Series. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 7, 3421-3427. [https://doi.org/10.1109/JSTARS.2014.248411](https://doi.org/10.1109/JSTARS.2014.248411). * Yi et al. (2016) Yi, F., Koltun, V., 2016, Multi-Scale Context Aggregation by Dilated Convolutions. In Proc. of 1st. Conf. on Learning Representations, ICLR. * Zhao et al. (2017) Zhao, H., Shi, J., Qi, X., Wang, X., Jia, J., 2017, Pyramid Scene Parsing Network. In Proc. of CVPR 2016-2023. [https://doi.org/10.1109/CVPR.2017.660](https://doi.org/10.1109/CVPR.2017.660). * Zhao et al. (2018) Zhao, H., Zhang, Y., Liu, S., Shi, J., Joy, C., Lin, D., Jia, J., 2018, PSANet: Point-wise Spatial Attention Network for Scene Parsing. In Proc. of ECCV 270-286. [https://doi.org/10.1007/978-3-030-012403.17](https://doi.org/10.1007/978-3-030-012403.17). * Zhong et al. (2018) Zhong, Z., Lin, Z.Q., Bidart, R., Hu, X., Daya, I.B., Li, Z., Zheng, W.S., Li, J., Wong, A., 2018, Observation-Attention Networks for Semantic Segmentation. [https://doi.org/10.1109/CVPR.2018.0020.01308](https://doi.org/10.1109/CVPR.2018.0020.01308). 
* Zhong et al. (2018) Zhong, Z., Li, J., Luo, Z., Chapman, M., 2018. Spectral-Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 56 (2), 847-858. [https://doi.org/10.1109/TGRS.2017.2755542](https://doi.org/10.1109/TGRS.2017.2755542). * Zhou et al. (2018) Zhou, L., Zhang, C., Wu, M., 2018. D-LinkNet: LinkNet with Pretrained Encoder and Dilated Convolution for High Resolution Satellite Imagery Road Extraction. In Proc. of CVPRW. [https://doi.org/10.1109/CVPRW.2018.00034](https://doi.org/10.1109/CVPRW.2018.00034).
Land cover classification provides updated information regarding the Earth's resources, which is vital for agricultural investigation, urban management, and disaster monitoring. Recent advances in sensor technology on satellite and aerial remote sensing (RS) devices have improved the spatial, spectral, radiometric, and temporal resolutions of images over time. These improvements offer invaluable opportunities for understanding land cover information. However, land cover classification from RS images is an intricate task because of high intra-class variation, low inter-class disparity, and varied image types. We propose a cascaded residual dilated network (CRD-Net) for land cover classification using very high spatial resolution (VHRS) images to address these challenges. The proposed hybrid network follows the encoder-decoder concept, with a spatial attention block to guide the network towards learnable discriminative features, coupled with an intermediary loss to enhance the training process. Moreover, a cascaded residual dilated module increases the network's receptive field to further enrich multi-contextual features, thus boosting the resultant feature descriptor. Extensive experimental results demonstrate that the proposed CRD-Net outperforms state-of-the-art methods, achieving an overall accuracy (OA) of 90.73% and 90.51% on the ISPRS Potsdam and ISPRS Vaihingen land cover datasets, respectively.
# Studying Satellite Image Quality Based on the Fusion Techniques

Firouz Abdullah Al-Wassai, Research Student, Computer Science Dept. (SRTMU), Nanded, India - [email protected]

N.V. Kalyankar, Principal, Yeshwant Mahavidyala College, Nanded, India - [email protected]

Ali A. Al-Zaky, Assistant Professor, Dept. of Physics, College of Science, Mustansiriyah Un., Baghdad, Iraq - [email protected]

## I Introduction

Image fusion is a process which creates a new image representing combined information composed from two or more source images. Generally, one aims to preserve as much source information as possible in the fused image, with the expectation that performance with the fused image will be better than, or at least as good as, performance with the source images [1]. Image fusion is only an introductory stage to another task, e.g. human monitoring and classification. Therefore, the performance of the fusion algorithm must be measured in terms of improvement or image quality. Several authors describe different spatial and spectral quality analysis techniques for fused images; some of them enable subjective, the others objective, numerical definition of the spatial or spectral quality of the fused data [2-5]. The evaluation of the spatial quality of pan-sharpened images is equally important, since the goal is to retain the high spatial resolution of the PAN image. A survey of the pan-sharpening literature revealed very few papers that evaluated the spatial quality of pan-sharpened imagery [6]. Consequently, very few spatial quality metrics are found in the literature, the jury is still out on the benefits of a fused image compared to its original images, and there is a lack of measures for assessing the objective quality of the spatial resolution of fusion methods. Therefore, an objective assessment of the spatial resolution of fused images is required. This study presents a new approach to assess the spatial quality of a fused image based on the High-pass Division Index (HPDI). In addition, many spectral quality metrics are used to compare the properties of the fused images and their ability to preserve similarity with respect to the original MS image while incorporating the spatial resolution of the PAN image (fusion should increase the spectral fidelity while retaining the spatial resolution of the PAN); they take into account local measurements to estimate how well the important information in the source images is represented by the fused image. This study also focuses on comparing the best methods based on pixel-level fusion techniques (see Section II) with the following feature-level fusion techniques: Segment Fusion (SF), Principal Component Analysis based Feature Fusion (PCA) and Edge Fusion (EF) in [7]. The paper is organized as follows: Section II gives the image fusion techniques; Section III covers the quality evaluation of the fused images; Section IV covers the experimental results and analysis, subsequently followed by the conclusion.

## II Image Fusion Techniques

Image fusion techniques can be divided into three levels, namely: pixel level, feature level and decision level of representation [8-10]. The image fusion techniques based on pixels can be grouped into several categories depending on the tools or processing methods of the image fusion procedure. The categorization scheme of pixel-based image fusion methods proposed in this work is summarized as follows:
1. Arithmetic combination techniques: such as the Brovey Transform (BT) [11-13], Color Normalized Transformation (CN) [14, 15], and the Multiplicative Method (MLT) [17, 18].
2. Component substitution fusion techniques: such as IHS, HSV, HLS and YIQ in [19].
3. Frequency filtering methods: such as, in [20], the High-Pass Filter Additive Method (HPFA), High Frequency Addition Method (HFA), High Frequency Modulation Method (HFM) and the wavelet transform-based fusion method (WT).
4. Statistical methods: such as, in [21], Local Mean Matching (LMM), Local Mean and Variance Matching (LMVM), Regression Variable Substitution (RVS), and Local Correlation Modeling (LCM).

All the above techniques were employed in our previous studies [19-21]. Therefore, the best method of each group was selected for this study, as follows:

* For the arithmetic and frequency filtering techniques: the High Frequency Addition Method (HFA) and the High Frequency Modulation Method (HFM) [20].
* For the statistical methods: Regression Variable Substitution (RVS) [21].
* Among the component substitution fusion techniques: the IHS method of [22], which performed much better than the other methods [19].

To explain the algorithms used in this study, pixels from the two different sources should have the same spatial resolution before they are manipulated to obtain the resultant image. Here, the PAN images have a different spatial resolution from that of the original multispectral MS images. Therefore, resampling the MS images to the spatial resolution of the PAN image is an essential step in some fusion methods to bring the MS images to the same size as the PAN; the resampled MS images are denoted by \(M_{k}\), the set of DN values of band \(k\) in the resampled MS image.
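As a sketch of this resampling step, bilinear interpolation via OpenCV is one common choice (assumed here; the methods compared in this study do not mandate a particular resampler):

```python
import cv2
import numpy as np

def resample_ms_to_pan(ms_band: np.ndarray, pan_shape) -> np.ndarray:
    """Upsample a 2-D array of DNs for band k to the PAN image size."""
    h, w = pan_shape  # cv2.resize expects (width, height)
    return cv2.resize(ms_band, (w, h), interpolation=cv2.INTER_LINEAR)
```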
## III Quality Evaluation of the Fused Images

This section describes the various spatial and spectral quality metrics used to evaluate the fused images. The spectral fidelity of the fused images with respect to the original multispectral images is described first. When analyzing the spectral quality of the fused images, we compare the spectral characteristics of images obtained from the different methods with the spectral characteristics of the resampled original multispectral images. Since the goal is to preserve the radiometry of the original MS images, any metric used must measure the amount of change in DN values in the pan-sharpened image \(F_{k}\) compared to the original image \(M_{k}\). Also, to evaluate the spatial properties of the fused images, the panchromatic image and the intensity image of the fused image have to be compared, since the goal is to retain the high spatial resolution of the PAN image. In the following, \(F_{k}\) and \(M_{k}\) are the brightness values of the pixels of the result image and the original MS image of band \(k\), \(\bar{F}_{k}\) and \(\bar{M}_{k}\) are the mean brightness values of both images, and the images are of size \(n \times m\). BV is the brightness value of the image data.

### Spectral Quality Metrics

* _Standard Deviation (SD):_ The standard deviation, which is the square root of the variance, reflects the spread in the data. A high-contrast image has a large variance, and a low-contrast image a low variance. Applied to the difference image, it indicates the closeness of the fused image to the original MS image at the pixel level; the ideal value is zero.

\[\sigma=\sqrt{\frac{\sum_{i=1}^{n}\sum_{j=1}^{m}(BV(i,j)-\mu)^{2}}{m\times n}} \tag{1}\]

* _Entropy (En):_ The entropy of an image is a measure of its information content. En reflects the capacity of the information carried by the image; a larger En means more information in the image [6]. Applying Shannon's entropy to evaluate the information content of an image, the formula is [23]:

\[\mathrm{En}=-\sum_{i=0}^{255}P(i)\,\log_{2}P(i) \tag{2}\]

where \(P(i)\) is the ratio of the number of pixels with gray value \(i\) to the total number of pixels.

* _Signal-to-Noise Ratio (SNR):_ The signal is the information content of the original MS image \(M_{k}\), while the merging into \(F_{k}\) can introduce noise as error added to the signal. The root-mean-square formulation gives the signal-to-noise ratio \(SNR_{k}\) as [24]:

\[SNR_{k}=\sqrt{\frac{\sum_{i}^{n}\sum_{j}^{m}(F_{k}(i,j))^{2}}{\sum_{i}^{n}\sum_{j}^{m}(F_{k}(i,j)-M_{k}(i,j))^{2}}} \tag{3}\]

* _Deviation Index (DI):_ To assess the quality of the merged product with regard to spectral information content, the deviation index is a useful parameter, defined by [25, 26] as the normalized global absolute difference between the fused image \(F_{k}\) and the original MS image \(M_{k}\):

\[DI_{k}=\frac{1}{nm}\sum_{i}^{n}\sum_{j}^{m}\frac{|F_{k}(i,j)-M_{k}(i,j)|}{M_{k}(i,j)} \tag{4}\]

* _Correlation Coefficient (CC):_ The correlation coefficient measures the closeness or similarity between two images. It can vary between -1 and +1: a value close to +1 indicates that the two images are very similar, while a value close to -1 indicates that they are highly dissimilar. The correlation between \(F_{k}\) and \(M_{k}\) is computed as:

\[CC=\frac{\sum_{i}^{n}\sum_{j}^{m}(F_{k}(i,j)-\bar{F}_{k})(M_{k}(i,j)-\bar{M}_{k})}{\sqrt{\sum_{i}^{n}\sum_{j}^{m}(F_{k}(i,j)-\bar{F}_{k})^{2}}\sqrt{\sum_{i}^{n}\sum_{j}^{m}(M_{k}(i,j)-\bar{M}_{k})^{2}}} \tag{5}\]
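These spectral metrics translate directly into code; a sketch for one band pair, following Eqs. (2)-(5), where F and M are float arrays of brightness values for the fused and original MS band:

```python
import numpy as np

def entropy(img):                            # Eq. (2)
    p = np.bincount(img.astype(np.uint8).ravel(), minlength=256) / img.size
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def snr(F, M):                               # Eq. (3)
    return np.sqrt(np.sum(F ** 2) / np.sum((F - M) ** 2))

def deviation_index(F, M):                   # Eq. (4); assumes M has no zeros
    return np.mean(np.abs(F - M) / M)

def correlation(F, M):                       # Eq. (5)
    Fm, Mm = F - F.mean(), M - M.mean()
    return np.sum(Fm * Mm) / np.sqrt(np.sum(Fm ** 2) * np.sum(Mm ** 2))
```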
## V Analysis of Results

### Spectral Quality Metrics Results

Table 1 and Fig. 2 show these parameters for the fused images using the various methods. From Fig. 2a and Table 1, it can be seen that the SD results of the fused images remain constant for all methods except IHS. According to the En results in Table 1, the increased En indicates a change in the quantity of spectral information content through the merging. From Table 1 and Fig. 2b, it is obvious that the En of the fused images changed relative to the original MS, except for PCA. In Fig. 2c and Table 1, the maximum correlation values were for PCA. In Fig. 2d and Table 1, the maximum SNR results were for SF and HFA. The results of SNR, NRMSE and DI change significantly. It can be observed from Table 1 and the diagrams in Figs. 2d and 2e that, for the SNR, NRMSE and DI of the fused images, the SF and HFA methods give the best results with respect to the other methods. This means these methods maintain most of the spectral information content of the original MS data set, showing the lowest values of NRMSE and DI.

### Spatial Quality Metrics Results

Table 2 and Fig. 3 show the results for the fused images using the various methods. It is clear that the seven fusion methods are capable of improving the spatial resolution with respect to the original MS image. From Fig. 3a and Table 2, the MG results of the fused images increase the spatial resolution for all methods except PCA. From Table 2 and Fig. 3a, the maximum MG gradient was 25 edges, but for SG in Table 2 and Fig. 3b the maximum gradient was 64 edges, meaning that SG gave, overall, a better performance than MG for edge detection. In addition, the SG results of the fused images increase the gradient for all methods except PCA, whose decrease in gradient shows that it does not enhance the spatial quality. The maximum MG and SG results among the sharpened image methods were for EF, while the MG and SG results for the HFA and SF methods are approximately the same. However, comparing them to the PAN, it can be seen that SF is closest to the PAN result. In other words, SF added the details of the PAN image to the MS image with the maximum preservation of the spatial resolution of the PAN.

Table 2: The spatial quality metric results for the original MS and the fused image methods.

| Method | Band | MG | SG | HPDI | FCC |
|---|---|---|---|---|---|
| EF | R | 25 | 64 | 0 | -0.038 |
| | G | 25 | 65 | 0.014 | -0.036 |
| | B | 25 | 65 | 0.013 | -0.035 |
| HFA | R | 11 | 51 | -0.032 | 0.209 |
| | G | 12 | 52 | -0.026 | 0.21 |
| | B | 12 | 52 | -0.028 | 0.211 |
| HFM | R | 12 | 54 | 0.001 | 0.205 |
| | G | 12 | 54 | 0.013 | 0.204 |
| | B | 12 | 53 | 0.02 | 0.201 |
| IHS | R | 9 | 36 | 0.004 | 0.214 |
| | G | 9 | 36 | 0.009 | 0.216 |
| | B | 9 | 36 | 0.005 | 0.217 |
| PCA | R | 6 | 33 | -0.027 | 0.07 |
| | G | 6 | 34 | -0.022 | 0.08 |
| | B | 6 | 35 | -0.021 | 0.092 |
| RVS | R | 13 | 54 | -0.005 | -0.058 |
| | G | 12 | 53 | 0.001 | -0.054 |
| | B | 12 | 52 | 0.006 | -0.05 |
| SF | R | 11 | 48 | -0.035 | 0.202 |
| | G | 11 | 49 | -0.026 | 0.204 |
| | B | 11 | 49 | -0.024 | 0.206 |
| MS | R | 6 | 32 | -0.005 | 0.681 |
| | G | 6 | 32 | -0.004 | 0.669 |
| | B | 6 | 33 | -0.004 | 0.657 |
| PAN | 1 | 10 | 42 | - | - |

Figure 2: Chart representation of SD, En, CC, SNR, NRMSE & DI of the fused images.
Figure 3: Chart representation of MG, SG, FCC & HPDI of the fused images.

According to the computed FCC results in Table 2 and Fig. 3c, increased FCC indicates the amount of edge information transferred from the PAN image into the fused images, quantifying the spatial resolution gained through the merging. The maximum FCC results in Table 2 and Fig. 3c were for SF, HFA and HFM. The HPDI results, which change more significantly, discriminate better than FCC.
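The MG and SG definitions appear in a part of the paper not reproduced here; as a hedged stand-in, a Sobel-based mean gradient illustrates the kind of edge-density measure these columns report:

```python
import numpy as np
from scipy import ndimage

def mean_gradient(img: np.ndarray) -> float:
    g = img.astype(float)
    gx = ndimage.sobel(g, axis=1)   # horizontal derivative
    gy = ndimage.sobel(g, axis=0)   # vertical derivative
    return float(np.mean(np.hypot(gx, gy)))
```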
It can be observed from Fig. 3d and Table 2 that the maximum results of the proposed approach, HPDI, were with the SF and HFA methods. The proposed HPDI spatial quality metric is more discriminating than the other spatial quality metrics for identifying the best spatial enhancement through the merging.

Figure 4: The representation of the fused images.

## VI Conclusion

This paper presents comparative studies of the best image fusion techniques based on the pixel level, namely HFA, HFM and IHS, and compares them with feature-level fusion methods including the PCA, SF and EF image fusion techniques. Experimental results with spatial and spectral quality metrics further show that the SF technique, based on feature-level fusion, maintains the spectral integrity of the MS image while improving the spatial quality of the PAN image as much as possible. The use of the SF-based fusion technique is strongly recommended if the goal of the merging is to achieve the best representation of the spectral information of the multispectral image together with the spatial details of the high-resolution panchromatic image, because it is based on component substitution fusion coupled with spatial domain filtering. It utilizes the statistical variables of the brightness values of the image bands to adjust the contribution of individual bands to the fusion result and reduce color distortion. The analytical SG technique is much more useful for measuring the gradient than MG, since MG gave the smallest gradient results. Our proposed approach, HPDI, gave the smallest difference ratios between the image fusion methods; therefore, it is strongly recommended to use HPDI for measuring spatial resolution because of its mathematical precision as a quality indicator.

## References

* [1] Leviner M., M. Maltz, 2009. "A new multi-spectral feature level image fusion method for human interpretation". Infrared Physics & Technology 52 (2009), pp. 79-88.
* [2] Aiazzi B., S. Baronti, M. Selva, 2008. "Image fusion through multiresolution oversampled decompositions". In: Stathaki T. (Ed.), "Image Fusion: Algorithms and Applications". 2008 Elsevier Ltd.
* [4] Svab A. and Ostir K., 2006. "High-Resolution Image Fusion: Methods to Preserve Spectral and Spatial Resolution". Photogrammetric Engineering & Remote Sensing, Vol. 72, No. 5, May 2006, pp. 565-572.
* [5] Shi W., Changqing Z., Caiying Z., and Yang X., 2003. "Multi-Band Wavelet for Fusing SPOT Panchromatic and Multispectral Images". Photogrammetric Engineering & Remote Sensing, Vol. 69, No. 5, May 2003, pp. 513-520.
IEEE Transactions on Geoscience and Remote Sensing, Vol. 42, No. 12, pp. 2832-2839. * [12] Dong J.,Zhuang D., Huang Y.,Jingying Fu,2009. \"Advances In Multi-Sensor Data Fusion: Algorithms And Applications \". Review, ISSN 1424-8220 Sensors 2009, 9, pp.7771-7784. * [13] Amarsaikhan D.,H.H. Blotevogel, J.L. van Genderen, M. Ganzorig, R. Gantuya and B. Nergui, 2010. \"Fusing high-resolution SAR and optical imagery for improved urban land cover study and classification\". International Journal of Image and Data Fusion, Vol. 1, No. 1, March 2010, pp. 83-97. * [14] Vrabel J., 1996. \"Multispectral imagery band sharpening study\". Photogrammetric Engineering and Remote Sensing, Vol. 62, No. 9, pp. 1075-1083. * [15] Vrabel J., 2000. \"Multispectral imagery Advanced band sharpening study\". Photogrammetric Engineering and Remote Sensing, Vol. 66, No. 1, pp. 73-79. * [16] Wenbo W.,Y.Jing, K. Tingjun,2008. \"Study Of Remote Sensing Image Fusion And Its Application In Image Classification\" The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B7. Beijing 2008, pp.1141-1146. * [17] Parcharidis I. and L. M. K. Tani, 2000. \"Landsat TM and ERS Data Fusion: A Statistical Approach Evaluation for Four Different Methods\". 0-7803-6359- 000/ 2000 IEEE, pp.2120-2122. * [18] Pohl C. and Van Genderen J. L., 1998. \"Multisensor Image Fusion In Remote Sensing: Concepts, Methods And Applications\".(Review Article), International Journal Of Remote Sensing, Vol. 19, No.5, pp. 823-854. * [20] Firouz A. Al-Wassai, N.V. Kalyankar, A.A. Al-Zuky, 2011a. \"Arithmetic and Frequency Filtering Methods of Pixel-Based Image Fusion Techniques \".IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 3, No. 1, May 2011, pp. 113- 122. * [21] Firouz A. Al-Wassai, N.V. Kalyankar, A.A. Al-Zuky, 2011c.\" The Statistical methods of Pixel-Based Image Fusion Techniques\". International Journal of Artificial Intelligence and Knowledge Discovery Vol.1, Issue 3, July, 2011 5, pp. 5- 14. * [22] Li S., Kwok J. T., Wang Y., 2002. \"Using the Discrete Wavelet Frame Transform To Merge Landsat TM And SPOT Panchromatic Images\". Information Fusion 3 (2002), pp.17-23. * [23] Liao. Y. C., T.Y. Wang, and W. T. Zheng, 1998. \"Quality Analysis of Synthesized High Resolution Multispectral Imagery\". URL: [http://www.gisdevelopment.net/AARS/ACRS](http://www.gisdevelopment.net/AARS/ACRS) 1998/Digital Image Processing (Last date accessed:28 Oct. 2008). * [24] Gonzales R. C, and R. Woods, 1992. Digital Image Processing. A ddison-Wesley Publishing Company. * [25] De Bethume S., F. Muller, and J. P. Donnay, 1998. \"Fusion of multi-spectral and panchromatic images by local mean and variance matching filtering techniques\". In: Proceedings of The Second International Conference: Fusion of Earth Data: Merging Point Measurements, Raster Maps and Remotely Sensed Images, Sophia-Antipolis, France, 1998, pp. 31-36. * [26] De Bethune S and F. Muller, 2002. \"Multisource Data Fusion Applied research\". URL:[http://www.Fabricmuller.be/realisations/fusion.html](http://www.Fabricmuller.be/realisations/fusion.html).(Last stat date accessed:28 Oct. 2002). * [27] Sangwine S. J., and R.E.N. Horne, 1989. The Colour Image Processing Handbook. Chapman & Hall. * [28] Ryan. R., B. Baldridge, R.A. Schwengerdt, T. Choi, D.L. Helder and B. Slawomir, 2003. \"IKONOS Spatial Resolution And Image Interpretability Characterization\", Remote Sensing of Environment, Vol. 88, No. 1, pp. 37-52. 
## Short Biodata of the Author

Firouz Abdullah Al-Wassai received the B.Sc. degree in Physics from the University of Sana'a, Yemen, in 1993, and the M.Sc. degree in Physics from Baghdad University, Iraq, in 2003, and is currently a Ph.D. research student in the Department of Computer Science, S.R.T.M.U., Nanded, India.

Dr. N.V. Kalyankar, Principal, Yeshwant Mahavidyalaya, Nanded (India), completed his M.Sc. (Physics) at Dr. B.A.M.U., Aurangabad. In 1980 he joined the Department of Physics at Yeshwant Mahavidyalaya, Nanded, as a lecturer. He completed his D.H.E. in 1984 and his Ph.D. at Dr. B.A.M.U., Aurangabad, in 1995. Since 2003 he has been working as Principal of Yeshwant Mahavidyalaya, Nanded. He is also a research guide for Physics and Computer Science at S.R.T.M.U., Nanded; 3 research students have been awarded the Ph.D. and 12 the M.Phil. in Computer Science under his guidance. He has served on various bodies of S.R.T.M.U., Nanded, has published 34 research papers in international and national journals, and is a peer-team member of NAAC (National Assessment and Accreditation Council, India). He published a book entitled "DBMS Concepts and Programming in FoxPro" and has received various educational awards, including the "Best Principal" award from S.R.T.M.U., Nanded, in 2009 and the "Best Teacher" award from the Government of Maharashtra, India, in 2010. He was honoured with the Fellowship of the Linnean Society of London (F.L.S.) at the 11th National Congress, Kolkata (India), in November 2009.

Dr. Ali A. Al-Zuky received the B.Sc. in Physics from Mustansiriyah University, Baghdad, Iraq, in 1990, and the M.Sc. in 1993 and Ph.D. in 1998 from the University of Baghdad, Iraq. He has supervised 40 postgraduate students (M.Sc. and Ph.D.) in different fields (physics, computers, computer engineering and medical physics), and has more than 60 scientific papers published in scientific journals and presented at several scientific conferences.
Various methods, mostly operating at the pixel level, can be used to produce high-resolution multispectral images from a high-resolution panchromatic image (PAN) and low-resolution multispectral images (MS). However, the jury is still out on the benefits of a fused image compared to its original images, and objective measures for assessing the spatial resolution of fused images are still lacking. This study therefore attempts to develop a new quantitative assessment of the spatial quality of pan-sharpened images using several spatial quality metrics. The paper also compares various image fusion techniques based on pixel- and feature-level fusion.

Keywords: measure of image quality; spectral metrics; spatial metrics; image fusion.
# Learning Robust Precipitation Forecaster by Temporal Frame Interpolation

Lu Han1, Xu-Yang Chen1, Han-Jia Ye, De-Chuan Zhan2

National Key Laboratory for Novel Software Technology, Nanjing University, China
School of Artificial Intelligence, Nanjing University, China
{hanlu, chenxy, yehj}@lamda.nju.edu.cn, [email protected]

Footnote 1: Equal contribution.
Footnote 2: Corresponding author.

## 1 Introduction

Weather forecasting has long been a critical domain with far-reaching implications across various sectors, including agriculture, transportation, and public safety. Accurate weather predictions are essential for planning and decision-making, helping to mitigate the impacts of adverse weather conditions and capitalize on favorable ones. Traditionally, weather forecasting has relied on a combination of meteorological science and empirical techniques. These methods often include numerical weather prediction (NWP) models [8] that simulate the atmosphere's physical processes. While effective, these approaches can be limited by computational constraints and the inherent complexity of weather systems.

The advent of machine learning (ML) has ushered in a transformative era in weather forecasting. ML techniques, leveraging large datasets and powerful computational resources, have shown remarkable potential in improving the accuracy and efficiency of weather predictions. These advancements are particularly significant in the context of complex, dynamic systems like weather, where traditional models struggle to capture the intricate, non-linear interactions within the atmosphere. For example, the use of ConvLSTM to capture spatiotemporal correlations has shown promising results in weather forecasting, garnering significant attention [12]. Innovations such as MetNet [13], which processes inputs from an area larger than the target region, have set a new standard in deep learning-based weather forecasting.

A notable limitation is that, unlike human vision systems [5], the performance of data-driven deep learning models significantly deteriorates when faced with data differing from the training-phase distribution. Weather forecast models are particularly sensitive to spatial-temporal shifts due to varying regional weather and climate characteristics, making practical application challenging. The _Weather4cast_ competition [3] focuses on solving spatial-temporal shifts in precipitation forecasting tasks. It has advanced modern algorithms in AI and machine learning through a highly topical interdisciplinary competition challenge: the prediction of hi-res rain radar movies from multi-band satellite sensors, requiring data fusion of complementary signal sources, multi-channel video frame prediction, as well as super-resolution techniques. In 2023, the competition builds upon its predecessor's challenge, introducing a novel aspect: the prediction of actual rainfall amounts, a departure from the previous focus on binary rainfall masks, posing a more fine-grained task and requiring more accurate and robust forecasting models when faced with spatial-temporal shifts ([https://weather4cast.net/](https://weather4cast.net/)).

The transfer learning benchmark for the _Weather4cast_ competition involves utilizing training data collected from multiple regions, specifically R15, R34, R76, R4, R5, R6, and R7, spanning the years 2019 and 2020.
The performance of the model will be evaluated on unseen data from regions R8, R9, and R10, during the years 2019 and 2020, as well as on data from all ten regions in the year 2021. The shift in regions and years requires the model to learn meta-knowledge that is shared among all the regions and years; the model should also be robust against variation in the data.

While traditional weather forecasting methods have laid a foundational baseline, they are limited in managing the complexity and noise in satellite and radar data. Our model, consisting of a novel Temporal Frame Interpolation (TFI) method, a multi-level dice loss, and carefully chosen training strategies, provides a solution to these limitations. TFI enhances forecast robustness by interpolating between adjacent frames of satellite and ground radar data, creating synthetic samples. Such mixing of training samples compensates for the shortage of samples caused by the finite sampling rate, and increases the robustness of the model against unseen variations in the images. The newly proposed multi-level dice loss exploits the ordinal relationship between different rainfall rates, enabling more accurate prediction. Our method has achieved significant success, securing first place in the transfer learning leaderboard of the competition.

Our main contributions are as follows:

* We propose a novel method called Temporal Frame Interpolation (TFI), which enhances the robustness and accuracy of precipitation forecasting models by creating synthetic samples between adjacent frames.
* We propose the Multi-Level Dice (ML-Dice) loss, which takes the ordinal relationship between rainfall levels into account, achieving better forecasting results.
* We explore multiple combinations of training strategies, including input cropping, output cropping, and geometric augmentations. By integrating these strategies along with our proposed TFI and multi-level dice loss, our model achieves the best result on the transfer learning leaderboard of the _Weather4cast'23_ competition, showing the practical applicability and superiority of our method over existing approaches under rigorous and competitive conditions.

## 2 Problem Statement

In the NeurIPS 2023 _Weather4cast_ competition, participants face the intricate task of forecasting rainfall rates. This requires interpreting the temporal progression of multi-band satellite observations, encompassing visible, near-infrared (Near-IR), water vapor, and infrared (IR) spectrums. These multi-spectral satellite measurements originate from the MeteoSat satellites, operated by the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT). The objective is to accurately estimate rainfall rates, which are derived from ground-based radar reflectivity data collected by the Operational Program for Exchange of Weather Radar Information (OPERA) radar network.

Formally, the precipitation model is given a sequence of multi-band satellite images \\(X_{t}=(\\mathbf{x}_{t},\\mathbf{x}_{t+1},\\mathbf{x}_{t+2},\\mathbf{x}_{t+3})\\), where each \\(\\mathbf{x}_{t}\\in\\mathbb{R}^{11\\times 252\\times 252}\\) is the 11-channel satellite image with 252\\(\\times\\)252 pixels at time \\(t\\). Each satellite image captures a 15-minute interval.
The OPERA ground-radar rainfall rates serve as targets \\(Y_{t}=(\\mathbf{y}_{t+4},\\mathbf{y}_{t+5},\\ldots,\\mathbf{y}_{t+3+T})\\), with each radar image \\(\\mathbf{y}_{t}\\in\\mathbb{R}^{252\\times 252}\\), over \\(T\\) time steps (\\(T=32\\) for the core track and \\(T=16\\) for the transfer learning track). The radar images are also 252\\(\\times\\)252 pixel patches, but note that the spatial resolution of the satellite images (12 km) is about six times lower than the resolution of the ground-radar rain rate products (2 km). The ground-radar rainfall products with a 252\\(\\times\\)252 pixel patch correspond to a 42\\(\\times\\)42 pixel patch-centered region in the coarser satellite resolution. A large input area surrounding the target region can supply sufficient synoptic-scale and mesoscale context information for a prediction of future weather, particularly for long lead times. The precipitation forecasting model \\(f\\) needs to predict the future rainfall rates \\(Y\\) based on the short history given by the multi-band satellite images \\(X\\), i.e., the model is a mapping \\(f:\\mathbb{R}^{4\\times 11\\times 252\\times 252}\\mapsto\\mathbb{R}^{T\\times 252\\times 252}\\).

## 3 Method

In this section, we first give an introduction to the proposed Temporal Frame Interpolation (TFI) method in section 3.1. Then, we describe the Multi-Level Dice loss (ML-Dice) for exploiting the ordinal relationship between target rainfall rates in section 3.2. Next, we thoroughly describe the other details of our method, including the selection of network structure, input/output processing, and augmentation strategy, in section 3.3.

### Temporal Frame Interpolation

In the task of rainfall prediction, the model is tasked with extracting features from high-dimensional sequential image data to predict future high-dimensional rainfall images. Training on such high-dimensional data often poses the risk of overfitting. This is exacerbated by the sampling intervals of satellite imagery; sampling data every 15 minutes leads to insufficient training data for a comprehensive understanding of satellite image sequences, making it challenging to develop models with robust generalization capabilities.

Figure 1: Illustration of Temporal Frame Interpolation (TFI). TFI overcomes the shortage of samples caused by the finite sampling rate. It creates new synthetic samples within the sampling interval by interpolating adjacent frames of both input satellite images and output ground radar data with mixup factor \\(\\lambda\\). TFI helps the model better understand the sequence of satellite images and reduces the likelihood of overfitting.

Existing research indicates that deep learning networks, such as Convolutional Neural Networks (CNNs), often struggle to make accurate predictions on interpolated data, even when it is strikingly similar to the original data before interpolation. This limitation is particularly evident in image processing tasks, suggesting a gap in these models' ability to handle sequentially correlated data effectively. Traditional mixup methods [15] are effective in addressing overfitting by training models on convex combinations of pairs of examples and their labels. However, mixup techniques are primarily designed for classification tasks and may not be directly applicable to the unique requirements of rainfall prediction: the physical characteristics of different regions and years can vary significantly, making the mixup of different samples potentially misleading.
To address these challenges, we propose the Temporal Frame Interpolation (TFI) method. TFI operates by performing linear interpolation between two adjacent frames within the same region. Given the spatial consistency and temporal proximity, we posit that the distribution of satellite imagery data does not significantly change between these frames. Consequently, the corresponding rainfall amounts can be assumed to have a linear relationship with the interpolated satellite images. This interpolation not only enriches the training dataset but also reduces the likelihood of overfitting by providing a more continuous representation of temporal changes.

We now make the implementation of TFI concrete. During training, we extend each sample of satellite and radar images by an additional time step; that is, we add \\(\\mathbf{x}_{t+4}\\) and \\(\\mathbf{y}_{t+4+T}\\) as the additional frames of each sample. Then, as in mixup, we perform linear interpolation on both the input and output images with a one-step time shift:

\\[\\hat{X}_{t+\\lambda}=(1-\\lambda)X_{t}+\\lambda X_{t+1},\\quad\\hat{Y}_{t+\\lambda}=(1-\\lambda)Y_{t}+\\lambda Y_{t+1}, \\tag{1}\\]

where \\(\\lambda\\sim Be(a,b)\\) (\\(Be\\) stands for the Beta distribution) is the randomly sampled mixup factor. In our solution, we fix \\(a=1,b=1\\), which is equivalent to sampling from a uniform distribution on \\([0,1]\\). An overview of the Temporal Frame Interpolation method is given in fig. 1.

### Multi-Level Dice Loss

This year's _Weather4cast_ competition moves from binary classification to zero-inflated regression. Instead of the conventional RMSE (Root Mean Square Error) typically associated with regression tasks, the competition employs the mean CSI (Critical Success Index) score at various rainfall intensity thresholds, specifically at 0.2, 1, 5, 10, and 15, to reward correct prediction over the whole gamut of rainfall intensities. This switch makes the task more like a multi-class classification task. Although we could still use a regression model to predict the precipitation accurately, existing research has revealed that the regression objective and deep learning models are not robust against noise [2], especially for long-term time series forecasting tasks [4]. Hence, we formulate precipitation forecasting as a multi-class classification task, since the absolute value within each classification bin does not impact the CSI score; rather, the critical factor is the accuracy of classifying data points into their respective bins.

Formally, instead of directly predicting the continuous rainfall rate, the model outputs the probability of rainfall belonging to different bins, where each bin represents a range of rainfall rates. In this competition, the thresholds 0.2, 1, 5, 10, and 15 divide the rates into 6 bins, i.e., \\(\\text{Bin}=([0,0.2),[0.2,1),[1,5),[5,10),[10,15),[15,+\\infty))\\), so the model outputs a 6-dimensional probability vector for each pixel. Denoting the encoder as \\(g\\), then:

\\[P_{t}=g(X_{t}), \\tag{2}\\]

where \\(P_{t}\\in\\mathbb{R}^{T\\times 252\\times 252\\times 6}\\) represents the probability of each pixel belonging to each bin. For training the model, we choose the dice loss to alleviate the imbalance problem inherent in the prediction task. However, the dice loss cannot fully exploit the characteristics of the fine-grained precipitation task. Thus we next propose a new multi-level dice loss.
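Before detailing the loss, we pause to make the TFI step concrete. Below is a minimal sketch of eq. (1) for PyTorch-style tensors; the helper name `tfi_mix` and the exact tensor layout are our own illustrative assumptions, not taken from the released competition code.

```python
import torch

def tfi_mix(x_seq, y_seq, a=1.0, b=1.0):
    """Temporal Frame Interpolation (eq. (1)): blend two temporally
    adjacent training windows drawn from the same region.

    x_seq: satellite frames, shape (5, 11, 252, 252) -- the 4 input
           frames plus the extra frame x_{t+4} appended during training.
    y_seq: radar frames, shape (T + 1, 252, 252) -- the T targets plus
           the extra frame appended during training.
    """
    lam = torch.distributions.Beta(a, b).sample()  # a = b = 1 -> Uniform(0, 1)
    x_t, x_t1 = x_seq[:4], x_seq[1:5]              # windows X_t and X_{t+1}
    y_t, y_t1 = y_seq[:-1], y_seq[1:]              # windows Y_t and Y_{t+1}
    x_mix = (1 - lam) * x_t + lam * x_t1           # \hat{X}_{t+lambda}
    y_mix = (1 - lam) * y_t + lam * y_t1           # \hat{Y}_{t+lambda}
    return x_mix, y_mix
```

With \\(a=b=1\\) the Beta draw reduces to a uniform mixup factor, so the synthetic samples cover the whole 15-minute gap between recorded frames.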
Dice loss is a performance metric commonly used in the field of image segmentation, particularly in medical image analysis. It is derived from the Dice coefficient, which is a statistical tool used to measure the similarity between two sets. Dice loss was originally proposed for binary classification tasks; a direct way to extend the binary dice loss to the multi-class scenario is to compute the average dice loss of each class, as in segmentation tasks. This kind of Dice loss is computed as follows:

\\[L_{\\text{Dice}}=1-\\frac{1}{6}\\sum_{i}^{6}\\frac{\\sum_{n}\\mathbb{I}(y_{n}\\in\\text{Bin}_{i})p_{n,i}}{\\sum_{n}\\mathbb{I}(y_{n}\\in\\text{Bin}_{i})+\\sum_{n}p_{n,i}}, \\tag{3}\\]

where \\(p_{n,i}\\) is the probability of pixel \\(n\\) belonging to \\(\\text{Bin}_{i}\\). However, this Dice loss does not take the ordinal relationship of rainfall rates into consideration. For example, suppose the model mistakenly predicts a rainfall rate in [10, 15) while the true ground-truth label lies in [1, 5). This Dice loss only penalizes the outputs on [1, 5) and [10, 15), which is neither reasonable nor robust, since the severity of overestimating and underestimating precipitation forecasts is not the same: overestimating precipitation does not affect the CSI at lower thresholds, but underestimation affects all the higher thresholds.

To overcome these drawbacks, we propose the Multi-Level Dice loss (ML-Dice). Instead of requiring the model to predict the exact bin of the rainfall, ML-Dice penalizes predictions that do not locate themselves correctly with respect to each threshold. Denoting the vector of thresholds as \\(\\mathbf{s}=(0.2,1,5,10,15)\\), ML-Dice is computed as:

\\[L_{\\text{ML-Dice}}=1-\\frac{1}{5}\\sum_{i}^{5}\\frac{\\sum_{n}\\mathbb{I}(y_{n}>\\mathbf{s}_{i})\\hat{p}_{n}(y>\\mathbf{s}_{i})}{\\sum_{n}\\mathbb{I}(y_{n}>\\mathbf{s}_{i})+\\sum_{n}\\hat{p}_{n}(y>\\mathbf{s}_{i})}, \\tag{4}\\]

where \\(\\hat{p}_{n}(y>\\mathbf{s}_{i})=\\sum_{m=i+1}^{6}p_{n,m}\\) is the model's prediction of the probability that the output rate exceeds the threshold \\(\\mathbf{s}_{i}\\). Unlike the vanilla Dice loss in eq. (3), ML-Dice gradually reduces the loss as the prediction approaches the ground target. For example, the loss is reduced if the prediction is corrected from [10, 15) to [5, 10) while the target is in [1, 5), since the prediction is then additionally correct at threshold 10. ML-Dice therefore exploits the ordinal relationship between the bins and enables the model to trade off better between suboptimal predictions; the vanilla Dice loss in eq. (3) cannot exploit such relationships since it penalizes [5, 10) and [10, 15) equally. To further enhance the robustness of the Dice loss, we apply the \\(logcosh\\) transformation to \\(L_{\\text{ML-Dice}}\\) [6].

Inference. During inference, we select the bin with the highest probability and generate the rainfall rate as the median of the selected bin.

### Other Model details

Network structure. In the initial phase of the competition, we observed that the 3D U-Net architecture [1], as utilized in the official baseline ([https://github.com/agruca-polsl/weather4cast-2023](https://github.com/agruca-polsl/weather4cast-2023)), is strong enough to achieve robust results. However, as the competition progressed, we found that the 2D U-Net [10] exhibits swifter convergence and delivers slightly superior results. Consequently, we decided to transition to the 2D U-Net architecture.
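As a companion sketch, the ML-Dice loss of eq. (4), with the log-cosh transform and the bin-median inference rule, can be written as follows. It mirrors the formula exactly as printed above (note eq. (4) omits the factor of 2 found in the conventional Dice coefficient); the function names, the `eps` stabilizer, and the bin medians, in particular the representative value for the open-ended bin \\([15,+\\infty)\\), are our own assumptions.

```python
import torch

THRESHOLDS = [0.2, 1.0, 5.0, 10.0, 15.0]  # bin edges s_i (mm/h)

def ml_dice_loss(probs, target, eps=1e-6):
    """Multi-Level Dice loss of eq. (4), followed by log-cosh.

    probs:  (N, 6) per-pixel probabilities over the 6 rainfall bins.
    target: (N,)   ground-truth rainfall rates.
    """
    loss = 0.0
    for i, s in enumerate(THRESHOLDS):
        p_exceed = probs[:, i + 1:].sum(dim=1)   # \hat{p}_n(y > s_i)
        t_exceed = (target > s).float()          # indicator I(y_n > s_i)
        inter = (t_exceed * p_exceed).sum()
        denom = t_exceed.sum() + p_exceed.sum() + eps
        loss = loss + (1.0 - inter / denom)
    loss = loss / len(THRESHOLDS)
    return torch.log(torch.cosh(loss))           # log-cosh robustification

def predict_rate(probs):
    """Inference: choose the most probable bin, return a per-bin median."""
    medians = torch.tensor([0.1, 0.6, 3.0, 7.5, 12.5, 20.0])  # 20.0 is a placeholder
    return medians[probs.argmax(dim=1)]
```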
Input and output processing. Following last year's winners, we crop the input image to **126x126** pixels to avoid redundant information provided by the surrounding region [7]. We also constrain the model to output a small **42x42** patch and use a simple upsampling strategy to restore the 252x252 prediction [9; 13].

Augmentation strategies. Inspired by last year's winning solution on the Transfer leaderboard [11], we employ geometric transformations for data augmentation. The augmentation methods we selected are _Vertical flip_, _Horizontal flip_, and _Vertical flip + Horizontal flip_. These geometric transformations do not change the center of a satellite image, and the corresponding transformation of the target radar image can be obtained by equally simple transformations without loss of any information. Note that, owing to time restrictions, we have not tested all possible augmentations, but the chosen ones have proven effective.

## 4 Experiments

We conducted all experiments using the training set, core validation set, and held-out set of the _Weather4cast'23 Transfer_ dataset. The performance evaluation metric of all experiments was the mean Critical Success Index (mCSI) over the five thresholds 0.2, 1, 5, 10, 15.

### Experimental Setting

The datasets used in Weather4cast'23 are unchanged from Weather4cast'22.

Training set. The training set encompasses both satellite and terrestrial radar data spanning two years, 2019 and 2020, across seven distinct European regions. The satellite data was sourced from a geostationary meteorological satellite managed by EUMETSAT, while ground-truth radar data was acquired from OPERA, a collaborative weather radar information exchange program. This satellite dataset includes a variety of spectral bands, capturing information across the visible spectrum, infrared, and water vapor channels, such as IR016 and VIS006, among others. Each spectral frequency provides unique insights due to its distinct interactions with the atmospheric constituents. For example, the visible (VIS) bands harness solar radiation to yield data on albedo and surface texture but are limited to daylight hours for data collection. In contrast, infrared (IR) bands detect radiation re-emitted from objects, which can be translated into measures of brightness temperature, while water vapor (WV) bands quantify the water vapor present in the higher atmospheric layers. These bands are instrumental for differentiating clouds from the surroundings and examining cloud dynamics via single or combined channel analysis. Satellite imagery has a spatial footprint of 12 km x 12 km, captured at quarter-hourly intervals, whereas radar imagery, with a resolution of 2 km x 2 km, focuses on a central subset of the satellite's coverage.

Transfer test set. The test set for the transfer leaderboard consists of regions not present in the training dataset (indicating a spatial shift), labeled R08, R09, and R10, in addition to a set of regions from the year 2021 (indicating a temporal shift), including R15, R34, R76, R04, R05, R06, R07, R08, R09, and R10.

Core validation set. The core validation set consists of data from all the seen regions (R15, R34, R76, R4, R5, R6, and R7) in 2019 and 2020.

Details on implementation. As referenced in section 3.3, our early competition strategy involved a variant of the 3D-UNet architecture [1], which later transitioned to a 2D-UNet model.
It was this latter model that yielded the leading results on the leaderboard; however, certain ablation studies were conducted using the 3D-UNet configuration. We configured our single-GPU batch size at eight and ran the models for a total of 90 epochs. The initial learning rate was established at 0.0001, with a weight decay of 0.02 and a dropout rate of 0.4. The optimization was managed via AdamW, with the learning rate scaled down by a factor of 0.9 whenever the validation loss exceeded that of the previous epoch. Furthermore, an early-stopping mechanism was enforced to halt training if the validation loss plateaued. All experiments were executed on 4 Nvidia RTX 4090 GPUs.

Ablation components. We ablate the training strategies described in section 3.3, namely Input Cropping (IC) of the input to 126x126, Output Cropping (OC) of the output to 42x42, and geometric Augmentation (Aug), as well as the newly proposed Multi-Level Dice loss (ML-Dice) and Temporal Frame Interpolation (TFI).

Results. Table 2 shows the results. We can draw the following conclusions. (1) Impact of Temporal Frame Interpolation (TFI): the top-performing method is UNet2D with a combination of Multi-Level Dice loss (ML-Dice), Input Cropping (IC), Output Cropping (OC), Augmentation (Aug), and Temporal Frame Interpolation (TFI), scoring highest in both mCSI (0.1069) and mF1 (0.1738). The inclusion of TFI in this configuration suggests its significant contribution to improving the model's performance. (2) Effectiveness of Multi-Level Dice loss (ML-Dice): models utilizing ML-Dice consistently outperform those using the standard Dice loss, indicating the effectiveness of ML-Dice in improving the model's accuracy. (3) Comparison of UNet2D and UNet3D architectures: UNet2D models appear to outperform UNet3D models under similar configurations. This might suggest that convolution along the temporal dimension does not fully exploit the temporal information. (4) A clear trend shows that as more components (IC, OC, Aug) are added, the performance improves, regardless of the UNet version or loss function used. This underlines the importance of these preprocessing and augmentation techniques in enhancing model performance. Each additional component (IC, OC, Aug) contributes a steady increase in both mCSI and mF1 scores; for example, comparing UNet3D (Dice) with each subsequent addition shows a gradual improvement, from 0.0598 to 0.0932 in mCSI and from 0.0981 to 0.1539 in mF1. (5) The least effective configuration is UNet3D with only Dice loss, while the most effective is UNet2D with the full suite of components including TFI. This wide range in performance across configurations underscores the impact of methodological choices on the model's predictive capabilities.

Learning dynamics. To show the influence of each component on the learning dynamics of the model, we plot the mCSI over epochs for the two networks in fig. 2a and fig. 2b, respectively. The UNet3D model with the full suite of enhancements (ML-Dice+IC+OC+Aug) outperforms other configurations throughout the training process. It shows a steady increase in the validation mCSI over epochs, indicating that the model learns effectively from the training data when all enhancements are used. The configurations with intermediate numbers of enhancements (Dice+IC+OC+Aug and Dice+IC+OC) have similar trajectories, with the former consistently outperforming the latter, again highlighting the benefit of augmentation (Aug) in improving model performance. The configurations with fewer enhancements (e.g., Dice+IC+OC and Dice+IC) demonstrate lower performance, and the gap in performance widens as the training progresses.
The model with only the Dice loss (UNet3D (Dice)) starts at the lowest performance and shows the least improvement over time, suggesting that the additional components contribute significantly to the model's learning capability. The presence of Temporal Frame Interpolation (TFI) in the UNet2D model shows a clear advantage over the model without TFI: the model with TFI maintains a higher average CSI throughout the training epochs.

\\begin{table} \\begin{tabular}{l c c} \\hline \\hline Method & mCSI & mF1 \\\\ \\hline UNet2D (ML-Dice+IC+OC+Aug+TFI) & **0.1069** & **0.1738** \\\\ UNet2D (ML-Dice+IC+OC+Aug) & 0.1034 & 0.1701 \\\\ UNet3D (ML-Dice+IC+OC+Aug) & 0.1014 & 0.1657 \\\\ UNet3D (Dice+IC+OC+Aug) & 0.0986 & 0.1615 \\\\ UNet3D (Dice+IC+OC) & 0.0932 & 0.1539 \\\\ UNet3D (Dice+IC) & 0.0800 & 0.1297 \\\\ UNet3D (Dice) & 0.0598 & 0.0981 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: Mean CSI and mean F1 on the validation set of the core track.

In conclusion, the ablation results from the figures and the table demonstrate that the combination of TFI, ML-Dice and the selected training strategies significantly enhances the performance of UNet models for the task of rainfall prediction. The proposed TFI and ML-Dice also alleviate overfitting on the training data and accelerate the convergence of the model.

## 5 Conclusions

Over the past years, accurate predictions of rain events have become ever more critical, with climate change increasing the frequency of unexpected rainfall. The _Weather4cast_ 2023 competition aims to build machine learning models with the ability of super-resolution rain movie prediction under spatio-temporal shifts. This year's metric requires finer-grained prediction of the precipitation, posing an even bigger challenge to the model's robustness. In this paper, we propose a solution with a newly proposed temporal frame interpolation that enhances robustness by creating synthetic samples between sampling intervals. We also propose the multi-level dice loss that takes into account the ordinal relationship between rainfall rates, improving prediction accuracy. By combining several explored training strategies with the proposed TFI and multi-level dice loss, our model achieves the best result on the _transfer learning leaderboard of the Weather4cast'23 competition_, showing the practical applicability and superiority of our method over existing approaches under rigorous and competitive conditions. In conclusion, the Temporal Frame Interpolation method, supported by the newly proposed ML-Dice loss and data augmentation strategy, offers a promising direction for future developments in high-dimensional weather forecasting tasks. We hope that our contributions will serve as a stepping stone for subsequent innovations in the field.

## References

* [1] Ozgun Cicek, Ahmed Abdulkadir, Soeren S. Lienkamp, Thomas Brox, and Olaf Ronneberger. 3d u-net: Learning dense volumetric segmentation from sparse annotation. In Sebastien Ourselin, Leo Joskowicz, Mert R. Sabuncu, Gozde B. Unal, and William M. Wells III, editors, _MICCAI_, volume 9901 of _Lecture Notes in Computer Science_, pages 424-432, 2016.
* [2] Huan Fu, Mingming Gong, Chaohui Wang, Kayhan Batmanghelich, and Dacheng Tao. Deep ordinal regression network for monocular depth estimation. In _CVPR_, pages 2002-2011. Computer Vision Foundation / IEEE Computer Society, 2018.
* [3] Aleksandra Gruca, Federico Serva, Llorenc Lliso, Pilar Ripodas, Xavier Calbet, Pedro Herruzo, Jiri Pihrt, Rudolf Raevskyi, Petr Simanek, Matej Choma, et al.
Weather4cast at NeurIPS 2022: Super-resolution rain movie prediction under spatio-temporal shifts. In _NeurIPS 2022 Competition Track_, pages 292-313. PMLR, 2022.
* [4] Lu Han, Han-Jia Ye, and De-Chuan Zhan. The capacity and robustness trade-off: Revisiting the channel independent strategy for multivariate time series forecasting. _arXiv preprint arXiv:2304.05206_, 2023.
* [5] Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. _arXiv preprint arXiv:1903.12261_, 2019.
* [6] Shruti Jadon. A survey of loss functions for semantic segmentation. In _CIBCB_, pages 1-7. IEEE, 2020.
* [7] Yang Li, Haiyu Dong, Zuliang Fang, Jonathan Weyn, and Pete Luferenko. Super-resolution probabilistic rain prediction from satellite data using 3d u-nets and earthformers. _arXiv preprint arXiv:2212.02998_, 2022.
* [8] Peter Lynch. The origins of computer weather prediction and climate modeling. _Journal of Computational Physics_, 227(7):3431-3444, 2008.
* [9] Jiri Pihrt, Rudolf Raevskiy, Petr Simanek, and Matej Choma. Weatherfusionnet: Predicting precipitation from satellite data. _arXiv preprint arXiv:2211.16824_, 2022.
* [10] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In _MICCAI_, pages 234-241. Springer, 2015.
* [11] Minseok Seo, Doyi Kim, Seungheon Shin, Eunbin Kim, Sewoong Ahn, and Yeji Choi. Domain generalization strategy to train classifiers robust to spatial-temporal shift. _arXiv preprint arXiv:2212.02968_, 2022.
* [12] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional lstm network: A machine learning approach for precipitation nowcasting. _NIPS_, 28, 2015.
* [13] Casper Kaae Sonderby, Lasse Espeholt, Jonathan Heek, Mostafa Dehghani, Avital Oliver, Tim Salimans, Shreya Agrawal, Jason Hickey, and Nal Kalchbrenner. Metnet: A neural weather model for precipitation forecasting. _arXiv preprint arXiv:2003.12140_, 2020.
* [14] Carole H. Sudre, Wenqi Li, Tom Vercauteren, Sebastien Ourselin, and M. Jorge Cardoso. Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations. In _Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support - Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, Held in Conjunction with MICCAI_, volume 10553 of _Lecture Notes in Computer Science_, pages 240-248. Springer, 2017.
* [15] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In _ICLR_. OpenReview.net, 2018.
Recent advances in deep learning have significantly elevated weather prediction models. However, these models often falter in real-world scenarios due to their sensitivity to spatial-temporal shifts. This issue is particularly acute in weather forecasting, where models are prone to overfit to local and temporal variations, especially when tasked with fine-grained predictions. In this paper, we address these challenges by developing a robust precipitation forecasting model that demonstrates resilience against such spatial-temporal discrepancies. We introduce Temporal Frame Interpolation (TFI), a novel technique that enhances the training dataset by generating synthetic samples through interpolating adjacent frames from satellite imagery and ground radar data, thus improving the model's robustness against frame noise. Moreover, we incorporate a unique Multi-Level Dice (ML-Dice) loss function, leveraging the ordinal nature of rainfall intensities to improve the model's performance. Our approach has led to significant improvements in forecasting precision, culminating in our model securing _1st place_ in the transfer learning leaderboard of the _Weather4cast'23_ competition. This achievement not only underscores the effectiveness of our methodologies but also establishes a new standard for deep learning applications in weather forecasting. Our code and weights have been made public at [https://github.com/Secilia-Cxy/UNetTFI](https://github.com/Secilia-Cxy/UNetTFI).
# Initialization of ice-sheet forecasts viewed as an inverse Robin problem

Robert J. ARTHERN, G. Hilmar GUDMUNDSSON

## Introduction

We consider a new approach to initializing simulations of ice sheets. Our focus is on simulations used for prediction of 21st-century climate and sea level. A close analogy can be drawn between ice-sheet prediction and weather forecasting. For a short-term weather forecast, the initial state of the atmosphere is a critical piece of information and must be derived from observations; the model equations are then integrated numerically to predict how the state will evolve. By contrast, in longer-term simulations of 21st-century climate, the precise specification of the initial state has so far been considered relatively unimportant. This is because only the statistics of future climate are sought, not details of the actual storms and weather events that will occur decades from now. Taking this statistical point of view, errors in the initial state of the atmosphere decorrelate and become unimportant after a few days or weeks. Because ice sheets change more slowly than the atmosphere, predicting their behaviour over the coming century has more in common with short-term weather prediction: small errors in the initial state of an ice sheet could systematically affect a climate forecast throughout the 21st century, perhaps even growing unstably to dominate the predicted sea-level contribution. The ocean circulation in any coupled simulation that includes ice sheets would also be affected. Without a realistic initial configuration, models will not simulate the evolution of the ice sheets accurately, and will give poor predictions of freshwater forcing of ocean circulation and sea-level rise over the coming century. Initializing an ice-sheet model by a period of 'spin-up' over many thousands of years takes little account of the available observations, and is therefore unlikely to represent the current geometry and flow adequately.

The present-day geometry of the ice sheets is known fairly accurately from satellite and airborne remote-sensing methods. The horizontal component of the surface velocity can be estimated from satellite radar interferometry and other methods (e.g. Goldstein and others, 1993). The vertical component of velocity can be inferred from the downslope motion combined with satellite radar altimetry (e.g. Wingham and others, 2006) and observations of the rate of snow accumulation on the ice sheet (e.g. Arthern and others, 2006). Here, we propose a simple algorithm that can make use of these data to initialize an ice-sheet model.

An initialization scheme, based around a linearized shallow-ice model, has been described previously (Arthern and Hindmarsh, 2006). One motivation for this study is to take advantage of better measurements and interpolation methods by improving the description of ice flow. Basal drag caused by sliding is considered, and all components of the stress tensor are retained, including membrane stresses caused by lateral extension, compression and shearing (e.g. Hindmarsh, 2006). Throughout this paper we assume that continuous fields of surface elevation, ice thickness and surface velocity are available at the time of model initialization. Methods for the spatio-temporal interpolation of scattered observations of this type have been developed (e.g. Arthern and Hindmarsh, 2003). In this context, continuous approximations to the required fields can be estimated, albeit subject to some level of measurement and interpolation error.
Despite the available observations, several sources of uncertainty remain in specifying the initial state of the ice sheet. Chief among these is the drag coefficient that determines the slipperiness of the lubricating sediment beneath the ice. This relates the basal stress to the basal velocity. Another source of uncertainty is the relationship between velocity and stresses near the grounding line, where ice becomes afloat as it reaches the ocean: forces imposed by laterally confined, floating ice shelves, or poorly mapped sediment wedges, are not accurately known. A third significant source of uncertainty, despite useful information from laboratory experiments, is the viscosity of the ice sheet. This depends upon the temperature of the ice, and its crystal fabric, and these are not well defined from observations. The temperature can also affect basal drag enormously, especially when the bed makes the transition from melted to frozen, or vice versa.

In this paper, we treat the drag coefficient as an undetermined coefficient in a Robin (or 'third kind') boundary condition (Gustafson and Abe, 1998; Gustafson, 1999). The Robin condition may be viewed as a linear combination of Dirichlet (velocity) and Neumann (stress) boundary conditions; each is related to the other via the drag coefficient. Here we present a new algorithm to solve for the basal drag coefficient, \\(\\beta\\), and the viscosity, \\(\\mu\\). More generally, the approach solves for any spatially variable Robin coefficient, and could perhaps be used to parameterize stresses near the grounding line. The algorithm can be viewed as a generalization of the inverse method proposed by MacAyeal (1992, 1993), which has been applied successfully to many ice streams and ice shelves. However, the new algorithm should address a wider class of problems than either the shallow-ice or other asymptotically approximated models (e.g. Muszynski and Birchfield, 1987; MacAyeal, 1989), including the initialization of higher-order models of ice flow (e.g. Pattyn and others, 2008), and the full system of Stokes equations.

A practical advantage of the new approach is that it can be implemented numerically using only the forward solver of the Stokes equations, with no need to develop a separate adjoint model. The only requirement placed upon the solver is that boundary conditions of Dirichlet, Neumann and Robin types (i.e. fixed velocity, fixed stress, and linearly related stress and velocity) can be implemented in the momentum-conservation equation. In this respect it is similar to a recent algorithm proposed by Maxwell and others (2008), which has been applied to a transverse cross-section of a glacier. Our algorithm differs from that of Maxwell and others (2008) in several respects. The problem is viewed as an 'inverse Robin problem' (Chaabane and Jaoua, 1999). This is translated into a variational problem through the use of a particular form of cost function, first introduced by Kohn and Vogelius (1984). We provide an explicit expression for the gradient of this cost function, and show that this gradient can be calculated efficiently. An analogous problem, in the context of electric impedance tomography, has been studied by Chaabane and Jaoua (1999) for the Laplace equation. The analysis and notation used in this paper closely follow theirs. Here, we formulate an inverse Robin problem for Stokes flow as commonly encountered in glaciological applications. We also describe an algorithm for the numerical solution of the problem.
As a proof of concept, we describe synthetic inversion of basal properties: first for an illustrative one-dimensional (1-D) example; second for a more realistic, three-dimensional (3-D) example, with nonlinear ice and till rheology. In the latter example we use a commercially available solver for the Stokes equations.

## Theory

Assume the ice sheet occupies a volume, \\(\\Omega\\), of known shape, as shown in Figure 1, bounded by a surface, \\(\\Gamma_{\\rm s}\\), where we have observations, and a surface, \\(\\Gamma_{\\rm b}\\), where we wish to solve for the coefficient that linearly relates stresses on the boundary to velocities there, i.e. the parameter in a Robin-type boundary condition. Using Cartesian coordinates, \\((x,\\,y,\\,z)\\), denote the outward normal vector on both surfaces by \\(\\hat{\\mathbf{n}}\\), the material velocity by \\(\\mathbf{v}\\), the strain-rate tensor by \\(\\dot{\\boldsymbol{\\varepsilon}}\\equiv\\frac{1}{2}\\left[\\left(\\nabla\\mathbf{v}\\right)+\\left(\\nabla\\mathbf{v}\\right)^{\\ast}\\right]\\), the stress tensor by \\(\\boldsymbol{\\sigma}\\), and the pressure by \\(p\\equiv-{\\rm Trace}(\\boldsymbol{\\sigma})\\,/3\\). For now, assume an incompressible material, so that \\(\\nabla\\cdot\\mathbf{v}=0\\), and a linear viscous rheology, so the deviatoric stress is given by \\(\\boldsymbol{\\sigma}+p\\mathbf{I}=2\\mu\\dot{\\boldsymbol{\\varepsilon}}\\), with identity \\(\\mathbf{I}\\), and viscosity \\(\\mu\\). Further assuming a constant density, \\(\\rho\\), and gravitational field, \\(\\mathbf{g}\\), the momentum-conservation equation corresponding to force balance can be written \\(\\nabla\\cdot\\boldsymbol{\\sigma}+\\rho\\mathbf{g}=0\\).

To begin with, consider a simplified problem, in which the force per unit area on the boundary, \\(\\Gamma_{\\rm b}\\), acts in the opposite direction to the velocity, and in proportion to it, so that \\(\\boldsymbol{\\sigma}\\cdot\\hat{\\mathbf{n}}+\\beta\\mathbf{v}=0\\), with \\(\\beta>0\\), a scalar that may vary spatially; this relationship defines the Robin boundary condition. Later we can adapt this to the more realistic case of sliding past an impenetrable boundary, \\(\\Gamma_{\\rm i}\\), that supports normal and tangential stresses, such as the base of the ice sheet, but for now we ignore the distinction between \\(\\Gamma_{\\rm i}\\) and the rest of \\(\\Gamma_{\\rm b}\\). For the simplified problem the initialization consists of selecting the viscosity, \\(\\mu\\), in the interior of the domain, and the Robin parameter, \\(\\beta\\), on the boundary. The inverse Robin problem (\\(\\mathcal{IP}\\)) can be stated as follows:

\\[(\\mathcal{IP})\\left\\{\\begin{array}{l}\\mbox{Given the prescribed boundary stress, }\\boldsymbol{\\tau}\\mbox{, and velocity}\\\\ \\mbox{measurements, }\\mathbf{f}\\mbox{, on }\\Gamma_{\\rm s}\\mbox{, find functions }\\beta\\mbox{ on }\\Gamma_{\\rm b}\\mbox{,}\\\\ \\mbox{and }\\mu\\mbox{ in }\\Omega\\mbox{, such that the solution, }[\\mathbf{v},\\dot{\\boldsymbol{\\varepsilon}},\\boldsymbol{\\sigma},p]\\mbox{,}\\\\ \\mbox{of the Neumann problem }(\\mathcal{NP})\\mbox{, specified below,}\\\\ \\mbox{also satisfies }\\mathbf{v}=\\mathbf{f}\\mbox{ on }\\Gamma_{\\rm s}\\mbox{,}\\end{array}\\right. \\tag{1}\\]

where the Neumann problem, forced by the prescribed surface stress, \\(\\boldsymbol{\\tau}\\), is

\\[(\\mathcal{NP})\\left\\{\\begin{array}{rcll}\\frac{1}{2}\\left[\\left(\\nabla\\mathbf{v}\\right)+\\left(\\nabla\\mathbf{v}\\right)^{\\ast}\\right]&=&\\dot{\\boldsymbol{\\varepsilon}}&\\mbox{in }\\Omega,\\\\ \\boldsymbol{\\sigma}+p\\mathbf{I}&=&2\\mu\\dot{\\boldsymbol{\\varepsilon}}&\\mbox{in }\\Omega,\\\\ {\\rm Trace}(\\boldsymbol{\\sigma})+3p\\equiv\\nabla\\cdot\\mathbf{v}&=&0&\\mbox{in }\\Omega,\\\\ \\nabla\\cdot\\boldsymbol{\\sigma}+\\rho\\mathbf{g}&=&0&\\mbox{in }\\Omega,\\\\ \\boldsymbol{\\sigma}\\cdot\\hat{\\mathbf{n}}&=&\\boldsymbol{\\tau}&\\mbox{on }\\Gamma_{\\rm s},\\\\ \\boldsymbol{\\sigma}\\cdot\\hat{\\mathbf{n}}+\\beta\\mathbf{v}&=&0&\\mbox{on }\\Gamma_{\\rm b}.\\end{array}\\right.\\]
Figure 1: Geometric depiction of the problem. The ice volume, \\(\\Omega\\), has known shape. Measurements of the surface velocity and the stress are available on \\(\\Gamma_{\\rm s}\\) (red dotted). On the rest of the boundary, \\(\\Gamma_{\\rm b}\\) (blue dashed), including the impenetrable part, \\(\\Gamma_{\\rm i}\\), stresses are defined by a Robin-type boundary condition, with unknown Robin coefficient, \\(\\beta\\). The viscosity, \\(\\mu\\), within \\(\\Omega\\) is also unknown. We seek solutions for \\(\\beta\\) on \\(\\Gamma_{\\rm b}\\), and \\(\\mu\\) within \\(\\Omega\\), so that velocities and stresses on \\(\\Gamma_{\\rm s}\\) are consistent with the observational data.

We assume the availability of an ice-sheet model, optimized for the efficient solution of the Neumann problem (\\(\\mathcal{NP}\\)), and that this model can also be used to solve a Dirichlet problem (\\(\\mathcal{DP}\\)), forced by the surface velocity measurements, \\(\\mathbf{f}\\):

\\[(\\mathcal{DP})\\left\\{\\begin{array}{rcll}\\frac{1}{2}\\left[\\left(\\nabla\\mathbf{v}\\right)+\\left(\\nabla\\mathbf{v}\\right)^{\\ast}\\right]&=&\\dot{\\boldsymbol{\\varepsilon}}&\\mbox{in }\\Omega,\\\\ \\boldsymbol{\\sigma}+p\\mathbf{I}&=&2\\mu\\dot{\\boldsymbol{\\varepsilon}}&\\mbox{in }\\Omega,\\\\ {\\rm Trace}(\\boldsymbol{\\sigma})+3p\\equiv\\nabla\\cdot\\mathbf{v}&=&0&\\mbox{in }\\Omega,\\\\ \\nabla\\cdot\\boldsymbol{\\sigma}+\\rho\\mathbf{g}&=&0&\\mbox{in }\\Omega,\\\\ \\mathbf{v}&=&\\mathbf{f}&\\mbox{on }\\Gamma_{\\rm s},\\\\ \\boldsymbol{\\sigma}\\cdot\\hat{\\mathbf{n}}+\\beta\\mathbf{v}&=&0&\\mbox{on }\\Gamma_{\\rm b}.\\end{array}\\right. \\tag{2}\\]
Writing the solutions of the Dirichlet and Neumann problems \\(\\mathcal{DP}\\) and \\(\\mathcal{NP}\\) as \\([\\mathbf{v}^{\\rm D},\\dot{\\boldsymbol{\\varepsilon}}^{\\rm D},\\boldsymbol{\\sigma}^{\\rm D},p^{\\rm D}]\\) and \\([\\mathbf{v}^{\\rm N},\\dot{\\boldsymbol{\\varepsilon}}^{\\rm N},\\boldsymbol{\\sigma}^{\\rm N},p^{\\rm N}]\\), respectively, the Appendix shows that the inverse problem (\\(\\mathcal{IP}\\)) can be solved by minimizing the Kohn and Vogelius (1984) cost function

\\[J(\\mu,\\beta)=\\int_{\\Omega}2\\mu\\left|\\dot{\\boldsymbol{\\varepsilon}}^{\\rm N}-\\dot{\\boldsymbol{\\varepsilon}}^{\\rm D}\\right|_{\\rm F}^{2}+\\int_{\\Gamma_{\\rm b}}\\beta\\left|\\mathbf{v}^{\\rm N}-\\mathbf{v}^{\\rm D}\\right|_{\\rm F}^{2}, \\tag{3}\\]

in which the subscript F denotes the Frobenius norm, as defined in the Appendix. Furthermore, the gradient of this cost function, with respect to the parameters \\(\\mu\\) and \\(\\beta\\), can be expressed as a Gateaux, or directional, derivative (e.g. Zorn, 1946), defined by \\({\\rm d}J\\equiv{\\rm d}_{\\mu}J+{\\rm d}_{\\beta}J\\), with

\\[\\begin{array}{rcl}{\\rm d}_{\\mu}J(\\mu,\\beta;\\mu^{\\prime})&\\equiv&\\lim_{\\varepsilon\\to 0}\\,\\frac{J(\\mu+\\varepsilon\\mu^{\\prime},\\beta)-J(\\mu,\\beta)}{\\varepsilon}=\\int_{\\Omega}2\\mu^{\\prime}\\left(\\left|\\dot{\\boldsymbol{\\varepsilon}}^{\\rm D}\\right|_{\\rm F}^{2}-\\left|\\dot{\\boldsymbol{\\varepsilon}}^{\\rm N}\\right|_{\\rm F}^{2}\\right);\\\\ {\\rm d}_{\\beta}J(\\mu,\\beta;\\beta^{\\prime})&\\equiv&\\lim_{\\varepsilon\\to 0}\\,\\frac{J(\\mu,\\beta+\\varepsilon\\beta^{\\prime})-J(\\mu,\\beta)}{\\varepsilon}=\\int_{\\Gamma_{\\rm b}}\\beta^{\\prime}\\left(\\left|\\mathbf{v}^{\\rm D}\\right|_{\\rm F}^{2}-\\left|\\mathbf{v}^{\\rm N}\\right|_{\\rm F}^{2}\\right),\\end{array} \\tag{4}\\]

where \\(\\varepsilon\\) is a small parameter that tends to zero in the limit. These expressions suggest a steepest-descent scheme, Algorithm 1, in which each iteration solves the Neumann and Dirichlet problems and then updates the parameters in the downhill direction of \\(J\\):

Algorithm 1.
1. Set \\(n=0\\) and choose initial estimates \\(\\mu_{0}\\) and \\(\\beta_{0}\\).
2. Solve the Neumann problem (\\(\\mathcal{NP}\\)).
3. Solve the Dirichlet problem (\\(\\mathcal{DP}\\)).
4. Evaluate the cost function, \\(J\\), from Equation (3), and stop if converged.
5. Update the viscosity: \\(\\mu_{n+1}\\leftarrow\\mu_{n}+\\alpha_{\\mu}\\left(|\\dot{\\boldsymbol{\\varepsilon}}^{\\rm N}|_{\\rm F}^{2}-|\\dot{\\boldsymbol{\\varepsilon}}^{\\rm D}|_{\\rm F}^{2}\\right)\\).
6. Update the drag coefficient: \\(\\beta_{n+1}\\leftarrow\\beta_{n}+\\alpha_{\\beta}\\left(|\\mathbf{v}^{\\rm N}|_{\\rm F}^{2}-|\\mathbf{v}^{\\rm D}|_{\\rm F}^{2}\\right)\\).
7. Increment \\(n\\) and return to Step 2.

Here \\(\\alpha_{\\mu}>0\\) and \\(\\alpha_{\\beta}>0\\) are descent parameters controlling the step size.

A more realistic case is that some part of the boundary, \\(\\Gamma_{\\rm b}\\), is impenetrable, such as the base of the ice sheet. On the impenetrable part, \\(\\Gamma_{\\rm i}\\), conditions of zero normal velocity, and tangential shear stress proportional to the tangential velocity, can be applied:

\\[\\mathbf{v}\\cdot\\hat{\\mathbf{n}}=0,\\qquad\\mathbf{T}\\left(\\boldsymbol{\\sigma}\\cdot\\hat{\\mathbf{n}}\\right)+\\beta\\,\\mathbf{T}\\mathbf{v}=0\\quad\\mbox{on }\\Gamma_{\\rm i},\\]

where \\(\\mathbf{T}\\equiv\\mathbf{I}-\\hat{\\mathbf{n}}\\hat{\\mathbf{n}}^{\\ast}\\) projects vectors onto the plane tangential to the boundary; the gradient with respect to \\(\\beta\\) is then evaluated using the basal velocities on \\(\\Gamma_{\\rm i}\\).

For the illustrative 1-D example, the cost function, \\(J\\), can be mapped over a range of nondimensional \\(\\mu\\) and \\(\\beta\\) (Fig. 2). In fact, this example can be integrated directly, to reveal that there is no unique solution for nondimensional \\(\\mu\\) and \\(\\beta\\) which solves the inverse problem (\\(\\mathcal{IP}\\)), but rather a locus of solutions defined by \\(\\beta=2\\mu/(2\\mu-1)\\). This locus is shown as a white dashed curve in Figure 2. As anticipated from the theory, it corresponds closely to the minimum of the cost function, and to the solutions converged upon by Algorithm 1.

Figure 2 illustrates that to solve for the drag coefficient, \\(\\beta\\), prior information about the viscosity, \\(\\mu\\), is required. Similarly, to solve for \\(\\mu\\) requires prior information about \\(\\beta\\). Nevertheless, if \\(\\mu\\) (or \\(\\beta\\)) is already known, perhaps from laboratory experiments, Algorithm 1 can be used to determine the other, by setting the initial estimate \\(\\mu_{0}\\) (or \\(\\beta_{0}\\)) to the known value, and the corresponding descent parameter \\(\\alpha_{\\mu}\\) (or \\(\\alpha_{\\beta}\\)) to zero: examples are shown in Figure 2. A more realistic test of the algorithm is shown in Figure 3. This shows a synthetic inversion in a 3-D setting, with nonlinear viscosities for both ice and basal sediment (till).
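Before describing that test, we note how little machinery Algorithm 1 requires beyond the forward model. The following Python sketch makes this concrete; the callables `solve_neumann` and `solve_dirichlet` are hypothetical stand-ins for the forward Stokes solver configured with Neumann and Dirichlet surface conditions, and the step sizes and iteration count are illustrative only, not values used in this paper.

```python
import numpy as np

def invert_robin(solve_neumann, solve_dirichlet, mu0, beta0,
                 alpha_mu=1e-3, alpha_beta=1e-3, n_iter=50):
    """Steepest-descent sketch of Algorithm 1.

    solve_neumann(mu, beta)   -> (eps2_N, v2_N): squared Frobenius norm of
        the strain rate at interior nodes, and squared basal speed, for (NP).
    solve_dirichlet(mu, beta) -> (eps2_D, v2_D): the same fields for (DP).
    All fields are NumPy arrays defined on the model mesh.
    """
    mu, beta = np.array(mu0, float), np.array(beta0, float)
    for _ in range(n_iter):
        eps2_n, v2_n = solve_neumann(mu, beta)    # Step 2
        eps2_d, v2_d = solve_dirichlet(mu, beta)  # Step 3
        # Descent directions from Equation (4): raise mu where the Neumann
        # solution deforms faster than the Dirichlet one, and raise beta
        # where the Neumann solution slides faster.
        mu = mu + alpha_mu * (eps2_n - eps2_d)    # Step 5
        beta = beta + alpha_beta * (v2_n - v2_d)  # Step 6
        mu = np.maximum(mu, 1e-12)    # keep the viscosity positive
        beta = np.maximum(beta, 0.0)  # keep the Robin coefficient non-negative
    return mu, beta
```

Setting `alpha_mu=0` (or `alpha_beta=0`) recovers the case discussed above, in which the viscosity (or drag coefficient) is held fixed at a known value while the other parameter is sought.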
Synthetic data were generated using a commercial Stokes solver to simulate flow down an ice stream \\(100\\,\\mathrm{km}\\) long, \\(30\\,\\mathrm{km}\\) wide and \\(1.6\\,\\mathrm{km}\\) thick, over an isolated patch of low-viscosity till (Fig. 3a). The slope was \\(0.01\\), with elevation decreasing in the \\(x\\) direction, driving flow from left to right in Figure 3. Ice rheology was prescribed by a nonlinear Glen flow law with rate factor \\(A\\), and basal sliding by a nonlinear sliding law with rate coefficient \\(C\\), to which a Gaussian perturbation was added. This perturbation, centred on \\(x=50\\,\\mathrm{km}\\), \\(y=15\\,\\mathrm{km}\\), had amplitude equal to the background value (\\(5.3\\times 10^{-7}\\,\\mathrm{m}\\,\\mathrm{d}^{-1}\\,\\mathrm{kPa}^{-3}\\)), so that \\(C\\) is double the background value at its peak, with an e-fold decay radius of \\(5\\,\\mathrm{km}\\). The rate coefficient, \\(C\\), was then recovered by applying Algorithm 1, using only those data available at the surface. In this experiment, \\(A\\) is assumed known, and no attempt is made to recover a spatially varying viscosity in the interior. The initial guess for \\(C\\) was spatially uniform at the unperturbed background value.

The rate coefficient at each location was updated iteratively, from \\(C_{n}\\) to \\(C_{n+1}\\), by adding a contribution in the direction that reduces the cost function. Because an increase in \\(C\\) in effect corresponds to a reduction in \\(\\beta\\), a change of sign was made to Step 6 of Algorithm 1, replacing it as follows: \\(C_{n+1}\\leftarrow C_{n}+\\alpha_{C}\\left(|\\mathbf{v}_{\\rm b}^{\\rm D}|_{\\rm F}^{2}-|\\mathbf{v}_{\\rm b}^{\\rm N}|_{\\rm F}^{2}\\right)\\), with \\(\\alpha_{C}>0\\). The basal velocity vectors, \\(\\mathbf{v}_{\\rm b}^{\\rm D}\\) and \\(\\mathbf{v}_{\\rm b}^{\\rm N}\\), were from the Dirichlet and Neumann computations, respectively, and their Frobenius norms are simply their magnitudes. If the step-size parameter, \\(\\alpha_{C}\\), is too large, there is a risk of overshooting the minimum, even if the search direction points locally downhill. We set \\(\\alpha_{C}=10^{-7}\\,\\mathrm{m}^{-1}\\,\\mathrm{d}\\,\\mathrm{kPa}^{-3}\\), but if a step failed to reduce the cost function, \\(\\alpha_{C}\\) was halved and another trial step taken. If this second step failed, the value of \\(\\alpha_{C}\\) minimizing \\(J\\) in the current search direction was estimated from a quadratic fit, and \\(C_{n+1}\\) updated using this value. The extension of our theory to the nonlinear sliding law is heuristic, since we have not differentiated the cost function appropriate to the nonlinear case. Despite this, it works well in practice.

Results of the inversion are shown in Figure 3b. The close match to Figure 3a demonstrates the successful convergence of the algorithm, even for a 3-D problem with nonlinear rheologies of ice and till. We investigated the effect of measurement errors upon the algorithm by perturbing the synthetic surface observations with randomly generated noise, before applying the inversion algorithm. Comparing Figure 3a with Figure 3c shows that the recovered rate coefficients suffer some degradation when measurement errors are present. Nevertheless, the inversion is still able to detect the presence of the low-viscosity sediment. Comparison of Figure 3c with Figure 3d shows that broad-scale features are recovered first. As iterations are continued, the inversion produces sharper features with higher amplitude. Figure 3d shows that this can eventually produce spurious features, by overfitting to measurement errors.
To avoid overfitting, a stopping criterion is needed. For a similar problem Maxwell and others (2008) propose a heuristic criterion based upon the mismatch between Neumann and observed velocities at the surface. Our calculations provide support for this approach. Figure 4 shows the root-mean-square (rms) mismatch between the surface velocity data, \\(\\mathbf{f}\\), and the surface velocities computed from the Neumann problem (\\(\\mathcal{NP}\\)). For the noise-free test case, the mismatch decreases iteration by iteration. However, when noise is added, the mismatch stagnates at a level consistent with the rms noise level (dashed horizontal line). Thus, a conservative estimate of the observational error on \\(\\mathbf{f}\\) can serve to define a stopping criterion. As an example, the first iteration to reach the rms noise level is plotted in Figure 3c.

Fig. 4: Convergence for synthetic inversions shown in Figure 3. The plot shows the rms mismatch (\\(\\mathrm{m}\\,\\mathrm{d}^{-1}\\)) between Neumann and observed surface velocities. Without measurement errors, convergence continues throughout. When measurement errors are introduced, by adding noise to the synthetic data, the convergence stagnates and a stopping criterion is needed. The horizontal dashed line shows the rms of the added noise. Large symbols show the iterations plotted in Figure 3.

## Discussion and Conclusions

For the purpose of initializing ice-sheet forecasts, we have described a simple algorithm to invert for the basal drag coefficient and ice viscosity. If Algorithm 1 is used to estimate the basal drag coefficient and/or refine the ice viscosity, the ice-sheet forecast will begin in a state of force balance, with present-day geometry and surface velocities. This is preferable to a state beginning with present geometry, but with unrealistic viscosity or drag coefficient, since this would produce an 'initialization shock' at the beginning of the forecast and a spurious contribution to the sea-level forecast. Use of Algorithm 1 is also preferable to selecting an initial state for a 21st-century simulation simply by 'spinning up' the ice-sheet model over many thousands of years. An ice-sheet model may have more than a million variables describing its geometry and flow. Without the constraint of observations, there is slim chance of obtaining initial conditions close enough to the present-day values to be useful for prediction.

A practical advantage of Algorithm 1 is its ease of implementation. It only uses the forward solver of the Stokes equations, so there is no requirement to develop and test a separate adjoint model. This also means that Algorithm 1 benefits automatically from any parallelization, adaptive grid refinement or asymptotic approximations employed by the forward model. Conveniently, the terms in Equation (4) are proportional to deformational and frictional heating rates, which are routinely evaluated in many ice-sheet models.

This preliminary investigation does not present an exhaustive test of the ability of Algorithm 1 to recover viscosity and basal drag coefficient from realistic glaciological data. In particular, we have assumed that continuous fields of surface elevation, surface velocity and thickness are available. For some applications, error-prone surface velocity data could be incompatible with incompressibility, perhaps leading the Dirichlet problem to be ill-posed unless this constraint is enforced.
If the geometry of the bed is poorly known, it would be preferable to solve for bed elevation and drag coefficient simultaneously (Gudmundsson and Raymond, 2008; Raymond and Gudmundsson, 2009). This would require modification of our algorithm, and it may be necessary to take account of prior knowledge of the spatial statistics of bed elevation and its uncertainty (Raymond and Gudmundsson, 2009). To speed convergence, a more sophisticated algorithm, such as conjugate gradients, could be used to minimize the cost function. Deriving the precise derivative of the cost function appropriate to the nonlinear sliding law may also improve performance. The sensitivity to the stopping criterion suggests that it would be helpful to optimize this aspect further. Chaabane and Jaoua (1999) present a more complete theoretical analysis for the Laplace equation. It would be worth considering which of their results transfer to the glaciological situation considered here. The application to a vector velocity field represents a qualitative difference from the scalar Laplace equation. In particular, further theoretical and numerical work is needed to establish the resolving power of the method, the sensitivity to measurement errors and whether additional regularization is needed. Nevertheless, we hope the inverse Robin problem specified above, the descent algorithm and the preliminary results will motivate further numerical and theoretical calculations of this kind, so that simulations of 21st-century climate can be undertaken with properly initialized ice-sheet models.

## Acknowledgements

We are grateful to two anonymous reviewers who each suggested a number of improvements. Funding was provided by the Natural Environment Research Council.

## References

* Arthern and Hindmarsh (2003) Arthern, R.J. and R.C.A. Hindmarsh. 2003. Optimal estimation of changes in the mass of ice sheets. _J. Geophys. Res._, **108**(F1), 6007. (10.1029/2003JF000021.)
* Arthern and Hindmarsh (2006) Arthern, R.J. and R.C.A. Hindmarsh. 2006. Determining the contribution of Antarctica to sea-level rise using data assimilation methods. _Philos. Trans. R. Soc. London, Ser. A_, **364**(1844), 1841-1865.
* Arthern et al. (2006) Arthern, R.J., D.P. Winebrenner and D.G. Vaughan. 2006. Antarctic snow accumulation mapped using polarization of 4.3 cm wavelength microwave emission. _J. Geophys. Res._, **111**(D6), D06107. (10.1029/2004JD005667.)
* Chaabane and Jaoua (1999) Chaabane, S. and M. Jaoua. 1999. Identification of Robin coefficients by the means of boundary measurements. _Inverse Probl._, **15**(6), 1425-1438.
* Goldstein et al. (1993) Goldstein, R.M., H. Engelhardt, B. Kamb and R.M. Frolich. 1993. Satellite radar interferometry for monitoring ice sheet motion: application to an Antarctic ice stream. _Science_, **262**(5139), 1525-1530.
* Gudmundsson and Raymond (2008) Gudmundsson, G.H. and M. Raymond. 2008. On the limit to resolution and information on basal properties obtainable from surface data on ice streams. _Cryosphere_, **2**(1), 167-178.
* Gustafson (1999) Gustafson, K.E. 1999. _Introduction to partial differential equations and Hilbert space methods. Third edition._ New York, Dover Publications.
* Gustafson and Abe (1998) Gustafson, K.E. and T. Abe. 1998. The third boundary condition - was it Robin's? _Math. Intell._, **20**(1), 63-71.
* Hindmarsh (2006) Hindmarsh, R.C.A. 2006. The role of membrane-like stresses in determining the stability and sensitivity of the Antarctic Ice Sheets: back pressure and grounding line motion. _Philos. Trans. R. Soc. London, Ser. A_, **364**(1844), 1733-1767.
* Kierzenka and Shampine (2001) Kierzenka, J. and L.F. Shampine. 2001. A BVP solver based on residual control and the MATLAB PSE. _ACM Trans. Math. Softw._, **27**(3), 299-316.
* Kohn and Vogelius (1984) Kohn, R.V. and M. Vogelius. 1984. Determining conductivity by boundary measurements. _Commun. Pure Appl. Math._, **37**(3), 289-298.
* MacAyeal (1989) MacAyeal, D.R. 1989. Large-scale ice flow over a viscous basal sediment: theory and application to Ice Stream B, Antarctica. _J. Geophys. Res._, **94**(B4), 4071-4087.
* MacAyeal (1992) MacAyeal, D.R. 1992. The basal stress distribution of Ice Stream E, Antarctica, inferred by control methods. _J. Geophys. Res._, **97**(B1), 595-603.
* MacAyeal (1993) MacAyeal, D.R. 1993. A tutorial on the use of control methods in ice-sheet modeling. _J. Glaciol._, **39**(131), 91-98.
* Maxwell et al. (2008) Maxwell, D., M. Truffer, S. Avdonin and M. Stuefer. 2008. An iterative scheme for determining glacier velocities and stresses. _J. Glaciol._, **54**(188), 888-898.
* Muszynski and Birchfield (1987) Muszynski, I. and G.E. Birchfield. 1987. A coupled marine ice-stream-ice-shelf model. _J. Glaciol._, **33**(113), 3-15.
* Pattyn and others (2008) Pattyn, F. and others. 2008. Benchmark experiments for higher-order and full-Stokes ice sheet models (ISMIP-HOM). _Cryosphere_, **2**(1), 95-108.
* Raymond and Gudmundsson (2009) Raymond, M.J. and G.H. Gudmundsson. 2009. Estimating basal properties of ice streams from surface measurements: a non-linear Bayesian inverse approach applied to synthetic data. _Cryosphere_, **3**(2), 265-278.
* Wingham et al. (2006) Wingham, D.J., A. Shepherd, A. Muir and G.J. Marshall. 2006. Mass balance of the Antarctic ice sheet. _Philos. Trans. R. Soc. London, Ser. A_, **364**(1844), 1627-1635.
* Zorn (1946) Zorn, M.A. 1946. Derivatives and Fréchet differentials. _Bull. Am. Math. Soc._, **52**(2), 133-137.

## Appendix

This appendix relates the inverse problem (\\(\\mathcal{IP}\\)) to the variational problem of minimizing the Kohn and Vogelius (1984) cost function. To identify the minimum of the cost function we write it as an integral over \\(\\Gamma_{\\mathrm{s}}\\), the surface of the ice sheet, perturb this expression, and apply the fundamental lemma of the calculus of variations. The resulting stationarity conditions solve the inverse problem (\\(\\mathcal{IP}\\)). Finally, we outline the steps used to derive Equation (4), the gradient of the cost function with respect to small changes in viscosity and basal drag. Velocities, stresses and parameters are implicitly assumed continuous and differentiable throughout. To simplify notation, define the Frobenius product \\(\\dot{\\boldsymbol{\\epsilon}}:\\boldsymbol{\\sigma}\\equiv\\mathrm{Trace}\\left(\\dot{\\boldsymbol{\\epsilon}}^{*}\\boldsymbol{\\sigma}\\right)\\), and the Frobenius norm \\(|\\boldsymbol{\\sigma}|_{\\mathrm{F}}^{2}\\equiv\\boldsymbol{\\sigma}:\\boldsymbol{\\sigma}\\). Generalized superscripts \\((\\mathrm{A},\\mathrm{B})\\in(\\mathrm{D},\\mathrm{N},\\mathrm{D}^{\\prime},\\mathrm{N}^{\\prime})\\) denote solutions of either the Neumann (N) or Dirichlet (D) problems, or their first-order perturbations (\\(\\mathrm{N}^{\\prime}\\) and \\(\\mathrm{D}^{\\prime}\\), respectively, defined later in this appendix).
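As a quick numerical illustration of this notation (an added example, not part of the original appendix), the Frobenius product and norm can be checked in a few lines; for real-valued tensors the conjugate transpose reduces to the transpose:

```python
import numpy as np

# Frobenius product  e : s = Trace(e^T s), and norm |s|_F^2 = s : s.
def frob_product(e, s):
    return np.trace(e.T @ s)

def frob_norm_sq(s):
    return frob_product(s, s)

rng = np.random.default_rng(0)
e, s = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# The product equals the elementwise sum of e_ij * s_ij ...
assert np.isclose(frob_product(e, s), np.sum(e * s))
# ... and the norm agrees with NumPy's built-in Frobenius norm.
assert np.isclose(frob_norm_sq(s), np.linalg.norm(s, 'fro')**2)
```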
Here are two useful identities: the first, from Leibniz's product rule for differentiation, is
\\[\\nabla\\cdot\\left(\\mathbf{v}^{\\mathrm{A}}\\cdot\\boldsymbol{\\sigma}^{\\mathrm{B}}\\right)=\\mathbf{v}^{\\mathrm{A}}\\cdot\\left(\\nabla\\cdot\\boldsymbol{\\sigma}^{\\mathrm{B}}\\right)+\\dot{\\boldsymbol{\\epsilon}}^{\\mathrm{A}}:\\boldsymbol{\\sigma}^{\\mathrm{B}}; \\tag{A1}\\]
the second, from the divergence theorem applied to the volume, \\(\\Omega\\), is
\\[\\int_{\\Omega}\\nabla\\cdot\\left(\\mathbf{v}^{\\mathrm{A}}\\cdot\\boldsymbol{\\sigma}^{\\mathrm{B}}\\right)=\\int_{\\Gamma_{\\mathrm{s}}+\\Gamma_{\\mathrm{b}}}\\mathbf{v}^{\\mathrm{A}}\\cdot\\boldsymbol{\\sigma}^{\\mathrm{B}}\\cdot\\hat{\\mathbf{n}}. \\tag{A2}\\]
Equations (A1) and (A2), together with substitutions from the Neumann and Dirichlet problems (\\(\\mathcal{NP}\\) and \\(\\mathcal{DP}\\)), allow the cost function, \\(J\\), to be written as a surface integral:
\\[\\begin{split}J(\\mu,\\beta)&=\\int_{\\Omega}2\\mu\\left|\\dot{\\boldsymbol{\\epsilon}}^{\\mathrm{N}}-\\dot{\\boldsymbol{\\epsilon}}^{\\mathrm{D}}\\right|^{2}_{\\mathrm{F}}+\\int_{\\Gamma_{\\mathrm{b}}}\\beta\\left|\\mathbf{v}^{\\mathrm{N}}-\\mathbf{v}^{\\mathrm{D}}\\right|^{2}_{\\mathrm{F}}\\\\ &=\\int_{\\Omega}\\nabla\\cdot\\left[\\left(\\mathbf{v}^{\\mathrm{N}}-\\mathbf{v}^{\\mathrm{D}}\\right)\\cdot 2\\mu\\left(\\dot{\\boldsymbol{\\epsilon}}^{\\mathrm{N}}-\\dot{\\boldsymbol{\\epsilon}}^{\\mathrm{D}}\\right)\\right]-\\int_{\\Omega}\\left(\\mathbf{v}^{\\mathrm{N}}-\\mathbf{v}^{\\mathrm{D}}\\right)\\cdot\\nabla\\cdot\\left(\\boldsymbol{\\sigma}^{\\mathrm{N}}+p^{\\mathrm{N}}\\mathbf{I}-\\boldsymbol{\\sigma}^{\\mathrm{D}}-p^{\\mathrm{D}}\\mathbf{I}\\right)+\\int_{\\Gamma_{\\mathrm{b}}}\\beta\\left|\\mathbf{v}^{\\mathrm{N}}-\\mathbf{v}^{\\mathrm{D}}\\right|^{2}_{\\mathrm{F}}\\\\ &=\\int_{\\Gamma_{\\mathrm{s}}}\\left(\\mathbf{v}^{\\mathrm{N}}-\\mathbf{v}^{\\mathrm{D}}\\right)\\cdot\\left(\\boldsymbol{\\sigma}^{\\mathrm{N}}-\\boldsymbol{\\sigma}^{\\mathrm{D}}\\right)\\cdot\\hat{\\mathbf{n}}+\\int_{\\Gamma_{\\mathrm{b}}}\\left(\\mathbf{v}^{\\mathrm{N}}-\\mathbf{v}^{\\mathrm{D}}\\right)\\cdot\\left(\\boldsymbol{\\sigma}^{\\mathrm{N}}-\\boldsymbol{\\sigma}^{\\mathrm{D}}\\right)\\cdot\\hat{\\mathbf{n}}+\\int_{\\Gamma_{\\mathrm{b}}}\\beta\\left|\\mathbf{v}^{\\mathrm{N}}-\\mathbf{v}^{\\mathrm{D}}\\right|^{2}_{\\mathrm{F}}\\\\ &=\\int_{\\Gamma_{\\mathrm{s}}}\\left(\\mathbf{v}^{\\mathrm{N}}-\\mathbf{v}^{\\mathrm{D}}\\right)\\cdot\\left(\\boldsymbol{\\sigma}^{\\mathrm{N}}-\\boldsymbol{\\sigma}^{\\mathrm{D}}\\right)\\cdot\\hat{\\mathbf{n}},\\end{split} \\tag{A3}\\]
where the two basal integrals cancel upon substituting the basal boundary condition relating traction to sliding velocity. Small perturbations, \\(\\mu^{\\prime}\\) and \\(\\beta^{\\prime}\\), to \\(\\mu\\) and \\(\\beta\\) will affect the solutions \\(\\mathbf{v}^{\\mathrm{N}}\\) and \\(\\mathbf{v}^{\\mathrm{D}}\\) by small amounts \\(\\mathbf{v}^{\\mathrm{N}^{\\prime}}\\) and \\(\\mathbf{v}^{\\mathrm{D}^{\\prime}}\\). These changes will perturb the cost function, \\(J\\), by an amount \\(\\mathrm{d}J\\).
To first order:
\\[\\begin{split}\\mathrm{d}J=\\mathrm{d}_{\\mu}J+\\mathrm{d}_{\\beta}J&=\\int_{\\Gamma_{\\mathrm{s}}}\\left(\\mathbf{v}^{\\mathrm{N}^{\\prime}}-\\mathbf{v}^{\\mathrm{D}^{\\prime}}\\right)\\cdot\\left(\\boldsymbol{\\sigma}^{\\mathrm{N}}-\\boldsymbol{\\sigma}^{\\mathrm{D}}\\right)\\cdot\\hat{\\mathbf{n}}+\\int_{\\Gamma_{\\mathrm{s}}}\\left(\\mathbf{v}^{\\mathrm{N}}-\\mathbf{v}^{\\mathrm{D}}\\right)\\cdot\\left(\\boldsymbol{\\sigma}^{\\mathrm{N}^{\\prime}}-\\boldsymbol{\\sigma}^{\\mathrm{D}^{\\prime}}\\right)\\cdot\\hat{\\mathbf{n}}\\\\ &=\\int_{\\Gamma_{\\mathrm{s}}}\\mathbf{v}^{\\mathrm{N}^{\\prime}}\\cdot\\left(\\boldsymbol{\\sigma}^{\\mathrm{N}}-\\boldsymbol{\\sigma}^{\\mathrm{D}}\\right)\\cdot\\hat{\\mathbf{n}}-\\int_{\\Gamma_{\\mathrm{s}}}\\left(\\mathbf{v}^{\\mathrm{N}}-\\mathbf{v}^{\\mathrm{D}}\\right)\\cdot\\boldsymbol{\\sigma}^{\\mathrm{D}^{\\prime}}\\cdot\\hat{\\mathbf{n}}\\,,\\end{split}\\]
since the Dirichlet data fix \\(\\mathbf{v}^{\\mathrm{D}^{\\prime}}=0\\) and the Neumann data fix \\(\\boldsymbol{\\sigma}^{\\mathrm{N}^{\\prime}}\\cdot\\hat{\\mathbf{n}}=0\\) on \\(\\Gamma_{\\mathrm{s}}\\). Minimizing \\(J\\) subject to the constraints \\(\\mathcal{DP}\\) and \\(\\mathcal{NP}\\) requires \\(\\mathrm{d}J=0\\) for arbitrary \\(\\mathbf{v}^{\\mathrm{N}^{\\prime}}\\) and \\(\\boldsymbol{\\sigma}^{\\mathrm{D}^{\\prime}}\\cdot\\hat{\\mathbf{n}}\\). This requires \\(\\mathbf{v}^{\\mathrm{N}}=\\mathbf{v}^{\\mathrm{D}}=\\mathbf{f}\\) and \\(\\boldsymbol{\\sigma}^{\\mathrm{D}}\\cdot\\hat{\\mathbf{n}}=\\boldsymbol{\\sigma}^{\\mathrm{N}}\\cdot\\hat{\\mathbf{n}}\\) on \\(\\Gamma_{\\mathrm{s}}\\). In that case \\(J\\) is zero from Equation (A3), while elsewhere it is positive, so the stationary point must be a minimum. The variational problem of minimizing \\(J\\), subject to the constraints of \\(\\mathcal{DP}\\) and \\(\\mathcal{NP}\\), therefore solves the inverse problem \\(\\mathcal{IP}\\). To derive Equation (4), introduce the first-order perturbations of the Neumann and Dirichlet problems by substituting perturbed quantities (\\(\\mu+\\mu^{\\prime}\\), \\(\\beta+\\beta^{\\prime}\\), \\(\\mathbf{v}^{\\mathrm{N}}+\\mathbf{v}^{\\mathrm{N}^{\\prime}}\\), etc.) into \\(\\mathcal{DP}\\) and \\(\\mathcal{NP}\\), subtracting the unperturbed equations, then neglecting terms that involve products of perturbations:
\\[\\left(\\mathcal{DP}^{\\prime}\\right)\\left\\{\\begin{array}{ll}\\frac{1}{2}\\left[\\left(\\nabla\\mathbf{v}^{\\prime}\\right)+\\left(\\nabla\\mathbf{v}^{\\prime}\\right)^{*}\\right]=\\dot{\\boldsymbol{\\epsilon}}^{\\prime}&\\text{in }\\Omega,\\\\ \\boldsymbol{\\sigma}^{\\prime}+p^{\\prime}\\mathbf{I}=2\\mu\\dot{\\boldsymbol{\\epsilon}}^{\\prime}+2\\mu^{\\prime}\\dot{\\boldsymbol{\\epsilon}}^{\\mathrm{D}}&\\text{in }\\Omega,\\\\ \\mathrm{Trace}\\left(\\boldsymbol{\\sigma}^{\\prime}\\right)\\ \\dots&\\end{array}\\right.\\]
As simulations of 21st-century climate start to include components with longer timescales, such as ice sheets, the initial conditions for those components will become critical to the forecast. This paper describes an algorithm for specifying the initial state of an ice-sheet model, given spatially continuous observations of the surface elevation, the velocity at the surface and the thickness of the ice. The algorithm can be viewed as an inverse procedure to solve for the viscosity or the basal drag coefficient. It applies to incompressible Stokes flow over an impenetrable boundary, and is based upon techniques used in electrical impedance tomography; in particular, the minimization of a type of cost function proposed by Kohn and Vogelius. The algorithm can be implemented numerically using only the forward solution of the Stokes equations, with no need to develop a separate adjoint model. The only requirement placed upon the numerical Stokes solver is that boundary conditions of Dirichlet, Neumann and Robin types can be implemented. As an illustrative example, the algorithm is applied to shear flow down an impenetrable inclined plane. A fully three-dimensional test case using a commercially available solver for the Stokes equations is also presented.

† Journal of Glaciology, Vol. 56, No. 197, 2010
The Dynamical Linkage of Atmospheric Blocking to Drought, Heatwave and Urban Heat Island in Southeastern US: A Multi-Scale Case Study

Li Dong 1,2, Chandana Mitra 1,2, Seth Greer 2 and Ethan Burt 2

1 International Center for Climate and Global Change Research, School of Forestry and Wildlife Sciences, Auburn University, Auburn, AL 36849, USA
2 Department of Geosciences, Auburn University, Auburn, AL 36849, USA

Received: 8 November 2017; Accepted: 19 January 2018; Published: 22 January 2018

## 1 Introduction

Atmospheric blocking is a large-scale atmospheric dynamic feature, which is commonly identified in the mid-troposphere over the mid-latitudes, as shown in many studies such as Rex [1], Austin [2], Colucci [3], and Dole and Gordon [4]. In terms of the mid-tropospheric geopotential height field, atmospheric blocking can take several forms, including the Greek-letter-Ω shape, the high-over-low dipole pattern, and the cutoff high or low pattern. When atmospheric blocking takes place, the normally west-to-east moving weather systems stall over a certain region for consecutive days, up to even months. In turn, this often leads to extreme weather events such as droughts, heat waves, floods, and cold air outbreaks. For instance, the 2011-2014 California drought was closely associated with a predominant blocking ridge which persisted over the North Pacific Ocean for a span of several months, as studied by Wang et al. [5], Teng and Branstator [6], Seager et al. [7], and Williams et al. [8]. This drought caused severe damage to the State's water supply, agricultural production, and many other economic sectors. Another example is the 2003 European heatwave, as examined by Chase et al. [9] and Fischer and Seneviratne [10], which resulted from a striking atmospheric blocking dominant over the North Atlantic for most of July and August of 2003. This blocking-associated heatwave event caused more than 70,000 fatalities in European countries [11]. These blocking-associated extreme weather events exemplify the importance of studying the dynamical linkage between such events and blocking. The climatology of atmospheric blocking in both the Northern and Southern Hemispheres has been reported in detail in numerous studies, including Barriopedro et al. [12], Renwick [13], and Saez de Adana and Colucci [14]. Generally, in terms of the location of blocking occurrence in the Northern Hemisphere, there are two primary basins featuring peak blocking frequency, namely the North Pacific Ocean and the North Atlantic Ocean. In terms of seasonal variability, the peak season for blocking occurrence is generally winter, often associated with cold air outbreaks or droughts. When blocking takes place in summer, it can lead to heatwaves, droughts or floods. Atmospheric blocking also has variability at inter-annual and inter-decadal scales. There have been many studies concerning the influence of the El Niño-Southern Oscillation (ENSO) and the Pacific Decadal Oscillation (PDO) on blocking frequency of occurrence and intensity, as reported by studies including Barriopedro et al. [12], Renwick [13], Saez de Adana and Colucci [14], and Dong et al. [15].
So far numerous studies have been carried out focusing on how extreme weather events would respond to global warming. In particular, the center of the debate is how polar amplification would affect future extreme weather events via modulating atmospheric blocking. Here polar amplification refers to the polar region warming faster than elsewhere due to the ice-albedo feedback. Francis and Vavrus [16] pointed out that, due to the reduced zonal wind caused by polar amplification, atmospheric blocking tends to occur more often, such that more frequent extreme weather events would occur. Nevertheless, Wallace et al. [17] opposed this finding by stating that we must evaluate the change of blocking in response to climate change from the perspective of the dynamic mechanisms of blocking, in a systematic way. There are both linear and nonlinear feedbacks involved in the response of blocking to climate change. The interactions of these processes are so complex that they are worthy of in-depth investigation with both modeling and observational studies. Hassanzadeh et al. [18] further strengthened the point of Wallace et al. [17], using an idealized general circulation model (GCM) to simulate the response of blocking to the weakened meridional temperature gradient caused by global warming. In terms of spatial scale, atmospheric blocking is a large-scale dynamic feature spanning a few hundred kilometers, while droughts and heatwaves are more regional features with scales of a few tens of kilometers. Regarding previous studies on the causality of extreme weather events by blocking, the minimum spatial scale involved so far tends to be the regional scale. Nevertheless, the impact of atmospheric blocking on extreme weather events at still smaller spatial scales, such as the city scale, has not yet been quantified, to the best of the authors' knowledge. For instance, the urban heat island (UHI), which features elevated temperatures in metropolitan areas relative to the surrounding rural areas, as revealed by many studies [19; 20; 21], is widely known to exacerbate heatwaves and droughts, but it is commonly studied merely at a local scale instead of across a spectrum of scales including the scale of atmospheric blocking. Hence, in the present study, we are interested in investigating the cross-scale dynamical relations among atmospheric blocking, drought, heatwave and UHI. In this work, we hypothesize that atmospheric blocking will intensify and sustain the associated extreme weather events (drought, heatwave and UHI in this case) across a spectrum of spatial scales. Specifically, we intend to address the question: What is the dynamical impact of atmospheric blocking on extreme weather events across multiple scales? In this study, we selected the record-breaking 2007 drought in the Southeastern US for a case study. During this event, a striking atmospheric blocking high took place in August of 2007, and it was largely responsible for the concurrent exceptional drought conditions, severe heatwave and enhanced UHI intensity (UHII) in that region. As for the UHII assessment, in this study we focus on the metropolitan area of Birmingham, AL, USA as our study site. This is because of the unique concurrent drought and heatwave conditions present in Alabama during the middle of August 2007. The remainder of this study is organized as follows. Section 2 explains the data used in this study in detail. The definition and detection of blocking are also described in detail in this section.
In Section 3, the dynamical linkage of blocking to the August 2007 drought and heatwave in the Southeastern US, and to the UHI in Birmingham, AL, USA, is investigated. Discussion is provided in Section 4. Finally, conclusions and further work are presented in Section 5.

## 2 Data

In this study, we focus on the period of 9-17 August 2007, during which a blocking, drought and heatwave concurrently took place in the Southeastern US. In addition, the variations of UHII during this unique period will be examined as well. The National Centers for Environmental Prediction (NCEP)-National Center for Atmospheric Research (NCAR) daily reanalysis data are used to detect blocking in the Northern Hemisphere as well as to analyze the drought, heatwave and UHI selected in the present study. The daily fields of the NCEP-NCAR reanalysis are mostly available at 17 vertical pressure levels (\\(p\\) = 1000, 925, 850, 700, 600, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30, 20 and 10 hPa) on a 2.5\\({}^{\\circ}\\)\\(\\times\\) 2.5\\({}^{\\circ}\\) grid [22]. The observed precipitation data are from the Global Precipitation Climatology Centre (GPCC), on a global grid with 2.5\\({}^{\\circ}\\)\\(\\times\\) 2.5\\({}^{\\circ}\\) spatial resolution. The UHI is expressed as the temperature difference between urban and rural areas in a metropolitan area. Here temperature can be measured as either air temperature or land surface (skin) temperature [23]. We chose to use the land surface temperature (LST), which is converted from the remote sensing data of Landsat 5 TM following Sobrino et al. [24]. The details of the LST conversion are provided in Section 3. The Landsat program provides the longest continuous space-based observation of Earth's land [25].

## 3 Results

### Linkage of Blocking to Southeastern Drought of 2007

The Southeast of the US was struck by a record intense drought during 2006-2008. Figure 1 shows the Palmer Drought Severity Index (PDSI) [26] time series in Alabama from January 2005 to 2009. The PDSI values are associated with various drought conditions, with PDSI ranging from \\(-3.0\\) to \\(-3.99\\) categorized as severe drought, and \\(-4.0\\) or less as extreme drought. It is apparent that the 2006-2008 drought stands out with persistently low PDSI values. In particular, the drought intensity reaches its peak in August and September of 2007, with the PDSI near or below \\(-5\\), which means exceptional drought. Figure 2 shows the 2-D drought intensity across the Southeastern US region for the week of 14 August 2007. The various drought intensity levels are represented by colors ranging from abnormally dry to exceptional drought. Specifically, the State of Alabama is noticeable for being under exceptional drought conditions during this period. This 2-year drought, starting roughly at the beginning of 2006 and diminishing at the end of 2008, set record dry conditions in the Southeastern US and led to serious economic consequences. According to Manuel [27], this drought caused losses to major field crops, including corn, wheat, soybeans, cotton and hay, totaling more than $1.3 billion. Furthermore, the severe water supply shortage due to this drought even sparked the Tri-State Water Wars among Alabama, Georgia and Florida [28]. The 2006-2008 Southeastern US drought was likely triggered, at least partially, by La Niña [29] in early 2006. A strong-to-moderate La Niña condition was present in the equatorial Pacific at that time, leading to below-normal precipitation over the Southeastern US.
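As a small worked illustration (a hypothetical helper, not from the paper), the PDSI drought categories quoted above translate directly into a threshold check; only the categories mentioned in the text are implemented:

```python
def classify_pdsi(pdsi: float) -> str:
    """Classify a PDSI value using the drought thresholds quoted above.

    The full PDSI scale also has near-normal and wet classes, which are
    omitted here.
    """
    if pdsi <= -5.0:
        return "exceptional drought"   # e.g. Alabama, Aug-Sep 2007
    if pdsi <= -4.0:
        return "extreme drought"
    if pdsi <= -3.0:
        return "severe drought"
    return "less than severe drought"

print(classify_pdsi(-5.2))  # -> exceptional drought
```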
On top of that, the occurrence of a blocking high during August 2007 significantly exacerbated the existing drought conditions [29]. Following Watson and Colucci [30], we used a modified version of the blocking detection algorithm originally proposed by Tibaldi and Molteni [31]. Atmospheric blocking is defined as the persistence of a negative zonal index, meaning that the 500-mb geopotential heights are higher at 60\\({}^{\\circ}\\) N than at 40\\({}^{\\circ}\\) N, with this structure spanning at least 20 degrees of longitude for at least five consecutive days. The block onset is defined as the first day on which this definition is satisfied for a particular blocking event. Details on the blocking detection criteria can be found in Watson and Colucci [30]. Figure 3 shows the onset and evolution of a blocking event during 10-17 August 2007, which includes the pre-blocking and blocking periods. The block-onset day is identified as 13 August 2007, which features a marked ridge dominant over the entire US, with a deepened low pressure located upstream of the blocking high. The reversal of the westerly wind can be readily observed over the block-onset region, which is one of the most characteristic conditions of blocking. This blocking event lasted five days and ended on 17 August 2007. Due to the presence of this blocking event, weather systems containing precipitation were diverted away from the standing block, such that the region under blocking is commonly associated with below-normal precipitation.

Figure 1: Time series of the Palmer Drought Severity Index (PDSI) for Alabama. Adopted from ncdc.noaa.gov.

Figure 2: Drought distribution over the Southeastern US during the week of 14 August 2007, issued by the US Drought Monitor at droughtmonitor.unl.edu.

Figure 3: 500-mb geopotential heights during the evolution of the August 2007 blocking event from 10 August (**a**) to 17 August (**h**). The contour interval is 60 m. The block onset is on 13 August 2007. The block-onset region is outlined with a red rectangle on the block-onset day in (**d**).

Figure 4 depicts the observed precipitation anomaly for August 2007 relative to the long-term mean August precipitation (1981-2010). It is noticeable that the Southeastern US features a severe precipitation deficit during this month, with the largest departure from normal precipitation being around 100 mm. Given that the long-term monthly precipitation is of the same order of magnitude as this deficit, this indicates very severe drought conditions. For instance, over northern Alabama, the precipitation deficit for August 2007 is nearly equal to the long-term mean precipitation for August, implying strikingly dry conditions during this period. This is clearly a manifestation of the drought conditions intensified by the concurrent blocking event, which brought more days with clear skies and minimal precipitation due to large subsidence. In terms of time scales, although the August 2007 blocking event only lasted about one week, in contrast to the 2-year Southeastern US drought, the blocking was still capable of worsening the drought conditions during and even after the blocking period. For instance, the soil may become much drier during the blocking period, and this could have a longer-lasting effect on the drought conditions even after the blocking decays.
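A minimal sketch of this detection rule is given below, assuming daily 500-mb geopotential height fields on a regular latitude-longitude grid. The thresholds of the full Tibaldi-Molteni/Watson-Colucci algorithm are reduced here to the simple reversal criterion quoted in the text (heights at 60° N exceeding those at 40° N over at least 20° of longitude for at least five consecutive days); the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def detect_blocking(z500, lons, lat60_idx, lat40_idx,
                    min_span_deg=20.0, min_days=5):
    """Flag blocking events from daily 500-mb heights z500[day, lat, lon].

    A day counts as 'blocked' if heights at 60N exceed those at 40N over
    a contiguous run of longitudes spanning at least min_span_deg, and a
    blocking event requires at least min_days consecutive blocked days.
    (Longitude wrap-around at the dateline is ignored in this sketch.)
    """
    dlon = abs(lons[1] - lons[0])
    need = int(np.ceil(min_span_deg / dlon))

    reversal = z500[:, lat60_idx, :] > z500[:, lat40_idx, :]  # [day, lon]

    def has_long_run(mask):
        run = 0
        for m in mask:
            run = run + 1 if m else 0
            if run >= need:
                return True
        return False

    blocked = [has_long_run(day) for day in reversal]

    events, start = [], None
    for t, b in enumerate(blocked + [False]):   # sentinel closes last run
        if b and start is None:
            start = t
        elif not b and start is not None:
            if t - start >= min_days:
                events.append((start, t - 1))   # (onset, decay) indices
            start = None
    return events
```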
Hence, this shows clear evidence that blocking can intensify drought conditions during the concurrent period.

### Linkage of Blocking to Southeastern Heatwave of 2007

A heat wave can be defined as the daily maximum temperature exceeding the average maximum temperature by 5 \\({}^{\\circ}\\)C for more than five consecutive days (World Meteorological Organization). The possible mechanisms governing heat waves are well summarized in a recent review article by Horton et al. [32]. They pointed out that the underlying driver of heat waves is atmospheric blocking, which is commonly associated with clear skies with increased solar insolation at the surface, adiabatic warming accompanied by sinking motion, and warm air advection. In terms of mechanisms for persistent boreal summer heat events associated with blocking, they summarized two alternatives: one is resonant circulation regimes leading to persistent blocking, and the other is the reduced baroclinicity and decreased eddy kinetic energy characteristic of the boreal summer season. A record-breaking heat wave arrived across the Southeast of the US in early August of 2007. Strings of days with greater-than-38 \\({}^{\\circ}\\)C heat were frequently reported. As a result of intense heat and minimal rainfall, severe stress was placed upon pastures, livestock and immature summer crops over this region [33].

Figure 4: Observed precipitation anomaly of August 2007, as shown in (**a**), relative to the long-term mean August precipitation, as shown in (**b**). The contour interval is 20 mm.

Specifically, Birmingham, Alabama reached all-time record maximum temperatures exceeding 38 \\({}^{\\circ}\\)C for at least 10 consecutive days from 6-15 August 2007, as depicted in Figure 5. The August 2007 blocking event is indeed largely responsible for this historic heatwave over the Southeastern US. Figure 6 shows the 1000-mb (near-surface) temperature anomaly relative to the 1981-2010 long-term mean temperature during 10-17 August 2007. It is evident that an above-normal temperature anomaly is found over the middle and Southeastern states. In particular, starting from the block onset on 13 August 2007, this temperature anomaly field intensifies and migrates towards the Southeast, with the largest temperature departure exceeding 8 \\({}^{\\circ}\\)C. This record-breaking heatwave lasted until 17 August 2007, which overlaps the ending time of the blocking event. As the block started to dissipate after 17 August 2007, the heatwave intensity also started to decrease, and eventually the heatwave dissipated. In order to interpret how this marked temperature anomaly formed through the influence of blocking over the Southeastern US, we need to look further into the interaction between the blocking and the heatwave during August 2007. Figure 7 depicts the low-level (700 mb) vertical motion field during 10-17 August 2007. Note that negative vertical motion (omega) corresponds to ascent, whereas positive values correspond to descent. During the three-day pre-block period, i.e., 10-12 August, as shown in Figure 7a-c, the Southeastern US is primarily dominated by rising motion at low levels. Starting on the block-onset day, 13 August, sinking motion begins to prevail over the Southeast and further intensifies through 16 August, which is one day prior to block decay.
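The WMO-style definition quoted above translates directly into a rolling check on daily maximum temperatures. The sketch below assumes NumPy arrays of daily maxima and the matching climatological average maximum for each calendar day; it is an illustration, not the analysis code used in this study.

```python
import numpy as np

def heatwave_days(tmax, tmax_clim, excess=5.0, min_days=6):
    """Boolean mask of days belonging to a heatwave.

    tmax:      daily maximum temperature (deg C), one value per day
    tmax_clim: climatological average daily maximum for the same dates
    A heatwave requires the daily maximum to exceed the average maximum
    by `excess` deg C for more than five consecutive days, hence the
    default min_days=6.
    """
    hot = tmax > tmax_clim + excess
    mask = np.zeros(len(hot), dtype=bool)
    start = None
    for t in range(len(hot) + 1):               # +1: sentinel closes runs
        if t < len(hot) and hot[t]:
            start = t if start is None else start
        else:
            if start is not None and t - start >= min_days:
                mask[start:t] = True
            start = None
    return mask
```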
The adiabatic warming associated with this sweeping subsidence during the blocking period undoubtedly contributes to the formation of the above-normal temperature anomaly during the August 2007 heatwave in the Southeastern US. Moreover, Figure 8 shows the 850-mb relative humidity overlaid with 850-mb wind vectors during the concurrent blocking and heatwave period in August 2007. A low-level anticyclone is readily observed over the Eastern US during this period, which is attributable to the blocking structure. On 14 August, one day after block onset, a very dry center with less than 30% relative humidity is dominant over the Eastern US. With the low-level northerly flow, this very dry and hot air is advected to Alabama and neighboring states, leading to elevated low-level air temperatures over this area. Hence, both the adiabatic warming associated with strong subsidence and the dry-and-warm air advection due to blocking primarily account for the formation of the August 2007 heatwave over the Southeastern US. This is consistent with the results of Gouveia et al. [34] and Amraoui et al. [35], who studied two heatwaves in Greece during July and August 2007, respectively, during which a drought was concurrently present. Their studies showed that enhanced subsidence and warm-and-dry air advection collectively contribute to the formation of heatwaves over the Peloponnese Peninsula of Greece.

Figure 5: Daily maximum and minimum temperatures of August 2007 in Birmingham, AL, USA in comparison to the long-term mean daily maximum and minimum temperatures of August, in units of \\({}^{\\circ}\\)C. The data source is the National Weather Service.

Figure 6: 1000-mb temperature anomaly from the long-term mean temperature from 10 August 2007 (**a**) to 17 August 2007 (**h**), in units of \\({}^{\\circ}\\)C, during the evolution of the August 2007 blocking event.

Figure 7: 700-mb vertical motion (omega) from 10 August 2007 (**a**) to 17 August 2007 (**h**), in units of Pa/s, during the evolution of the August 2007 blocking event.

Figure 8: 850-mb relative humidity (%) from 10 August 2007 (**a**) to 17 August 2007 (**h**), overlaid with 850-mb wind vectors (m s\\({}^{-1}\\)), during the evolution of the August 2007 blocking event.

### Linkage of Blocking to UHI in Birmingham, AL, USA

In this section, we look further by exploring the impact of blocking on the urban heat island, at the local scale, during the concurrent drought and heatwave. We chose to look closely at Birmingham, AL, the largest city in Alabama. Birmingham has an estimated population of 212,157, and a metro population of 1,136,650, which is nearly one quarter of the state's total population. This Southeastern US city is located at 33.5207\\({}^{\\circ}\\) N, 86.8025\\({}^{\\circ}\\) W and has an urban footprint of 385 square kilometers. Figure 9a shows the map of Birmingham relative to Alabama and the US. In order to accurately identify the specific UHI locations within Birmingham, land type classification is performed to distinguish urban from rural areas. First, an unsupervised classification using 12 classes is applied to the Landsat image. These classes are then reclassified to generate three overall classes: urban, rural and water, as shown in Figure 9b. The urban class is found to mainly overlap the Birmingham metropolitan area. Three sites are chosen for investigating the variation of the UHII with the concurrent blocking, drought and heatwave.
They primarily represent urban, suburban and rural areas, respectively, as shown in Figure 10a. Specifically, the criteria for choosing urban sites are areas covered by greater than 75% urban surfaces and dominated by concrete, asphalt and other impervious surfaces, whereas the criteria for rural sites are areas with less than 20% urban surfaces, dominated by soil and vegetation, according to Hug [36]. The details of the three selected sites for this study are as follows. The urban site was selected in downtown Birmingham, bounded by US Highway 280 (east) and Interstate 65 (west), as shown in Figure 10b. The north boundary is Interstate 20, and the south includes University Boulevard. Much of the surface consists of asphalt pavement for roads, concrete and steel buildings, and rooftops. Some of the urban characteristics include buildings from the University of Alabama at Birmingham and its medical center, and the historical railroad infrastructure, in addition to the business district. There is very little vegetated landscape, with the exception of Kelly Ingram Park and Linn Park. By having such a large study area, extending over 7.8 square kilometers, the risk of selection bias from statistical outliers was limited.

Figure 9: The map of Birmingham, AL, USA in (**a**), relative to Alabama and the US, adopted from Google Earth Imagery; the land use land cover classification of Birmingham, AL, in (**b**), with red for urban, green for rural and blue for water classifications.

Interpolating from the approximate distance relationships of similar geographic regions [37], the suburban area was selected as a comparison site, as shown in Figure 10c, exhibiting more rural physical characteristics. Bounded to the north by Montclair Rd, the area encompasses much of the northern region of Mountain Brook, which is a suburb of Birmingham. Another characteristic of Mountain Brook is the affluence of the area, which has been shown to relate to more vegetation [38]. The suburban site is located within 2 miles (2.876 km) of the urban downtown site. A principal characteristic of the UHII is the anthropogenic effect caused by the engineered landscape. By selecting a rural area outside of any towns, the effects caused by housing structures and roads are severely limited, which affords the opportunity to obtain a true temperature value of a natural landscape without the effects brought by human activities [39]. This rural site is located 14.5 miles (23.39 km) from the urban downtown site, as shown in Figure 10d. The thermal data in the Landsat satellite imagery are stored as Digital Numbers (DNs). In order to estimate the LST accurately, the DNs first need to be converted into radiance values. Next, the radiance is converted to brightness temperature using Planck's radiance function. In addition, the Landsat bands are converted to reflectance bands, which are used to calculate the normalized difference vegetation index (NDVI); the land surface emissivity is derived from the NDVI. With both the brightness temperature and the land surface emissivity available, the LST is computed with the inverse Planck radiance function. More details on the LST conversion can be found in Sobrino et al. [24]. The converted LST for Birmingham, AL, USA on 14 August 2007 is shown in Figure 11. The corresponding local time is 11:18 a.m. CDT on that day. It is noticeable that the Birmingham metropolitan area stands out as being covered in red in this figure.
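The DN-to-LST chain just described can be sketched compactly. The code below follows the standard single-band formulation for Landsat 5 TM band 6; the calibration constants (K1, K2, radiance range) and the NDVI-threshold emissivity coefficients are widely published values, but the helper names and the simplified emissivity model are illustrative assumptions rather than the exact procedure of Sobrino et al. [24].

```python
import numpy as np

# Landsat 5 TM band-6 calibration constants (standard published values).
K1, K2 = 607.76, 1260.56          # W m-2 sr-1 um-1 and Kelvin
LMIN, LMAX = 1.238, 15.303        # band-6 spectral radiance range
QMIN, QMAX = 1.0, 255.0           # quantized DN range

def dn_to_radiance(dn):
    """Convert band-6 digital numbers to at-sensor spectral radiance."""
    return (LMAX - LMIN) / (QMAX - QMIN) * (dn - QMIN) + LMIN

def brightness_temperature(radiance):
    """At-sensor brightness temperature (K) via Planck's radiance function."""
    return K2 / np.log(K1 / radiance + 1.0)

def ndvi(red, nir):
    """NDVI from band-3 (red) and band-4 (near-infrared) reflectances."""
    return (nir - red) / (nir + red)

def emissivity(ndvi_arr, ndvi_soil=0.2, ndvi_veg=0.5):
    """NDVI-threshold emissivity from squared fractional vegetation cover."""
    pv = np.clip((ndvi_arr - ndvi_soil) / (ndvi_veg - ndvi_soil), 0, 1)**2
    return 0.986 + 0.004 * pv

def lst_celsius(dn_band6, red, nir, wavelength=11.5e-6):
    """Emissivity-corrected land surface temperature in deg C."""
    bt = brightness_temperature(dn_to_radiance(dn_band6))
    eps = emissivity(ndvi(red, nir))
    rho = 1.438e-2                 # h * c / k_B in m K
    lst = bt / (1.0 + (wavelength * bt / rho) * np.log(eps))
    return lst - 273.15

# UHII then follows as the difference of area-averaged LST, e.g.:
# uhii = lst[urban_mask].mean() - lst[rural_mask].mean()
```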
Figure 10: (**a**) Selected three sites, outlined in purple boxes, for the UHII study in Birmingham, AL, USA. From west to east, they are the rural, urban and suburban areas, respectively; (**b**) the selected study area for the urban site, Downtown Birmingham, AL; (**c**) the selected study area for the suburban region of Mountain Brook, a subdivision of Birmingham, AL, USA; (**d**) the selected area for the rural site of Birmingham, AL, USA. All figures are adopted from Google Earth Imagery.

At the selected urban site, which is Downtown Birmingham, the area-averaged LST is 41 \\({}^{\\circ}\\)C. Over the selected suburban site, the area-averaged LST is 35 \\({}^{\\circ}\\)C; and over the rural site, it is 33 \\({}^{\\circ}\\)C. Therefore, the UHII based on the LST difference between the urban and rural sites is 8 \\({}^{\\circ}\\)C on this specific day. Hug [36] studied the UHII of Birmingham, AL, USA for 2014 with the aid of observational and remote sensing data. Specifically, for evaluating the air UHII in Birmingham, Hug [36] used a device called an "iButton" to collect hourly 2-m air temperatures during March-August 2014 and found that the mean daytime air UHII for the month of August 2014 is about 1.7 \\({}^{\\circ}\\)C. In addition, Hug [36] also used Landsat data to assess the daytime surface (skin) UHII of Birmingham on 26 March 2014 and 16 July 2014, respectively. On 26 March 2014, the daytime surface UHII is found to be about 5.0 \\({}^{\\circ}\\)C; on 16 July 2014, it is about 7.0 \\({}^{\\circ}\\)C. Hence, the surface UHII of Birmingham found in our present study on 14 August 2007 appears to be within a reasonable range in comparison to the study of Hug [36]. Nevertheless, given that Hug [36] provides the surface UHII in Birmingham on only two specific days of 2014, it is nearly impossible to derive a baseline value for the Birmingham surface UHII in the summer season against which our current result can be compared. Meanwhile, numerous studies have previously been carried out on surface UHII evaluations, which are summarized in two review articles by Phelan et al. [19] and Rasul et al. [20], respectively. The review article by Phelan et al. [19] collected the measured surface (skin) UHII of selected cities around the world for recent years. This study pointed out that these major cities experience UHI to varying degrees, with the seasonal UHII ranging from 4 to 10 \\({}^{\\circ}\\)C. On the extreme end, Tran et al. [23] reported a daytime surface UHII in excess of 12 \\({}^{\\circ}\\)C for Tokyo, Japan in August 2001, while for Seoul of South Korea, Beijing of China and Shanghai of China, the daytime surface UHII is 8 \\({}^{\\circ}\\)C, 10 \\({}^{\\circ}\\)C and 7 \\({}^{\\circ}\\)C, respectively, for August 2001. Furthermore, according to Rasul et al. [20], the daytime surface UHII is reported to be 7.5 \\({}^{\\circ}\\)C for Vancouver of Canada, 7 \\({}^{\\circ}\\)C for Medellin of Colombia and 3.3 \\({}^{\\circ}\\)C for Athens, Greece. Therefore, in the perspective of previous studies, the surface UHII experienced by Birmingham, AL, USA on 14 August 2007 as found in the current study tends to fall toward the relatively high end of UHII values, during the unusual period of concurrent blocking, drought and heatwave of 2007. This provides partial evidence that atmospheric blocking may be capable of strengthening the UHII through the heatwave.
Nevertheless, more work is warranted in regard to developing baseline values of the Birmingham surface UHII in the summer season, so that the amplifying effect of blocking-associated heat waves upon the surface UHII can be systematically evaluated.

Figure 11: Converted LST (land surface temperature) for the Birmingham metropolitan area of AL on 14 August 2007. The unit is \\({}^{\\circ}\\)C. The three boxes in purple refer to the selected rural, urban and suburban sites, from west to east, as in Figure 10.

## 4 Discussion

Atmospheric blocking is an important large-scale dynamic feature which is commonly studied at the mid-tropospheric level. It often leads to extreme weather events including droughts, heatwaves, floods and cold air outbreaks. Many studies have investigated the dynamics of how blocking initiates various extreme weather events [5; 6; 10; 11; 40; 41]. Nevertheless, in terms of spatial scales, these studies tend to focus mostly on the regional scale and above. Only very few studies have focused on linking blocking to the local scale. For instance, Amraoui et al. [35] and Gouveia et al. [34] examined the connection among blocking, drought and wildfires over the Mediterranean basin and Greece, respectively. Hence, in the present study, we chose to examine the dynamical linkages of atmospheric blocking to drought, heatwave and urban heat island at multiple scales through a case study in the Southeastern US. Based on this thorough diagnostic study of the 2006-2008 drought event, we found that the August 2007 blocking contributed to reinforcing the drought, initializing the heatwave and strengthening the UHI severity during the period concurrent with blocking. Under global warming, more severe droughts and heatwaves tend to be projected for future climate. Nevertheless, Seager et al. [29] investigated a series of past Southeastern US droughts (1856-2007) with both observational and modeling studies and found that the summer-season precipitation variability in the Southeastern US appears governed by purely internal atmospheric variability instead of anthropogenic climate change. In particular, they examined the same 2006-2008 drought event as we did here and showed that this severe drought has no signature of global warming caused by human activities. Hence, we need to always exercise extreme caution when attempting to interpret the origin of rising severe droughts and heatwaves. In terms of internal atmospheric variability, ENSO and blocking could be two major possibilities that cause the formation of severe droughts. For instance, the long-standing 2011-2014 California drought has attracted much attention due to its prolonged duration and severity. Recent studies by Wang et al. [5], Teng and Branstator [6], and Seager et al. [7] all pointed out that this significant drought originated from a persistent blocking, rather than from the ENSO phases. On the other hand, the 2006-2008 Southeastern US drought is believed to have been triggered by La Niña at the beginning, and then intensified by the blocking during the summer of 2007. Therefore, a thorough diagnostic study is warranted before the origin of a severe drought can be determined. In terms of the interaction between UHI and heatwave, Li and Bou-Zeid [42] performed a modeling study with the Weather Research and Forecasting (WRF) model on two sites in Maryland of the US and found that urban and rural temperatures do not respond to a heatwave in the same way.
Specifically, Li and Bou-Zeid [42] pointed out that a heatwave tends to intensify the UHII by enhancing the urban temperature more than the ambient rural temperatures. This intensified UHI is attributable to the insufficient soil moisture over the urban area (leading to less latent heat by evapotranspiration and more sensible heat) as well as low wind during the heatwave. In the present study, the influence of blocking on UHI is, in fact, exerted via the heatwave during August 2007. The heatwave serves as the intermediate medium between blocking and UHI, from the perspective of spatial scales. Meanwhile, the feedback between heat waves and UHI continues to be a hot topic drawing much attention from the climate science community. For instance, a recent study by Li et al. [43] pointed out the contrasting responses of urban and rural surface energy budgets to heat waves in Beijing, China. In another study, by Founda and Santamouris [44], a positive feedback between heat waves and UHI is discussed for coastal sites in Greece. Previous studies on the dynamical relation between blocking and extreme weather events mostly focused on the causality of extreme weather events by atmospheric blocking, rather than on the feedback to the blocking. A few studies, such as Fischer and Seneviratne [10] and Hirsch et al. [45], have investigated the feedback of drought- and heatwave-associated soil moisture variations and of UHI-associated land use land cover (LULC) change to blocking. Specifically, Fischer and Seneviratne [10] studied the interaction between the 2003 European heatwave and the concurrent notable blocking, and found that the soil moisture deficit from the months before the heatwave led to reduced latent heating and enhanced sensible heating. This causes the formation of a low-level thermal low pressure center and the amplification of the upper-level ridge, which in turn helps amplify the blocking structure. As for the feedback of LULC change to blocking, Hirsch et al. [45] showed that LULC change alters the atmospheric circulation through a similar soil moisture feedback, such that LULC change would influence the variability of blocking. Hence, it is worth studying how droughts, heatwaves and urban heat islands feed back to blocking via the surface flux balance and radiation budget in a systematic way. To put all these feedbacks together, we schematically depict the dynamical interactions among blocking, drought, heatwave and urban heat island in Figure 12.

## 5 Conclusions and Further Work

During 13-17 August 2007, a striking atmospheric blocking persisted over the US, exacerbating the existing drought over the Southeastern US. This drought was originally initiated near the beginning of 2006 by La Niña conditions. Due to the presence of the August 2007 blocking, a severe precipitation deficit is observed concurrently, largely attributable to the blocking-induced increase in surface insolation with more days of clear sky and enhanced subsidence. The presence of the blocking event not only intensified the drought, but also led to a record-breaking heatwave over the Southeast of the US. In particular, on the block-onset day, 13 August 2007, a well-above-normal temperature anomaly was observed over the Southeastern US, which is indicative of a marked heat wave.
The excessive heat observed during this heatwave is attributable to the subsidence-associated adiabatic warming as well as the dry-and-warm air advection over Alabama and neighboring states. This record-breaking heatwave lasted until 17 August 2007, which coincides with the decay of the block. At the local scale, we chose Birmingham, AL, as the study area for exploring the blocking influence on the surface UHI. Based on the Landsat 5 TM thermal band 6 data, land surface temperature is converted over the Birmingham metropolitan area, and the largest surface UHII is found to be 8 \\({}^{\\circ}\\)C in this area. This value appears within the reasonable range of surface UHII values compared to a similar study done in Birmingham, AL, USA. Nevertheless, more study is warranted to develop a long-term summer-season surface UHII baseline for Birmingham, AL, USA, such that we would be able to systematically assess the amplifying effect of blocking-induced heat wave events on the local surface UHII. Hence, the present work provides a unique case study in which blocking, drought, heatwave and UHI all occur concurrently, and interplay across a spectrum of spatial scales including the large scale, regional scale and local scale. Blocking was found to reinforce the drought, initiate the heatwave and probably amplify the UHI through this unique case.

Figure 12: Schematic illustration of the dynamical linkage and feedback among atmospheric blocking, drought, heatwave and urban heat island across multiple scales. Here land use land cover change is shortened to LULC.

With the warming global climate, the discussion on how atmospheric blocking would respond to polar amplification has recently become the center of a hot debate. In particular, researchers desire to find out whether polar amplification would enhance the frequency or intensity of extreme weather events via the variations of blocking behavior under global warming. As for future work, we intend to perform a systematic evaluation of past blocking events (1948-2016) in accordance with past droughts, heatwaves and UHI at selected sites. We plan to address the question: How often were past extreme weather events caused by atmospheric blocking, and what is the response of blocking to polar amplification? In addition, we would also be interested in further exploring the feedback of extreme weather events (such as droughts and heatwaves) to blocking via atmosphere-land coupling, including the feedback of soil moisture and land use land cover change to the atmosphere.

This study was supported by the Intramural Grants Program of Auburn University, Alabama, United States. The authors would like to acknowledge Joshua S. Watson from the National Weather Service of NOAA for contributing his algorithm for blocking detection. Furthermore, we also thank Donn Rodekohr from Auburn University for providing assistance with converting Landsat remote sensing data to LST. In addition, we greatly appreciate the very constructive comments raised by two anonymous reviewers. These comments have significantly helped improve the clarity and quality of this manuscript. Li Dong and Chandana Mitra conceived and designed the experiments; Li Dong performed analysis of the August 2007 blocking event and wrote the paper; Chandana Mitra provided guidance to Seth Greer and Ethan Burt on converting Landsat data to LST; Seth Greer and Ethan Burt performed the conversion of LST, and have equal contributions to this paper. The authors declare no conflict of interest.
## References

* (1) Rex, D.F. Blocking action in the middle troposphere and its effect upon regional climate. I. An aerological study of blocking action. _Tellus_ **1950**, _2_, 196-211.
* (2) Austin, J.F. The blocking of middle latitude westerly winds by planetary waves. _Q. J. R. Meteorol. Soc._ **1980**, _106_, 327-350.
* (3) Colucci, S.J. Planetary-scale preconditioning for the onset of blocking. _J. Atmos. Sci._ **2001**, _58_, 933-942.
* (4) Dole, R.M.; Gordon, N.D. Persistent anomalies of the extratropical Northern Hemisphere wintertime circulation: Geopotential distribution and regional persistence characteristics. _Mon. Weather Rev._ **1983**, _111_, 1567-1586.
* (5) Wang, S.Y.; Hipps, L.; Gillies, R.R.; Yoon, J. Probable causes of the abnormal ridge accompanying the 2013-2014 California drought: ENSO precursor and anthropogenic warming footprint. _Geophys. Res. Lett._ **2014**, _41_, 3220-3226.
* (6) Teng, H.; Branstator, G. Causes of extreme ridges that induce California droughts. _J. Clim._ **2017**, _30_, 1477-1492.
* (7) Seager, R.; Hoerling, M.; Schubert, S.; Wang, H.; Lyon, B.; Kumar, A.; Nakamura, J.; Henderson, N. Causes of the 2011-2014 California drought. _J. Clim._ **2015**, _28_, 6998-7024.
* (8) Williams, A.P.; Seager, R.; Abatzoglou, J.T.; Cook, B.I.; Smerdon, J.E.; Cook, E.R. Contribution of anthropogenic warming to California drought during 2012-2014. _Geophys. Res. Lett._ **2015**, _42_, 6819-6828.
* (9) Chase, T.N.; Wolter, K.; Rasool, I. Was the 2003 European summer heat wave unusual in a global context? _Geophys. Res. Lett._ **2006**, _33_, doi:10.1029/2006GL027470.
* (10) Fischer, E.M.; Seneviratne, S.I. Soil moisture-atmosphere interactions during the 2003 European summer heat wave. _J. Clim._ **2007**, _20_, 5081-5099.
* (11) Garcia-Herrera, R.; Diaz, J.; Trigo, R.M.; Luterbacher, J.; Fischer, E.M. A review of the European summer heat wave of 2003. _Crit. Rev. Environ. Sci. Technol._ **2010**, _40_, 267-306.
* (12) Barriopedro, D.; Garcia-Herrera, R.; Lupo, A.R.; Hernandez, E. A climatology of Northern Hemisphere blocking. _J. Clim._ **2006**, _19_, 1042-1063.
* (13) Renwick, J.A. ENSO-related variability in the frequency of South Pacific blocking. _Mon. Weather Rev._ **1998**, _126_, 3117-3123.
* (14) Saez de Adana, F.J.; Colucci, S.J. Southern Hemisphere blocking onsets associated with upper-tropospheric divergence anomalies. _J. Atmos. Sci._ **2005**, _62_, 1614-1625.
* (15) Dong, L.; Vogelsang, T.J.; Colucci, S.J. Interdecadal trend and ENSO-related internal variability in Southern Hemisphere blocking. _J. Clim._ **2008**, _21_, 3068-3077.
* (16) Francis, J.; Vavrus, S.J. Evidence linking Arctic amplification to extreme weather in mid-latitudes. _Geophys. Res. Lett._ **2012**, _39_, doi:10.1029/2012GL051000.
* (17) Wallace, J.M.; Held, I.M.; Thompson, D.W.J.; Trenberth, K.E.; Walsh, J.E. Global warming and winter weather. _Science_ **2014**, _343_, 729-730.
* (18) Hassanzadeh, P.; Kuang, Z.; Farrell, B. Responses of midlatitude blocks and wave amplitude to changes in the meridional temperature gradient in an idealized dry GCM. _Geophys. Res. Lett._ **2014**, _41_, 5223-5232.
* (19) Phelan, P.E.; Kaloush, K.; Miner, M.; Golden, J.; Phelan, B.; Silva, H., III; Taylor, R.A. Urban heat island: Mechanisms, implications, and possible remedies. _Annu. Rev. Environ. Resour._ **2015**, _40_, 285-307.
* (20) Rasul, A.; Balzter, H.; Smith, C.; Remedios, J.; Adamu, B.; Sobrino, J.A.; Srivanit, M.; Wang, Q. A review on remote sensing of urban heat and cool islands.
_Land_ **2017**, _6_, 38.
* (21) Debbage, N.; Shepherd, J.M. The urban heat island effect and city contiguity. _Comput. Environ. Urban Syst._ **2015**, _54_, 181-194.
* (22) Kalnay, E.; Kanamitsu, M.; Kistler, R.; Collins, W.; Deaven, D.; Gandin, L.; Iredell, M.; Saha, S.; White, G.; Woollen, J.; et al. The NCEP-NCAR 40-Year Reanalysis Project. _Bull. Am. Meteorol. Soc._ **1996**, _77_, 437-471.
* (23) Tran, H.; Uchihama, D.; Ochi, S.; Yasuoka, Y. Assessment with satellite data of the urban heat island effects in Asian mega cities. _Int. J. Appl. Earth Obs. Geoinf._ **2006**, _8_, 34-48.
* (24) Sobrino, J.A.; Jimenez-Munoz, J.C.; Paolini, L. Land surface temperature retrieval from LANDSAT TM 5. _Remote Sens. Environ._ **2004**, _90_, 434-446.
* (25) Thome, K.; Markham, B.; Barker, J.; Slater, P.; Biggar, S. Radiometric calibration of Landsat. _Photogramm. Eng. Remote Sens._ **1997**, _63_, 853-858.
* (26) Palmer, W.C. _Meteorological Drought_; Department of Commerce: Washington, DC, USA, 1965; 58p.
* (27) Manuel, J. Drought in the Southeast: Lessons for water management. _Environ. Health Perspect._ **2008**, _116_, 168-171.
* (28) Tri-State Water Wars among Alabama, Georgia and Florida. Available online: [https://www.southernenvironment.org/cases-and-projects/tri-state-water-wars-al-ga-fl](https://www.southernenvironment.org/cases-and-projects/tri-state-water-wars-al-ga-fl) (accessed on 1 November 2017).
* (29) Seager, R.; Tzanova, A.; Nakamura, J. Drought in the Southeastern United States: Causes, variability over the last millennium, and the potential for future hydroclimate change. _J. Clim._ **2009**, _22_, 5021-5045.
* (30) Watson, J.S.; Colucci, S.J. Evaluation of ensemble predictions of blocking in the NCEP global spectral model. _Mon. Weather Rev._ **2002**, _130_, 3008-3021.
* (31) Tibaldi, S.; Molteni, F. On the operational predictability of blocking. _Tellus_ **1990**, _42_, 343-365.
* (32) Horton, R.M.; Mankin, J.S.; Lesk, C.; Coffel, E.; Raymond, C. A review of recent advances in research on extreme heat events. _Curr. Clim. Change Rep._ **2016**, _2_, 242-259.
* (33) National Drought Summary from the United States Drought Monitor. Available online: [http://droughtmonitor.unl.edu/DroughtSummary.aspx](http://droughtmonitor.unl.edu/DroughtSummary.aspx) (accessed on 1 November 2017).
* (34) Gouveia, C.M.; Bistinas, I.; Liberato, M.L.R.; Bastos, A.; Koutsias, N.; Trigo, R.M. The outstanding synergy between drought, heatwaves and fuel on the 2007 Southern Greece exceptional fire season. _Agric. For. Meteorol._ **2016**, _218_, 135-145.
* (35) Amraoui, M.; Liberato, M.L.R.; Calado, T.J.; DaCamara, C.C.; Pinto-Coelho, L.; Trigo, R.M.; Gouveia, C.M. Fire activity over Mediterranean Europe based on information from Meteosat-8. _For. Ecol. Manag._ **2013**, _294_, 62-75.
* (36) Hug, W.A. The Study of Urban Heat Islands in the Birmingham and Auburn-Opelika, Alabama Urban Areas, Using Satellite and Observational Techniques. M.S. Thesis, Auburn University, Auburn, AL, USA, 2014; 149p.
* (37) Lu, G.Y.; Wong, D.W. An adaptive inverse-distance weighting spatial interpolation technique. _Comput. Geosci._ **2008**, _34_, 1044-1055.
* (38) Zhu, P.; Zhang, Y. Demand for urban forests in United States cities. _Landsc. Urban Plan._ **2007**, _84_, 293-300.
* (39) Oke, T.R. The energetic basis of the urban heat island. _Q. J. R. Meteorol. Soc._ **1982**, _108_, 1-24.
* (40) Schneidereit, A.; Schubert, S.; Vargin, P.; Lunkeit, F.; Zhu, X.; Peters, D.H.W.; Fraedrich, K.
Large-scale flow and long-lasting blocking high over Russia: Summer 2010. _Mon. Weather Rev._**2012**, _140_, 2967-2981. * Kingston et al. (2015) Kingston, D.G.; Stagge, J.H.; Tallaksen, L.M.; Hannah, D.M. European-scale drought: Understanding connections between atmospheric circulation and meteorological drought indices. _J. Clim._**2015**, _28_, 505-516. * Li and Bou-Zeid (2013) Li, D.; Bou-Zeid, E. Synergistic interactions between urban heat island and heat waves: The impact in cities is larger than the sum of its parts. _J. Appl. Meteorol. Climatol._**2013**, _52_, 2051-2064. * Li et al. (2015) Li, D.; Sun, T.; Liu, M.; Yang, L.; Wang, L.; Gao, Z. Contrasting responses of urban and rural surface energy budgets to heat waves explain synergies between urban heat islands and heat waves. _Environ. Res. Lett._**2015**, _10_, 054009. * Founda and Santamouris (2012) Founda, D.; Santamouris, M. Synergies between urban heat island and heat waves in Athens (Greece), during an extremely hot summer (2012). _Sci. Rep._**2017**, \\(7\\), 10973. * Hirsch et al. (2014) Hirsch, A.L.; Pitman, A.J.; Kala, J. The role of land cover change in modulating the soil moisture-temperature land-atmosphere coupling strength over Australia. _Geophys. Res. Lett._**2014**, _41_, 5883-5890.
Atmospheric blocking is a persistent, quasi-stationary structure in the mid-troposphere that is often associated with extreme weather events such as droughts, heatwaves, floods, and cold-air outbreaks. A striking atmospheric blocking event is identified to have persisted over the US during 13-17 August 2007, exacerbating the existing drought over the Southeastern US. This pronounced blocking event not only intensified the concurrent drought conditions, but also led to a record-breaking heatwave over the Southeastern US. The excessive heat observed during this heatwave is attributable to subsidence-associated adiabatic warming as well as dry-and-warm air advection over Alabama and the neighboring states. At the local scale, we choose Birmingham, AL, as the study area for exploring the influence of blocking on the urban heat island. Based on remote sensing data, the surface (skin) urban heat island intensity is found to be 8 \({}^{\circ}\)C in this area on the block-onset day. This provides partial evidence that the surface urban heat island intensity is likely amplified by blocking-induced heatwaves. The present work provides a unique case study in which blocking, drought, heatwave, and urban heat island all occur concurrently and interact across a spectrum of spatial scales. We conclude that atmospheric blocking is capable of reinforcing droughts, initiating heatwaves, and probably amplifying the urban heat island intensity during the concurrent period.

atmospheric blocking; drought; heatwaves; urban heat island; multiple scale; extreme weather events
# Single-mode sapphire fiber Bragg grating

Mohan Wang 1 Patrick S. Salter 1 Frank P. Payne 1 Adrian Shipley 2 Stephen M. Morris 1 Martin J. Booth 1 and Julian A. J. Fells 1_Department of Engineering Science, University of Oxford, Parks Road, Oxford, OX1 3PJ, UK 2Rolls-Royce Plc, Derwent Building, 5000 Solihull Parkway, Birmingham Business Park, Birmingham, B37 7YP, UK [email protected] *_

## 1 Introduction

Fiber Bragg grating (FBG) sensors are widely used for remote monitoring of critical infrastructure applications, because of their ability to measure a range of parameters whilst withstanding extreme environments. Silica optical fiber is generally used, but the operational temperature range of silica FBGs is constrained to significantly below 1000\({}^{\circ}\)C. However, sapphire optical fiber has a high melting temperature of \(\sim\)2054\({}^{\circ}\)C, which makes it a promising candidate to extend the limit of current extreme environment sensing applications. For example, gas turbines in aero engines operate at temperatures in excess of 1300\({}^{\circ}\)C and the ability to monitor the temperature distribution within them could enable significant improvements in efficiency and emission reduction. Sapphire is also radiation-hard, allowing measurements in nuclear reactors and avoiding radiation-darkening in space applications. Sapphire optical fiber is a single crystal, with a large core diameter, no cladding, and a very high refractive index (1.746 at 1550 nm). Sapphire fiber is therefore intrinsically multimode, with over 20,000 modes present in commercially available 75 \(\upmu\)m diameter fiber at 1550 nm [1, 2, 3]. Femtosecond laser direct writing has been widely used to inscribe FBGs that can withstand extreme environments [4]. For example, sapphire FBGs have been fabricated using the phase-mask [5, 6], point-by-point [7], line-by-line [8], and filament-by-filament [9] approaches. However, such gratings have an extremely wide bandwidth (typically \(\sim\)20 nm) because each mode has a different effective refractive index and consequently a different Bragg resonance for a uniform pitch grating. Furthermore, coupling between modes results in the power distribution fluctuating across the spectrum. Very recently, the bandwidth of a sapphire FBG has been reduced to 1.56 nm using a helical grating structure [10]. However, the fiber is still intrinsically multimode, giving rise to a spectral shift as different modes are excited. The multimode behavior of the sapphire fiber has therefore prevented the widespread adoption of sapphire FBGs for commercial sensing applications. A single-mode sapphire fiber is therefore needed to allow accurate FBG sensing. Moreover, a single-mode sapphire fiber would also enable many alternative techniques, such as scattering-based distributed sensing, interferometric sensing, and transmission of coherent signals within extreme environments. There have been various attempts to fabricate single-mode sapphire fibers. One method is to reduce the diameter of the sapphire fiber using chemical or physical pre-processing, to the extent that it can only support a restricted number of modes [11, 12]. However, the resulting few-mode fiber is only 9 \(\upmu\)m in diameter and therefore mechanically vulnerable. A few-mode fiber has also been demonstrated using irradiation of Li-6 enriched lithium carbonate to form a cladding within the fiber [13], but there was failure above 300\({}^{\circ}\)C.
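As a quick, back-of-the-envelope check of the mode count quoted above, the standard step-index estimate \(M \approx V^{2}/2\) with \(V = (2\pi a/\lambda)\,\mathrm{NA}\) can be evaluated directly. The sketch below is our own and assumes the unclad fiber behaves as an air-clad step-index guide:

```python
import math

# Step-index estimate of the guided mode count for an unclad sapphire fiber:
# V = (2*pi*a / wavelength) * NA, with M ~ V^2 / 2 for large V.
n_core = 1.746      # sapphire index at 1550 nm (from the text)
n_clad = 1.0        # air "cladding" for an unclad fiber (assumption)
a = 37.5            # core radius in um (75 um diameter fiber)
wavelength = 1.55   # um

na = math.sqrt(n_core**2 - n_clad**2)   # numerical aperture, ~1.43
v = 2 * math.pi * a / wavelength * na   # normalized frequency, ~218
modes = v**2 / 2                        # approximate number of guided modes

print(f"NA = {na:.2f}, V = {v:.0f}, M ~ {modes:,.0f}")  # ~24,000 modes
```

This crude estimate lands in the same range as the "over 20,000 modes" figure cited from [1, 2, 3].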
Alternatively, a few-mode FBG inside a sapphire-derived fiber, with a sapphire core and silica-based cladding, has been demonstrated [14]. However, the operating temperature range is constrained to less than 1000\({}^{\circ}\)C by the temperature limit of the silica cladding material. Recently, by utilizing the three-dimensional micromachining capability of femtosecond laser processing, single-mode waveguides have been demonstrated in commercially available sapphire bulk using a depressed cladding waveguide (DCW) in the mid-infrared at 2850 nm [15]. However, fabricating single-mode DCWs at telecommunications wavelengths (e.g. 1550 nm), where commodity optical components are available, is significantly more challenging, as the waveguide dimensions required are so much smaller. We previously proposed that a multimode sapphire fiber could be micromachined to inscribe a single-mode waveguide along its length containing FBGs [16]. In this paper, we fabricate novel single-mode waveguides in sapphire in the 1550 nm waveband and inscribe Bragg gratings within these waveguides. Furthermore, we inscribe these structures within sapphire optical fiber to form single-mode sapphire fiber Bragg gratings.

## 2 Laser Fabrication System and Calibration

The fabrication system is illustrated in Fig. 1. A regenerative femtosecond laser system (Light Conversion, Pharos SP-06-1000-PP) is used at a second harmonic generation wavelength of 515 nm. The pulse duration was 170 fs and the output beam was linearly polarized. Pulse energies between 10 and 300 nJ were selected by adjusting a half-waveplate prior to a polarizer. The repetition rate was tuned between 10 kHz and 1 MHz. The laser beam was expanded using a telescope and directed onto a liquid crystal spatial light modulator (SLM, Hamamatsu X10468). The phase image from the SLM was imaged to the pupil plane of an air objective (40\(\times\), 0.75 NA) using a 4-f imaging telescope. Phase correction was adaptively applied to the SLM, to optimize the laser focal spot and mitigate the aberration due to the refractive index mismatch between the sapphire and air [17]. Figure 1: The femtosecond laser fabrication system. The sample was mounted on a three-axis motion stage (\(x\), \(y\): Aerotech ABL10100L and \(z\): ANT95-3-V) and translated along the motion stage \(x\) and \(y\) axes at a speed between 0.1 and 25 mm/s. Laser modifications were introduced along the \(c\)-axis of 10\(\times\)10\(\times\)1 mm \(M\)-plane sapphire substrates (Pi-KEM Limited) at a depth of 200 \(\upmu\)m below the surface, with the polarization direction parallel to the writing direction. A range of values for pulse energy, repetition rate, and writing speed were used to write single-tracks within the bulk sapphire, in order to establish the parameter window for laser-crystal modification, following a procedure similar to that described in [15]. Fig. 2(a) and 2(b) show the top and cross-sectional views, respectively, of the femtosecond laser-inscribed tracks at a fixed repetition rate of 1 MHz and a stage translation speed of 11 mm/s. The tracks are for increasing laser pulse energies between 30 nJ and 175 nJ, from left to right. The threshold energy was found to be \(\sim\)15 nJ, below which the pulse energy became insufficient to generate nonlinear absorption inside the sapphire crystal. The tracks appeared black under the reflection microscope, likely because of scattering due to the formation of subwavelength nanograting structures [18, 19]. Fig. 2(c) summarizes the dimensions of the single-tracks written at different repetition rates and pulse energies, at a writing speed of 11 mm/s. The femtosecond laser-inscribed track exhibited an asymmetrical 'carrot shape', related to the energy density distribution at the laser focus and various nonlinear absorption mechanisms [20]. We measured the height and width as the longest dimension along the vertical (laser beam propagation direction) and horizontal (motion stage \(y\) direction) axes. It was found that the height of the single-track increased monotonically with increasing laser pulse energy and repetition rate, from 3.85 \(\upmu\)m (at a pulse energy of 34 nJ and repetition rate of 10 kHz) to 26.36 \(\upmu\)m (at 128 nJ and 1 MHz). It was also observed that the width of each single-track was consistently below 3 \(\upmu\)m, regardless of the fabrication parameters. Figure 2: Top-view (a) and cross-sectional view (b) of the laser-induced single-tracks using a repetition rate of 1 MHz and a scan speed of 11 mm/s, at increasing laser pulse energies, measured using a reflection microscope; the red bar indicates a length of 20 \(\upmu\)m; (c) graph of width (triangle) and height (square) of the femtosecond laser-written single-tracks for different pulse energies, at repetition rates between 10 kHz and 1 MHz.

## 3 Single-mode waveguides in planar sapphire

The regions of the sapphire directly exposed to the laser pulses undergo a decrease in refractive index. A DCW is formed by leaving the core unexposed and exposing the material surrounding the core to reduce its refractive index. Using the data from the single-tracks, a single-mode DCW was designed. The modification dimensions for 30 nJ laser pulse energy, 1 MHz repetition rate and a scan speed of 11 mm/s were used. The design consisted of thirty-four overlapped single-tracks, following an elliptical shape as shown in Fig. 3(a). The number of thirty-four was chosen empirically, to increase the overlap of individual tracks, whilst mitigating the formation of cracks. Fig. 3(b) shows the cross-sectional view of the fabricated DCW in \(M\)-plane sapphire bulk. The \(c\)-axis was along the waveguide length to be consistent with sapphire fiber. The outer dimensions for the major (vertical) and minor (horizontal) axes were measured to be \(\sim\)32 \(\upmu\)m and \(\sim\)17 \(\upmu\)m, respectively, while the corresponding values for the inner dimensions were \(\sim\)15 \(\upmu\)m and \(\sim\)10 \(\upmu\)m, respectively. To characterize the waveguiding performance of the DCW, a tunable laser source (ID Photonics, CoBrite) was connected to a standard single-mode fiber with a cleaved end (Corning SMF28e+) and butt-coupled to the sapphire end facet. The near-field output mode field was imaged using an 80\(\times\) microscope objective (Olympus ULWDMSPlan80) and lens (f = 100 mm achromatic doublet) onto an InGaAs camera (Hamamatsu C14041-10U). Fig. 3(c) shows the measured mode profile at 1550 nm, superimposed with the DCW design. Figure 3: (a) the DCW design; (b) microscope image of the fabricated DCW from the side facet; and (c) the measured mode profile at 1550 nm with the waveguide design superimposed on top. Figure 4: The experimentally measured guided mode profiles of a DCW at 1550 nm for TE (a) and TM (b) modes. Adjacent are their respective mode fields along the axis (blue straight line) together with a Gaussian fit (orange dashed line).
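To make the elliptical track layout described above concrete, the following sketch (ours, not the authors' fabrication code) generates thirty-four track-centre coordinates around an ellipse; the semi-axis values are hypothetical and chosen only for illustration:

```python
import numpy as np

def elliptical_track_centres(n_tracks=34, semi_major=12.0, semi_minor=6.0):
    """Candidate single-track centres arranged around an ellipse.

    Units are micrometres; the major (vertical) axis lies along the laser
    propagation direction, matching the DCW cross-sections above.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n_tracks, endpoint=False)
    y = semi_minor * np.cos(theta)  # horizontal (stage-y) offset
    z = semi_major * np.sin(theta)  # vertical (depth) offset
    return np.stack([y, z], axis=1)

centres = elliptical_track_centres()
print(centres.shape)  # (34, 2): one (y, z) offset per overlapped track
```

In practice, because each written track is strongly elongated in the vertical direction, the spacing and count would be tuned empirically, as the authors did.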
A polarizer plate was placed in front of the camera and the polarization of the input light was adjusted with a manual controller. Fig. 4 shows the mode profiles of the DCW. The full-width-half-maximum (FWHM) for the TE mode (polarized along the \(y\) direction) was measured to be 6.32 \(\upmu\)m horizontally and 10.49 \(\upmu\)m vertically [Fig. 4(a)], while these values were 6.07 \(\upmu\)m and 10.68 \(\upmu\)m, respectively, for the TM mode (polarized along the \(z\) direction) [Fig. 4(b)]. The total insertion loss was measured to be between 10.96 and 11.02 dB (polarization dependent). The waveguides therefore exhibit extremely low polarization-dependent loss. The loss includes the Fresnel reflection loss, coupling loss due to the mode mismatch between single-mode fiber and DCW, propagation loss for the 1 cm waveguide, and an extra light diffraction loss due to an edge chamfer leading to a gap of \(\sim\)250 \(\upmu\)m between the DCW and the bulk end facet. The measured core dimensions and mode widths were used to estimate the laser-induced refractive index change, assuming the core refractive index is unchanged. A two-dimensional Gaussian was fitted to the measured mode widths using a variational best fit. Using a method in [21], the refractive index modification was estimated to be \(\Delta n=-2\times 10^{-3}\). Simulation results using FIMMWAVE(tm) (Photon Design Ltd.) are shown in Fig. 5 for the waveguide, confirming it to be single-mode. The next higher mode (\(E_{y11}\)) is predicted to occur at a core size of \(11.66\times 15.16\) \(\upmu\)m. There is therefore a good margin for single-mode behavior. The waveguide loss is dominated by mode-leakage due to the finite cladding width as shown in Fig. 5(b) and could be reduced by increasing this width. The simulated losses appear to be higher than observed, which is likely due to the uncertainty in estimating the extent of the modified region. The loss is dominated by the minor axis, so an optimum design would have a circular cladding. Berube _et al_. achieved a loss of 0.37 dB cm\(^{-1}\) in the mid-infrared (2850 nm) [15]. If similar losses could be achieved in sapphire fiber at telecommunication wavelengths, this could allow fiber lengths of 25 cm or more to be used in reflection. Such lengths would be more than sufficient for many applications, for example passing through the turbine casing of an aero engine. Using simulations based on the inferred laser-inscribed refractive index change, the required cladding diameter to reduce the waveguide leakage loss to 0.1 dB cm\(^{-1}\) was found to be 49 \(\upmu\)m. Reducing the waveguide loss to this figure would allow 1-m lengths of fiber to be used, enabling many new applications.

## 4 Single-mode waveguide Bragg gratings in planar sapphire

Single-mode waveguide Bragg gratings (WBG) have been previously demonstrated in various crystals and different waveguide geometries. A WBG has been demonstrated using a DCW and a Bragg grating in LiNbO\(_{3}\) for electro-optical [22] and quasi-phase-matching devices [23]. However, these would not be suitable for high-temperature applications [20, 24]. An alternative candidate for high-temperature stability is the dual-line waveguide, which we previously demonstrated on a diamond substrate [25]. However, a dual-line waveguide would not be suitable for transferring to an optical fiber as there would be significant bend loss.
Figure 5: (a) Mode intensity profile of the DCW computed in FIMMWAVE(tm) and (b) mode loss as a function of outer ring minor axis width for a fixed aspect ratio of 1.75:1, computed in FIMMWAVE(tm). A new approach was therefore needed to fabricate WBGs suitable for sapphire fibers requiring high-temperature stability. We developed a three-step process as follows:

* Step 1: The bottom layers of the multi-layer DCW were first fabricated using a transversal writing method [Fig. 6(a1)].
* Step 2: Each individual period of the WBG was written as a ring using a longitudinal writing method [Fig. 6(a2)].
* Step 3: The top layers were written using the same parameters as the bottom layer [Fig. 6(a3)].

The whole process is illustrated in Fig. 6(b), with the 3-D design of each fabrication step displayed from left to right. The WBG fabrication method is similar to one employed for bulk glass material [26], where the refractive index modulation is formed by utilizing the intrinsic shape of the Gaussian beam along the beam propagation direction. Fig. 6(c) shows the top-view of a fabricated WBG structure. Waveguide Bragg gratings were fabricated following the three-step process to the dimensions of the DCW shown in Fig. 7(a). This DCW had a further 3 outer layers compared with that in Fig. 3(a), in order to reduce the waveguide loss. We fabricated a second-order grating with a period of 887.76 nm. For Steps 1 and 3, a 22 nJ pulse energy, 1 MHz repetition rate, and 11 mm/s scan speed were used, while for Step 2, a speed of 0.1 mm/s and 30 nJ pulse energy were used. The cross-sectional view of the structure is shown in Fig. 7(b). It is noted that cracks in the crystal occur both along the beam propagation and sample translation directions during the fabrication process, particularly at the top and bottom of the DCW structure [Fig. 3(b) and Fig. 6(a)]. While it was found experimentally that these cracks can be mitigated by lowering the laser pulse energy or reducing the overlapping of neighboring single tracks, such approaches also result in either a decrease in the refractive index modification or sacrifice the homogeneity in the DCW region. Though the cracks would distort the beam focus in the nearby region and cause inconsistency during fabrication, they reside away from the guiding area and did not incur significant degradation of the guiding behavior of our multi-layer structure. Figure 6: (a) cross-sectional views for Step 1 (a1), Step 2 (a2), and Step 3 (a3) during the three-step fabrication process, (b) diagram of the three-step multi-layer WBG fabrication process, and (c) the top-view of a fabricated multi-layer second-order WBG (Step 3). The red bars indicate a length of 20 \(\upmu\)m. After fabrication, the guided mode field was measured using the same method as the DCW, shown in Fig. 7(c). The total insertion loss of the 1 cm WBG was measured to be between 6.85 and 9.84 dB at 1550 nm depending on the input light polarization. The spectral performance of the WBG was characterized using a tunable laser source and photodetector system (Agilent 8164A), with a 3-dB coupler. Fig. 7(d) shows the reflection spectrum at room temperature from 1545 to 1550 nm in 2 pm steps. The WBG has a Bragg wavelength, \(\lambda_{\mathrm{B}}=1547.75\) nm. We believe that the small additional peak at 1547.6 nm is actually the result of interference fringes caused by a Fabry-Perot cavity formed by reflection at the facets. In a commercial device, an antireflection coating could be applied to the facets to mitigate this effect.
Whilst the fringes make it difficult to accurately determine the WBG bandwidth, the FWHM would appear \\(<\\)0.5 nm. The mean effective refractive index for the waveguide structure was calculated to be \\(n_{\\mathrm{eff}}\\)\\(=1.7437\\), from the Bragg equation \\(m\\lambda_{B}=2n_{eff}\\Lambda\\), for order \\(m\\) and pitch \\(\\Lambda\\). To verify the high-temperature performance, a three-layer sapphire WBG written with the same fabrication parameters described above was annealed inside a box furnace. The temperature was raised up to 1000\\({}^{\\circ}\\)C over 7 hours, kept at 1000\\({}^{\\circ}\\)C for an hour, then decreased to room temperature over 7 hours. Fig. 8 shows the reflection spectrum of the WBG before and after annealing. The wavelength shifted from 1547.93 nm to 1548.43 nm, equivalent to an increase in the mean effective refractive index of the WBG from 1.7439 to 1.7444 after annealing. Figure 7: (a) The design schematic of the multi-layer WBG, the red bar indicates a length of 20 μm, (b) the cross-sectional view of the fabricated sample, measured using a transmission microscope, (c) experimentally measured guided mode at 1550 nm, with the mode profile along the horizontal and vertical central axis plotted at the top and the left axes, and (d) the experimentally measured reflection spectrum. ## 5 Single-mode Sapphire Fiber Bragg Grating The fabrication technique described above was applied to create single-mode FBGs in sapphire fibers. A commercial sapphire fiber with a diameter of 425 \\(\\upmu\\)m and the _c_-axis along its length was used (Photran). Sapphire fiber usually has a hexagonal shape from the manufacturing process. However, the surfaces of the six sides are not perfectly flat and have curvatures towards the radial direction. Using the motion stage and the imaging system, the fiber surfaces were profiled, and an arc was fit to each surface of the sapphire fiber, to calculate an approximate radius of curvature. The aberration resulting from this curvature was calculated and compensated using a method we developed for silica optical fibers [27]. The required phase compensation to focus the laser beam at different positions within the sapphire fiber cross-section was calculated and decomposed into Zernike coefficients. This phase correction was applied to the SLM dynamically in real-time during fabrication. It was observed that the dimensions of each single track written in the fiber was much smaller than those in the bulk sapphire at the same parameter settings, which was likely due to incomplete correction of the aberration at the cylindrical fiber surface. The waveguide design was adjusted to take into account the change in dimensions, to create a single-mode guiding area with surrounding outer layers. A second-order FBG was inscribed in the fiber, surrounded by two outer layers at a depth between 50 and 100 \\(\\upmu\\)m below the fiber top surface. The geometry was carefully designed such that the FBG in the center had a guiding area dimension consistent with the single-mode DCW dimensions we found in the modeling and the WBG on the planar sapphire. The same laser pulse energy and repetition rate as that of the planar waveguide were used, while the translation speed was adjusted to 1-mm/s for Step 1 and 3. A 1-cm long FBG was fabricated, shown in Fig. 9(a-b). Crack formation could be observed along the FBG at the top. Figure 8: The reflection spectrum of a WBG before and after annealing. 
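As a quick numerical check of the Bragg relation \(m\lambda_{\mathrm{B}}=2n_{\mathrm{eff}}\Lambda\) used above, the effective index and the annealing-induced index change can be recovered directly from the reported wavelengths (a sketch of our own; all values are taken from the text):

```python
# Bragg condition for a second-order grating: m * lambda_B = 2 * n_eff * pitch
m = 2
pitch_nm = 887.76  # grating period from the text

def n_eff(lambda_b_nm):
    return m * lambda_b_nm / (2 * pitch_nm)

print(round(n_eff(1547.75), 4))  # ~1.7434, close to the 1.7437 quoted above

# Annealing shift of the planar WBG: 1547.93 nm -> 1548.43 nm
delta_n = n_eff(1548.43) - n_eff(1547.93)
print(f"{delta_n:.1e}")  # ~5.6e-4, comparable to the quoted 1.7439 -> 1.7444
```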
The FBG section was cut and polished using silicon carbide polishing pads to optical quality, in order to expose the fabricated device. A cross-sectional image of the FBG is shown in Fig. 9(c) with a magnified view in the inset. The resulting structure has inner dimensions of \(\sim\)9 \(\upmu\)m and \(\sim\)8 \(\upmu\)m for the major (vertical) and minor (horizontal) axes, respectively. The outer dimensions were \(\sim\)53 \(\upmu\)m and \(\sim\)26 \(\upmu\)m, respectively. Crack formation was again observed both perpendicular and parallel to the FBG directions. The transmission mode profile of the FBG was measured at 1550 nm in Fig. 9(d) and the FWHM was calculated to be 6.67 \(\upmu\)m horizontally and 7.52 \(\upmu\)m vertically. The waveguide within the sapphire fiber shows single-mode operation. The noisy mode profile is likely caused by manufacturing defects due to an inhomogeneous fiber shape along the laser writing direction. The curvature of the sapphire varies along its length, but this was not taken into account, leading to some inaccuracy in aberration correction. Changes in the DCW dimension may introduce an increase in propagation loss. This could be improved with real-time feedback of the fiber surface profile to adaptively correct the phase. The reflection spectrum of the sapphire FBG was measured, shown in Fig. 10. There is a Bragg resonance, with a center wavelength at 1549.04 nm, corresponding to a mean effective refractive index of \(n_{\text{eff}}=1.7451\). As before, we believe that what appear to be side modes immediately adjacent to the main peak are Fabry-Perot cavity modes, arising from reflection at the end facets. Index matching gel was used on both facets; however, as the sapphire has a higher index, residual reflection was inevitable. As previously discussed, this effect could be mitigated with antireflection coatings on the end facets. The bandwidth of the Bragg reflection appears \(<\)0.5 nm as a result of the single-mode waveguide. Furthermore, unlike multimode sapphire FBGs, the mode is stable, showing significant promise for accurate measurements in extreme environments. Figure 9: (a) Top-view microscope image of a sapphire fiber with an FBG inscribed in the center, (b) a magnified view of (a), (c) microscope image of the end-facet, with a three-layer FBG (Inset: magnified view with the orange bar indicating a length of 20 \(\upmu\)m); and (d) the measured transmitted mode profile. Figure 10: The experimentally measured reflection spectrum of the single-mode sapphire FBG.

## 6 Conclusions

To conclude, sapphire DCWs which are single-mode at telecommunications wavelengths have been demonstrated. The DCWs were fabricated with femtosecond laser direct writing using non-immersion lenses and adaptive optics aberration compensation. We confirmed the DCWs to be single-mode via experimental measurements and simulations. This work was extended to fabricate single-mode sapphire WBGs. This was achieved through a novel ring-shaped geometry incorporating a multi-layer DCW design and cladding-by-cladding inscription. The WBG had a narrow bandwidth (\(<\)0.5 nm), and withstood annealing at 1000\({}^{\circ}\)C. Finally, we created a single-mode sapphire FBG by writing these structures within 425 \(\upmu\)m diameter sapphire optical fiber. This result shows great potential for multipoint quasi-distributed sensing in ultra-extreme environments.
Furthermore, single-mode sapphire fiber fabricated with this method enables a wide variety of other challenging sensing and communications applications.

**Funding.** Engineering and Physical Sciences Research Council (EP/T00326X/1, EP/R004803/01)

**Acknowledgments.** The authors gratefully acknowledge the support and advice of their partners Rolls-Royce plc, Cranfield University, UK Atomic Energy Authority and MDA Space and Robotics. The authors thank Tony Wheeler for the use of the furnace. They also thank Professor Dominic O'Brien and Dr Andy Schreier for the use of optical test equipment.

**Disclosures.** The authors declare no conflicts of interest.

**Data availability.** Data underlying the results presented in this paper are available in Dataset 1, Ref. [28].

## References

* [1] H. Chen, M. Buric, P. R. Ohodnicki, J. Nakano, B. Liu, and B. T. Chorpening, "Review and perspective: Sapphire optical fiber cladding development for harsh environment sensing," Applied Physics Reviews **5**, 11102 (2018).
* [2] S. J. Mihailov, "Fiber Bragg grating sensors for harsh environments," Sensors **12**, 1898-1918 (2012).
* [3] B. Wang, Y. Niu, X. Qin, Y. Yin, and M. Ding, "Review of high temperature measurement technology based on sapphire optical fiber," Measurement: Journal of the International Measurement Confederation **184**, (2021).
* [4] The International Society for Optical Engineering (2018), Vol. 10618.
* [5] D. Grobnic, S. J. Mihailov, C. W. Smelser, and H. Ding, "Sapphire fiber Bragg grating sensor made using femtosecond laser radiation for ultrahigh temperature applications," IEEE Photonics Technology Letters **16**, 2505-2507 (2004).
* [6] A. Graf, H. Bartelt, M. Rothhardt, T. Elsmann, and T. Habisreuther, "Inscription of first-order sapphire Bragg gratings using 400 nm femtosecond laser radiation," Optics Express **21**, 4591-4597 (2013).
* [7] S. Yang, D. Hu, and A. Wang, "Point-by-point fabrication and characterization of sapphire fiber Bragg gratings," Optics Letters **42**, 4219 (2017).
* [8] Q. Guo, Y. sen Yu, Z. M. Zheng, C. Chen, P. L. Wang, Z. N. Tian, Y. Zhao, X. Y. Ming, Q. D. Chen, H. Yang, and H. B. Sun, "Femtosecond laser inscribed sapphire fiber Bragg grating for high temperature and strain sensing," IEEE Transactions on Nanotechnology **18**, 208-211 (2019).
* [9] X. Xu, J. He, J. He, B. Xu, R. Chen, Y. Wang, Y. Yang, and Y. Wang, "Efficient point-by-point Bragg grating inscription in sapphire fiber using femtosecond laser filaments," Optics Letters **46**, 2742 (2021).
* [10] Q. Guo, S. Liu, X. Pan, B. Wang, Z. Tian, C. Chen, Q. Chen, Y. Yu, and H. Sun, "Femtosecond laser inscribed helical sapphire fiber Bragg gratings," Optics Letters **46**, (2021).
* [11] S. Yang, D. Homa, G. Pickrell, and A. Wang, "Fiber Bragg grating fabricated in micro-single-crystal sapphire fiber," Optics Letters **43**, 62 (2018).
* [12] Y. Cheng, C. Hill, B. Liu, Z. Yu, H. Xuan, D. Homa, A. Wang, and G. Pickrell, "Modal reduction in single crystal sapphire optical fiber," Optical Engineering **54**, 107103 (2015).
* [13] B. A. Wilson and T. E. Blue, "Creation of an Internal Cladding in Sapphire Optical Fiber Using the \(^{6}\)Li(n,\(\alpha\))\(^{3}\)H Reaction," IEEE Sensors Journal **17**, 7433-7439 (2017).
* [14] Q. Guo, Z. Jia, X. Pan, S. Liu, Z. Tian, Z. Zheng, C. Chen, G. Qin, and Y. Yu, "Sapphire-Derived Fiber Bragg Gratings for High Temperature Sensing," Crystals **11**, 946 (2021).
* [15] J.-P. Berube, J. Lapointe, A. Dupont, M. Bemier, and R.
Vallee, \"Femtosecond laser inscription of depressed cladding single-mode mid-infrared waveguides in sapphire,\" Optics Letters **44**, 37 (2019). * [16] J. Fells, M. Booth, and P. Salter, \"Method of laser modification of an optical fibre,\" International Patent Application WO2019030521A1 (February 14, 2019). * [17] P. S. Salter, M. Baum, I. Alexeev, M. Schmidt, and M. J. Booth, \"Exploring the depth range for three-dimensional laser machining with aberration correction,\" Optics Express **22**, 17644 (2014). * [18] L. Capuano, R. M. Tiggelaar, J. W. Berenschot, J. G. E. Gardeniers, and G. R. B. E. Romer, \"Fabrication of millimeter-long structures in sapphire using femtosecond infrared laser pulses and selective etching,\" Optics and Lasers in Engineering **133**, 106114 (2020). * [19] H. Fan, M. Ryu, R. Honda, J. Morikawa, Z. Z. Li, L. Wang, J. Maksimovic, S. Juodkazis, Q. D. Chen, and H. B. Sun, \"Laser-Inscribed stress-induced birefringence of sapphire,\" Nanomaterials **9**, 1-9 (2019). * [20] J. Burghoff, S. Nolte, and A. Tunnermann, \"Origins of waveguiding in femtosecond laser-structured LiNbO3,\" Applied Physics A: Materials Science and Processing **89**, 127-132 (2007). * [21] A. W. Snyder and J. D. Love, Optical Waveguide Theory (Springer US, 1984). * [22] S. Kroesen, W. Horn, J. Imbrock, and C. Denz, \"Electro-optical tunable waveguide embedded multiscan Bragg gratings in lithium niobate by direct femtosecond laser writing,\" Optics Express **22**, 23339 (2014). * [23] C. Denz, J. Imbrock, L. Wesemann, M. Ayoub, and S. Kroesen, \"Waveguide-integrated three-dimensional quasi-phase-matching structures,\" Optica **7**, 28-34 (2020). * [24] L. Huang, P. Salter, M. Karpinski, B. Smith, F. Payne, and M. Booth, \"Waveguide fabrication in KDP crystals with femtosecond laser pulses,\" Applied Physics A: Materials Science and Processing **118**, 831-836 (2015). * [25] V. Bharadwaj, Y. Wang, T. T. Fernandez, R. Ramponi, S. M. Eaton, and G. Galzerano, \"Femtosecond laser written diamond waveguides: A step towards integrated photonics in the far infrared,\" Optical Materials **85**, (2018). * [26] B. McMillen, B. Zhang, K. P. Chen, M. Li, and S. Huang, \"Ultrafast laser fabrication of Bragg waveguides in chalcogenide glass,\" Optics Letters **39**, 3579-3582 (2014). * [27] P. S. Salter, M. J. Woolley, S. M. Morris, M. J. Booth, and J. A. J. Fells, \"Femtosecond fiber Bragg grating fabrication with adaptive optics aberration compensation,\" Optics Letters **43**, 5993 (2018). * [28] Dataset 1, Datafiles 1-9, Figshare (2021) _[Data submitted as supplementary material for this manuscript on Figshare]_
We present here the inscription of single-mode waveguides with Bragg gratings in sapphire. The waveguide Bragg gratings have a novel multi-layer depressed cladding design in the 1550 nm telecommunications waveband. The Bragg gratings have a narrow bandwidth (\\(<\\)0.5 nm) and have survived annealing at 1000\\({}^{\\circ}\\)C. The structures are inscribed with femtosecond laser direct writing, using adaptive beam shaping with a non-immersion objective. A single-mode sapphire fiber Bragg grating is created by writing a waveguide with a Bragg grating within a 425 \\(\\upmu\\)m diameter sapphire optical fiber, providing significant potential for accurate remote sensing in ultra-extreme environments. Published by The Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
# Segment anything, from space?

Simiao Ren\({}^{1}\) Francesco Luzi*\({}^{1}\) Saad Lahrichi*\({}^{2}\) Kaleb Kassaw*\({}^{1}\) Leslie M. Collins\({}^{1}\) Kyle Bradbury\({}^{1,3}\) Jordan M. Malof\({}^{4}\) \({}^{1}\) Electrical and Computer Engineering, Duke University \({}^{2}\) Division of Natural and Applied Sciences, Duke Kunshan University \({}^{3}\) Nicholas Institute for Energy, Environment & Sustainability, Duke University \({}^{4}\) Computer Science, University of Montana {simiao.ren, francesco.luzi, saad.lahrichi, kaleb.kassaw}@duke.edu, {leslie.collins, kyle.bradbury}@duke.edu, [email protected]

## 1 Introduction

Foundation models are large deep learning models (e.g., in terms of free parameters) that have been trained on massive datasets, giving them the ability to generalize well to novel down-stream tasks (e.g., novel datasets, or prediction targets) with little or no additional training (i.e., so-called few-shot or zero-shot generalization). Recent foundation models have been focused on natural language processing (NLP) tasks (e.g., BERT [9], GPT-3 [4]), while some work also focused on text and imagery (e.g., CLIP [24] and ALIGN [14]). Recently the first foundation model primarily designed for segmentation tasks was developed, termed the "Segment Anything Model" (SAM) [18]. SAM is designed to output a segmentation mask for a given input image based upon one (or several) of the following input prompts: one (or more) points, a bounding box, or a segmentation mask (e.g., one that is coarse). The goal of SAM is to segment _any_ object in _any_ image only based upon these cheap input prompts, and without the need for additional task-specific or dataset-specific adaptation (e.g., training) [18]. SAM's flexible input prompting [18] and zero-shot segmentation scheme make it a highly flexible model that could be employed as a key component in a variety of vision systems. To illustrate SAM's flexibility and potential impact, the authors in [18] demonstrated its effectiveness on several potential application scenarios. Two important scenarios - which we aim to replicate in overhead imagery in this work - were _interactive annotation_ and so-called _model composition_, which are broadly illustrated in Fig. 1 (see Sec. 4 and Sec. 5 for more detail). Figure 1: Illustration of using SAM for segmenting solar arrays in satellite imagery. **Interactive Annotation.** In this scenario SAM is employed to reduce the cost of manual (human) annotation of target object instances. Rather than drawing a mask by hand, the annotator provides cheap input prompts such as bounding boxes or point prompts to SAM. The annotator can optionally provide additional prompts as feedback to SAM, to iteratively refine its segmentation, or _mask_, predictions. SAM was shown to generally outperform existing interactive annotation methods - often by a large margin - on natural imagery tasks and to be capable of producing highly accurate annotations with relatively little guidance [18]. **Model Composition.** In this scenario SAM is employed to predict a segmentation mask based on prompts from another vision model: e.g., a bounding box or another (potentially low-quality) mask. For example, given limited data, a detection model could be trained to produce a (potentially noisy) bounding box, which could then be used to prompt SAM.
In this scenario, and on natural imagery, SAM often achieved segmentation accuracy that was comparable to existing models that had been trained specifically for the target task (i.e., with full target label supervision) [18].

### Contributions of this Work

The authors of SAM reported comprehensive empirical evidence demonstrating the effectiveness of SAM for natural imagery; however, they only investigated a single overhead imagery dataset. Vision problems for overhead imagery exhibit unique challenges compared to natural imagery, and therefore it is unclear whether, or to what extent, the effectiveness of SAM transfers to tasks involving overhead imagery. In other words, we ask whether SAM can segment anything _from space_. For example, Fig. 2 presents some out-of-the-box results with SAM on varying tasks, illustrating a variety of performance levels. To answer this question, we replicate the experiments from [18] that correspond to two important potential applications in overhead imagery: interactive annotation and model composition. We adapt the aforementioned experiments of [18] to eight existing public benchmark datasets of overhead imagery (see Table 1), encompassing 5 million square kilometers of surface area. The benchmarks include a variety of widely-studied object classes (e.g., buildings, roads, cloud cover, farming crops), image resolutions (e.g., 0.3m - 30m), and geographic locations (e.g., including Africa, Europe, Asia, and North America). To our knowledge, this is the first systematic and comprehensive investigation of SAM on overhead imagery, including experiments relevant to both model composition and interactive annotation, thereby providing valuable guidance to the research community. From our experiments, we also identify SAM failure cases that are unique to overhead imagery and suggest potential future areas of research for the community.

## 2 Related Work

In the short time since its publication, SAM has been used in numerous applications including image dehazing [15], image tagging [34], shadow segmentation [29], sonar segmentation [28], foreground segmentation [30], electron microscopy segmentation [7, 19], and many medical segmentation applications [1, 10, 11, 12, 28, 32, 17]. There have also been several applications of SAM in remote sensing, including segmentation of geological features and landforms in planetary structures [16], segmentation of glaciers [26] and sea ice [31], road segmentation [13], and building segmentation [13, 33]. Our work represents the first comprehensive evaluation of SAM on overhead imagery (e.g., including a variety of target classes, resolutions, and geographic locations), and the first to include both model composition and interactive annotation scenarios. Collectively, therefore, our results are the first that assess how well SAM transfers to overhead imagery applications.

## 3 Benchmark Datasets

For our experiments, we utilized eight benchmark datasets of overhead imagery that were selected for inclusion based on several criteria. We first had two strict criteria for inclusion: (i) the availability of class-level or instance-level segmentation labels; and (ii) the datasets were publicly available, to enable further study by the community.
Among the datasets that satisfied these criteria, we also applied several softer criteria for inclusion: (i) we prioritized datasets that were of greater interest to the overhead imagery community, as evidenced by their inclusion in many prior publications; (ii) benchmarks that involved widely-studied target objects (e.g., buildings, roads, land use); and (iii) datasets that collectively resulted in a representative set of key properties (e.g., geographic location, image resolution, or target classes). The resulting set of eight datasets that we selected is shown in Table 1, along with references and key details. Further details about our benchmark datasets can be found in the Supplement. **Additional Data Preparation Details.** The datasets we ultimately used for experimentation were constructed based upon the benchmarks in Table 1. In many cases, we used the datasets without modification: Solar, 38-Cloud, DeepGlobe Roads, SpaceNet2. However, we made some modifications to other datasets. Following a recent large-scale comparison of building segmentation models [21], we combine the Inria and the DeepGlobe Building datasets into a single dataset, _DG+Inria Building_, allowing us to compare our results with SAM to the strong supervised models from [21]. The DeepGlobe Land dataset comprises seven classes, and all pixels are assigned to one of seven classes, resulting in a large number of potential object instances; in our experiments, we treat each connected component within each class as an instance (see Sec. 4). To make the problem less computationally intensive, we chose a subset of three classes of increasing visual complexity (water, agriculture, and urban), and in each case we set their labels to one, and all other classes to zero, resulting in three separate problems: _DG-land-water_, _DG-land-urban_, and _DG-land-agri_. Lastly, the Parcel Delineation dataset is natively an edge detection dataset; however, we designated each isolated component of non-edge pixels as an object instance.

## 4 Model Composition Experiments

In the Composition scenario, SAM is utilized to enhance the predictions made by other vision models. Specifically, following [18], we assume that SAM is prompted with either a single point, \(p\), or a bounding box, \(b\), that has been produced by some other vision model. The goal of SAM is to produce an accurate instance mask based upon these simpler, and potentially imperfect, prompts. To emulate the desired model-based prompts, we train U-Net models to generate the bounding box prompts, and we use the ground truth labels from each benchmark to generate the point-based prompts. Our experimental design is illustrated in Fig. 3. **U-Net Models.** We train a U-Net segmentation model for each of our benchmark datasets, denoted _U-Net (ours)_. Note that most of our benchmarks have class-level segmentation labels (as opposed to instance-level) and therefore we train our models to predict class-level masks. For each benchmark we create disjoint training and testing data partitions for model training and evaluation. Where possible we use pre-established partitions for each benchmark (e.g., Inria, DeepGlobe, SpaceNet). For some of our benchmarks - where they are available - we also report the IoU of a recent state-of-the-art (SOTA) U-Net model, denoted _U-Net (SOTA)_. Full details of these models can be found in the Supplementary Materials (e.g., architecture, training procedures, source publications).
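The connected-component convention described above, which is also reused throughout the experiments below, can be sketched with standard SciPy calls (a minimal illustration of our own; function names are ours):

```python
import numpy as np
from scipy import ndimage

def class_mask_to_instances(class_mask: np.ndarray) -> list:
    """Treat each connected component of a binary class mask as one instance."""
    labeled, n_components = ndimage.label(class_mask > 0)
    return [labeled == i for i in range(1, n_components + 1)]

# Toy example with two disjoint regions -> two "instances".
toy = np.zeros((8, 8), dtype=np.uint8)
toy[1:3, 1:3] = 1
toy[5:7, 4:7] = 1
print(len(class_mask_to_instances(toy)))  # -> 2
```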
The U-Net models described above reflect the current performance that can be achieved using fully supervised techniques, providing a useful comparison to SAM. **Bounding Box Prompts.** Based upon the class-level masks output by the _U-Net (ours)_ model, we extract instance-level bounding boxes that can then be used to prompt SAM. This is achieved by treating each connected component in the U-Net output as an instance-level mask. Then for each of these instance-level masks, we compute the smallest bounding box that encloses it and use the resulting box to prompt SAM. From these prompts, SAM produces instance-level mask predictions, denoted \(\hat{m}_{i}\), where \(i\) refers to the mask generated by the \(i^{th}\) box prompt. To evaluate the accuracy of SAM's masks, we use them to create a single class-level mask \(\hat{m}=\cup_{i}\hat{m}_{i}\), which can be scored against the class-level ground truth masks available with our benchmark datasets. **Point Prompts.** Our point-based prompts are generated using the ground truth labels that are available with each benchmark dataset. Except for SpaceNet, our benchmark datasets all provide class-level labels. We treat connected components in the ground truth masks as instance-level masks. Within each mask, we generate two point prompts: one by selecting a random point within the mask, and one by selecting the center point in the mask (where "center" refers to the point most distant from the mask boundary). We score SAM's performance once using only the random points, and once using only the center points. When SAM is prompted with a single point it produces three candidate masks, denoted \(\hat{m}^{k}\), and a score for each mask indicating the model's estimate of the IoU of that mask with the true underlying object, denoted \(\hat{c}^{k}\). SAM returns as its prediction the mask with the highest predicted IoU, which we denote \(\hat{m}^{*}\). Similar to bounding boxes, we produce a single class-level mask by taking the union of all instance-level mask predictions.
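The prompt construction and scoring just described can be sketched as follows (our own illustration of the described procedure, not the authors' code): the tightest box around each instance, the "center" point via a distance transform, and class-level IoU over the union of predicted instance masks.

```python
import numpy as np
from scipy import ndimage

def enclosing_box(mask):
    """Smallest box (x0, y0, x1, y1) enclosing a binary instance mask."""
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

def center_point(mask):
    """'Center' prompt: the interior point most distant from the boundary."""
    dist = ndimage.distance_transform_edt(mask)
    y, x = np.unravel_index(int(np.argmax(dist)), mask.shape)
    return int(x), int(y)

def random_point(mask, rng=np.random.default_rng(0)):
    """Random prompt: any foreground pixel of the instance mask."""
    ys, xs = np.nonzero(mask)
    i = rng.integers(len(xs))
    return int(xs[i]), int(ys[i])

def class_level_iou(instance_preds, gt_class_mask):
    """Union the predicted instance masks and score against the class mask."""
    pred = np.any(np.stack(instance_preds), axis=0)
    inter = np.logical_and(pred, gt_class_mask).sum()
    union = np.logical_or(pred, gt_class_mask).sum()
    return inter / union if union else 1.0
```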
| Dataset | Unique Locations | Classes | Size (km\(^2\)) | Resolution (m) | Task Type |
|---|---|---|---|---|---|
| Solar [3] | 4 | Solar Panel | 1,352.25 | 0.30 | Seg |
| Inria [22] | 5 | Building | 405 | 0.30 | Seg |
| DeepGlobe [8] | 4 | Building | 398.28 | 0.31 | Seg |
| 38-Cloud [23] | NA | Cloud | 2,188,800 | 30 | Seg |
| DeepGlobe Roads [8] | 3 | Roads | 2,220 | 0.50 | Seg |
| Parcel Delineation [2] | 1 | Crop Boundaries | 4,403.84 | 10 | Edge |
| DeepGlobe Land [2] | 3 | Land Use | 1,716.9 | 0.50 | Seg |
| SpaceNet 2 [27] | 4 | Building | 3,011 | 0.30 | Seg |

Table 1: A summary of the quantity, resolution, and type of data we evaluate. The resolution is in units of meters per pixel. For task type, "Seg" denotes segmentation and "Edge" denotes edge detection.

### Results and Discussion

The results of our composition experiments are presented in Fig. 4. The results indicate that the task-specific supervised models almost always achieve the highest IoU. This is unsurprising since these models were trained using a large quantity of task-specific segmentation masks. The _U-Net (SOTA)_ model always outperforms _U-Net (ours)_, reflecting the additional effort invested in its specialized design for each benchmark. Therefore, the IoU of the supervised models represents the performance that can be achieved using contemporary vision models out-of-the-box - in the case of _U-Net (ours)_ - and contemporary vision models with significant design investment - in the case of the _U-Net (SOTA)_ model. We also see that SAM consistently performs significantly worse with point prompts compared to more informative (but more expensive) box prompts, and that the position of point prompts (center versus random) has little effect on results. These findings for overhead imagery are largely consistent with those reported for SAM for natural imagery in [18] (e.g., on COCO and LVIS), where SAM's performance was also always inferior to a supervised model, and less-informative prompts performed worse. However, we highlight a few key differences in our findings here. One key difference in our findings is that the gap in performance between SAM and supervised models (the U-Net models in our case) is significantly larger on most of our benchmarks. This is especially noticeable on the largest and most widely studied benchmark target objects, such as buildings and roads. We hypothesize that this reflects SAM's bias towards objects in natural imagery. For example, the building class is highly complex (e.g., buildings are often large, and comprise many object-like sub-components), and therefore they likely cannot be well-characterized using a generic "objectness" concept inferred based upon natural imagery. However, SAM also performs relatively poorly (compared to supervised models) on solar arrays, despite their apparent visual simplicity, suggesting that it is not guaranteed to generalize to overhead imagery even when the target objects are relatively simple. As we discuss in **Sec. 6**, SAM's performance can depend strongly upon the scale of the objects (i.e., we find simply artificially up-sampling the imagery can be beneficial), potentially impacting its performance on solar arrays or other small object instances. Figure 2: The input image, prompts, and segmentation output for SAM is shown for the building, road, solar panel, and crop segmentation tasks. Each row displays a specific segmentation task and the input-output image pairs are ordered from worst performing to best performing. Red X's in the input images represent the prompts given to SAM. The output is shown in three colors where white represents the correctly predicted pixels, red shows the false positive pixels, and green shows the missed pixels. The image-wise IoU is shown above each output image. Figure 3: Illustration of the Model Composition experimental design, adapted from [18] to our overhead imagery application. Please see text for detailed description.
Lastly, we see that SAM achieves substantially lower performance, and low performance overall, on the road class. This highlights a major challenge with some target classes in overhead imagery where the notion of an object instance is not well-defined. We discuss this issue further in **Sec. 6**. Overall our results suggest that SAM can sometimes generalize well to overhead imagery tasks, providing competitive zero-shot results compared with supervised models in some cases. However, in most cases, it performs relatively poorly compared to supervised models, especially heavily-engineered models (e.g., on buildings and clouds), and sometimes it completely fails (e.g., roads). Many of these challenges may be overcome by fine-tuning SAM on overhead imagery classes (e.g., buildings, clouds, land use), or by producing class-level segmentations rather than instance-level segmentations in cases where an instance concept is poorly defined (e.g., roads). Figure 4: The intersection-over-union for all models included in our Model Composition experiments.

## 5 Interactive Annotation Experiments

In the Interactive Annotation scenario, SAM is employed to reduce the time and effort required for a human to draw instance-level segmentation masks. Rather than drawing a full mask by hand, the annotator can provide one (or more) cheap input prompts, from which SAM can provide richer instance-level masks. Specifically, following [18], we assume that SAM is prompted with either a series of points, denoted \(p_{i}\) for the \(i^{th}\) point, or a bounding box, denoted \(b\). In contrast to the composition scenario in Sec. 4, here we assume that the prompts are provided by a human annotator and this changes some of the experimental design. Our experimental design, which follows [18], is illustrated in Fig. 5 and described below. **Bounding Box Prompts.** Bounding box prompts are created in the same fashion as described in Sec. 4 except that we use ground truth class-level masks available with our benchmark datasets instead of those generated by a supervised model. Ground truth masks are generally more accurate than those generated by a model, reflecting the superior accuracy of a human annotator. Recapitulating this procedure, we extract instance-level prompts from the ground truth of each dataset, which are then used to prompt SAM. From these prompts, SAM produces instance-level mask predictions, denoted \(\hat{m}_{i}\), where \(i\) refers to the mask generated by the \(i^{th}\) box prompt. To evaluate the accuracy of SAM's masks, we again use them to create a single class-level mask \(\hat{m}=\cup_{i}\hat{m}_{i}\), which can be scored against the class-level ground truth masks available with our benchmark datasets.
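For concreteness, prompting SAM with a single box or point looks roughly like the sketch below, assuming Meta's open-source `segment_anything` package and one of the released checkpoints (the file path and the toy image are our own placeholders):

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load SAM (checkpoint path is a placeholder for the released ViT-H weights).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.zeros((256, 256, 3), dtype=np.uint8)  # stand-in for an RGB tile
predictor.set_image(image)

# Box prompt (x0, y0, x1, y1), e.g., derived from a ground-truth instance.
masks, scores, _ = predictor.predict(
    box=np.array([50, 60, 180, 200]), multimask_output=False)

# Single foreground point prompt: SAM returns three candidate masks plus
# its own IoU estimates (the scores denoted c-hat in the text).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[120, 130]]),
    point_labels=np.array([1]),
    multimask_output=True)
```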
When prompted with a point, SAM produces three candidate masks and corresponding estimates of their IoU, \\(\\hat{m}_{t}^{k},\\hat{c}_{t}^{k})\\), where \\(k\\) indexes over the three masks, \\(t\\) indexes the iteration of interactive segmentation (\\(t=1\\) in the single point case). In the composition experiments in Sec. 4 we selected the mask with the largest \\(c\\) value as the final mask, however, a human annotator can evaluate the _true_ IoU of each candidate mask, denoted \\(c_{t}^{k}\\). Therefore, in this case, we select as the output mask, the one that has the highest true IoU, denoted \\(\\hat{m}^{*}\\). Once we have \\(\\hat{m}^{*}\\), we aggregate SAM's instance-level mask predictions in the same way as Sec. 4 and report the resulting IoU for each benchmark. Also following [18] we compare to the RITM interactive annotation model [25], which is evaluated using the same random and centered point prompts, respectively, that were used to prompt SAM. **Interactive Point Prompts.** The goal of this experiment is to emulate an interactive point-prompting scenario, in which a human annotator provides a series of point prompts, and where each point is used to improve upon the masks generated using all the previous prompts. Specifically, and following [18], we begin by prompting SAM with an initial point, \\(p_{1}\\), which is obtained using the center point prompt from the Single Point Prompt above. The final mask chosen for the next iteration of annotation is the one with the highest true IoU, denoted \\(\\hat{m}_{t}^{*}\\). Based upon \\(\\hat{m}_{t}^{*}\\), another point prompt, \\(p_{t+1}\\) is generated with the following process: we identify the largest contiguous error regions (either false positive or false negative) and then produce a point prompt in the center of that region. The point prompt also assigned a number, which is input to SAM, indicating whether the point corresponds to a false negative or false positive region. Then we input \\(p_{t+1}\\) and \\(\\hat{m}_{t}^{*}\\) to SAM to generate another set of candidate masks. We evaluate the performance of SAM and RITM as a function of \\(T\\), the total number of iterations (and total number of point prompts) provided by SAM. ### Results and Discussion The results of our single-point and single-box prompt experiments are reported in Fig. 6. The results indicate that the IoU of SAM tends to decrease as the informativeness of the prompts decreases, where bounding boxes are most informative, and random center points are least informative. SAM seems relatively insensitive to whether the prompt points are centered or random. We also find that SAM almost always outperforms RITM, and often by a substantial margin. For example, on the Solar and Inria+DG Buildings datasets, SAM outperforms RITM by 50 and 45 IoU points, respectively. These results are largely consistent with those reported for natural imagery in [18], although the advantage of SAM over RITM seems even larger on overhead imagery than on natural imagery. Also, notably, and similar to our model composition experiments in Sec. 4, all approaches perform very poorly on Inria+DG Roads, which we discuss in Sec 6. The results of our multi-point prompt experiments are reported in Fig. 7. The results indicate that the IOU of both SAM and RITM consistently improves as the number of prompt points increases. 
Notably, SAM outperforms RITM in nearly every combination of benchmark and number of points, and SAM typically has the greatest performance advantage when the number of points is small. Once \(T=10\), both approaches typically have similar IoU, and often achieve overall IoUs high enough that we expect they would often be satisfactory for the extraction of ground truth masks (e.g., SAM achieves an IoU of 0.75 or larger on 6 of our 10 benchmarks when \(T=10\)). These results are largely consistent with those reported for SAM on natural imagery [18], except that the overall IoUs are lower. Collectively these results suggest that SAM offers state-of-the-art interactive annotation accuracy, and also often (though not always) achieves sufficiently high IoU to be a useful tool for human-interactive instance annotation. Similar to our findings in the model composition case, we hypothesize that substantial improvements in IoU could be obtained by fine-tuning SAM for overhead imagery, or for class-level segmentation.

Figure 5: Illustration of the experimental designs adopted from [18] to evaluate SAM for human-interactive annotation. We consider two general interactive annotation scenarios: a single bounding box prompt (light blue), and multi-point prompts (dark blue). In the first scenario (light blue), SAM is provided with a single bounding box prompt, denoted \(b\), and it returns a single estimated mask, denoted \(\hat{m}\). In the second scenario (dark blue), SAM is initially provided with a single point prompt, denoted \(p_{1}\), and returns as output three mask-score pairs, \((\hat{m}_{t}^{k},\hat{c}_{t}^{k})\), each with an estimate of the IoU obtained.

## 6 Additional Analysis

In this section we discuss additional findings based on the collective experimental results in Sec. 4 and Sec. 5.

**Sensitivity of SAM to Image Resolution.** One of our findings was that SAM exhibits some sensitivity to image resolution. It is well-known that using higher-resolution overhead imagery will enable higher accuracy for image recognition models, which is often attributed to the greater information content in the imagery. However, we found that simply up-sampling our overhead imagery (i.e., adding no new information) can significantly improve SAM's IoU. Fig. 8 shows the IoU of SAM as a function of the up-sampling factor for two benchmark problems, where in each case we artificially up-sampled the imagery. The results indicate that the impact of up-sampling varies, but that SAM is generally sensitive to the number of pixels per unit of ground area, even if the information content in the image has not changed. We hypothesize that SAM may have a bias in its expected size of object instances. In particular, SAM was trained on very high-resolution natural imagery, which in many cases biases it towards more pixels per object.

**Instance Segmentation is Sometimes Ill-Posed.** SAM is designed to perform object instance segmentation; however, some classes in overhead imagery may not be well-conceptualized in this manner. Fundamentally, this problem can arise if there is no clear definition of an instance of a target class, or the definition may vary significantly across geography or application areas. One example is roadways, a widely-studied problem in overhead imagery [5, 6, 35, 36], which is challenging because roads are large objects that are spatially connected over very large geographic regions. Therefore it is unclear how to define when one road instance ends and another begins.
Consequently, SAM consistently performs very poorly (IoU \(<\) 10) on the road segmentation task (i.e., Inria+DG Roads) in all of our experiments compared to models that have been trained for class-level segmentation, which does not suffer from this problem. This problem may be addressed by fine-tuning SAM's decoder to produce class-level segmentation masks for classes where this is more appropriate.

Figure 6: Results of (pixel) intersection-over-union scores for each interactive annotation scenario.

Figure 7: Results of iterative prompts.

## 7 Conclusions

In this work, we investigated whether the recently-proposed Segment Anything Model (SAM) [18] generalizes well to tasks involving overhead imagery. We evaluated SAM on a large and diverse set of eight overhead imagery benchmark datasets. For each benchmark, we investigated SAM's performance on two important potential application scenarios, in a similar fashion to [18]: model composition (COMP), and interactive segmentation (INTER). We summarize our conclusions, labeled by the application area.

* **(INTER)** SAM nearly always outperforms RITM (a baseline state-of-the-art annotation approach), and often by a large margin.
* **(INTER)** SAM achieves somewhat lower IoUs on overhead imagery compared to comparable experimental settings with natural imagery reported in [18].
* **(INTER)** However, SAM can often achieve IoUs that would likely be satisfying for many practical annotation scenarios (e.g., SAM achieves \(\geq\)0.75 IoU on 6 of 10 benchmarks, with 10 point-based prompts).
* **(COMP)** SAM can often achieve performance comparable to out-of-the-box, but state-of-the-art, supervised models that have been trained on each task. However, its behavior is highly variable: in most cases it performs somewhat worse, in some cases it outperforms supervised models, and in others it performs much worse.
* **(COMP)** The aforementioned (COMP) findings pertain to bounding-box prompts provided by other vision models. SAM generally performs poorly when prompted with point prompts obtained from other vision models.
* We also find that SAM is sensitive to the resolution of the imagery; e.g., artificially up-sampling the imagery can significantly improve or degrade SAM's performance.
* SAM is designed to produce instance segmentations; however, this may not be well-suited to some object classes, such as roads, where we observe very poor performance.

**Recommendations.** The performance of SAM can vary substantially on overhead imagery tasks. This is typical of models being applied to novel data domains (i.e., overhead imagery instead of natural imagery), but users should be aware of this variability. We suspect that fine-tuning SAM's decoder alone (as opposed to its larger encoder) may yield substantial improvements in its performance and reliability on overhead imagery. In particular, retraining the decoder to better recognize common target class features (e.g., buildings, land use) may be greatly beneficial. Also, many tasks in overhead imagery are better suited to class-level segmentation (e.g., roadways), and the decoder could be retrained for class-level segmentation to substantially improve performance on road segmentation, or similar classes.

## References

* [1] Mohsen Ahmadi, Masoumeh Farhadi Nia, Sara Asgarian, Kasra Danesh, Elyas Irankhah, Ahmad Gholizadeh Lonbar, and Abbas Sharifi. Comparative Analysis of Segment Anything Model and U-Net for Breast Tumor Detection in Ultrasound and Mammography Images.
_arXiv preprint arXiv:2306.12510_, 2023.
* [2] Han Lin Aung, Burak Uzkent, Marshall Burke, David Lobell, and Stefano Ermon. Farm parcel Delineation Using Spatio-temporal Convolutional Networks. In _2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)_, pages 340-349, Seattle, WA, USA, June 2020. IEEE.
* [3] Kyle Bradbury, Raghav Saboo, Timothy L Johnson, Jordan M Malof, Arjun Devarajan, Wuming Zhang, Leslie M Collins, and Richard G Newell. Distributed solar photovoltaic array location and extent dataset for remote sensing object identification. _Scientific Data_, 3(1):1-9, 2016.
* [4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _Advances in Neural Information Processing Systems_, 33:1877-1901, 2020.
* [5] Alexander Buslaev, Selim Seferbekov, Vladimir Iglovikov, and Alexey Shvets. Fully Convolutional Network for Automatic Road Extraction from Satellite Imagery. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_, pages 207-210, 2018.
* [6] Ziyi Chen, Liai Deng, Yuhua Luo, Dilong Li, Jose Marcato Junior, Wesley Nunes Goncalves, Abdul Awal Md Nurunnabi, Jonathan Li, Cheng Wang, and Deren Li. Road extraction in remote sensing data: A survey. _International Journal of Applied Earth Observation and Geoinformation_, 112:102833, 2022.
* [7] Ao Cheng, Guoqiang Zhao, Lirong Wang, and Ruobing Zhang. AxonCallosumEM Dataset: Axon Semantic Segmentation of Whole Corpus Callosum cross section from EM Images. _arXiv preprint arXiv:2307.02464_, 2023.
* [8] Ilke Demir, Krzysztof Koperski, David Lindenbaum, Guan Pang, Jing Huang, Saikat Basu, Forest Hughes, Devis Tuia, and Ramesh Raskar. DeepGlobe 2018: A challenge to parse the earth through satellite images. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_, pages 172-181, 2018.
* [9] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_, 2018.
* [10] Sheng He, Rina Bao, Jingpeng Li, P Ellen Grant, and Yangming Ou. Accuracy of Segment-Anything Model (SAM) in medical image segmentation tasks. _arXiv preprint arXiv:2304.09324_, 2023.
* [11] Fabian Horst, Moritz Rempe, Lukas Heine, Constantin Seibold, Julius Keyl, Giulia Baldini, Selma Ugurel, Jens Siveke, Barbara Grunwald, Jan Egger, et al. CellViT: Vision Transformers for Precise Cell Segmentation and Classification. _arXiv preprint arXiv:2306.15350_, 2023.
* [12] Xinrong Hu, Xiaowei Xu, and Yiyu Shi. How to Efficiently Adapt Large Segmentation Model (SAM) to Medical Images. _arXiv preprint arXiv:2306.13731_, 2023.
* [13] Wei Ji, Jingjing Li, Qi Bi, Wenbo Li, and Li Cheng. Segment Anything Is Not Always Perfect: An Investigation of SAM on Different Real-world Applications. _arXiv preprint arXiv:2304.05750_, 2023.
* [14] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision. In _International Conference on Machine Learning_, pages 4904-4916. PMLR, 2021.
* [15] Zheyan Jin, Shiqi Chen, Yueting Chen, Zhihai Xu, and Huajun Feng. Let Segment Anything Help Image Dehaze. _arXiv preprint arXiv:2306.15870_, 2023.
* [16] Sahib Julka and Michael Granitzer. Knowledge distillation with Segment Anything (SAM) model for Planetary Geological Mapping. _arXiv preprint arXiv:2305.07586_, 2023.
* [17] Heejong Kim, Victor Ion Butoi, Adrian V Dalca, and Mert R Sabuncu. Empirical Analysis of a Segmentation Foundation Model in Prostate Imaging. _arXiv preprint arXiv:2307.03266_, 2023.
* [18] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. _arXiv preprint arXiv:2304.02643_, 2023.
* [19] Rasmus Larsen, Torben L Villadsen, Jette K Mathiesen, Kirsten MJ Jensen, and Espen D Boejesen. NP-SAM: Implementing the Segment Anything Model for Easy Nanoparticle Segmentation in Electron Microscopy Images. 2023.
* [20] Haipeng Li, Dingrui Liu, Yu Zeng, Shuaicheng Liu, Tao Gan, Nini Rao, Jinlin Yang, and Bing Zeng. Single-Image-Based Deep Learning for Segmentation of Early Esophageal Cancer Lesions. _arXiv preprint arXiv:2306.05912_, 2023.
* [21] Francesco Luzi, Aneesh Gupta, Leslie Collins, Kyle Bradbury, and Jordan Malof. Transformers For Recognition In Overhead Imagery: A Reality Check. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_, pages 3778-3787, 2023.
* [22] Emmanuel Maggiori, Yuliya Tarabalka, Guillaume Charpiat, and Pierre Alliez. Can Semantic Labeling Methods Generalize to Any City? The Inria Aerial Image Labeling Benchmark. In _IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_. IEEE, 2017.
* [23] S. Mohajerani, T. A. Krammer, and P. Saeedi. A Cloud Detection Algorithm for Remote Sensing Images Using Fully Convolutional Neural Networks. In _2018 IEEE 20th International Workshop on Multimedia Signal Processing (MMSP)_, pages 1-5, Aug 2018.
* [24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning Transferable Visual Models From Natural Language Supervision. In _International Conference on Machine Learning_, pages 8748-8763. PMLR, 2021.
* [25] Konstantin Sofiiuk, Ilya A Petrov, and Anton Konushin. Reviewing Iterative Training with Mask Guidance for Interactive Segmentation. In _2022 IEEE International Conference on Image Processing (ICIP)_, pages 3141-3145. IEEE, 2022.
* [26] Leigh Stearns, Cornelis van der Veen, and Siddharth Shankar. Segment Anything in Glaciology: An initial study implementing the Segment Anything Model (SAM). 2023.
* [27] Adam Van Etten, Dave Lindenbaum, and Todd M Bacastow. SpaceNet: A Remote Sensing Dataset and Challenge Series. _arXiv preprint arXiv:1807.01232_, 2018.
* [28] An Wang, Mobarakol Islam, Mengya Xu, Yang Zhang, and Hongliang Ren. SAM Meets Robotic Surgery: An Empirical Study on Generalization, Robustness and Adaptation. _arXiv preprint arXiv:2308.07156_, 2023.
* [29] Yonghui Wang, Wengang Zhou, Yunyao Mao, and Houqiang Li. Detect Any Shadow: Segment Anything for Video Shadow Detection. _arXiv preprint arXiv:2305.16698_, 2023.
* [30] Muduo Xu, Jianhao Su, and Yutao Liu. AquaSAM: Underwater Image Foreground Segmentation. _arXiv preprint arXiv:2308.04218_, 2023.
* [31] Anzhu Yu, Wenjun Huang, Qing Xu, Qun Sun, Wenyue Guo, Song Ji, Bowei Wen, and Chunping Qiu. Sea Ice Extraction via Remote Sensed Imagery: Algorithms, Datasets, Applications and Challenges. _arXiv preprint arXiv:2306.00303_, 2023.
* [32] Wenxi Yue, Jing Zhang, Kun Hu, Yong Xia, Jiebo Luo, and Zhiyong Wang.
SurgicalSAM: Efficient Class Promptable Surgical Instrument Segmentation. _arXiv preprint arXiv:2308.08746_, 2023.
* [33] Jielu Zhang, Zhongliang Zhou, Gengchen Mai, Lan Mu, Mengxuan Hu, and Sheng Li. Text2Seg: Remote Sensing Image Semantic Segmentation via Text-Guided Visual Foundation Models. _arXiv preprint arXiv:2304.10597_, 2023.
* [34] Youcai Zhang, Xinyu Huang, Jinyu Ma, Zhaoyang Li, Zhaochuan Luo, Yanchun Xie, Yuzhuo Qin, Tong Luo, Yaqian Li, Shilong Liu, et al. Recognize Anything: A Strong Image Tagging Model. _arXiv preprint arXiv:2306.03514_, 2023.
* [35] Lichen Zhou, Chuang Zhang, and Ming Wu. D-LinkNet: LinkNet with pretrained encoder and dilated convolution for high resolution satellite imagery road extraction. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_, pages 182-186, 2018.
* [36] Qiqi Zhu, Yanan Zhang, Lizeng Wang, Yanfei Zhong, Qingfeng Guan, Xiaoyan Lu, Liangpei Zhang, and Deren Li. A global context-aware and batch-independent network for road extraction from VHR satellite imagery. _ISPRS Journal of Photogrammetry and Remote Sensing_, 175:353-365, 2021.
Recently, the first foundation model developed specifically for image segmentation tasks was released, termed the "Segment Anything Model" (SAM). SAM can segment objects in input imagery based on cheap input prompts, such as one (or more) points, a bounding box, or a mask. The authors examined the zero-shot image segmentation accuracy of SAM on a large number of vision benchmark tasks and found that SAM usually achieved recognition accuracy similar to, or sometimes exceeding, vision models that had been trained on the target tasks. The impressive generalization of SAM for segmentation has major implications for vision researchers working on natural imagery. In this work, we examine whether SAM's performance extends to overhead imagery problems, and we aim to help guide the community's response to its development. We examine SAM's performance on a set of diverse and widely studied benchmark tasks. We find that SAM does often generalize well to overhead imagery, although it fails in some cases due to the unique characteristics of overhead imagery and its common target objects. We report on these unique systematic failure cases for remote sensing imagery, which may motivate useful future research for the community.
# Artificial Intelligence in Software Testing : Impact, Problems, Challenges and Prospect

Zubair Khaliq [email protected] Sheikh Umar Farooq [email protected] Dawood Ashraf Khan [email protected] University of Kashmir, Srinagar, Jammu and Kashmir, India hyke.ai

## 1 Introduction

With recent progress in automated and digitized data acquisition, efficient machine learning and deep learning algorithms, and high-performance computing infrastructure, Artificial Intelligence (AI) applications are now inflating their footprint in areas that were previously expected to be only the domain of human experts. AI-powered tools have already made significant progress in various fields, including finance, law, medicine, and even the arts. In many respects, AI is radically surpassing human intelligence and is approaching the domain of human creativity and empathy. Examples include AI's spectacular successes in winning Go [1], chess [2], and other board games against humans, and in surpassing humans on fully defined world puzzles. In the domain of NLP, we witnessed how a powerful language model like GPT-3 wrote news articles that people found hard to distinguish from prose written by humans [3]. We also witnessed DeepMind's protein-folding AI solving a 50-year-old grand challenge of biology [4].

Over the past few decades, there has been significant growth in the software industry, driven by the recent advances in AI. Artificial Intelligence is gradually changing the landscape of software engineering in general [5] and software testing in particular [6], both in research and in industry. In the last two decades, AI has been found to have made a considerable impact on the way we approach software testing. Most organizations have turned to test automation to bridge the gap between the growing complexity of deliverable software and the contraction of the delivery cycle, yet the gap is widening at an alarming pace, bringing us closer to a tipping point at which test automation, too, will fail to deliver quality software on time. AI can help us fill this gap and streamline our speeding software delivery process, thereby saving a significant amount of time and effort (and likely a sizeable amount of money too). So far, the use of AI has been very successful in the automation of software testing in some areas. Still, much research remains to be carried out on analyzing, understanding, and improving tested software artefacts in order to learn more and develop better techniques for modern software systems. Our goal in this study is to identify software testing activities where AI has made a significant impact and greatly enhanced the process within each activity. We also identify the AI techniques that have been most often applied to the process of software testing. Further, we convey the problems, identified by this study, that the testing community faces while implementing AI-based solutions to testing problems. We also provide some key areas where AI can potentially help the testing community.

## 2 Background

_Artificial Intelligence Overview:_ The term artificial intelligence was coined by John McCarthy in 1955 in the proposal for the Dartmouth Conference. The term was used to refer to all "programming systems in which the machine is simulating some intelligent human behaviour". According to John McCarthy, it is "The science and engineering of making intelligent machines, especially intelligent computer programs" [7].
Here we discuss the main branches of AI that have been most often applied to software testing.

_Artificial Neural Network :_ Modelling artificial intelligence on the biological neural network gives rise to the artificial neural network (ANN) [8]. Like the biological neural network, the ANN is an interconnection of nodes, analogous to neurons. Each neural network has three critical components: node character, network topology, and learning rules. Node character determines how signals are processed by the node. Network topology determines the ways nodes are organized and connected. Learning rules determine how the weights are initialized and adjusted. This type of network becomes a computational device that is able to learn through training, consequently improving its performance.

_AI planning :_ Research on AI planning can be traced back to the Logic Theorist program designed by Newell and Simon [9]. The task of AI planning is to find a series of effective actions in a given planning domain, to ensure that the initial state in the planning problem can be successfully transferred to the goal state after applying the actions [10][11].

_Robotics :_ Robotics is a branch of AI that draws on Electrical Engineering, Mechanical Engineering, and Computer Science for the design, construction, and application of robots. An Intelligent Robot is a physically situated Intelligent Agent containing five major components: _effectors_, perception, control, communications, and power [12]. Effectors are the peripherals of the robot that help it to move and interact with the environment. Perception is a set of sensors that provide the robot with the capability to sense the environment. Control is analogous to the central nervous system and is capable of computations that allow the robot to maximize its chances of success. Communication is how a robot interacts with other agents, much as humans use language, gestures, and proxemics to interact with one another.

_Machine Learning :_ Machine learning can be broadly defined as computational methods using experience to improve performance or to make accurate predictions [13]. Here, experience refers to the past information available to the learner, which typically takes the form of electronic data collected and made available for analysis. This data could be in the form of digitized human-labelled training sets, or other types of information obtained via interaction with the environment [13][14].

_Natural Language Processing (NLP) :_ Natural Language Processing (NLP) refers to the AI methods for communicating with an intelligent system using a natural language such as English. Processing of natural language is required when we want an intelligent system to act as per our instructions, to hear decisions from a dialogue-based clinical expert system, and so on.

_Fuzzy Logic :_ Fuzzy logic (FL) is a method of reasoning that resembles human reasoning. The approach of FL imitates the way of decision-making in humans, which involves all intermediate possibilities between the digital values YES and NO; it is based on the idea that there is no sharp boundary between these two extremes. Decisions are made by means of a number of rules, expressed over fuzzy sets, which are combined with each other to produce a result.
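As a toy illustration of this rule-based style of reasoning (our own example, not drawn from the surveyed literature), consider a two-rule fuzzy estimator that maps a defect-density reading onto a testing-effort recommendation; the membership functions and rule outputs below are invented for the sketch:

```python
def recommend_effort(defect_density):
    """Two fuzzy rules combined by a weighted average (centroid-style defuzzification):
       IF density IS low  THEN effort = 20 person-hours
       IF density IS high THEN effort = 80 person-hours
    """
    # Degrees of membership in the fuzzy sets "low" and "high" (defects/KLOC).
    mu_low = max(0.0, 1.0 - defect_density / 5.0)               # fully "low" at 0, fades out by 5
    mu_high = max(0.0, min(1.0, (defect_density - 2.0) / 8.0))  # starts at 2, fully "high" at 10
    weights, outputs = [mu_low, mu_high], [20.0, 80.0]
    return sum(w * o for w, o in zip(weights, outputs)) / max(sum(weights), 1e-9)

print(recommend_effort(3.0))  # intermediate density -> roughly 34 person-hours
```

An input of 3 defects/KLOC is partly "low" (membership 0.4) and partly "high" (membership 0.125), so the recommendation falls between the two rule outputs rather than snapping to either extreme, reflecting the absence of a sharp boundary described above.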
_Expert Systems :_ Expert systems are computer applications developed to solve complex problems in a particular domain, at the level of extraordinary human intelligence and expertise. The common features of expert systems can be summarized as follows:

* **Rules** that define the specific problem are formalized in the form of computer procedures in the programming language.
* **Knowledge Base**, in the form of a computerized database, stores the problems and solutions for support in the decision-making process.
* **Inference Engine**, which processes and evaluates scenarios. The problems posed and solutions found are completely transparent to the user. In simple terms, the system functions as a large, intelligent "computerized brain".

_Software Testing Overview:_ Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or system under test (SUT). Usually, a software development organization expends between 30% and 40% of total project effort on testing [15], and testing consumes more than 50% of the total cost of a project [16]. Higher-quality software is achieved when the SUT is failure-free. A failure is detected when the SUT's external behaviour differs from what is expected of the SUT according to its requirements or some other description of the expected behaviour [17]. An important element of the testing activity is the test case. Essentially, a test case specifies the conditions under which the SUT must be executed in hopes of finding a failure. When a test case reveals a failure, it is considered successful (or effective) [18]. Test cases are usually derived from the functional specification, a design specification, or a requirements specification. A test case specification includes:

* The preconditions, which describe the environment and state of the SUT before the test case is executed.
* The test steps, which describe the actions that should be performed to execute the test case.
* The expected results, which describe the expected outcome of executing the test case.
* The actual results, which describe the observed outcome of executing the test case.

There are different dimensions under which testing has been studied and implemented, and these dimensions define the test adequacy criteria, that is, the criteria that define what constitutes an adequate test [19]. A great number of such criteria have been proposed and investigated, and considerable research effort has attempted to provide support for the use of one criterion or another. We discuss test adequacy criteria in the following sections.

_1) Testing Types:_ The two main types of testing are:

* _Manual Testing :_ In manual testing, testers execute test cases manually without the use of tools or scripts. In this type of testing, the tester takes over the role of an end-user and tests the software to identify any unexpected behaviour or bug.
* _Automated Testing :_ Automated testing is a form of software testing that uses software tools to execute predefined tests. The software tools used for automated testing are often called test automation tools or test automation frameworks. Automation relieves the tester of the burden of executing the test cases; however, the process of planning and writing test cases in the form of test scripts still needs to be carried out manually.

_2) Testing Techniques:_ Three main testing techniques have been identified.
* _Black-box Testing :_ Black-box testing, also known as functional testing, aims to study the external behaviour of software without dwelling on its internal structure. Black-box testing is based on the inputs and the outputs of the software.
* _White-box Testing :_ White-box testing, also known as structural testing, creates test cases based on the SUT implementation. Its purpose is to make sure that all structures (e.g., paths, instructions, and branches) of the SUT are exercised during the execution of the test suite [18].
* _Gray-box Testing :_ Gray-box testing is a technique to test a software product or application with partial knowledge of the internal structure of the application. The purpose of gray-box testing is to identify defects caused by improper code structure or improper use of the application.

_3) Testing Phases or Testing Levels:_ Testing is performed at all stages of the software development lifecycle, including development, release, and production. During development, _unit testing_ is carried out to test basic units of software, such as a method or a class. After unit testing, as the basic units are combined into components, further testing is carried out on these components to ensure that the integration has not introduced any unintended bugs and that the components work as per the specification. The process of testing at the component level is called _integration testing_. Since different teams work on the code simultaneously, and much reusable and third-party code is incorporated into the software, a further level of testing known as _system testing_ has been identified to test the integrated components from these sources and therefore the system as a whole. Often, due to requirements changes, the addition of functionality, or maintenance, the code changes, which may result in bugs creeping into the code and eventually producing failures. To tackle this, a technique called _regression testing_ is incorporated at all levels of testing. Regression testing is the most cumbersome and time-consuming testing technique, as it involves testing the SUT whenever a change is incorporated. Before the release, _requirements testing_ ensures that the SUT performs all the functions according to the requirements that have been pre-defined in the software requirement specification document. _Scenario testing_ is also carried out before the release, where scenarios of the SUT are created and the SUT is tested against these scenarios to look for any unintended behaviour. _Performance testing_ evaluates the speed, responsiveness, and stability of a SUT under a workload. At production time, _alpha testing_ is carried out in the development environment, wherein the developer acts as the user of the SUT and tries to identify any failure. In this technique, the developer looks at the SUT from the perspective of the user. _Beta testing_ is the testing of the SUT in the user environment. Here the user actually interacts with the SUT, and the developer watches and analyses the SUT for any failure.

## 3 Impact of AI on Software Testing

The areas in which AI techniques have proved to be useful in software testing research and practice can be characterized by their applications in the software testing life cycle (STLC). From planning to reporting, AI techniques have made a dominant imprint across all the stages of the STLC.
To study the impact of AI on software testing, we have identified testing activities, or testing facets, for which considerable and significant research has been carried out by applying AI. These testing activities cover most of the STLC.

_Test Specification :_ At the beginning of the software testing life cycle, test cases are written based on the features and requirements of the software. The test cases are written in a checklist-type test specification document to ensure that every requirement of the software is tested. It includes the purpose of a specific test, identifies the required inputs and expected results, provides step-by-step procedures for executing the test, and outlines the pass/fail criteria for determining acceptance. Below we mention the work of two seminal papers where AI has been applied to this activity. Last and Friedman [21] demonstrated the potential use of Info-Fuzzy Networks (IFN) for automated induction of functional requirements from execution data. The induced models of tested software were utilized for recovering missing and incomplete specifications, designing a minimal set of regression tests, and evaluating the correctness of software outputs when testing new, potentially flawed releases of the system. Briand et al. [20] propose a methodology that takes as inputs the test suite (a set of test cases) and test specifications developed using the Category-Partition (CP) strategy. Based on the CP specification, test cases are transformed into abstract test cases, which are tuples of pairs (category, choice) associated with an output equivalence class (instead of raw inputs/outputs). C4.5 is then used to learn rules that relate pairs (category, choice), modelling input properties, to output equivalence classes. These rules are in turn analyzed to determine potential improvements of the test suite (e.g., redundant test cases, the need for additional test cases) as well as improvements of the CP specification (e.g., the need to add a category or choices).

_Test Case Refinement :_ Test case refinement is a planned activity that is employed by testers to select the most effective test cases for execution, consequently reducing the testing cost. We identified two AI techniques applied to this testing activity. Info-Fuzzy Networks (IFN) were used by Last and Kandel [22] and Last et al. [23], who presented a novel approach to automated reduction of combinatorial black-box tests, based on automated identification of input-output relationships from execution data of the tested program. Singh et al. [24] detail an approach for generating test cases from Z specifications for partition testing. The learner receives as input the functional specification in Z. As output, the approach produces a classification tree describing high-level test cases. The high-level test cases are then further refined by generating a disjunctive normal form for them.

_Test Case Generation :_ After devising a test adequacy criterion, it is the job of testers to formulate a test set that satisfies that criterion. Since handcrafting test sets for complex applications is an unmanageable task, most testers use automatic test case generation techniques. In the last two decades, there has been considerable and growing interest in applying AI to automate test case generation, and AI has impacted this testing activity significantly.
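To give a flavour of the decision-tree style of analysis used in approaches such as Briand et al.'s, here is a schematic sketch only: scikit-learn's CART trees stand in for C4.5, and the categories, choices, and labels below are invented toy data.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Abstract test cases: each row encodes one test as its (category -> choice) pairs,
# labeled with the output equivalence class the test produced (toy data).
X = [[0, 1], [0, 0], [1, 1], [1, 0]]   # e.g., [input_size_choice, mode_choice]
y = ["ok", "ok", "overflow", "ok"]

tree = DecisionTreeClassifier().fit(X, y)

# The learned rules relate (category, choice) pairs to output classes; inspecting
# them can reveal redundant test cases or missing choices in the CP specification.
print(export_text(tree, feature_names=["input_size", "mode"]))
```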
Prior work in this field started in 1996, where the authors used an inductive learning method to generate test cases from a finite set of input-output examples [25]. Given a program P and a set of alternative programs P', the proposed approach yields test cases that are adequate in the sense that they are able to distinguish P from all programs in P'. Xiao et al. [26] present an active learning framework for black-box software testing. The active learning approach samples input/output pairs from a black box and learns a model of the system's behaviour. This model is then used to generate new inputs for sampling. Li and Lam [27] propose an Ant Colony Optimization approach to automatic test sequence generation for state-based software testing. The proposed approach directly uses UML artefacts to automatically generate test sequences that achieve the required test coverage. Sant et al. [28] develop methods that use logged user data to build models of a web application. Their approach automatically builds statistical models of user sessions and automatically derives test cases from these models. Paradkar et al. [29] present an automated approach to generating functional conformance tests for semantic web services. The semantics of the web services are defined using the Inputs, Outputs, Preconditions, Effects (IOPEs) paradigm. Their techniques allow the generation of test cases that can be executed through a GUI or through the direct invocation of web services. Li et al. [30] use a modified AI planner to avoid the combinatorial explosion problems that occur with AI planners. They applied the method to GUI test case generation; the main idea was to produce the initial test case from the planner first and then propose a way of expanding the solution to reinforce the generation. Shen et al. [31] propose an approach combining the Genetic Algorithm (GA) with the Tabu Search technique. Their experimental study shows that combining the methods is more effective than using the GA method alone for the purpose of generating test cases. The primary observation is that tabu search keeps the proposed technique from getting stuck at local minima. Srivastava and Baby [32] present an algorithm applying an ant colony optimization technique for the generation of optimal and minimal test sequences for the behaviour specification of software. The paper presents an approach to generate test sequences in order to obtain complete software coverage. Keyvanpour et al. [33] presented test case generation techniques using memetic algorithms, which differ from GAs in that, at each generation, a local optimum is reached for each individual using a hill-climbing search. Verma and Beg [34] propose an approach to generate test cases from software requirements expressed in natural language using natural language processing techniques. Mariani et al. [35] present a technique to generate new test cases for GUI-based applications from GUI-driven tests hand-crafted manually. The learner receives as input an initial test suite, GUI actions, and observed states obtained by the tool. As output, this GUI-based testing approach produces a behavioural model from which new test cases can be created. Rijwan and Mohd [36] introduce an approach to test case generation for unit testing using a GA incorporated with mutation analysis. Their algorithm injects mutants into the program and then generates random test cases.
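Many of the GA-based generators above share the same skeleton: encode candidate inputs as chromosomes, score them with a coverage- or mutation-based fitness, and evolve the population. A minimal sketch of that skeleton (our own simplification, with a toy branch-distance fitness for a toy SUT) might look like:

```python
import random

def program_under_test(x):
    # Toy SUT: the "hard" branch we want to find a test input for.
    return "target" if 495 <= x <= 505 else "other"

def fitness(x):
    # Branch-distance style fitness: 0 once the target branch is hit,
    # otherwise the (negated) distance to the branch boundary.
    return 0 if program_under_test(x) == "target" else -min(abs(x - 495), abs(x - 505))

def evolve(pop_size=50, generations=100, lo=0, hi=10_000):
    pop = [random.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # best candidates first
        if fitness(pop[0]) == 0:                   # target branch covered
            return pop[0]
        parents = pop[: pop_size // 2]             # selection: keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                   # crossover: midpoint of two parents
            if random.random() < 0.2:              # mutation: small random perturbation
                child += random.randint(-50, 50)
            children.append(child)
        pop = parents + children
    return pop[0]

print(evolve())  # an input that (usually) exercises the target branch
```

Real implementations replace the toy fitness with instrumented branch coverage or a mutation score, but the select-crossover-mutate loop is the common core.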
Carino and Andrews [37] introduce a test sequence generator for GUIs. The system uses ant colony optimization in order to generate tests that have an impact on the state of the GUI. Saddler and Cohen [38] expand the notion of goal-based interface testing to generate tests for a variety of goals. They develop a direct test generation technique, EventFlowSlicer, that is more efficient than that used in human performance regression testing, reducing run times by 92.5% on average for test suites of between 9 and 26 steps and by 63.1% across all other test suites. Ansari et al. [39] propose a system that deals with the automatic generation of test cases from functional requirements using Natural Language Processing (NLP). The proposed system is beneficial as it can automatically analyze the functional requirements from the Software Requirement Specification in order to extract test cases for testing. Bozic and Wotawa [40] contribute to the application of AI for security testing of web applications by using automated planning to obtain test suites for testing common vulnerabilities. The planning system generates test cases in the form of a sequence of actions that lead from an initial to a final state. Rosenfield et al. [41] use the structure of GUI elements as features for a classifier that groups similar GUIs. They then apply a pre-defined set of test cases to each particular group. The idea stems from reusing test cases across applications with similar structures. Santiago et al. [42] use an LSTM with an abstract flow learning mechanism to generate GUI tests. The authors present a novel approach that models how human testers produce test-flows as an application-agnostic abstract sequence problem. Hu et al. [43] generate tests by progressively discovering a SUT's behaviour, which is necessary to handle different SUT designs and to synthesize only tests reusable in this SUT. This also demands creating a hand-crafted training library of tests from which new, dynamic, on-the-fly test cases can be generated. Moghadam [44] has used model-free reinforcement learning to build a self-adaptive autonomous stress testing framework that is able to learn the optimal policy for stress test case generation without having a model of the system under test. The authors of [45] propose a method that automatically extracts, from requirements specification documents, homogeneous test cases that do not depend on the skills and know-how of the engineer writing them.

_Test Data Generation :_ Test data generation is a software testing activity or process to create test inputs and data based on logical test cases and test scenarios. It is the quality of the test data that determines the testing coverage of a SUT. Preliminary work on applying AI to this activity started in 1995, when Jones et al. [46] applied GAs to generate test sets by searching the input domain for test data that ensure each branch of the code is exercised. Roper [47] used the same technique, except for setting up a threshold level for the test coverage. Rodrigues et al. [48] provided a systematic mapping study regarding the application of GA techniques to the test data generation activity. The results showed that genetic algorithms have been successfully applied to simple test data generation, but are rarely used to generate complex test data such as images, videos, sounds, and 3D (three-dimensional) models. Tracey et al.
[49] provided a method with the aim of developing a generalised framework for the automated generation of test-case data to satisfy both black-box and white-box testing of functional properties, as well as non-functional properties. Souza et al. [50] propose an automated test data generation approach, using hill-climbing, for strong mutation. The idea is that if automatic test data generation can achieve an acceptable level of mutation score, it has the potential to greatly reduce the manual effort involved. Behjati et al. [51] plan to produce synthetic test data instead of live production data. The authors investigate the use of LSTMs, a type of recurrent neural network, for this purpose. Choi et al. [52] introduce a tool that automatically generates sequences of test inputs for Android apps. The learner receives as input sequences of actions extracted from the app's GUI. The output can be seen as a model representing the GUI of the application under test. Paduraru et al. [53] presented a tool that is able to generate test data for evaluating programs, having as initial input a corpus of example tests. The corpus is clustered, and then a sequence-to-sequence RNN is used to learn generative models that are able to produce new test data. Sharifipour et al. [54] propose a memetic ant colony optimization algorithm for structural test data generation. By using evolution strategies, they improve the search functionality of the ants in local moves. Cegin et al. [55] propose machine learning methods as meta-heuristic approximations that model the behaviour of programs that are hard to test using traditional approaches, where the path explosion problem occurs, and that could thus overcome the limitations of current state-of-the-art approaches. Liu et al. [56] propose a novel deep learning-based approach to solve the challenges of test data generation. Their approach consists of two phases: in the training phase, a monkey testing procedure is used to learn the testers' manual inputs and statistically associate them with their contexts, such as the action history and the textbox label; in the prediction phase, the monkey automatically predicts text inputs based on the observed contexts.

_Test Oracle Construction :_ Software testing is the process of verifying the correct behaviour of the SUT as per the requirements. When a program is run with a certain input, a mechanism is needed to distinguish between the correct and incorrect behaviour of the SUT. This mechanism is known in testing terminology as the oracle problem [57]. Below we mention the AI techniques that have been applied to this problem. Jin et al. [58] investigated how ANNs can be used to ease the test oracle problem. The authors conclude that ANNs cannot be used directly on some problems for test oracle construction; more preprocessing and analysis need to be carried out to apply ANNs directly to a problem. Wang et al. [59] were the first to study how ML algorithms can be used to automatically generate test oracles for reactive programs even if the specification is missing. Their approach turns test traces into feature vectors, which are used to train the ML algorithm. The model yielded by the algorithm then acts as a test oracle. Vineeta et al. [60] outline two ML approaches toward implementing test oracles. To predict the expected outputs of the SUT, the first approach builds on ANNs and the second on decision trees. The applicability of these approaches was examined through an example using a toy program. Shahamiri et al.
[61] propose MultiNetworks Oracles based on artificial neural networks to map the input domain to the output domain. The accuracy of the proposed oracle was up to 98.26%, and the oracle detected up to 97.7% of the injected faults. Braga et al. [62] use historical usage data from an application, which goes through a Knowledge Discovery in Databases step and is then used for training (using AdaBoostM1 and Incremental Reduced Error Pruning (IREP)) to generate an oracle suitable for the application under test. Chan et al. [63] developed an approach that trains a classifier using a reference model of the SUT. This supervised ML approach groups test cases into two categories: passed and failure-causing. Vanmali et al. [64] train an artificial neural network with the backpropagation algorithm on a set of test cases applied to the original version of the system. The trained network is then used as an intelligent oracle for evaluating the correctness of the output produced by new and possibly faulty versions of the software. Agarwal et al. [65] studied Info-Fuzzy Networks (IFNs) and ANNs to determine the effectiveness of these approaches for implementing test oracles. According to the results of this study, IFNs significantly outperform ANNs in terms of computation time while achieving almost the same fault-detection effectiveness. Tsimpourlas et al. [66] aim at solving the test oracle problem in a scalable and accurate way. The authors use supervised learning over test execution traces. They label a small fraction of the execution traces with a pass or fail verdict. They then use the labelled traces to train a neural network (NN) model to learn to distinguish run-time patterns of passing versus failing executions for a given program.

_Test Case Prioritization :_ Test case prioritization involves arranging the execution of test cases in a particular order, ensuring that multiple test runs are carried out in a variety of ways, so that the test cases that are most likely to expose defects are executed earlier in the testing process. Test cases can also be prioritized by risk (i.e., the severity of the item being tested, or the impact of the risk if it were to occur in the testing process), by the importance of a test case, or by any other factor. The available tools did not provide the ability to automatically prioritize test cases, paving the path for studies in this area. AI has been found to have impacted this testing activity significantly. Spieker et al. [67] propose a technique for automatically learning test case prioritization with the goal of minimizing the round-trip time between code commits and developer feedback on failed test cases. The proposed method uses reinforcement learning to prioritize test cases according to their duration, previous last execution, and failure history. It does so under the guidance of a reward function that rewards error-prone tests more highly than those that are less likely to find an error. Lenz et al. [68] introduce an approach that groups test data into similar functional clusters using K-means clustering, expectation-maximization clustering, and incremental conceptual clustering. After this, according to the tester's goals, it uses the C4.5 classifier for the prioritization of test cases. Busjaeger and Xie [69] present a novel approach that integrates multiple existing techniques via a systematic framework of machine learning to rank tests.
The framework takes as input code coverage information, text-path similarity, test age, failure history, and text content similarity to yield a model for efficient prioritization of test cases. Wang et al. [70] use an ANN to learn the number of times a program segment will be visited in the execution of a test case. They then use the count estimates for the program segments as part of the feature vector for a test input and feed the vector to another ANN for prioritization of the test input. Ozawa et al. [71] propose a statistical method to prioritize software test cases with operational profiles, where the system behaviour is described by a Markov reward model. The authors introduce software code metrics as reward parameters and apply the resulting Markov reward model to the test case prioritization problem. Lachmann et al. [72] use SVM Rank for test case prioritization for manual system-level regression testing based on supervised machine learning. Their approach considers black-box metadata, such as test case history, as well as natural-language test case descriptions, for prioritization. Their experimental results imply that the SVM Rank technique improves the failure detection rate significantly compared to a random order. Lachmann et al. [73] take this work further and evaluate the efficiency of ANN, K-NN, logistic regression, and ensemble methods in their application to test case prioritization of black-box tests based on natural language artefacts. Their results indicate that logistic regression outperforms the other applied ML algorithms in terms of effectiveness. Nucci et al. [74] propose a Hypervolume-based Genetic Algorithm, namely HGA, to solve the test case prioritization problem when using multiple test coverage criteria. The authors conclude that HGA is more cost-effective and that it improves the efficiency of test case prioritization.

_Test Cost Estimation :_ Software cost estimation is the process of predicting the effort required to develop a software system. The general practice of software development is that there should be no shortfalls in the estimation of software cost; the earlier the cost estimation, the better it is for the team. AI techniques have been found effective for deriving such predictions. Zhu et al. [75] propose an experience-based approach for test execution effort estimation. In the approach, they characterize a test suite as a 3-dimensional vector that combines the test case number, the test execution complexity, and the tester. Based on the test suite execution vector model, they set up an experience database and then apply a support vector machine (SVM) to estimate the effort for given test suite vectors from historical data. Badri et al. [76] evaluated ML algorithms to predict test code size for object-oriented software in terms of test lines of code (TLOC), which is a key indicator of the testing effort. The authors used linear regression, k-NN, Naive Bayes, C4.5, Random Forest, and Multilayer Perceptron to build the models. According to their results, their approach yields accurate predictions of TLOC. Silva et al. [77] studied the application of ANNs and support vector regression to estimate the execution effort of functional tests. After an analysis of the test process, the authors concluded that there are larger impacts on software quality when the effort is underestimated.
In order to cope with this, a modified cost function was developed to train an artificial neural network, aiming to bias the predictive model to overestimate instead of underestimate. Cheatham et al. [78] propose to demonstrate a technique using machine learning (COBWEB/3) to identify attributes that are important in predicting software testing costs, and more specifically software testing time.

_Findings :_ The overall impact of AI techniques on particular software testing activities can be visualized in [Figure 1]. Based on the discovered publications, seven software testing activities, viz. test case generation, test oracle construction, test data generation, test case prioritization, test case specification, test case refinement, and test cost estimation, were identified as activities that have been improved significantly by the application of AI techniques. From this study, we can infer that the test case generation, or test case design, activity has been most considerably enhanced by the application of AI techniques. Most of the recent research has been carried out on activities like test case generation, test case prioritization, test data generation, and test oracle construction. The simple reason for this is that these activities are more important than other activities in the STLC. We excluded some software testing activities, including test harness construction, testing technique selection, test repairing, change-proneness prediction, etc., from our study, as only one or two AI-based studies have been carried out for these activities. [Table 1] shows a list of AI techniques that have been applied to software testing activities. The most commonly used AI techniques applied to software testing appear to address the problem of optimization across various software testing activities. Specifically, genetic algorithms, ANNs, and reinforcement learning were among the techniques used across various testing activities more frequently than others.

Figure 1: Impact of AI on software testing activities.

## 4 Problems and Challenges of AI in Software Testing

Considering the lack of industrial expertise and research work, this section outlines some of the open problems and challenges in the application of AI to software testing.

_Test Oracle, the biggest challenge :_ The test oracle problem is a companion of every researcher and practitioner working in the field of software testing. It has been present from the inception of the software testing conundrum, and the problem is expected to stay with us for a long duration, or maybe forever. Despite continuous attempts to mitigate the test oracle problem, researchers have been able to solve it only for a static subset of SUTs. As soon as the dynamic traits of the SUT start to manifest, the previously derived test oracle starts to lose effectiveness. In many scenarios, even the documentation from which test oracles are generated is missing from the requirements document. To cope with this dynamism, and to pursue the dream of a documentation-free, effective test oracle, AI techniques have been employed, and these techniques have provided a significant initial effort towards realizing this dream.

_Availability of Data :_ Any model in AI must be trained and tested before being deployed in production. The efficiency and effectiveness of a model are highly correlated with the amount of data with which the model is trained and tested.
Acquiring data for building AI models in the domain of software testing is a challenging task, because software testing, unlike other fields of study, is not fully automated yet. A lot of testing is still carried out manually, and it is difficult to capture data when testing is manual. This emerges as a bottleneck to data acquisition for training AI models in the future.

_Adaptiveness to data :_ AI models are highly dependent on the data with which they are trained and tested. An important phase in the production of an AI model is the collection of robust datasets from real-world scenarios and the use of that data to train a model generalized to fit that data. Such a model assumes future data and historic data (the data with which the model is trained) to be from the same distribution. However, this is often not the case, as most data show higher disparity over time; e.g., learning customers' shopping behaviour depends on the season. AI models are evaluated for generalization by testing the model on a particular subset of the data (the test set), which is from the same time distribution. Over time, less promising outcomes from such models are witnessed. Some AI techniques allow the model to be readjusted to adapt to changes in the new data. However, the challenging task is to detect the ideal time to readjust, and even to automate the readjustment process.

| Software Testing Activity | AI Technique Applied |
| --- | --- |
| Test Case Generation | Inductive Learning - Active Learning - Ant Colony Optimization - Markov Model - AI Planner - GA - Tabu Search - NLP - Reinforcement Learning - C4.5 - Goal-Based - Decision Tree - K-Nearest Neighbour - Logistic Regression - Random Forest - Multi-Layer Perceptron - K-star - LSTM - Heuristic Search |
| Test Data Generation | GA - Simulated Annealing - Hill Climbing - Generative Model - LSTM - Deep Reinforcement Learning - Ant Colony Optimization - Heuristic Methods |
| Test Oracle Construction | ANN - SVM - Decision Trees - AdaBoostM1 - Incremental Reduced Error Pruning (IREP) - Info-Fuzzy Network |
| Test Case Prioritization | K-Means - Expectation-Maximization - C4.5 - COBWEB - Reinforcement Learning - CBR - ANN - Markov Model - K-NN - Logistic Regression - SVM Rank |
| Test Case Specification | IFN - C4.5 |
| Test Case Refinement | IFN - Classification Tree Method |
| Test Cost Estimation | SVM - Linear Regression - k-NN - Naive Bayes - C4.5 - Random Forest - Multilayer Perceptron |

Table 1: AI techniques applied to software testing activities.

_Identifying Test Data :_ Every AI model must be tested thoroughly before being put into production. Model testing is like a black-box technique, where structural or logical information regarding the model is not a necessity; rather, comprehensive information about and understanding of the testing data is required. Again, selecting testing data from the same distribution can introduce issues, resulting in a biased model. The problem is with the coverage of the test set, i.e., asking the question: "Is the model tested over a large enough distribution of data?" Identification of such coverage-based test datasets is a challenging task in the domain of software testing.

_Exhaustive search space leads to generality loss :_ For most of the optimization problems in search-based software testing, the AI algorithm has to search exhaustively for the solution or the goal.
_Availability of Data:_ Any model in AI must be trained and tested before being deployed in production. The efficiency and effectiveness of a model are highly correlated with the amount of data with which it is trained and tested. Acquiring data for building AI models in the domain of software testing is a challenging task because software testing, unlike other fields of study, is not fully automated yet. A lot of testing is still carried out manually, and it is difficult to capture data when testing is manual. This emerges as a bottleneck to data acquisition for training AI models in the future.

_Adaptiveness to data:_ AI models are highly dependent on the data with which they are trained and tested. An important phase in the production of an AI model is the collection of robust datasets from real-world scenarios and the use of that data to train a model generalized to fit it. Such a model assumes future data and historic data (the data with which the model was trained) to be from the same distribution. However, this is often not the case, as most data drifts over time; e.g., customers' shopping behaviour depends on the season. AI models are evaluated for generalization by testing the model on a particular subset of the data (the test set) which is from the same time distribution, and over time less promising outcomes from such models are witnessed. Some AI techniques allow the model to be readjusted to adapt to changes in the new data. The challenging task, however, is to detect the ideal time to readjust, and even to automate the readjustment process.

_Identifying Test Data:_ Every AI model must be tested thoroughly before being put into production. Model testing is like a black-box technique, where structural or logical information regarding the model is not a necessity; rather, comprehensive information and understanding of the testing data is required. Again, selecting test data from the same distribution can result in a biased model. The problem lies with the coverage of the test set, i.e., asking the question "Is the model tested over a large enough distribution of data?" Identifying such coverage-based test datasets is a challenging task in the domain of software testing.

_Exhaustive search space leads to generality loss:_ For most of the optimization problems in search-based software testing, the AI algorithm has to exhaustively search for the solution or the goal. Although sub-optimal search strategies have been identified and implemented, so far they work only for a particular class of problems. To obtain more general solutions to a variety of problems, the inputs of the whole problem domain have to be included, consequently making the input space more exhaustive. Versatile and broadly capable AI methodologies need to be identified to cope with this generality loss.
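As a small illustration of this search-based setting, the following sketch uses hill climbing with a classic branch-distance fitness to find an input that covers a target branch; the function under test and all names are hypothetical stand-ins, not taken from the cited studies.

```python
import random

def branch_distance(x):
    # Hypothetical goal: cover the branch `if x == 4242:` in the SUT.
    # Classic branch distance: |x - 4242|, zero when the branch is taken.
    return abs(x - 4242)

def hill_climb(start, max_steps=10_000, step=1):
    current = start
    for _ in range(max_steps):
        if branch_distance(current) == 0:
            return current  # branch covered
        neighbours = [current - step, current + step]
        best = min(neighbours, key=branch_distance)
        if branch_distance(best) >= branch_distance(current):
            break  # local optimum; a GA or random restarts would help here
        current = best
    return current

print(hill_climb(random.randint(0, 10_000)))
```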
_Exploitation of Multicore Computation:_ Many AI techniques are highly computationally expensive, making them potentially incompatible with the large-scale problems faced by software testers. With the recent advancements in computing infrastructure, Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) have been adopted at scale for these techniques. More work is required to fully exploit the enormous potential of the rapidly increasing number of available processors. Since such high-performance devices are expensive, more work also needs to be carried out on designing techniques that require less computation yet still match the performance achieved on high-end hardware.

## 5 Prospects of AI in Software Testing

In the past few years, many companies have begun to invest in AI-powered software testing technologies. These AI systems offer an alternative to traditional testing processes. While AI systems are still relatively new, the potential gains are simply too great to ignore. Here are some excerpts from our study and from software testing industry experts on where we expect these technologies to help software testers in the future:

* Collaborating with people who are geographically spread out can be difficult. This is where AI systems can be relied upon to carry out routine, labour-intensive tasks. This frees up more productive time for software testers to spend on the most complicated issues.
* The ability to program AI systems to test application code is incredibly useful. It offers a realistic simulation of situations a software tester might face. This also improves the accuracy of tests, because they can identify and replicate all possible scenarios.
* The next generation of artificial intelligence in software testing will include self-modifying tools that can instantly identify and fix vulnerabilities without any human intervention, creating self-healing systems.
* With artificial intelligence in software testing, software companies and testers can reduce their costs by a great degree, which is already happening. We think it will become normal to see organizations and other user groups automate their testing process using AI while testers focus on the exploratory testing of systems.
* AI predictive analytics will play a major role in discovering all possible test cases and will make software products more robust and reliable, exceeding customer expectations.
* AI is operating at all levels of testing, from planning to execution to reporting, and it is expected to take over all the tasks in the STLC which require human intelligence. This in turn will free the tester from various time-consuming testing activities like regression testing and smoke testing.
* AI incorporated in testing will provide a high ROI, because these systems ensure that the time allocated to deliver the product is spent on polishing its features rather than on testing and debugging technical defects.
* AI-powered automation tools will help to increase the level of autonomy in software testing and hence help to deliver higher quality software.
* AI-related technologies are helping to bridge the gap between human and machine-driven testing capabilities.
* AI is expected to impact testing in all software product areas, including mobile applications, web applications, IoT, embedded systems, database applications, the gaming industry, real-time applications, and critical software applications, to name a few.
* With more data being acquired and stored, AI can enhance the software testing capabilities which are somewhat restricted today due to the non-availability of data.

## 6 Conclusion

In the last two decades, the rapid growth of interest in topics where AI has been applied to software testing is a testimony to the appetite the software testing community has for AI. This is a consequence of AI providing efficient solutions to problems the testing community has faced for a long time. AI has already been accepted as a promising solution to many problems faced by testers all around the globe. In this paper, we studied the impact of AI across all stages of the STLC. We identified seven software testing activities that were most enhanced by AI techniques. GAs, reinforcement learning, and ANNs were among the most widely used techniques from the domain of AI. We identified problems and challenges researchers and testers face while applying AI techniques to software testing. We also provided a future outlook on how AI can shape the software testing domain.

## References

* [1] C. Koch, "How the computer beat the Go master," Scientific American, vol. 19, 2016.
* [2] F. Hsu, Behind Deep Blue: Building the Computer that Defeated the World Chess Champion. Princeton, NJ: Princeton Univ. Press, 2004.
* [3] Brown, T. B. et al. Preprint at [https://arxiv.org/abs/2005.14165](https://arxiv.org/abs/2005.14165) (2020).
* [4] [https://www.technologyreview.com/2020/11/30/1012712/deepmind-protein-folding-ai-solved-biology-science-drugs-disease/](https://www.technologyreview.com/2020/11/30/1012712/deepmind-protein-folding-ai-solved-biology-science-drugs-disease/)
* [5] M. Harman, "The role of artificial intelligence in software engineering," in 1st International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE 2012), Zurich, Switzerland, 2012.
* [6] Hourani, H., Hammad, A., Lafi, M. (2019, April). The Impact of Artificial Intelligence on Software Testing. In 2019 IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT) (pp. 565-570). IEEE.
* [7] J. McCarthy, "Programs with common sense," in Proceedings of the Symposium on Mechanisms of Thought Processes, vol. 1. London: Her Majesty's Stationery Office, 1958, pp. 77-84.
* [8] Zou J., Han Y., So S.S. (2008) Overview of Artificial Neural Networks. In: Livingstone D.J. (eds) Artificial Neural Networks. Methods in Molecular Biology, vol 458. Humana Press. [https://doi.org/10.1007/978-1-60327-101-12](https://doi.org/10.1007/978-1-60327-101-12)
* [9] Newell, A., and Simon, H. A. 1963. GPS: A Program That Simulates Human Thought. In Computers and Thought, eds. E. A. Feigenbaum and J. Feldman. New York: McGraw-Hill.
* [10] Jiao, Z.; Yao, P.; Zhang, J.; Wan, L.; Wang, X. Capability Construction of C4ISR Based on AI Planning. IEEE Access 2019, 7, 31997-32008.
* [11] James Hendler, Austin Tate, and Mark Drummond, "AI Planning: Systems and Techniques," AI Magazine, vol. 11, no. 2, 1990.
* [12] Robin R. Murphy, "Introduction to AI Robotics, Second Edition," The MIT Press, Cambridge, MA, 2019.
* [13] M. Mohri, A. Rostamizadeh, and A. Talwalkar, Foundations of Machine Learning. Cambridge, MA, USA: MIT Press, 2012.
* [14] P. Louridas and C. Ebert, "Machine learning," IEEE Softw., vol. 33, no. 5, pp. 110-115, Sep./Oct. 2016.
* [15] Pressman, R. S., "Software engineering: A practitioner's approach," New York: McGraw-Hill, 1987.
* [16] Ramler, R., & Wolfmaier, K., "Economic perspectives in test automation: balancing automated and manual testing with opportunity cost," in Proceedings of the 2006 International Workshop on Automation of Software Test, pp. 85-91, ACM, May 2006.
* [17] P. Ammann and J. Offutt, "Introduction to Software Testing, 2nd ed.," Cambridge, U.K.: Cambridge Univ. Press, 2016.
* [18] Vinicius H. S. Durelli, Rafael S. Durelli, Simone S. Borges, Andre T. Endo, Marcelo M. Eler, Diego R. C. Dias, and Marcelo P. Guimaraes, "Machine Learning Applied to Software Testing: A Systematic Mapping Study," IEEE Transactions on Reliability, 2019.
* [19] H. Zhu, P. A. V. Hall, and J. H. R. May, "Software unit test coverage and adequacy," ACM Comput. Surveys, vol. 29, no. 4, pp. 366-427, 1997.
* [20] L. C. Briand, Y. Labiche, and Z. Bawar, "Using Machine Learning to Refine Black-Box Test Specifications and Test Suites," 2008 The Eighth International Conference on Quality Software, 2008.
* [21] Mark Last and Menahem Friedman, "Black-box testing with info-fuzzy networks," Artificial Intelligence Methods in Software Testing, pp. 1-20, 2004.
* [22] Last M., Kandel A. (2003) Automated Test Reduction Using an Info-Fuzzy Network. In: Khoshgoftaar T.M. (eds) Software Engineering with Computational Intelligence. The Springer International Series in Engineering and Computer Science, vol 731. Springer, Boston, MA. [https://doi.org/10.1007/978-1-4615-0429-0_9](https://doi.org/10.1007/978-1-4615-0429-0_9)
* [23] Last M., Friedman M., Kandel A., "Using data mining for automated software testing," International Journal of Software Engineering and Knowledge Engineering, vol. 14, no. 04, pp. 369-393, 2004.
* [24] H. Singh, M. Conrad, and S. Sadeghipour, "Test case design based on Z and the classification-tree method," in Proc. IEEE Int. Conf. Formal Eng. Methods, 1997, pp. 81-90.
* [25] F. Bergadano and D. Gunetti, "Testing by means of inductive program learning," ACM Trans. Softw. Eng. Methodol., vol. 5, no. 2, pp. 119-145, 1996.
* [26] G. Xiao, F. Southey, R. C. Holte, and D. Wilkinson, "Software testing by active learning for commercial games," in Proc. 20th Nat. Conf. Artif. Intell., 2005, vol. 2, pp. 898-903.
* [27] Li, H., & Lam, C. P. (2005). An ant colony optimization approach to test sequence generation for state-based software testing. QSIC 2005, 255-262.
* [28] J. Sant, A. Souter, and L. Greenwald, "An exploration of statistical models for automated test case generation," in Proc. Int. Workshop Dyn. Anal., 2005, pp. 1-7.
* [29] Paradkar, A. M., Sinha, A., Williams, C., Johnson, R. D., Outterson, S., Shriver, C., & Liang, C. (2007). Automated functional conformance test generation for semantic web services. ICWS 2007, 110-117.
* [30] Li, L., Wang, D., Shen, X., & Yang, M. (2009). A method for combinatorial explosion avoidance of AI planner and the application on test case generation. CISE 2009, 1-4.
* [31] Shen, X., Wang, Q., Wang, P., & Zhou, B. (2009). Automatic generation of test case based on GATS algorithm. GrC 2009, 496-500.
* [32] Srivastava, P. R., & Baby, K. (2010). Automated software testing using a metaheuristic technique based on Ant Colony Optimization. ISED 2010, 235-240.
* [33] M. R. Keyvanpour, H. Homayouni and Hasein Shirazee, 2011. Automatic Software Test Case Generation. Journal of Software Engineering, 5: 91-101.
* [34] R. P. Verma and M. R. Beg, "Generation of Test Cases from Software Requirements Using Natural Language Processing," 2013 6th International Conference on Emerging Trends in Engineering and Technology, 2013.
* [35] L. Mariani, M. Pezze, O. Riganelli, and M. Santoro, "Automatic testing of GUI-based applications," Softw. Testing, Verification Reliab., vol. 24, no. 5, pp. 341-366, 2014.
* [36] Rizwan Khan and Mohd Amjad, "Automatic test case generation for unit software testing using genetic algorithm and mutation testing," 2015 IEEE UP Section Conference on Electrical, Computer and Electronics (UPCON), 2015.
* [37] Carino, S., & Andrews, J. H. (2015). Dynamically Testing GUIs Using Ant Colony Optimization. ASE 2015, 138-148.
* [38] Jonathan Saddler, Myra B. Cohen, "EventFlowSlicer: goal based test generation for graphical user interfaces," Proceedings of the 7th International Workshop on Automating Test Case Design, Selection, and Evaluation, November 2016, pp. 8-15. [https://doi.org/10.1145/2994291.2994293](https://doi.org/10.1145/2994291.2994293)
* [39] A. Ansari, M. B. Shagufa, A. S. Fatima, and S. Tehreem, "Constructing Test cases using Natural Language Processing," 2017 Third International Conference on Advances in Electrical, Electronics, Information, Communication and Bio-Informatics (AEEICB), 2017.
* [40] Bozic, J., & Wotawa, F. (2018). Planning-based security testing of web applications. AST@ICSE 2018, 20-26.
* [41] Rosenfeld, A., Kardashov, O., & Zang, O. (2018). Automation of Android Applications Functional Testing Using Machine Learning Activities Classification. MOBILESoft@ICSE 2018, 122-132.
* [42] Santiago, D., Clarke, P. J., Alt, P., & King, T. M. (2018). Abstract flow learning for web application test generation. ATEST@ESEC/SIGSOFT FSE 2018, 49-55.
* [43] Hu, G., Zhu, L., & Yang, J. (2018). AppFlow: using machine learning to synthesize robust, reusable UI tests. ESEC/SIGSOFT FSE 2018, 269-282.
* [44] Moghadam, M. H. (2019). Machine Learning-assisted Performance Testing. ESEC/SIGSOFT FSE 2019, 1187-1189.
* [45] Kazuhiro Kikuma, Takeshi Yamada, Koki Sato, Kiyoshi Ueda, "Preparation Method in Automated Test Case Generation using Machine Learning," Proceedings of the Tenth International Symposium on Information and Communication Technology, December 2019, pp. 393-398. [https://doi.org/10.1145/3368926.3369679](https://doi.org/10.1145/3368926.3369679)
* [46] B. Jones, H. Sthamer, X. Yang, D. Eyres, The automatic generation of software test data sets using adaptive search techniques, Proceedings of the Third International Conference on Software Quality Management (SQM '95), Sevilla, Spain, 1995.
* [47] M. Roper, Computer-Aided Software Testing using Genetic Algorithms, Proceedings of the 10th International Software Quality Week (QW '97), San Francisco, USA, 1997.
* [49] N. Tracey, J. Clark, K. Mander, Automated program flaw finding using simulated annealing, Proceedings of the ACM/SIGSOFT International Symposium on Software Testing and Analysis (ISSTA '98), Clearwater Beach, Florida, USA, 1998.
* [50] Francisco Carlos M. Souza, Mike Papadakis, Yves Le Traon, Marcio E. Delamaro, "Strong mutation-based test data generation using hill climbing," Proceedings of the 9th International Workshop on Search-Based Software Testing, May 2016, pp. 45-54. [https://doi.org/10.1145/2897010.2897012](https://doi.org/10.1145/2897010.2897012)
Delamaro, \"Strong mutation-based test data generation using hill climbing,\" Proceedings of the 9th International Workshop on Search-Based Software Testing, May 2016,Pages 45-54, [https://doi.org/10.1145/2897010.2897012](https://doi.org/10.1145/2897010.2897012) * (51) Behjati, R., Arisholm, E., Bedregal, M., & Tan, C. (2019). Synthetic Test Data Generation Using Recurrent Neural Networks: A Position Paper. 2019 IEEE/ACM \"In International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE). doi:10.1109/raise.2019.00012 * (52) Choi, W., Necula, G., & Sen, K. (2013). Guided GUI testing of android apps with minimal restart and approximate learning. OOPSLA 2013, 623-640. * (53) Paduraru, C. and Marius-Constantin Melcemico. \"An Automatic Test Data Generation Tool using Machine Learning.\" ICSOFT (2018). * (54) Sharifipour, H., Shakeri, M., & Haghighi, H. (2018). Structural test data generation using a memetic ant colony optimization based on evolution strategies. Swarm Evol. Comput. 40, 76-91. * (55) Jan Cegin, \"Machine learning based test data generation for safety-critical software,Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software EngineeringNovember 2020 Pages 1678-1681[https://doi.org/10.1145/3368089.3418538](https://doi.org/10.1145/3368089.3418538) * (56) Liu, P., Zhang, X., Pspassed test cases: Improving the pattern classification approach to the testing of mesh simplification programs,\" Softw. Testing, Verification Rel., vol. 20, no. 2, pp. 89-120, 2010. * [64] M. Vannali, M. Last, and A. Kandel, \"Using a neural network in the software testing process,\" Int. J. Intell. Syst., vol. 17, no. 1, pp. 45-62, 2002. * [65] D. Agarwal, D. E. Tamir, M. Last, and A. Kandel, \"A comparative study of artificial neural networks and info-fuzzy networks as automated oracles in software testing,\" IEEE Trans. Syst., Man, Cybernet., vol. 42, no. 5, pp. 1183-1193, Sep. 2012. * [66] Tsimopurlas, Foivos et al. \"Supervised learning over test executions as a test oracle.\" Proceedings of the 36th Annual ACM Symposium on Applied Computing (2021): n. pag. * [67] Spieker, Helge et al. \"Reinforcement learning for automatic test case prioritization and selection in continuous integration.\" Proceedings of the 26th ACM SIGSOFT International Symposium on Software Testing and Analysis (2017): n. pag. * [68] Alexandre Rafael Lenz, Aurora Pozo, Silvia Regina Vergilio, Linking software testing results with a machine learning approach, Engineering Applications of Artificial Intelligence, Volume 26, Issues 5-6, 2013, Pages 1631-1640, ISSN 0952-1976, [https://doi.org/10.1016/j.engappai.2013.01.008](https://doi.org/10.1016/j.engappai.2013.01.008). * [69] B. Busjaeger and T. Xie, \"Learning for test prioritization: An industrial case study,\" in Proc. ACM SIGSOFT Int. Symp. Found. Softw. Eng., 2016, pp. 975-980. * [70] Wang F., Yang SC., Yang YL. (2011) Regression Testing Based on Neural Networks and Program Slicing Techniques. In: Wang Y., Li T. (eds) Practical Applications of Intelligent Systems. Advances in Intelligent and Soft Computing, vol 124. Springer, Berlin, Heidelberg. [https://doi.org/10.1007/978-3-642-25658-550](https://doi.org/10.1007/978-3-642-25658-550) * Int. Comput. Softw. Appl. Conf., vol. 1, pp. 245-250, 2018. * [72] Lachmann, R. et al. 
\"System-Level Test Case Prioritization Using Machine Learning.\" 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA) (2016): 361-368. * [73] Lachmann, R., 2018. Machine Learning-Driven Test Case Prioritization Approaches for Black-Box Software Testing. Nuremberg, Germany, European Test and Telemetry Conference, 2018. * [74] Nucci, Dario Di et al. \"A Test Case Prioritization Genetic Algorithm Guided by the Hypervolume Indicator.\" IEEE Transactions on Software Engineering 46 (2020): 674-696. * [75] Zhu, X., Zhou, B., Hou, L., Chen, J., & Chen, L. (2008). An Experience-Based Approach for Test Execution Effort Estimation. 2008 7th International Conference for Young Computer Scientists. doi:10.1109/icycys.2008.53 * ICMLSC '17. doi:10.1145/3036290.3036323 * [77] Silva, D. G. e, Jino, M., & Abreu, B. T. de. (2010). Machine Learning Methods and Asymmetric Cost Function to Estimate Execution Effort of Software Testing. 2010 Third International Conference on Software Testing, Verification and Validation. doi:10.1109/icst.2010.46 * CSC '95. doi:10.1145/259526.259548
_Artificial Intelligence (AI) is making a significant impact in multiple areas like medicine, the military, industry, domestic applications, law and the arts, as AI is capable of performing several roles such as managing smart factories, driving autonomous vehicles, creating accurate weather forecasts, detecting cancer, and acting as a personal assistant. Software testing is the process of examining software for abnormal behaviour; it is a tedious, laborious and time-consuming process. Automation tools have been developed that help to automate some activities of the testing process to enhance quality and timely delivery. Over time, with the inclusion of continuous integration and continuous delivery (CI/CD) pipelines, automation tools are becoming less effective. The testing community is turning to AI to fill the gap, as AI is able to check code for bugs and errors without any human intervention and much faster than humans. In this study, we aim to recognize the impact of AI technologies on the various software testing activities or facets of the STLC. Further, the study aims to recognize and explain some of the biggest challenges software testers face while applying AI to testing. The paper also proposes some key future contributions of AI to the domain of software testing._

keywords: Artificial Intelligence, Machine Learning, Deep Learning, Software Testing, Software Testing Activities
Spiral Propagation of Polymer Optical Fiber Fuse Accompanied by Spontaneous Burst and Its Real-Time Monitoring Using Brillouin Scattering

Yosuke Mizuno, Neisei Hayashi, Hiroki Tanaka, and Kentaro Nakamura, Precision and Intelligence Laboratory, Tokyo Institute of Technology, Yokohama 226-8503, Japan

DOI: 10.1109/JPHOT.2014.2323301

Manuscript received April 14, 2014; revised May 8, 2014; accepted May 8, 2014. Date of publication May 13, 2014; date of current version May 21, 2014. This work was supported in part by a Grant-in-Aid for Young Scientists (A) under Grant 25709032 from the Japan Society for the Promotion of Science (JSPS) and in part by research grants from the General Sekiyu Foundation, the Iwatani Naoji Foundation, and the SCAT Foundation. The work of N. Hayashi was supported by a Grant-in-Aid for JSPS Fellows under Grant 25007652. Corresponding author: Y. Mizuno (e-mail: [email protected]).

## 1 Introduction

Despite their higher loss than that of silica single-mode fibers (SMFs), polymer (or plastic) optical fibers (POFs) [1], [2] provide various advantages such as easy and cost-efficient connection, safety, and extremely high flexibility, leading to medium-range applications [3] and large-strain monitoring applications [4]. Since the first observation of Brillouin scattering [5]--one of the most important nonlinear effects--in POFs in 2010 [6], its properties have been extensively studied, especially for sensing applications [7, 8, 9, 10, 11]; and recently, distributed Brillouin sensing of strain and temperature in POFs has been successfully demonstrated [12, 13], proving its high-precision temperature sensing capability [7] as well as its large-strain sensing capability [8]. The signal-to-noise ratio (SNR) of the system is, however, not sufficiently high when the spatial resolution is set to centimeter order [13], because of the low Brillouin-scattered power in POFs resulting from their relatively large core diameters and multimode nature [6]. One solution is, as a previous study predicts [9], simply to boost the incident power; but recent reports have clarified that such high-power light injection into POFs causes not merely burning or damage at the POF-to-SMF interfaces [10] but also a continuous self-destruction process of the POFs, i.e., fiber fuse. A fiber fuse [14, 15, 16, 17, 18] occurs when high-power light propagating along the fiber locally heats the fiber material and initiates an optical discharge, which is then trapped in the fiber core and propagates back toward the light source, while consuming the optical energy and leaving damage. As the fiber is no longer usable after the fuse passage, this phenomenon is generally considered to be one of the critical factors that limit the maximal optical power to be delivered [19]. Thus, it is of great importance to thoroughly investigate the fuse properties so that all possible measures can be taken to avoid this effect. Recently, we have reported the first observation of the fiber fuse in graded-index (GI-) POFs at 1.55 \(\mu\)m [20], and its unique properties have been investigated from various angles [20, 21].
Although its propagation behavior is similar to that of the fuse in silica glass fibers [14, 15, 16, 17, 18, 22, 23, 24, 25] from a macroscopic perspective, its velocity is \(\sim\)20 mm/s [20], which is one to two orders of magnitude slower. The threshold power density is 6.6 kW/cm\({}^{2}\) [20], which is over 150 times lower than that of silica SMFs. Spectral measurement has clarified that the POF fuse is not a plasma but an optical discharge at a temperature of approximately 3600 K [21]. In addition, microscopic studies have revealed not only that the damage left after the passage of the bright spot looks like a black curve that is slightly oscillatory [20], which can be explained by taking the multimode nature into consideration [21], but also that gas bubbles are sometimes formed simultaneously with the damage [21]. However, these properties have been reported only in the case of a relatively low incident power density (\(<\) 13 kW/cm\({}^{2}\), corresponding to a power of 130 mW in the POFs used in Refs. [20], [21]; see Eq. (1) in Ref. [20] for the calculation method), and thus clarifying the propagation behavior of the POF fuse under a higher power density is an important issue.

Another significant task is to remotely and nonvisually detect the location of the propagating POF fuse on a real-time basis in order to minimize the damage extension. As POF-based sensors are often assumed to be used after being embedded in various materials and structures (where the POFs can no longer be visually monitored), a remote-detection method is required even though the measurement range is relatively short. Abedin et al. [24] have developed a method of detecting the fuse location in silica SMFs based on optical time-domain reflectometry, but the injection of high-peak-power optical pulses may cause damage at the POF-to-SMF interface; the need for additional light at a different wavelength and for the time-consuming process of signal integration for SNR enhancement is also problematic. Note that Abedin et al. [25] have developed another fuse-detecting method based on optical coherence-domain reflectometry, which can rapidly detect the fuse initiation but does not provide its location information.

In this work, we investigate the propagation behavior of the POF fuse at a power density of up to several tens of kW/cm\({}^{2}\) (corresponding to a sub-watt power). The propagation velocity is raised in proportion to the incident power density, reaching 41 mm/s at 67 kW/cm\({}^{2}\). Spiral propagation of the damage, along with a burst induced by a considerable amount of generated gas, is newly observed. We then develop and demonstrate a new method of detecting the location of the propagating POF fuse in real time using Brillouin scattering, in which a continuous wave (CW) is used and neither signal integration nor additional light injection is required.

## 2 Principle

When propagating in an optical fiber, light is partially returned via spontaneous Brillouin scattering. The backscattered light spectrum is called the Brillouin gain spectrum (BGS) [5], and the central frequency of the BGS is downshifted from the incident frequency by an amount called the Brillouin frequency shift (BFS). The BFS is known to be \(\sim\)10.8 GHz for a silica SMF [5] and \(\sim\)2.8 GHz for a perfluorinated GI-POF [6] at 1.55 \(\mu\)m. According to the theory [5], the Brillouin-scattered power is roughly in proportion to the effective length \(L_{\text{eff}}\) defined as

\[L_{\text{eff}}=\frac{1-\exp(-\alpha L)}{\alpha} \tag{1}\]

where \(\alpha\) is the propagation loss and \(L\) is the fiber length. The fuse propagation can be regarded as a shortening of \(L\), leading to a reduction in the Brillouin signal. Though light can still propagate through a POF after the fuse passage [20], the Brillouin signal from this part is negligibly small. One may think of exploiting the Rayleigh-scattered light in the same way, but this is difficult because of its spectral overlap with the Fresnel-reflected light, which is extremely unstable in power. We have also confirmed that the fuse emission is so weak that its direct detection is difficult.
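As a numerical illustration of Eq. (1), the sketch below (assuming the loss value \(\alpha\approx\) 0.056 /m quoted for the POFs in the Methods section) shows how the effective length, and hence the expected Brillouin signal, shrinks as the fuse consumes the fiber; it is an illustration, not the authors' processing code.

```python
import math

ALPHA_PER_M = 0.056  # propagation loss of the POF (~250 dB/km), from Sec. 3

def effective_length_m(fiber_length_m, alpha=ALPHA_PER_M):
    """Eq. (1): L_eff = (1 - exp(-alpha * L)) / alpha."""
    return (1.0 - math.exp(-alpha * fiber_length_m)) / alpha

# The Brillouin signal is roughly proportional to L_eff, so as the fuse
# shortens the fiber, the detected peak power drops almost linearly for
# short fibers (alpha * L << 1).
for remaining in (0.62, 0.40, 0.20, 0.05):
    print(f"L = {remaining:4.2f} m -> L_eff = {effective_length_m(remaining):.4f} m")
```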
## 3 Methods

The POFs employed in the experiment were perfluorinated GI-POFs [2] with a numerical aperture of 0.185, a core diameter of 55 \(\mu\)m (different from that in Refs. [20], [21]), a cladding diameter of \(\sim\)100 \(\mu\)m, an overcladding diameter of 750 \(\mu\)m, a core refractive index of \(\sim\)1.35, and a propagation loss of \(\sim\)250 dB/km (i.e., \(\alpha=\) 0.056 /m) at 1.55 \(\mu\)m. The core/cladding layers and the overcladding layer were composed of amorphous perfluorinated polymer and polycarbonate, respectively.

The experimental setup is depicted in Fig. 1. The POF fuse was initiated in the same way as in Refs. [20], [21]. The Brillouin detecting system, based on self-heterodyne detection, was essentially the same as that previously reported in Ref. [6]. The high-power light at 1.55 \(\mu\)m, which was boosted with a 1-W-output erbium-doped fiber amplifier (EDFA; HPA-200, Alnair Labs), not only provided energy for the initiation and propagation of the fuse but also served as Brillouin pump light. The polarization state was optimized using polarization controllers (PCs). The fuse propagation was recorded with a video camera to obtain its velocity and location.

## 4 Experimental Results on Fuse Characterization

The measured dependence of the POF-fuse propagation velocity on the power density is shown in Fig. 2. The power (density) was calculated by taking into consideration the coupling loss at the POF-to-SMF interface, the propagation loss of the POF, and other losses caused in the optical circulator, etc. With increasing power density, the propagation velocity was linearly raised, reaching 41 mm/s at 67 kW/cm\({}^{2}\) (corresponding to \(\sim\)800 mW). The threshold power density was 8.4 kW/cm\({}^{2}\), which is close to the previous report (6.6 kW/cm\({}^{2}\)) [20]. The proportionality constant was 440 mm \(\cdot\) s\({}^{-1}\) MW\({}^{-1}\) cm\({}^{2}\) in this range, which is \(\sim\)3.5 times smaller than the previous report (1560 mm \(\cdot\) s\({}^{-1}\) MW\({}^{-1}\) cm\({}^{2}\)) [20]. This discrepancy may be elucidated by Todoroki's theory [26] on fuse propagation modes proposed for silica SMFs; further study is needed on this point.

Microscopic analyses also revealed some unique features. Fig. 3(a) shows a micrograph of the damage left spirally after the fuse passage at 42 kW/cm\({}^{2}\). The period of the spiral oscillation was approximately 1400 \(\mu\)m, which moderately agrees with the theoretical period of a ray propagating in a GI fiber (1204 \(\mu\)m in this POF) [27].
The diameter of the spiral oscillation was \(\sim\)200 \(\mu\)m (measured by observing the cross section of the damaged POF with a microscope; Fig. 3(a) does not provide correct information on the diameter because of the lens effect of the POF side surface), which is larger than the core/cladding diameter (\(\sim\)100 \(\mu\)m), indicating that the overcladding layer was thermally damaged at this high power density. The conversion of the spiral direction was also observed, as shown in Fig. 3(b), which suggests a possible change in the fuse propagation mode, supporting the mechanism of the POF fuse propagation reported in Ref. [21], i.e., the bright spot travels only along the optical path of a particular propagating mode (that with the highest energy) that provides it with energy directly.

Figure 1: Schematic setup of the Brillouin-based POF-fuse monitoring system. EDFA, erbium-doped fiber amplifier; ESA, electrical spectrum analyzer; PC, polarization controller; PD, photo detector; POF, polymer optical fiber.

Figure 2: POF-fuse propagation velocity vs. power density. The red circles are measured points, and the green line is a linear fit.

When the power density was as high as 67 kW/cm\({}^{2}\), not only did the spiral oscillation become less clear (Fig. 4(a)) but the fuse propagation was also sometimes spontaneously terminated, accompanied by a burst, as shown in Fig. 4(b) and (c). This burst appears to have been induced by a considerable amount of generated gas; the higher the incident power, the more frequently it was observed. Thus, it is difficult to make the POF fuse propagate over a relatively long distance (longer than several tens of centimeters) at a power density higher than \(\sim\)100 kW/cm\({}^{2}\) (or at a power higher than \(\sim\)1 W).

Figure 4: (a) Path of the POF fuse at a power density of 67 kW/cm\({}^{2}\), (b) that with a burst, and (c) its magnified view.

## 5 Experimental Results on Fuse Monitoring

The incident light power was \(\sim\)450 mW, corresponding to a power density of 38 kW/cm\({}^{2}\). The room temperature was 19 \({}^{\circ}\)C. A fuse was initiated at around the open end of a 62-cm-long POF. Fig. 5(a) shows the BGSs measured every 2 s after the fuse initiation, the vertical axis of which is in linear scale. The noise floor was \(\sim\)0.6 nW. The BFS was \(\sim\)2850 MHz, which agrees with the previously reported value [6, 7, 8, 9, 10, 11]; a slight discrepancy was caused by the different room temperature and the video bandwidth of the electrical spectrum analyzer (ESA). As the fuse propagated, the peak power of the BGS was continuously reduced. In the weakest BGS presented in Fig. 5(a), an additional peak was observed at approximately 2960 MHz, which originated not from the heated POF but from the noise floor of the ESA, because the BFS in a POF is lowered with increasing temperature [7]. Fig. 5(b) shows the peak power, measured every 1 s (this interval can be even shorter; if BGSs are unnecessary, we need not use an ESA with a relatively low sampling rate to measure the peak power), plotted as a function of the remaining POF length. The red line indicates a fitted curve calculated using the effective length given by Eq. (1), which can be regarded as almost linear with a slope of 0.032 nW/cm in this range (when the POF length is shorter than \(\sim\)10 m, the linear approximation is valid [9]). Thus, the fuse location can be identified using the peak power.
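A minimal sketch of this localization step, assuming the fitted slope of 0.032 nW/cm, the 62-cm fiber length, and a noise-free peak-power reading, is given below; the function names and the example value are illustrative.

```python
# Estimate the remaining POF length (and hence the fuse location) from the
# Brillouin peak power, assuming the near-linear regime reported above.
SLOPE_NW_PER_CM = 0.032   # fitted slope of peak power vs. remaining length
TOTAL_LENGTH_CM = 62.0    # initial POF length used in the experiment

def remaining_length_cm(peak_power_nw):
    """Invert the linear peak-power model: P = slope * L_remaining."""
    return peak_power_nw / SLOPE_NW_PER_CM

def fuse_position_cm(peak_power_nw):
    """Distance the fuse has travelled from the far (open) end."""
    return TOTAL_LENGTH_CM - remaining_length_cm(peak_power_nw)

# Example: a measured peak power of 1.2 nW implies ~37.5 cm of intact fiber,
# i.e. the fuse has consumed ~24.5 cm of the POF.
print(remaining_length_cm(1.2), fuse_position_cm(1.2))
```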
The measurement error seems to originate from polarization- and multimode-dependent signal fluctuations. The fuse propagation velocity calculated from Fig. 5(b) was \(\sim\)29.4 mm/s, which agrees well with the result in Fig. 2. As long as a POF fuse is initiated, the maximum length of the remote detection is, in principle, not limited, because the optical power required for Brillouin observation is much lower than that for fuse initiation [20]. Although demonstrated using the POF fuse, we expect this real-time monitoring method to be applicable also to the glass optical fiber fuse.

Figure 5: (a) BGSs in the POF measured every 2 s after the fuse initiation. (b) Peak power dependence on the remaining POF length. The blue circles are points measured every 1 s, and the red curve is a theoretical fit.

## 6 Conclusion

The propagation behavior of the POF fuse at a power density of up to several tens of kW/cm\({}^{2}\) (or at a sub-watt power) was investigated. The propagation velocity was proportional to the power density, and reached 41 mm/s at 67 kW/cm\({}^{2}\). The spiral propagation mode of the damage as well as the spontaneous termination of the fuse propagation accompanied by a burst were newly observed. A novel method of detecting the location of the propagating POF fuse in real time using Brillouin scattering was also developed. This CW-based method is free from the necessity of injecting additional light and/or integrating the signal, and could be implemented also for glass-fiber fuse monitoring. We believe that this work will be useful in developing POF-based systems for high-capacity transmission and distributed Brillouin sensing [28, 29, 30, 31, 32] in the near future.

## Acknowledgment

The authors are grateful to R. Nedelcov (Department of Language Arts, Tokyo University of the Arts) for his English editing.

## References

* [1] M. G. Kuzyk, _Polymer Fiber Optics: Materials, Physics, and Applications_. Boca Raton, FL, USA: CRC Press, 2006.
* [2] Y. Koike and M. Asai, "The future of plastic optical fiber," _NPG Asia Mater._, vol. 1, pp. 22-28, 2009.
* [3] I. Mollers _et al._, "Plastic optical fiber technology for reliable home networking: Overview and results of the EU project POF-ALL," _IEEE Commun. Mag._, vol. 47, no. 8, pp. 58-68, Aug. 2009.
* [4] I. R. Husdi, K. Nakamura, and S. Ueha, "Sensing characteristics of plastic optical fibres measured by optical time-domain reflectometry," _Meas. Sci. Technol._, vol. 15, no. 8, pp. 1553-1559, 2004.
* [5] G. P. Agrawal, _Nonlinear Fiber Optics_. San Diego, CA, USA: Academic Press, 1995.
* [6] Y. Mizuno and K. Nakamura, "Experimental study of Brillouin scattering in perfluorinated polymer optical fiber at telecommunication wavelength," _Appl. Phys. Lett._, vol. 97, no. 2, pp. 021103-1-021103-3, Jul. 2010.
* [7] Y. Mizuno and K. Nakamura, "Potential of Brillouin scattering in polymer optical fiber for strain-insensitive high-accuracy temperature sensing," _Opt. Lett._, vol. 35, no. 23, pp. 3985-3987, 2010.
* [8] N. Hayashi, Y. Mizuno, and K. Nakamura, "Brillouin gain spectrum dependence on large strain in perfluorinated graded-index polymer optical fiber," _Opt. Exp._, vol. 20, no. 19, pp. 21101-21106, 2012.
* [9] Y. Mizuno, T. Ishigure, and K. Nakamura, "Brillouin gain spectrum characterization in perfluorinated graded-index polymer optical fiber with 62.5-\(\mu\)m core diameter," _IEEE Photon. Technol. Lett._, vol. 23, no. 24, pp. 1863-1865, Dec. 2011.
* [10] Y. Mizuno, N. Hayashi, and K. Nakamura, "Brillouin scattering signal in polymer optical fiber enhanced by exploiting pulsed pump with multimode-fiber-assisted coupling technique," _Opt. Lett._, vol. 38, no. 9, pp. 1467-1469, May 2013.
Nakamura, \"Brillouin scattering signal in polymer optical fiber enhanced by exploiting pulsed pump with multimode-fiber-assisted coupling technique,\" _Opt. Lett._, vol. 38, no. 9, pp. 1467-1469, May 2013. * [11] N. Hayashi, Y. Mizuno, and K. Nakamura, \"Characterization of stimulated Brillouin scattering in polymer optical fibers based on lock-in-free pump-probe technique,\" _J. Lightwave Technol._, vol. 31, no. 19, pp. 3162-3166, Oct. 2013. * [12] A. Minardo, R. Berini, and L. Zeni, \"Distributed temperature sensing in polymer optical fiber by BOFDA,\" _IEEE Photon. Technol. Lett._, vol. 26, no. 4, pp. 387-390, Feb. 2014. * [13] N. Hayashi, Y. Mizuno, and K. Nakamura, \"First demonstration of distributed Brillouin measurement with centimeter-order resolution based on plastic optical fibers,\" presented at the OptoElectronics Commun. Cont. Australian Conference Optical Fibre Technology (OEC/ACOGT), Melbourne, Australia, Jul. 6-10, 2014, paper 239.00. * [14] R. Kashyap and K. J. Blow, \"Observation of catastrophic self-propelled self-focusing in optical fibres,\" _Electron. Lett._, vol. 24, no. 1, pp. 47-49, Jan. 1988. * [15] R. Kashyap, \"Self-propelled self-focusing damage in optical fibres,\" in _Proc. 10th Int. Conf. Lasers Appl._, F. J. Duarte, Ed. McLean, VA: STS Press, 1988, pp. 859-866, Lake Tahoe, NV, USA, Dec. 7-11, 1987. * [16] R. Kashyap, \"The fiber fuse--From a curious effect to a critical issue: A 25th year retrospective,\" _Opt. Exp._, vol. 21, no. 5, pp. 6422-6441, 2013. * [17] S. Todoraki, \"Fiber fuse propagation behaviour,\" in _Selected Topics on Optical Fiber Technology_. Rijeka, Croatia: InTech, 2012, pp. 551-570. * [18] S. Todoraki, _Fiber Fuse--Light-Induced Continuous Breakdown of Silica Glass Optical Fiber_. Tokyo, Japan: Springer-Verlag, 2014. * [19] T. Morioka _et al._, \"Enhancing optical communications with brand new fibers,\" _IEEE Commun. Mag._, vol. 50, no. 2, pp. s31-s42, 2012. * [20] Y. Mizuno, N. Hayashi, H. Tanaka, K. Nakamura, and S. Todoraki, \"Observation of polymer optical fiber fuse,\" _Appl. Phys. Lett._, vol. 104, no. 4, pp. 043302-043302-4, Jan. 2014. * [21] Y. Mizuno, N. Hayashi, H. Tanaka, K. Nakamura, and S. Todoraki, \"Propagation mechanism of polymer optical fiber fuse,\" _Sci. Rep._, vol. 4, no. 4800, pp. 1-4, Apr. 2014. * [22] S. Todoraki, \"Origin of periodic void formation during fiber fuse,\" _Opt. Exp._, vol. 13, no. 17, pp. 6381-6389, 2005. * [23] R. M. Atkins, P. G. Simpkins, and A. D. Valon, \"Track of a fiber fuse: A Rayleigh instability in optical waveguides,\" _Opt. Lett._, vol. 28, no. 12, pp. 974-976, 2003. * [24] K. S. Abedin and M. Nakazawa, \"Real time monitoring of a fiber fuse using an optical time-domain reflectometer,\" _Opt. Exp._, vol. 18, no. 20, pp. 21315-21321, 2010. * [25] K. S. Abedin, M. Nakazawa, and T. Miyazaki, \"Backreflected radiation due to a propagating fiber fuse,\" _Opt. Exp._, vol. 17, no. 8, pp. 6525-6531, 2009. * [26] S. Todoraki, \"Fiber fuse propagation modes in typical single-mode fibers,\" in _Proc. Opt. Fiber Commun.Nat. Fiber Opt. Eng. Conf._, Anaheim, CA, USA, 2013, pp. JWZA.1-JWZA.11. * [27] G. P. Agrawal, _Fiber-Optic Communication Systems_. Hoboken, NJ, USA: Wiley, 2010. * [28] T. Horiguchi and M. Tateda, \"BOTDA--Nondestructive measurement of single-mode optical fiber attenuation characteristics using Brillouin interaction: Theory,\" _J. Lightwave Technol._, vol. 7, no. 8, pp. 1170-1176, Aug. 1989. * [29] T. Kurashima, T. Horiguchi, H. Izumida, and M. 
Tateda, \"Brillouin optical-fiber time domain reflectometry,\" _IEICE Trans. Commun._, vol. E76-B, pp. 382-390, 1993. * [30] K. Hotate and T. Hasegawa, \"Measurement of Brillouin gain spectrum distribution along an optical fiber using a correlation-based technique--Proposal, experiment and simulation,\" _IEICE Trans. Electron._, vol. E83-C, no. 3, pp. 405-412, Mar. 2000. * [31] Y. Mizuno, W. Zou, Z. He, and K. Hotate, \"Proposal of Brillouin optical correlation-domain reflectometry (BOCDR),\" _Opt. Exp._, vol. 16, no. 16, pp. 12148-12153, Aug. 2008. * [32] D. Garus, K. Kreber, and F. Schliep, \"Distributed sensing technique based on Brillouin optical-fiber frequency-domain analysis,\" _Opt. Lett._, vol. 21, no. 17, pp. 1402-1404, Sep. 1996.
We study the propagation behavior of the polymer optical fiber (POF) fuse at a power density up to several tens of kW/cm\({}^{2}\) (corresponding to subwatt power). The propagation velocity is raised in proportion to the power density, reaching 41 mm/s at 67 kW/cm\({}^{2}\). We also observe spiral oscillation and spontaneous termination of the fuse propagation, with the latter accompanied by a burst. We then develop a new method of detecting the location of the propagating POF fuse remotely and nonvisually in real time using Brillouin scattering, which can be clearly observed at such a high power density. This method requires neither additional light injection nor signal integration, and it could be used to monitor the propagating fuse in glass fibers.

Index Terms: Polymer optical fibers, fiber fuse, Brillouin scattering, remote sensing, real-time monitoring, nonlinear optics.
# Points Classification by a Sequential Higher-Order Moments Statistical Analysis of LIDAR Data

F. Crosilla, D. Macorig, I. Sebastianutti, D. Visintini
Department of Civil Engineering and Architecture, University of Udine, via delle Scienze 206, Udine, Italy - (fabio.crosilla, domenico.visintini)@uniud.it
Municipality of Tavagnacco (UD), Piazza Indipendenza 1 - 33010 Feletto Umberto, Udine, Italy - [email protected]

###### LIDAR, Classification, Algorithms, Skewness, Kurtosis

## 1 Background

Up to now, a limited number of algorithms has been proposed to perform unsupervised point classification by studying the behaviour of some statistical parameters of the LIDAR point cloud distribution values. Bartels et al. (2006, 2010) introduced a "skewness balancing" algorithm able to separate ground and non-ground points by elevation, where the former can belong to both flat and sloped terrain. In another paper, Bao et al. (2007) considered the analysis of the kurtosis of the point distribution values, allowing a separation among ground, buildings and vegetation. Antonarakis et al. (2008) subdivided the whole area into cells of small dimension and, for each cell, computed the skewness and kurtosis of the points of the first and last pulses; the final classification results from the combination of several parameters. A further improvement of the classification process was recently obtained by a combined analysis of the skewness and kurtosis distribution functions for elevation and intensity LIDAR point distribution values (Bao et al., 2008; Yunfei et al., 2009; Liu et al., 2009).

As well known from statistics, skewness (sk) is the third standardized moment about the mean. Its value represents the degree of asymmetry around the mean and is defined as

\[sk=\frac{1}{N\times\sigma^{3}}\times\sum_{i=1}^{N}(x_{i}-\mu)^{3} \tag{1}\]

where \(N\) is the number of points of the cloud, \(x_{i}\) is the elevation or the intensity value of the \(i\)-th point, and \(\mu\) is the mean value of elevation or intensity, computable from

\[\mu=\frac{\sum x_{i}}{N} \tag{2}\]

\(\sigma\) is the standard deviation of all points, obtainable from

\[\sigma=\sqrt{\frac{\sum(x_{i}-\mu)^{2}}{N}} \tag{3}\]

A skewness value of zero indicates a symmetric distribution. For elevation data, negative values indicate dominance of valleys while positive values show dominance of peaks.

Kurtosis (ku) is instead the fourth standardized moment about the mean. Its value measures the relative flatness or peakedness of the distribution about its mean. It can be computed from

\[ku=\frac{1}{N\times\sigma^{4}}\times\sum_{i=1}^{N}(x_{i}-\mu)^{4} \tag{4}\]

The normal distribution has a kurtosis equal to 3. Larger values indicate a peaked distribution, while values smaller than 3 characterize a valley distribution.
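These moment computations translate directly into code; the following sketch (assuming NumPy, with the population form of the standard deviation as in Eq. (3)) mirrors Eqs. (1)-(4).

```python
import numpy as np

def skewness(x):
    """Eq. (1): third standardized moment of the sample."""
    mu = x.mean()                 # Eq. (2)
    sigma = x.std()               # Eq. (3), population form
    return np.mean((x - mu) ** 3) / sigma ** 3

def kurtosis(x):
    """Eq. (4): fourth standardized moment (3 for a normal distribution)."""
    mu = x.mean()
    sigma = x.std()
    return np.mean((x - mu) ** 4) / sigma ** 4

rng = np.random.default_rng(0)
z = rng.normal(size=100_000)
print(skewness(z), kurtosis(z))   # approximately 0 and 3
```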
In the mentioned literature, skewness and kurtosis are recomputed every time the most elevated point, or the point with the largest intensity value, is sequentially removed from the data set. For instance, by performing the skewness and kurtosis analysis of the sampled intensity data, there is a good probability of well approximating the skewness and kurtosis values of a normal distribution in case of a homogeneous cluster of data. The same holds for a flat terrain in case its elevation values are considered. For a bi- or multi-modal intensity distribution, Gaussian values of the kurtosis are only satisfied in the last part of the procedure, when the original multi-modal distribution is reduced to a single-modal behaviour. For this reason the analysis is sequentially carried out for all the sampled LIDAR points in order to identify all the potential clusters.

Let us consider the example reported in Fig. 1a (red square). The presence of ground points with homogeneous intensity and some darker vegetation points can be noted. The diagrams of the skewness and kurtosis values are reported in Fig. 1b. It is possible to note how, during the running steps (cycles), when the ground intensity values are successively removed, the skewness and kurtosis values continuously change. When the skewness is zero and the kurtosis presents its minimum value, the distribution is symmetric and the same number of points is expected for the two different clusters. At this point skewness and kurtosis start rising, and the kurtosis reaches a local maximum (equal to 3) when only vegetation points are present. This fact is also verified by a local maximum of the skewness, confirming the presence of vegetation points belonging to a unique cluster. As suggested by Liu et al. (2009), this point separates ground and vegetation: vegetation points are on the right side (see Fig. 1b) while ground points are on the left side.

This behaviour mainly holds for intensity data, not at all for elevation data. In this last case the object geometry deeply conditions the skewness and kurtosis values. As said before, negative skewness values indicate dominance of valleys while positive values show dominance of peaks. Anyway, also in this case it is possible to identify clusters of homogeneous points. This can be easily verified from the example reported in Fig. 2a, where two different clusters of LIDAR data are shown, representing ground points and vegetation (white points). Analyzing the skewness and the kurtosis along the sequential procedure, when all the vegetation points are removed, the two curves become and remain stable till the end of the process (Fig. 2b). Similar results have also been provided by Liu et al. (2009).
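The sequential analysis described above can be sketched as follows, reusing the `skewness` and `kurtosis` helpers defined earlier: at each cycle the largest remaining value is removed and the two moments are recomputed, yielding curves analogous to those of Figs. 1b and 2b. The bimodal intensity sample is synthetic and for illustration only.

```python
import numpy as np

def sequential_moments(values):
    """Remove the largest value at each cycle and record skewness/kurtosis.

    Returns one (skewness, kurtosis) pair per cycle; a local maximum of
    kurtosis near 3 flags the cycle at which a homogeneous cluster remains.
    """
    x = np.sort(np.asarray(values, dtype=float))  # ascending; pop from the end
    curves = []
    while x.size > 3:                             # need enough points for moments
        curves.append((skewness(x), kurtosis(x)))
        x = x[:-1]                                # drop the largest value
    return curves

# Example: a bimodal intensity sample (bright ground + darker vegetation).
rng = np.random.default_rng(1)
sample = np.concatenate([rng.normal(50, 2, 500), rng.normal(90, 2, 500)])
for cycle, (sk, ku) in enumerate(sequential_moments(sample)[::200]):
    print(cycle * 200, round(sk, 2), round(ku, 2))
```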
## 2 The proposed procedure

The sequential procedure allows one to alternatively use the most effective values between intensity and elevation for classifying a homogeneous cluster of points. If the graph of the distribution is such as to prefer the intensity values (pronounced bi- or multi-modal distribution), from the skewness and kurtosis behaviour analysis the last part of the distribution values will be classified as in Fig. 1b: points located on the right side of the kurtosis local maximum are homogeneously classified, while points located on its left side remain unclassified. A similar approach is still valid if the point classification is carried out for the elevation: points satisfying, for the last part of the skewness and kurtosis function values, a local flatness condition are homogeneously classified, while the others remain unclassified. The same procedure can be applied again to the points just classified, or not yet classified, using the complementary data category; i.e., the intensity analysis is applied to the data already classified by elevation and vice versa. The mixed procedure makes it possible to identify further sub-classes within the already classified ones, or to perform a reliable classification of the points still unclassified.

A shape analysis of the continuous distributions of the point elevation and intensity values is carried out first. A non-parametric estimation of the probability density functions can be obtained by a convolution process of a chosen kernel applied to each sampled value (e.g. Epanechnikov, 1969). Given a data set \((x_{1}, x_{2}, \ldots, x_{n})\) sampled from a distribution having an unknown density function \(f\), the problem is to estimate the shape of this function \(f\) from the following relationships:

\[f_{h}(x)=\frac{1}{n}\sum_{i=1}^{n}K_{h}\left(x-x_{i}\right)\ \ \text{with}\ \ K_{h}(x)=\frac{1}{h}K\left(\frac{x}{h}\right) \tag{5}\]

where \(K(x/h)\) is the kernel, a non-negative density function with integral equal to 1, and \(h>0\) is a real positive parameter defining the size of the sampling class (the default value is 100). Density functions symmetrical with respect to the origin are usually applied (a normal function was applied in this case).
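A compact sketch of the density estimation of Eq. (5) with a normal kernel is given below; the bandwidth and the synthetic intensity sample are illustrative.

```python
import numpy as np

def gaussian_kde(samples, grid, h=100.0):
    """Eq. (5): kernel density estimate with a normal kernel of bandwidth h."""
    samples = np.asarray(samples, dtype=float)
    u = (grid[:, None] - samples[None, :]) / h           # (x - x_i) / h
    kernel = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)  # standard normal pdf
    return kernel.mean(axis=1) / h                       # mean of K_h(x - x_i)

# Example: estimate the intensity density on a regular grid.
rng = np.random.default_rng(2)
intensity = np.concatenate([rng.normal(900, 60, 400), rng.normal(1500, 80, 300)])
grid = np.linspace(intensity.min(), intensity.max(), 200)
density = gaussian_kde(intensity, grid)
print(grid[np.argmax(density)])  # location of the dominant mode
```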
## 3 Some experiments In the following, the results of two experiments are reported, evaluating at first the category of data to start, i.e. non calibrated intensity or elevation. Thanks to the mixed sequential method, the classes obtained from the first classification run have been furthermore subdivided. Data are relative to a strip of the aerial laser scanning survey of the municipality of Tavagnacco, North of Udine (Italy), carried out in 2007 with a Leica ALS50 sensor. Forty strips have been acquired at a flight height of 1000 m with a point density around 12 pts/sm. A manual point classification has been previously carried out by the program MARS Explorer 6.1; four target classes have been manually identified: ground, street, building, vegetation and other objects (i.e cars). ### First experiment The experiment is related to one of the main applications of the laser points analysis, that is road extraction. A small area of the municipality of Tavagnacco (UD), crossed by the highway, is taken into account (Fig. 3). By analyzing the distribution functions both for intensity (Fig. 4a) and for elevation (Fig. 4b), it is possible to see how it is really hard to clearly distinguish some elevation point clusters, while there exists a clear distinction for what concerns the intensity. This is confirmed by the values of the dip test that furnishes 0,0166 for elevation and 0,0543 for intensity. Thus the choice falls on the computation of skewness and kurtosis for the intensity values obtaining the graph reported in fig 5. The decision is to classify the points in correspondence of the maximum value of kurtosis, where a peak value of skewness is also present (cycle 206), obtaining the classification of Fig. 6a. How it was logical to expect, the largest part of the points belonging to the asphalted area are correctly classified, more some points belonging to vegetation (upper right part of Fig. 6a) and some sparse ground points. To all these points a provisory classification label was assigned, while the points not yet classified (gray in Fig. 6a) were considered unclassified. The elevation analysis was then applied to the red points in Fig. 6a, obtaining the graph in Fig. 6b. It seems evident that is possible to separate the points belonging to the small cluster at height 187 m, from those contained in the range 188 m \\(-\\) 191 m. Computing again the skewness and kurtosis coefficients for such points, considering the elevation values, the following behaviour is obtained (Fig. 7). It was decided to classify the points according to the elevation following the cycle 2845, in correspondence of a local flatness of the kurtosis. The result makes possible to extract from the red points of Fig. 6a those belonging to the terrain (Fig. 8). Figure 4: (a) Intensity distribution function and (b) Elevation distribution function for the points of the first experiment. Figure 5: Skewness and kurtosis behaviour for the intensity values of the first experiment. Figure 3: Intensity image of the area of the first experiment. Figure 6: (a) Point intensity classification. (b) Distribution function for the elevation values of the points classified in red. Figure 7: Skewness and kurtosis values for elevation of the points classified in red in Fig. 6. ### Second experiment This experiment represents a very significant synthesis of real situations (see Fig. 
## 3 Some experiments

In the following, the results of two experiments are reported, evaluating first the category of data to start with, i.e. non-calibrated intensity or elevation. Thanks to the mixed sequential method, the classes obtained from the first classification run have been further subdivided. The data are relative to a strip of the aerial laser scanning survey of the municipality of Tavagnacco, North of Udine (Italy), carried out in 2007 with a Leica ALS50 sensor. Forty strips have been acquired at a flight height of 1000 m with a point density of around 12 pts/m². A manual point classification has been previously carried out with the program MARS Explorer 6.1; the following target classes have been manually identified: ground, street, building, vegetation and other objects (i.e. cars).

### First experiment

The experiment is related to one of the main applications of laser point analysis, that is, road extraction. A small area of the municipality of Tavagnacco (UD), crossed by the highway, is taken into account (Fig. 3). By analyzing the distribution functions both for intensity (Fig. 4a) and for elevation (Fig. 4b), it is possible to see that it is really hard to clearly distinguish some elevation point clusters, while there exists a clear distinction for what concerns the intensity. This is confirmed by the values of the dip test, which furnishes 0.0166 for elevation and 0.0543 for intensity. Thus the choice falls on the computation of skewness and kurtosis for the intensity values, obtaining the graph reported in Fig. 5. The decision is to classify the points in correspondence of the maximum value of the kurtosis, where a peak value of the skewness is also present (cycle 206), obtaining the classification of Fig. 6a. As was logical to expect, the largest part of the points belonging to the asphalted area are correctly classified, plus some points belonging to vegetation (upper right part of Fig. 6a) and some sparse ground points. To all these points a provisional classification label was assigned, while the points not yet classified (gray in Fig. 6a) were considered unclassified. The elevation analysis was then applied to the red points in Fig. 6a, obtaining the graph in Fig. 6b. It seems evident that it is possible to separate the points belonging to the small cluster at height 187 m from those contained in the range 188 m - 191 m. Computing again the skewness and kurtosis coefficients for such points, considering the elevation values, the behaviour of Fig. 7 is obtained. It was decided to classify the points according to the elevation at cycle 2845, in correspondence of a local flatness of the kurtosis. The result makes it possible to extract from the red points of Fig. 6a those belonging to the terrain (Fig. 8).

Figure 3: Intensity image of the area of the first experiment.
Figure 4: (a) Intensity distribution function and (b) elevation distribution function for the points of the first experiment.
Figure 5: Skewness and kurtosis behaviour for the intensity values of the first experiment.
Figure 6: (a) Point intensity classification. (b) Distribution function for the elevation values of the points classified in red.
Figure 7: Skewness and kurtosis values for elevation of the points classified in red in Fig. 6.
Figure 8: Height classification of the red points in Fig. 6a.

### Second experiment

This experiment represents a very significant synthesis of real situations (see Fig. 9): we can see the presence of ground, vegetation, road and part of the roof of a building, besides some disturbing elements such as cars and a parking place (with intensity values similar to the asphalt). Analysing the point distribution by intensity and by elevation (Fig. 10a, Fig. 10b), it is possible to see in the graph of the elevations the presence of more than two classes, while in the graph of the intensities there are two partially overlapping classes. According to these results, it was decided to start the classification process with the elevation values (Fig. 11). From the kurtosis behaviour it is possible to clearly distinguish a slip in correspondence of cycle number 1739, due to the removal of the points belonging to the roof, and the successive drop around cycle 2450, due to a series of disturbing points (vegetation, cars). It is evident that the first significant drop seen in Fig. 11 could be avoided, as the roof points are totally isolated, and the analysis could be carried out only for the vegetation, car and ground points; the authors report the complete analysis to show the readers the skewness and kurtosis behaviour for the whole data set. According to these considerations, the authors decided to classify the points starting from cycle 2856 (flat area of the kurtosis), separating the majority of the ground points (red) from all the others located above them (Fig. 12). Then, it was decided to classify the ground points again using the intensity values, with the aim of identifying the points belonging to the road. Fig. 13 reports the distribution of the ground points analysed by intensity. After computing the skewness and kurtosis indexes, the behaviour reported in Fig. 14 was obtained. The points are classified according to cycle 153, in correspondence of a local maximum of the kurtosis. In this way it was possible to separate the points belonging to the road and to the nearby parking area (see Fig. 15). The points not yet classified are now taken into account and their distribution is evaluated. Fig. 16a shows the intensity values: two partially overlapping families can be distinguished. Fig. 16b instead reports the point elevation distribution, which puts in evidence two distinct main clusters, one around 201 m and another one around 211 m.

Figure 9: Intensity detail of the area involved in the second experiment.
Figure 10: (a) Intensity distribution function and (b) elevation distribution function for the points of the second experiment.
Figure 12: Point classification by elevation for the second experiment.
Figure 13: Distribution function of the intensity values for the ground points of the second experiment.
Figure 14: Behaviour of skewness and kurtosis for the intensity values of the ground points.
In this figure it is possible to immediately see the roof coloured in pink, corresponding to the point cluster with a mean elevation of 211 m, and some small red areas representing residual ground points not completely identified at the previous iteration (see Fig. 12), together with the points of the ramp. In this way it was possible to separate the roof of the building from the residual ground points, the ramp and a series of disturbing points relative to the cars and low vegetation.

## 4 Extension to complex situations

The classification method proposed here works well for small areas, where only a few modes can be expected in the intensity and elevation distributions. The method becomes prohibitive when applied to large, inhomogeneous and complex areas, where the intensity and elevation values may exhibit strongly multi-modal behaviour. For this reason the classification procedure was conceived as a progressive multi-analysis method, in which the whole area is subdivided into regular sub-areas and the interactive classification is carried out on each of them. Initial experimental results confirm that the interactive classification method extends to complex situations.

The experiment was carried out for the complex area reported in Fig. 19. The area is characterised by two flat parts, located at different heights, connected by sloping terrain covered by trees and other kinds of vegetation. The whole area was subdivided into four zones (Fig. 20) and for each zone the intensity and elevation distributions were computed (see Fig. 21). According to the distribution results of zones 1 and 3, it was decided to further subdivide these zones into four parts. Proceeding in this way, that is, after having analysed the shape of the distribution functions for intensity and elevation, the whole area was finally subdivided according to the scheme reported in Fig. 22. Performing the skewness and kurtosis analysis for elevation and intensity in each of the unitary zones, the final result of the progressive interactive classification is reported in Fig. 23.

Figure 16: (a) Intensity distribution function and (b) elevation distribution function for the unclassified points of Fig. 12.

Figure 17: Behaviour of skewness and kurtosis for the elevations of the unclassified points, neglecting the roof points.

Figure 18: Classification according to the skewness and kurtosis values as in Fig. 17.

Figure 19: Example of a point classification for a complex area.

Figure 21: Distribution functions for intensity and elevation for each of the four main areas.

Figure 22: Final subdivision of the entire area.

The performance of the algorithm is measured by comparing the classification against reference data obtained by a manual classification with the program MARS Explorer 6.1. The total error (i.e. the number of misclassified points as a percentage of all the points) equals 8.9%. The type I error (i.e. the number of misclassified ground points as a percentage of all the ground points) corresponds to 4.8%, while the type II error (i.e. the number of misclassified vegetation points as a percentage of all the vegetation points) equals 22.0%. According to these preliminary results, the algorithm appears to work very well for filtering ground points, even on heavily vegetated slopes, which, according to the results reported in Sithole and Vosselman (2005), are usually not correctly classified by standard packages.
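For completeness, the error figures used above can be computed from predicted and reference labels as in the following sketch; array and function names are illustrative.

```python
import numpy as np

def classification_errors(pred_is_ground, ref_is_ground):
    """Total, type I and type II errors as defined in Sec. 4.

    Type I : ground points not recognized as ground, as a share of
             all reference ground points.
    Type II: non-ground points (e.g. vegetation) labelled as ground,
             as a share of all reference non-ground points.
    """
    pred = np.asarray(pred_is_ground, bool)
    ref = np.asarray(ref_is_ground, bool)
    total = np.mean(pred != ref)
    type1 = np.mean(~pred[ref])      # missed ground points
    type2 = np.mean(pred[~ref])      # accepted non-ground points
    return total, type1, type2

# Toy check with 8 points (True = ground in the reference labels).
ref  = np.array([1, 1, 1, 1, 0, 0, 0, 0], bool)
pred = np.array([1, 1, 1, 0, 0, 0, 1, 0], bool)
print(classification_errors(pred, ref))  # (0.25, 0.25, 0.25)
```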
In any case, the type II error would be significantly lower in the presence of buildings and other kinds of man-made objects. The error values obtained in the three experiments are summarised in Table 24.

## 5 Conclusions

The paper proposes a new LIDAR point classification method based on the sequential skewness and kurtosis analysis of the elevation and intensity point distributions, removing at each step of the process the largest data values, as suggested by Liu et al. (2009). After a preliminary shape analysis of the elevation and intensity point distributions, the new procedure starts by choosing the category of data showing a significant bi- or multi-cluster distribution. The method extracts the first data cluster, which is further analysed by studying the skewness and kurtosis behaviour of the same points in the complementary data category. This makes it possible to iteratively identify potential sub-clusters of the originally selected cluster. Successive clusters are identified by applying the same mixed procedure to the unclassified LIDAR points, excluding at each pass the points classified in the previous iteration. A progressive multi-analysis extension of the proposed method was also presented for performing point classification in complex or large areas. Real numerical experiments confirm the good applicability of the method, also for ground point filtering in the case of vegetated slopes.

## References

* Antonarakis et al. (2008) Antonarakis, A.S., Richards, K.S., Brasington, J., 2008. Object-based land cover classification using airborne LiDAR. _Remote Sensing of Environment_, 112 (2008), pp. 2988-2998.
* Bao et al. (2007) Bao, Y., Cao, C., Chang, C., Li, X., Chen, E., Li, Z., 2007. Segmentation to the clouds of LIDAR data based on change of Kurtosis. In: _International Symposium on Photoelectronic Detection and Imaging 2007_, Beijing.
* Bao et al. (2008) Bao, Y., Li, G., Cao, C., Li, X., Zhang, H., He, Q., Bai, L., Chang, C., 2008. Classification of LIDAR point cloud and generation of DTM from LIDAR height and intensity data in forested area. In: _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, Beijing, China, Vol. XXXVII, Part B3b.
* Bartels and Wei (2006) Bartels, M., Wei, H., 2006. Segmentation of LIDAR data using measures of distribution. In: _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, Enschede, the Netherlands, Vol. XXXVI, Part 7.
* Bartels et al. (2006) Bartels, M., Wei, H., Mason, D., 2006. DTM generation from LIDAR data using skewness balancing. In: _18th International Conference on Pattern Recognition (ICPR'06), Volume 1_, pp. 566-569.
* Bartels and Wei (2010) Bartels, M., Wei, H., 2010. Threshold-free object and ground point separation in LIDAR data. _Pattern Recognition Letters_, 31 (2010), pp. 1089-1099.
* Epanechnikov (1969) Epanechnikov, V.A., 1969. Non-parametric estimation of a multivariate probability density. _Theory of Probability and its Applications_, 14, pp. 153-158.
* Hartigan and Hartigan (1985) Hartigan, J.A., Hartigan, P.M., 1985. The dip test of unimodality. _The Annals of Statistics_, 13(1), pp. 70-84.
* Liu et al. (2009) Liu, Y., Li, Z., Hayward, R., Walker, R., Jin, H., 2009. Classification of airborne LIDAR intensity data using statistical analysis and Hough transform with application to power line corridors. In: _2009 Digital Image Computing: Techniques and Applications_, pp. 462-467.
* Sithole and Vosselman (2005) Sithole, G., Vosselman, G., 2005. Filtering of airborne laser scanner data based on segmented point clouds. In: _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, Enschede, the Netherlands, Vol. XXXVI, Part 3/W19, pp. 66-71.
* Teschendorff et al. (2005) Teschendorff, A.E., Wang, Y., Barbosa-Morais, N., Brenton, J.D., Caldas, C., 2005. A variational Bayesian mixture modelling framework for cluster analysis of gene-expression data. _Bioinformatics_, 21, pp. 3025-3033.
* Teschendorff et al. (2006) Teschendorff, A.E., Naderi, A., Barbosa-Morais, N., Caldas, C., 2006. PACK: Profile Analysis using Clustering and Kurtosis to find molecular classifiers in cancer. _Bioinformatics_, 22, pp. 2269-2275.
* Yeung et al. (2001) Yeung, K.Y., Fraley, C., Murua, A., Raftery, A.E., Ruzzo, W.L., 2001. Model-based clustering and data transformations for gene expression data. _Bioinformatics_, 17(10), pp. 977-987.

## Acknowledgements

The authors thank Dr. Daniele Piccolo for providing some statistical testing computations with the R package.

\\begin{table} \\begin{tabular}{c|c|c|c} \\hline & First exp. & Second exp. & Complex area \\\\ \\hline Total error & 5.6\\% & 1.2\\% & 8.9\\% \\\\ \\hline Type I error & 6.0\\% & 0.4\\% & 4.8\\% \\\\ \\hline Type II error & 20.0\\% & 0.7\\% & 22.0\\% \\\\ \\hline \\end{tabular} \\end{table} Table 24: Error values in the three experiments.

Figure 23: Final classification result.
The paper deals with a new sequential procedure to perform unsupervised LIDAR point classification by iteratively studying the skewness and kurtosis of the elevation and intensity point distributions. After a preliminary local shape analysis of the elevation and intensity point distributions, carried out from the original discrete frequencies by a non-parametric estimation of the density functions, the procedure starts by choosing the category of data (elevation or intensity) to analyse first: the choice falls on the category that, according to a statistical test, better exhibits a bi- or multi-cluster distribution. The first point cluster is identified by studying the skewness and kurtosis variations of the distribution, after removing at each step the largest data values. The selected cluster is further analysed by studying the higher-order moment behaviour of the complementary data category. This makes it possible to identify potential sub-clusters of the originally selected one, permitting a more effective point classification. Successive clusters are identified by applying the same iterative procedure to the still unclassified LIDAR points. For complex point distribution shapes or for the classification of large areas, a progressive analysis method, based on the partition of the entire data set into regular subsets, is proposed. Real numerical experiments confirm the capability of the proposed method. The total classification errors in the experiments range from a minimum of 1.2% to a maximum of 8.9%.
# A Joint Model and Data Driven Method for Distributed Estimation

Meng He, Ran Li, Chuan Huang, and Shulong Zhang

Part of this paper was presented at the IEEE International Conference on Communications in China (ICCC) 2022 [1]. This work was supported in part by the Natural Science Foundation of China under Grant No. 62022070 and No. 62341112, in part by Shenzhen high-tech zone project No. KC2022KCCX0041, in part by the key project of Shenzhen No. LYJ20208118000613, in part by the Shenzhen Outstanding Talents Training Fund 202002, in part by the Guangdong Provincial Key Laboratory of Future Networks of Intelligence (Grant No. 202B1212010001), and in part by the Shenzhen Key Laboratory of Big Data and Artificial Intelligence (Grant No. ZDSYS201707251409055). (Corresponding author: Chuan Huang.) M. He, R. Li and C. Huang are with the Future Network of Intelligence Institute and the School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China, 518172. Chuan Huang is also with Peng Cheng Laboratory, Shenzhen, China, 518066. (Emails: [email protected], [email protected], and [email protected].) S. Zhang is with SF Technology, Shenzhen, China, 518052. (Email: [email protected].)

## I Introduction

Motivated by the rapid development of wireless sensor networks (WSNs) in practical applications [2, 3], such as environmental monitoring, weather forecasting, and health care, parameter estimation from distributed data has been thoroughly investigated in numerous works [4, 5, 6, 7]. A typical distributed estimation network consists of multiple sensors deployed at different locations, each observing the desired unknown parameter and transmitting its local observations to the fusion center (FC) [8]. Constrained by the limited wireless communication resources [9, 10], e.g., bandwidth and energy, the local observations at the sensors are usually quantized into a finite number of bits before being transmitted to the FC [11, 12, 13]. The FC leverages the quantized information received from all the sensors to estimate the desired parameter.

The performance of quantization-based distributed estimation methods and the optimization of distributed estimation networks have been extensively investigated in the literature [14, 15, 16, 17, 18]. Considering the impact of transmission errors, the authors in [19] formulated the distributed estimation problem in the context of least-mean-square adaptive networks and derived closed-form expressions for the steady-state mean square error (MSE) of the network. Considering different distortion criteria, e.g., Fisher information [20] and minimum mean square error (MMSE) [21], the authors in [14] investigated the optimal quantizer design for decentralized parameter estimation with two distributed sensors. Based on a generalization of the classical Lloyd-Max results [22, 23], the author in [15] proposed an algorithm to design optimal non-linear distributed estimators for a bivariate probability distribution. Considering the constraint of one-bit quantization at the sensors, the authors in [16] established the asymptotic optimality of the maximum-likelihood (ML) estimator for the deterministic mean-location parameter estimation problem. However, the above works were limited to the assumption of either a deterministic desired parameter or a random one with perfect knowledge of its distribution.
Based on the minimization of the Cramer-Rao lower bound (CRLB), probabilistic quantization strategies have been actively investigated recently [24, 25, 26, 27, 28, 29]. Considering the one-bit quantization scheme at the sensors and applying the minimax criterion, the authors in [26] derived the minimax CRLB for the scenario with ideal noiseless observations. Following an identical one-bit quantization scheme across different sensors, the authors in [27] further approximated the optimal probabilistic quantization by a parameterized antisymmetric and piecewise-linear function to minimize the corresponding minimax CRLB. In [28], the authors obtained the optimality conditions for using a binary quantizer at all sensors and the optimal binary quantizer for the location parameter estimation problem. However, these works were limited to the noiseless observation scenario, and the extension to the noisy case was subject to the assumption of perfect knowledge about the distributions of the desired parameter and observation noise. The optimal quantizer for the noisy scenario with imperfect statistical information remains to be explored.

This paper considers a WSN in which local observations are acquired at distributed sensors and transmitted to the FC for the estimation of a desired parameter. Due to the bandwidth constraints at the sensors, these observations are quantized into finite bits prior to transmission. The FC aims to estimate the desired parameter based on the quantized information from all sensors. The major contributions of this paper are summarized as follows:

* First, considering the one-bit quantization constraint at the sensors, we derive the universal estimation MSE lower bound for the case with conditionally independent and identically distributed (i.i.d.) observations. This lower bound is shown to be independent of the FC design, and thus the binary probabilistic quantizer is designed to minimize this MSE lower bound.
* Second, we prove the optimality of the mean-fusion operation at the FC, which uses only the average of all the quantized data for estimation, in terms of achieving the same MSE lower bound as the one obtained by using all the quantized data from the sensors. Thus, an FC module with mean-fusion operation on the quantized data is proposed to accommodate a system with a variable number of sensors.
* Then, considering the scenarios in which the distributions of both the desired parameter and the noise are unknown or only the noise distribution is known, a joint model and data driven method is proposed to train the probability controller in the quantizer and the estimator in the FC as deep neural networks (DNNs). Two loss functions derived from the MMSE criterion are utilized for the sequential training of their design parameters.
* Finally, we extend the above results to the case with a multi-bit quantizer. We propose the multi-bit parallel and one-hot quantization schemes, and analyze the corresponding MSE lower bounds and the optimality of the mean-fusion operations, respectively.

The remainder of the paper is organized as follows: Section II introduces the system model and presents the problem formulation. Section III proposes the joint model and data driven method for binary quantization. Section IV extends the results to the scenario of multi-bit quantization. Section V presents the simulation analysis. Finally, Section VI concludes this paper.

Notations: Boldface lowercase and uppercase letters, e.g., \\(\\mathbf{x}\\) and \\(\\mathbf{X}\\), denote vectors and matrices, respectively.
\\([\\mathbf{X}]_{i,j}\\) denotes the \\((i,j)\\)-th entry of the matrix \\(\\mathbf{X}\\). \\(\\mathbb{E}[\\cdot]\\) represents expectation. \\(|x|\\) denotes the absolute value of the scalar \\(x\\). \\(\\mathbb{N}^{+}\\) is the positive integer set. \\([N]\\) denotes the set of positive integers no bigger than \\(N\\). Calligraphic uppercase letters, e.g., \\(\\mathcal{U}\\), denote sets. \\(U_{1}\\circ U_{2}\\) represents the composition of the neural networks \\(U_{1}\\) and \\(U_{2}\\).

## II System Model and Problem Formulation

### _System model_

A generalized distributed estimation problem is considered in this paper as shown in Fig. 1, where the FC aims to estimate the desired parameter \\(\\theta\\) by using \\(K\\) distributed sensors, denoted as \\(\\mathrm{S}_{1},\\cdots,\\mathrm{S}_{K}\\). Sensor \\(\\mathrm{S}_{k}\\), \\(k=1,2,\\cdots,K\\), observes \\(\\theta\\) independently and obtains its local observation \\(X_{k}\\), which is a noisy version of \\(\\theta\\). Here, we consider the widely adopted scenario in which the observation noises at all sensors are i.i.d., and thus the local observations obtained at all sensors are conditionally i.i.d. with given \\(\\theta\\), i.e.,

\\[f_{X_{1},\\cdots,X_{K}}(x_{1},\\cdots,x_{K}|\\theta)=\\prod_{k=1}^{K}f_{X_{k}}(x_{k}|\\theta)=\\prod_{k=1}^{K}f_{X}(x_{k}|\\theta), \\tag{1}\\]

\\(\\forall x_{1},\\cdots,x_{K}\\in\\mathbb{R}\\), where \\(f_{X_{1},\\cdots,X_{K}}(\\cdot|\\theta)\\) is the conditional joint probability density function (PDF) of all observations with given \\(\\theta\\) and \\(f_{X}(\\cdot|\\theta)=f_{X_{1}}(\\cdot|\\theta)=\\cdots=f_{X_{K}}(\\cdot|\\theta)\\) is the conditional marginal PDF of the local observation with given \\(\\theta\\) at any sensor.

Due to the bandwidth constraints, sensor \\(\\mathrm{S}_{k}\\), \\(k=1,\\cdots,K\\), quantizes its local observation \\(X_{k}\\) into a discrete message \\(u_{k}\\in\\{0,1,\\cdots,L-1\\}\\), with \\(L\\) being the quantization level. The conditional distribution of the quantized data \\(u_{k}\\) given the local observation \\(x_{k}\\), i.e., \\(p(u_{k}|X_{k})\\), for \\(k=1,\\cdots,K\\), describes the probabilistic quantizer at sensor \\(k\\). Then, the sensor transmits \\(u_{k}\\) to the FC through an error-free channel, and the FC uses the quantized data \\(\\mathbf{u}=[u_{1},\\cdots,u_{K}]^{T}\\) received from all \\(K\\) sensors to generate the estimate of \\(\\theta\\), denoted as \\(\\hat{\\theta}(\\mathbf{u})\\).

### _Problem formulation_

To evaluate the estimation performance, we define a cost function \\(C_{\\theta}\\left(\\hat{\\theta},\\mathbf{u}\\right)\\) for the desired parameter \\(\\theta\\), and a widely adopted one is based on the MSE principle, i.e.,

\\[C_{\\theta}\\left(\\hat{\\theta},\\mathbf{u}\\right)=\\mathbb{E}_{\\theta,\\mathbf{u}}\\big{[}|\\theta-\\hat{\\theta}\\left(\\mathbf{u}\\right)|^{2}\\big{]}. \\tag{2}\\]

The optimal quantizers \\(\\{p(u_{k}|X_{k})\\}_{k=1}^{K}\\) and FC \\(\\hat{\\theta}(\\mathbf{u})\\) are chosen to minimize \\(C_{\\theta}\\left(\\hat{\\theta},\\mathbf{u}\\right)\\), and the corresponding MSE minimization problem is formulated as

\\[\\min_{\\{p(u_{k}|X_{k})\\}_{k=1}^{K},\\ \\hat{\\theta}(\\cdot)}C_{\\theta}\\left(\\hat{\\theta},\\mathbf{u}\\right). \\tag{3}\\]

Problem (3) applies to various types of quantizer designs at the sensors, including the deterministic threshold quantizer and the quantizer with random dithering [30].
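To fix ideas, the following short simulation sketch instantiates the observation model (1) and evaluates the MSE cost (2) for a naive clipped-average estimator that ignores the quantization constraint; the distributions and sizes are example choices, not those of a specific experiment in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_trials, sigma = 25, 10_000, 0.5

theta = rng.uniform(-1.0, 1.0, n_trials)                          # parameter
x = theta[:, None] + sigma * rng.standard_normal((n_trials, K))   # Eq. (1)

# Naive unquantized baseline: average the raw observations and clip to
# the known parameter range (no bandwidth constraint is enforced here).
theta_hat = np.clip(x.mean(axis=1), -1.0, 1.0)
mse = np.mean((theta - theta_hat) ** 2)     # cost of Eq. (2)
print(mse, sigma**2 / K)                    # close to sigma^2 / K
```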
For the scenario where all sensors have conditionally i.i.d. local observations and an identical quantization level \\(L\\), it was demonstrated in [28] that adopting an identical quantizer across all the sensors, i.e.,

\\[p(u_{i}=u|X_{i}=x)=p(u_{j}=u|X_{j}=x), \\tag{4}\\]

\\(\\forall i,j\\in[K],u\\in\\{0,1,\\cdots,L-1\\},x\\in\\mathbb{R}\\), can achieve the global optimum for problem (3). Then, we can further prove the conditionally i.i.d. property of the quantized data from all sensors.

Fig. 1: System Model for Distributed Estimation.

_Proposition II.1_: If all sensors have conditionally i.i.d. local observations and adopt an identical quantizer, then the quantized data \\(\\{u_{k}\\}_{k=1}^{K}\\) are conditionally i.i.d. with given \\(\\theta\\). In other words, we have \\(p(\\mathbf{u}|\\theta)=\\prod_{k=1}^{K}p\\left(u_{k}|\\theta\\right)\\) and \\(p(u_{i}=u|\\theta)=p(u_{j}=u|\\theta)\\), \\(\\forall i,j\\in[K],u\\in\\{0,1,\\cdots,L-1\\}\\).

Proof: By substituting (1) and (4) into \\(p\\left(u|\\theta\\right)=\\int_{x}p\\left(u|X=x\\right)f_{X}(x|\\theta)\\,dx\\) and \\(p(\\mathbf{u}|\\theta)=\\prod_{k=1}^{K}p\\left(u_{k}|\\theta\\right)\\), Proposition II.1 is proved.

Therefore, we omit the sensor indices in the sequel, and problem (3) can be simplified as

\\[\\min_{p\\left(u|X\\right),\\,\\hat{\\theta}\\left(\\cdot\\right)}C_{\\theta}\\left(\\hat{\\theta},\\mathbf{u}\\right). \\tag{5}\\]

## III Binary Quantization Scheme

This section considers the design of an identical binary probabilistic quantizer at all sensors. First, the MSE lower bound for binary-quantization-based distributed estimation is derived to serve as the benchmark for the quantization performance evaluation, and the binary probabilistic quantizer is designed to minimize this lower bound. Then, the optimality of the mean-fusion operation at the FC is proved, and the corresponding FC design is derived. Finally, a joint model and data driven method is proposed to sequentially train the design parameters of both the quantizer and FC modules.

### _MSE lower bound and optimality of mean-fusion_

Considering the use of an identical binary quantizer with \\(L=2\\) at all sensors, i.e., \\(u_{1},\\cdots,u_{K}\\in\\{0,1\\}\\), the following proposition gives the achievable MSE lower bound for the binary-quantization-based distributed estimation of \\(\\theta\\) at the FC.

**Proposition III.1**: _When the binary quantized data \\(u_{1},\\cdots,u_{K}\\) from all sensors are conditionally i.i.d. with given \\(\\theta\\), the MSE for estimating \\(\\theta\\) using \\(\\mathbf{u}\\) is lower bounded by_

\\[\\mathbb{E}[|\\theta-\\hat{\\theta}(\\mathbf{u})|^{2}]\\geq\\mathcal{L}_{binary}^{K}(\\gamma), \\tag{6}\\]

_where_

\\[\\mathcal{L}_{binary}^{K}(\\gamma)=\\mathbb{E}[\\theta^{2}]-\\sum_{k=0}^{K}C_{K}^{k}\\frac{\\mathbb{E}_{\\theta}^{2}\\left[\\theta\\gamma(\\theta)^{k}(1-\\gamma(\\theta))^{K-k}\\right]}{\\mathbb{E}_{\\theta}\\left[\\gamma(\\theta)^{k}(1-\\gamma(\\theta))^{K-k}\\right]} \\tag{7}\\]

_is the MSE lower bound, and_

\\[\\gamma(\\theta)=p(u=1|\\theta)=\\mathbb{E}_{X}[p(u=1|X)|\\theta] \\tag{8}\\]

_denotes the noisy quantization probability of any quantized data \\(u\\) being "1" with given \\(\\theta\\). The equality in (6) holds if and only if \\(\\hat{\\theta}(\\mathbf{u})=\\mathbb{E}_{\\theta}\\left[\\theta p\\left(\\mathbf{u}\\left|\\theta\\right.\\right)\\right]/\\mathbb{E}_{\\theta}\\left[p\\left(\\mathbf{u}\\left|\\theta\\right.\\right)\\right]\\)._

Proof: See Appendix A for details.
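As a quick numerical illustration, the bound in (7) can be evaluated by Monte Carlo for an assumed quantization probability \\(\\gamma(\\theta)\\); the sketch below uses a uniformly distributed parameter and a sine-shaped controller as an example choice, and the computed bound should decrease as \\(K\\) grows.

```python
import numpy as np
from scipy.special import comb

def binary_mse_lower_bound(gamma_fn, K, n_mc=100_000, seed=0):
    """Monte Carlo evaluation of the MSE lower bound in Eq. (7).

    gamma_fn: theta -> p(u = 1 | theta), the noisy quantization probability.
    """
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-1.0, 1.0, n_mc)     # theta ~ U[-1, 1]
    g = gamma_fn(theta)
    k = np.arange(K + 1)
    w = g[:, None] ** k * (1.0 - g[:, None]) ** (K - k)  # per-sample weights
    num = (theta @ w) ** 2                   # squared Monte Carlo sums
    den = w.sum(axis=0)
    return np.mean(theta**2) - comb(K, k) @ (num / den) / n_mc

# Example: sine-shaped controller with noiseless observations (X = theta).
gamma_sine = lambda th: 0.5 * (1.0 + np.sin(np.pi * th / 2.0))
for K in (1, 10, 50):
    print(K, binary_mse_lower_bound(gamma_sine, K))
```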
**Remark III.1**: _It is observed from (7) that \\(\\mathcal{L}_{binary}^{K}(\\gamma)\\) is determined by the conditional probability distribution of the quantized data with given \\(\\theta\\), i.e., \\(p(u=1|\\theta)=\\gamma(\\theta)\\) and \\(p(u=0|\\theta)=1-\\gamma(\\theta)\\). Under the MMSE criterion, the achievable MSE lower bound \\(\\mathcal{L}_{binary}^{K}(\\gamma)\\) serves as the benchmark for the quantization performance evaluation at the sensors, and the optimal quantizer is designed to minimize \\(\\mathcal{L}_{binary}^{K}(\\gamma)\\)._

Proposition III.1 established a design metric for the quantizer by minimizing the MSE lower bound. Moreover, by demonstrating the optimality of the mean-fusion operation at the FC with conditionally i.i.d. quantized data, the estimation design problem at the FC can be further simplified.

**Proposition III.2**: _If the binary quantized data \\(u_{k}\\), \\(k=1,\\cdots,K\\), from all sensors are conditionally i.i.d. with given \\(\\theta\\), then the quantized data \\(\\mathbf{u}=[u_{1},\\ldots,u_{K}]^{T}\\) and their average \\(\\bar{u}=\\frac{1}{K}\\sum_{k=1}^{K}u_{k}\\) carry identical Fisher information for any given \\(\\theta\\), i.e.,_

\\[-\\mathbb{E}_{\\mathbf{u}}\\left[\\frac{\\partial^{2}\\ln p(\\mathbf{u}\\mid\\theta)}{\\partial\\theta^{2}}\\right]=-\\mathbb{E}_{\\bar{u}}\\left[\\frac{\\partial^{2}\\ln p(\\bar{u}\\mid\\theta)}{\\partial\\theta^{2}}\\right]. \\tag{9}\\]

_Furthermore, estimation of \\(\\theta\\) by using \\(\\mathbf{u}\\) or \\(\\bar{u}\\) can achieve the identical MSE lower bound, i.e.,_

\\[\\mathbb{E}[|\\theta-\\hat{\\theta}(\\bar{u})|^{2}]\\geq\\mathcal{L}_{binary}^{K}(\\gamma), \\tag{10}\\]

_where \\(\\mathcal{L}_{binary}^{K}(\\gamma)\\) is given in (7) as the lower bound for estimation using \\(\\mathbf{u}\\). The equality in (10) holds if and only if \\(\\hat{\\theta}(\\bar{u})=\\mathbb{E}_{\\theta}\\left[\\theta p\\left(\\bar{u}\\left|\\theta\\right.\\right)\\right]/\\mathbb{E}_{\\theta}\\left[p\\left(\\bar{u}\\left|\\theta\\right.\\right)\\right]\\)._

Proof: See Appendix B for details.

**Remark III.2**: _According to the reciprocity of the estimation MSE and Fisher information [28], i.e., higher Fisher information implies lower MSE and vice versa, both (9) and (10) indicate that the achievable minimum MSEs for estimating \\(\\theta\\) with \\(\\mathbf{u}\\) and with \\(\\bar{u}\\) are equivalent. Employing the original quantized data \\(\\mathbf{u}\\) for estimation in the FC renders its input dimension susceptible to changes in the number of sensors; when the designed FC cannot adapt to fluctuations in the number of sensors, the performance is likely to deteriorate. The mean-fusion operation enables the FC to use \\(\\bar{u}\\), with a fixed input dimension, and to robustly serve a system with a dynamically varying number of sensors in the network._

### _Design of probabilistic quantizer and fusion center_

According to Remarks III.1 and III.2, we are motivated to implement the binary probabilistic quantizer with random dithering and the FC with the mean-fusion operation.

#### III-B1 Probabilistic quantizer

As shown in Fig. 2, we consider one implementation of the binary probabilistic quantizer design with random dithering. Define the probability controller \\(G(\\cdot):\\mathbb{R}\\rightarrow[0,1]\\).
The local observation \\(X\\) is first sent to \\(G(\\cdot)\\), and the output \\(G(X)\\) is then fed into the quantization function \\(Q(\\cdot):[0,1]\\rightarrow\\{0,1\\}\\) to generate a random binary data

\\[u=Q(G(X))=\\frac{1+\\text{sgn}(G(X)-z)}{2}\\in\\{0,1\\}, \\tag{11}\\]

where \\(z\\sim\\text{U}(0,1)\\) is a standard uniformly distributed dithering noise, and \\(\\text{sgn}(\\cdot)\\) is the sign function. It is observed from (11) that the probability of the local observation \\(X\\) being quantized as \\(u=1\\) is equal to \\(G(X)\\), i.e.,

\\[\\begin{split} p(u=1|X)=& p\\left(\\frac{1+\\text{sgn}(G(X)-z)}{2}=1\\Big{|}X\\right)\\\\ =& p(G(X)>z|X)\\\\ =& G(X),\\end{split} \\tag{12}\\]

and intuitively we have \\(p(u=0|X)=1-G(X)\\). Therefore, by using Proposition III.1 and (12), minimizing the MSE lower bound \\(\\mathcal{L}_{binary}^{K}(\\gamma)\\) in (7) is equivalent to finding the optimal probability controller \\(G(\\cdot)\\) for the quantizer, i.e.,

\\[\\min_{G(\\cdot)}\\quad\\mathcal{L}_{binary}^{K}(\\gamma), \\tag{13}\\]

where \\(\\gamma(\\theta)\\) defined in (8) is rewritten as

\\[\\gamma(\\theta)=\\mathbb{E}_{X}[G(X)|\\theta]. \\tag{14}\\]

Fig. 2: Binary probabilistic quantizer.

#### III-B2 Fusion center

As shown in Fig. 3, we are motivated by Remark III.2 to implement an FC with the mean-fusion operation. After the binary quantized data \\(\\{u_{k}\\}_{k=1}^{K}\\) from all sensors are received by the FC, they are first averaged to get \\(\\bar{u}=\\frac{1}{K}\\sum_{k=1}^{K}u_{k}\\). Then, the desired parameter \\(\\theta\\) is estimated as

\\[\\hat{\\theta}=F(\\bar{u}), \\tag{15}\\]

where \\(F(\\cdot)\\colon[0,1]\\to\\mathbb{R}\\) is the estimator function to be designed. From (15) and by using

\\[p\\left(\\bar{u}=\\frac{k}{K}\\Big{|}\\theta\\right)=C_{K}^{k}\\left(\\gamma(\\theta)\\right)^{k}\\left(1-\\gamma(\\theta)\\right)^{K-k},\\]

which is derived in (49), the estimation MSE for \\(\\theta\\) at the FC is computed as

\\[\\begin{split}\\mathcal{T}_{binary}^{K}(F)&=\\mathbb{E}_{\\theta,\\bar{u}}\\big{[}|\\theta-F(\\bar{u})|^{2}\\big{]}\\\\ &=\\mathbb{E}_{\\theta}\\left[\\sum_{k=0}^{K}\\left|\\theta-F\\left(\\frac{k}{K}\\right)\\right|^{2}p\\left(\\bar{u}=\\frac{k}{K}\\Big{|}\\theta\\right)\\right]\\\\ &=\\sum_{k=0}^{K}C_{K}^{k}\\mathbb{E}_{\\theta}\\left[\\left|\\theta-F\\left(\\frac{k}{K}\\right)\\right|^{2}\\gamma(\\theta)^{k}\\left(1-\\gamma(\\theta)\\right)^{K-k}\\right].\\end{split} \\tag{16}\\]

With the goal of minimizing the estimation MSE in (16), the best estimator function for the FC is designed as

\\[\\min_{F(\\cdot)}\\mathcal{T}_{binary}^{K}(F). \\tag{17}\\]

Fig. 3: Mean-fusion operation for binary-quantization-based distributed estimation.

**Remark III.3**: _If perfect statistical knowledge of the distributions of the desired parameter and the noise is available, problems (13) and (17) are variational problems in \\(G(\\cdot)\\) and \\(F(\\cdot)\\), which can be solved by the calculus of variations under certain conditions [28]. The non-closed-form solution of \\(G(\\cdot)\\) can also be obtained with a parametric and data driven method using data samples generated from the respective distributions [27]. Since perfect information about the distributions of the desired parameter and the observation noise is difficult to obtain in practical systems, it is more interesting to study the scenarios in which both distributions are unknown or only the noise distribution is known. Thus, a joint model and data driven method is proposed in the next subsection._
### _Joint model and data driven method_

A joint model and data driven method to solve problems (13) and (17) is proposed in the following:

#### III-C1 Binary quantizer

For the binary probabilistic quantizer defined in Fig. 2, the probability controller \\(G(\\cdot)\\) is implemented as a DNN \\(G_{\\Phi}\\) with \\(N\\) fully connected layers (FCLs) [31]. In the DNN \\(G_{\\Phi}\\), as shown in Fig. 4(a), the input observation \\(X\\) is fed into the \\(N\\) FCLs, i.e.,

\\[G_{\\Phi}(X)=g_{N}\\circ\\cdots\\circ g_{2}\\circ g_{1}(X), \\tag{18}\\]

where \\(g_{i}\\), \\(i\\in[N]\\), is the FCL with parameter \\(\\alpha_{i}\\), number of input dimensions \\(L_{g,I,i}\\), and number of output dimensions \\(L_{g,O,i}\\). Note that \\(L_{g,I,1}=L_{g,O,N}=1\\) is fixed. To mitigate the issues of gradient explosion and vanishing, the ReLU function [31] is employed as the activation function for \\(g_{1},g_{2},\\cdots,g_{N-1}\\). In the output layer \\(g_{N}\\), the Sigmoid function is utilized as the activation function to confine the output range to \\([0,1]\\), as the output represents the quantization probability. To summarize, the training parameter of \\(G_{\\Phi}\\) is \\(\\Phi=\\{\\alpha_{1},\\alpha_{2},\\cdots,\\alpha_{N}\\}\\).

#### III-C2 FC

For the FC defined in Fig. 3, the estimator function \\(F(\\cdot)\\) is implemented as a DNN \\(F_{\\Psi}\\) with \\(N\\) FCLs, denoted as \\(f_{1},f_{2},\\cdots,f_{N}\\). Similarly, the ReLU function is employed as the activation function for \\(f_{1},f_{2},\\cdots,f_{N-1}\\) to address the gradient-related challenges. In the output layer \\(f_{N}\\), the Tanh function is utilized as the activation function, as it ensures that the output of \\(F_{\\Psi}\\) represents a normalized estimate within the range \\([-1,1]\\). The training parameter of \\(F_{\\Psi}\\) is denoted as \\(\\Psi=\\{\\beta_{1},\\beta_{2},\\cdots,\\beta_{N}\\}\\), where \\(\\beta_{i}\\) is the parameter of the FCL \\(f_{i}\\), \\(i\\in[N]\\). \\(L_{f,I,i}\\) and \\(L_{f,O,i}\\) are the numbers of input and output dimensions of the FCL \\(f_{i}\\), \\(i\\in[N]\\), with \\(L_{f,I,1}=L_{f,O,N}=1\\) being fixed.

Fig. 4: Architecture of the probability controller and estimator DNNs.

Then, by adopting the DNNs, optimization problems (13) and (17) are rewritten as (19) and (20).

**Remark III.4**: _Based on the MMSE criterion, problems (19) and (20) imply that the optimal quantizer design is independent of the FC design, while the optimal FC design is obtained based on the optimal quantizer. For the practical scenario in which the quantizer and FC are separated in space, instead of the regular joint training method of the two modules, a sequential deep learning training method to obtain the optimal \\(G_{\\Phi}\\) and \\(F_{\\Psi}\\) is proposed in the next subsection._
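A minimal PyTorch sketch of the two modules described above, under the paper's setting of \\(N=3\\) FCLs (and the layer widths \\(L_{G}=20\\) and \\(L_{F}=30\\) used later in Section V), might look as follows; class names are illustrative, and the forward pass only shows inference, since training operates on the loss functions (19) and (20) rather than on sampled bits.

```python
import torch
import torch.nn as nn

class ProbabilityController(nn.Module):
    """G_Phi: R -> [0, 1], N = 3 FCLs, ReLU hidden / Sigmoid output."""
    def __init__(self, width=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 1), nn.Sigmoid())

    def forward(self, x):            # x: (..., 1) local observations
        return self.net(x)

class Estimator(nn.Module):
    """F_Psi: [0, 1] -> [-1, 1], N = 3 FCLs, ReLU hidden / Tanh output."""
    def __init__(self, width=30):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 1), nn.Tanh())

    def forward(self, u_bar):        # u_bar: (..., 1) averaged bits
        return self.net(u_bar)

# Inference pass for one parameter observed by K sensors.
G, F = ProbabilityController(), Estimator()
K = 250
x = 0.3 + 0.1 * torch.randn(K, 1)            # noisy observations of theta = 0.3
u = (G(x) > torch.rand(K, 1)).float()        # dithered binary quantization (11)
theta_hat = F(u.mean(dim=0, keepdim=True))   # mean-fusion FC, Eq. (15)
print(theta_hat.item())
```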
### _Sequential training method_

For the sequential training of the probability controller DNN \\(G_{\\Phi}\\) in the quantizer and the estimator DNN \\(F_{\\Psi}\\) in the FC, we aim to first find the optimal parameter \\(\\Phi^{*}\\) that minimizes the loss function \\(\\mathcal{L}_{binary}^{K}(\\Phi)\\) in (19), and then find the optimal \\(\\Psi^{*}\\) that minimizes the loss function \\(\\mathcal{T}_{binary}^{K}(\\Phi^{*},\\Psi)\\) in (20), under two cases: the distributions of both the desired parameter and the observation noise are unknown, or only the noise distribution is known. Besides, the two loss functions are contingent upon the number of sensors \\(K\\). In consideration of a practical network where the number of sensors may vary dynamically, a predetermined number of sensors is chosen during the training phase, while the efficacy of the trained model is assessed with varying sensor quantities during the testing phase.

#### III-D1 Training with unknown parameter and noise distributions

The training process is based on the data set \\(D_{1}=\\{\\theta_{t},\\mathbf{x}_{t}\\}_{t=1}^{T}\\), where \\(\\theta_{t}\\) is the \\(t\\)-th sample of the desired parameter \\(\\theta\\) to be estimated, \\(\\mathbf{x}_{t}=\\{x_{t,1},\\cdots,x_{t,Q}\\}\\) contains \\(Q\\) noise-corrupted observation samples of \\(\\theta_{t}\\), and \\(T\\) denotes the total number of training samples. The desired parameter \\(\\theta_{t}\\) is obtained from the experimental environment, and the observations \\(x_{t,1},\\cdots,x_{t,Q}\\) are obtained from the sensor by periodically observing \\(\\theta_{t}\\) in the same environment. At each epoch, based on the mini-batch method [31], the whole data set is divided into \\(T/B\\) batches, where \\(B\\) is the number of batch samples. Here, we consider the case that \\(T/B\\) is an integer, without loss of generality; otherwise, a simple approach is to randomly select additional samples from the existing data set and append them to form a new data set that satisfies the requirement. The parameter \\(\\Phi\\) is updated \\(T/B\\) times within an epoch, each time with a new batch. At each update, the loss function in (19) is approximated over the batch samples as

\\[\\hat{\\mathcal{L}}_{1}=\\sum_{t=1}^{B}(\\theta_{t})^{2}-\\sum_{k=0}^{K_{S}}C_{K_{S}}^{k}\\frac{\\left(\\sum_{t=1}^{B}\\theta_{t}\\left(\\gamma_{\\Phi}^{t}\\right)^{k}\\left(1-\\gamma_{\\Phi}^{t}\\right)^{K_{S}-k}\\right)^{2}}{\\sum_{t=1}^{B}\\left(\\gamma_{\\Phi}^{t}\\right)^{k}\\left(1-\\gamma_{\\Phi}^{t}\\right)^{K_{S}-k}}, \\tag{21}\\]

where \\(K_{S}\\) is the predetermined sensor quantity parameter in the training and

\\[\\gamma_{\\Phi}^{t}=\\frac{1}{Q}\\sum_{q=1}^{Q}G_{\\Phi}(x_{t,q}) \\tag{22}\\]

is the empirical approximation of \\(\\gamma_{\\Phi}(\\theta_{t})=\\mathbb{E}_{X}[G_{\\Phi}(X)|\\theta_{t}]\\) over the data set \\(\\{\\theta_{t},\\mathbf{x}_{t}\\}\\). By using the back propagation algorithm [31], the gradient \\(\\nabla_{\\Phi}\\hat{\\mathcal{L}}_{1}\\) is calculated based on (21), and the parameter \\(\\Phi\\) is updated epoch by epoch. After the maximum number of training epochs is reached, the optimal parameter is obtained as \\(\\Phi^{*}\\).
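For illustration, a direct PyTorch transcription of the batch loss (21) with the empirical probability (22) is sketched below; a small \\(K_{S}\\) keeps the binomial terms in a numerically comfortable range, and a log-domain implementation would be preferable for large \\(K_{S}\\). All names are illustrative.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(1, 20), nn.ReLU(),
                  nn.Linear(20, 20), nn.ReLU(),
                  nn.Linear(20, 1), nn.Sigmoid())   # G_Phi, as in Fig. 4(a)

def quantizer_loss(theta, x, K_S, eps=1e-12):
    """Batch approximation of the loss in Eq. (21), gamma from Eq. (22)."""
    B, Q = x.shape
    gamma = G(x.reshape(-1, 1)).reshape(B, Q).mean(dim=1)      # Eq. (22)
    k = torch.arange(K_S + 1, dtype=gamma.dtype)
    log_c = (torch.lgamma(torch.tensor(K_S + 1.0))             # log C(K_S, k)
             - torch.lgamma(k + 1.0) - torch.lgamma(K_S - k + 1.0))
    g = gamma.clamp(eps, 1.0 - eps)
    w = (g.log()[:, None] * k + (1.0 - g).log()[:, None] * (K_S - k)).exp()
    num = (theta @ w) ** 2                   # squared batch sums, (K_S+1,)
    den = w.sum(dim=0).clamp_min(eps)
    return (theta ** 2).sum() - (log_c.exp() * num / den).sum()

theta = torch.rand(64) * 2.0 - 1.0               # theta ~ U[-1, 1]
x = theta[:, None] + 0.1 * torch.randn(64, 16)   # Q = 16 observations each
loss = quantizer_loss(theta, x, K_S=10)
loss.backward()                                  # gradient w.r.t. Phi
```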
Once the optimal probability controller \\(G_{\\Phi^{*}}\\) in the quantizer is obtained, it is utilized to sequentially train the estimator DNN \\(F_{\\Psi}\\) in the FC. The training of \\(F_{\\Psi}\\) uses the same mini-batch method based on the data set \\(D_{1}\\). At each update, the loss function in (20) is approximated over the batch samples as

\\[\\hat{\\mathcal{T}}_{1}=\\sum_{t=1}^{B}\\sum_{k=0}^{K_{F}}\\left(\\theta_{t}-F_{\\Psi}\\left(\\frac{k}{K_{F}}\\right)\\right)^{2}C_{K_{F}}^{k}\\left(\\gamma_{t}^{*}\\right)^{k}\\left(1-\\gamma_{t}^{*}\\right)^{K_{F}-k}, \\tag{23}\\]

where \\(K_{F}\\) is the predetermined sensor quantity parameter in the training and

\\[\\gamma_{t}^{*}=\\frac{1}{Q}\\sum_{q=1}^{Q}G_{\\Phi^{*}}(x_{t,q}). \\tag{24}\\]

Based on (23), the parameter \\(\\Psi\\) is updated epoch by epoch using the back propagation algorithm [31], and the optimal \\(\\Psi^{*}\\) is obtained once the maximum number of training epochs is reached.

#### III-D2 Training with known noise distribution

If statistical information about the distribution of the observation noise is available, then \\(f_{X}(\\cdot|\\theta)\\) is known and only the data set \\(D_{2}=\\{\\theta_{t}\\}_{t=1}^{T}\\) is needed for the sequential training. Considering the case that the sensor's observation range is restricted to \\([-W,W]\\), we define an artificial observation set \\(O=\\{-W,-\\frac{Q-1}{Q}W,\\cdots,0,\\frac{1}{Q}W,\\cdots,W\\}\\) to cover the bounded observations. Based on the information of the distribution \\(f_{X}(\\cdot|\\theta)\\), the loss function \\(\\mathcal{L}_{binary}^{K}(\\Phi)\\) in (19) is approximated over the batch samples as

\\[\\hat{\\mathcal{L}}_{2}=\\sum_{t=1}^{B}(\\theta_{t})^{2}-\\sum_{k=0}^{K_{S}}C_{K_{S}}^{k}\\frac{\\left(\\sum_{t=1}^{B}\\theta_{t}\\left(\\gamma_{\\Phi}^{t}\\right)^{k}\\left(1-\\gamma_{\\Phi}^{t}\\right)^{K_{S}-k}\\right)^{2}}{\\sum_{t=1}^{B}\\left(\\gamma_{\\Phi}^{t}\\right)^{k}\\left(1-\\gamma_{\\Phi}^{t}\\right)^{K_{S}-k}}, \\tag{25}\\]

where \\(\\gamma_{\\Phi}^{t}=\\sum_{x\\in O}G_{\\Phi}(x)f_{X}(x|\\theta_{t})\\). By using the back propagation algorithm, the parameter \\(\\Phi\\) is updated epoch by epoch based on (25), and the optimal \\(\\Phi^{*}\\) is obtained after the maximum number of training epochs is reached. With the optimal probability controller \\(G_{\\Phi^{*}}\\), the estimator DNN \\(F_{\\Psi}\\) in the FC is then sequentially trained. The training of the FC utilizes the same mini-batch method based on the sets \\(D_{2}\\) and \\(O\\). At each update, the loss function in (20) is approximated over the batch samples as

\\[\\hat{\\mathcal{T}}_{2}=\\sum_{t=1}^{B}\\sum_{k=0}^{K_{F}}\\left(\\theta_{t}-F_{\\Psi}\\left(\\frac{k}{K_{F}}\\right)\\right)^{2}C_{K_{F}}^{k}\\left(\\gamma_{t}^{*}\\right)^{k}\\left(1-\\gamma_{t}^{*}\\right)^{K_{F}-k}, \\tag{26}\\]

where \\(\\gamma_{t}^{*}=\\sum_{x\\in O}G_{\\Phi^{*}}(x)f_{X}(x|\\theta_{t})\\). Based on (26), the optimal parameter \\(\\Psi^{*}\\) is obtained by updating \\(\\Psi\\) epoch by epoch using the back propagation algorithm until the maximum number of training epochs is reached.

Finally, the performance of the proposed sequential training scheme is compared with a canonical alternating training scheme, in which a total of 500 epochs is divided into 10 rounds and the quantizer and FC DNNs are trained alternately in each round. Fig. 5 plots the MSE loss of both the quantizer and the FC versus the training epochs. Notably, the proposed sequential training scheme demonstrates superior and smoother convergence during the training phase.

Fig. 5: Convergence of the quantizer and FC training.
Additionally, the final MSE loss achieved by the sequential training scheme is smaller than that of the alternating training approach. This strongly supports the effectiveness of the proposed sequential training method.

## IV Multi-bit Probabilistic Quantization

In this section, we relax the one-bit quantization constraint at the sensors and address the optimal design of the multi-bit probabilistic quantizer and FC. We consider two different joint model and data driven multi-bit quantization schemes, corresponding to parallel and one-hot implementations of the quantization of all bits in a multi-bit quantized message. Owing to the similarity of the optimization of the multi-bit quantizer and FC designs to that under the binary quantization constraint discussed in Section III, we omit the details of the deep learning training process for the quantizer and FC in this section.

### _Parallel quantization_

The joint model and data driven multi-bit parallel quantizer and FC modules are shown in Fig. 6:

* Quantizer: As shown in Fig. 6(a), we consider an \\(M\\)-bit parallel quantizer module where the quantization is implemented by \\(M\\) parallel binary quantizers (BQs). Each BQ adopts the identical binary quantizer design shown in Fig. 2. Taking the \\(m\\)-th BQ as an example, it maps the input observation \\(X\\) into a binary output \\(u_{m}\\in\\{0,1\\}\\) following the distribution \\(p(u_{m}=1|X)=G_{\\phi_{m}}(X)\\) and \\(p(u_{m}=0|X)=1-G_{\\phi_{m}}(X)\\), where \\(G_{\\phi_{m}}\\) is the probability controller DNN in the \\(m\\)-th BQ with design parameter \\(\\phi_{m}\\). Thus, the \\(M\\) BQs map the observation into an \\(M\\)-bit quantized message \\(\\mathbf{u}=[u_{1},u_{2},\\cdots,u_{M}]^{T}\\in\\{0,1\\}^{M}\\) (a short code sketch of this scheme is given after this list).
* FC: As shown in Fig. 6(b), we consider the scenario that all \\(K\\) sensors adopt the identical \\(M\\)-bit parallel quantizer with design parameters \\(\\Phi=\\{\\phi_{1},\\cdots,\\phi_{M}\\}\\). The FC receives the \\(K\\) quantized messages \\(\\mathbf{U}=[\\mathbf{u}_{1},\\cdots,\\mathbf{u}_{K}]^{T}\\) from the \\(K\\) sensors and uses the mean-vector-fusion operation to obtain \\(\\bar{\\mathbf{u}}=\\frac{1}{K}\\sum_{k=1}^{K}\\mathbf{u}_{k}\\). The desired parameter \\(\\theta\\) is estimated as \\[\\hat{\\theta}=F_{\\Psi}(\\bar{\\mathbf{u}}),\\] (27) where \\(F_{\\Psi}(\\cdot):[0,1]^{M}\\rightarrow\\mathbb{R}\\) is the estimator DNN with design parameters \\(\\Psi\\).
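A minimal sketch of the \\(M\\)-bit parallel scheme, reusing the dithering construction of (11) for each bit, is given below; the module widths and names are illustrative.

```python
import torch
import torch.nn as nn

M, K = 2, 50   # bits per sensor, number of sensors

# One probability controller G_{phi_m} per bit (illustrative widths).
controllers = nn.ModuleList([
    nn.Sequential(nn.Linear(1, 20), nn.ReLU(),
                  nn.Linear(20, 1), nn.Sigmoid())
    for _ in range(M)])

def parallel_quantize(x):
    """Map observations x (K, 1) to an M-bit message per sensor (K, M)."""
    probs = torch.cat([G(x) for G in controllers], dim=1)  # (K, M)
    return (probs > torch.rand(K, M)).float()              # dithered bits

x = 0.2 + 0.1 * torch.randn(K, 1)   # noisy observations of theta = 0.2
U = parallel_quantize(x)            # quantized messages from the K sensors
u_bar = U.mean(dim=0)               # mean-vector fusion at the FC, (M,)
# u_bar is then fed to an estimator DNN F_Psi: [0,1]^M -> R (omitted here).
print(u_bar)
```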
Similar to Proposition III.1, the following proposition shows that if an identical multi-bit parallel quantizer is deployed at all sensors, then adopting the mean-vector-fusion operation on the quantized data causes no performance degradation for the estimation of the desired parameter.

**Proposition IV.1**: _If all sensors adopt the identical \\(M\\)-bit parallel quantizer with design parameter \\(\\Phi=\\{\\phi_{1},\\cdots,\\phi_{M}\\}\\), estimation of \\(\\theta\\) by using the original quantized data matrix \\(\\mathbf{U}\\) or only the mean vector \\(\\bar{\\mathbf{u}}\\) can achieve the identical MSE lower bound, i.e.,_

\\[\\mathbb{E}[|\\theta-\\hat{\\theta}(\\mathbf{U})|^{2}]\\geq\\mathcal{L}_{\\text{parallel}}^{M,K}(\\Phi), \\tag{28}\\]
\\[\\mathbb{E}[|\\theta-\\hat{\\theta}(\\bar{\\mathbf{u}})|^{2}]\\geq\\mathcal{L}_{\\text{parallel}}^{M,K}(\\Phi), \\tag{29}\\]

_where_

\\[\\mathcal{L}_{\\text{parallel}}^{M,K}(\\Phi)=\\mathbb{E}[\\theta^{2}]-\\sum_{i_{1}=0}^{K}\\cdots\\sum_{i_{M}=0}^{K}\\frac{\\mathbb{E}_{\\theta}^{2}\\left[\\theta p_{\\Phi}^{i_{1},\\cdots,i_{M}}(\\theta)\\right]}{\\mathbb{E}_{\\theta}\\left[p_{\\Phi}^{i_{1},\\cdots,i_{M}}(\\theta)\\right]} \\tag{30}\\]

_is the MSE lower bound,_

\\[\\begin{split} p_{\\Phi}^{i_{1},\\cdots,i_{M}}(\\theta)&=p\\left(\\bar{\\mathbf{u}}=\\frac{1}{K}\\left[i_{1},\\cdots,i_{M}\\right]^{T}\\Big{|}\\theta\\right)\\\\ &=\\prod_{m=1}^{M}C_{K}^{i_{m}}(\\gamma_{\\phi_{m}}(\\theta))^{i_{m}}(1-\\gamma_{\\phi_{m}}(\\theta))^{K-i_{m}},\\end{split} \\tag{31}\\]

_and_

\\[\\gamma_{\\phi_{m}}(\\theta)=p\\left([\\mathbf{u}]_{m}=1|\\theta\\right)=\\mathbb{E}_{X}[G_{\\phi_{m}}(X)|\\theta] \\tag{32}\\]

_denotes the conditional probability of the \\(m\\)-th bit of any quantized vector being "1" with given \\(\\theta\\). The equality in (28) holds if and only if \\(\\hat{\\theta}(\\mathbf{U})=\\mathbb{E}_{\\theta}\\left[\\theta p\\left(\\mathbf{U}|\\theta\\right)\\right]/\\mathbb{E}_{\\theta}\\left[p\\left(\\mathbf{U}|\\theta\\right)\\right]\\), and the equality in (29) holds if and only if \\(\\hat{\\theta}(\\bar{\\mathbf{u}})=\\mathbb{E}_{\\theta}\\left[\\theta p\\left(\\bar{\\mathbf{u}}|\\theta\\right)\\right]/\\mathbb{E}_{\\theta}\\left[p\\left(\\bar{\\mathbf{u}}|\\theta\\right)\\right]\\)._

Proof: See Appendix C for details.

By using (27) and (32) and following the analysis in (16), the estimation MSE for \\(\\theta\\) at the FC is computed as

\\[\\mathcal{T}_{\\text{parallel}}^{M,K}(\\Phi,\\Psi)=\\mathbb{E}_{\\theta,\\bar{\\mathbf{u}}}\\big{[}|\\theta-F_{\\Psi}\\left(\\bar{\\mathbf{u}}\\right)|^{2}\\big{]}=\\sum_{i_{1}=0}^{K}\\cdots\\sum_{i_{M}=0}^{K}\\mathbb{E}_{\\theta}\\left[\\left|\\theta-F_{\\Psi}\\left(\\frac{1}{K}[i_{1},\\cdots,i_{M}]^{T}\\right)\\right|^{2}p_{\\Phi}^{i_{1},\\cdots,i_{M}}(\\theta)\\right]. \\tag{33}\\]

From (30) and (33), the optimization problems for the multi-bit parallel quantizer and FC are derived as

\\[\\min_{\\Phi}\\ \\mathcal{L}_{\\text{parallel}}^{M,K}(\\Phi), \\tag{34}\\]
\\[\\min_{\\{\\Phi,\\Psi\\}}\\ \\mathcal{T}_{\\text{parallel}}^{M,K}(\\Phi,\\Psi). \\tag{35}\\]

Since the deep learning based training for \\(\\Phi\\) and \\(\\Psi\\) is similar to that for the binary quantization scenario discussed in Section III-D, we omit the details of the training process.

### _One-hot quantization_

The joint model and data driven multi-bit one-hot quantizer and FC modules are shown in Fig. 7:
* Quantizer: As shown in Fig. 7(a), we consider an \\(M\\)-bit one-hot quantization module where the probability distribution of the \\(M\\)-bit quantized data is controlled by a one-hot probability controller DNN \\(G_{\\Phi}(\\cdot):\\mathbb{R}\\rightarrow[0,1]^{2^{M}}\\) with softmax output and design parameter \\(\\Phi\\). The local observation \\(X\\) is first sent to \\(G_{\\Phi}\\) and mapped to a \\(2^{M}\\)-dimensional probability vector \\(\\mathbf{p}=[p_{0},\\cdots,p_{2^{M}-1}]^{T}=G_{\\Phi}(X)\\), where \\(0\\leq p_{m}\\leq 1\\) and \\(\\sum_{m=0}^{2^{M}-1}p_{m}=1\\). Then, the probability vector \\(\\mathbf{p}\\) is fed into the quantization function \\(Q(\\cdot):[0,1]^{2^{M}}\\rightarrow\\{0,1,\\cdots,2^{M}-1\\}\\) to generate a quantized value \\(u=Q(\\mathbf{p})\\). Similar to the quantization function in the binary quantizer, \\(Q(\\cdot)\\) in the \\(M\\)-bit one-hot quantizer ensures that the probability distribution \\(\\{p(u=m|X)\\}_{m=0}^{2^{M}-1}\\) is controlled by the probability vector \\(\\mathbf{p}\\), i.e., \\[p(u=m|X)=p(Q(\\mathbf{p})=m|X)=p_{m}=[G_{\\Phi}(X)]_{m},\\] (36) for \\(m=0,1,\\cdots,2^{M}-1\\), where \\([G_{\\Phi}(X)]_{m}\\) is the \\(m\\)-th entry of the vector \\(G_{\\Phi}(X)\\). Then, the quantized value \\(u\\) is transformed into the corresponding binary vector \\(\\mathbf{u}\\in\\{0,1\\}^{M}\\), which is transmitted to the FC (a short code sketch of this scheme is given after this list).
* FC: As shown in Fig. 7(b), we consider the scenario that all \\(K\\) sensors adopt the identical \\(M\\)-bit one-hot quantizer with design parameter \\(\\Phi\\). The FC receives the \\(K\\) quantized messages \\(\\mathbf{U}=[\\mathbf{u}_{1},\\cdots,\\mathbf{u}_{K}]^{T}\\) from the \\(K\\) sensors and transforms them into \\(K\\) one-hot vectors \\(\\mathbf{V}=[\\mathbf{v}_{1},\\cdots,\\mathbf{v}_{K}]^{T}\\). Based on the one-hot encoding scheme [31], each vector \\(\\mathbf{u}_{k}\\in\\{0,1\\}^{M}\\) is transformed into the \\(2^{M}\\)-dimensional vector \\(\\mathbf{v}_{k}=\\left\\{\\begin{array}{ll}[1,\\mathbf{0}_{2^{M}-1}^{T}]^{T},&(\\mathbf{u}_{k})_{10}=0,\\\\ \[\\mathbf{0}_{(\\mathbf{u}_{k})_{10}}^{T},1,\\mathbf{0}_{2^{M}-(\\mathbf{u}_{k})_{10}-1}^{T}]^{T},&(\\mathbf{u}_{k})_{10}\\geq 1,\\end{array}\\right.\\) where \\(\\mathbf{0}_{n}\\) is the \\(n\\)-dimensional zero vector and \\((\\mathbf{u}_{k})_{10}\\) is the decimal expression of \\(\\mathbf{u}_{k}\\). Then, the mean one-hot vector \\(\\tilde{\\mathbf{v}}=\\frac{1}{K}\\sum_{k=1}^{K}\\mathbf{v}_{k}\\) is utilized to estimate the desired parameter \\(\\theta\\) as \\(\\hat{\\theta}=F_{\\Psi}(\\tilde{\\mathbf{v}})\\), where \\(F_{\\Psi}(\\cdot):[0,1]^{2^{M}}\\rightarrow\\mathbb{R}\\) is the estimator DNN with design parameter \\(\\Psi\\).

Fig. 7: Multi-bit one-hot quantization and mean-one-hot-vector fusion.
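The one-hot scheme can be sketched analogously: the softmax controller outputs a distribution over the \\(2^{M}\\) messages, each sensor samples one message according to (36), and the FC averages the one-hot encodings. Sizes and names are again illustrative.

```python
import torch
import torch.nn as nn

M, K, L = 2, 50, 2 ** 2        # bits, sensors, L = 2^M quantization levels

# One-hot probability controller G_Phi with softmax output.
G = nn.Sequential(nn.Linear(1, 20), nn.ReLU(),
                  nn.Linear(20, L), nn.Softmax(dim=-1))

x = 0.2 + 0.1 * torch.randn(K, 1)        # noisy observations of theta = 0.2
p = G(x)                                 # (K, L) message probabilities, Eq. (36)
u = torch.multinomial(p, 1).squeeze(1)   # sampled message index per sensor
V = nn.functional.one_hot(u, L).float()  # (K, L) one-hot encodings v_k
v_bar = V.mean(dim=0)                    # mean-one-hot-vector fusion, (L,)
# v_bar is then fed to an estimator DNN F_Psi: [0,1]^{2^M} -> R (omitted here).
print(v_bar)
```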
The following proposition shows that if an identical multi-bit one-hot quantizer is deployed at all sensors, then adopting the mean-one-hot-vector-fusion operation on the quantized data causes no performance degradation for the estimation of the desired parameter.

**Proposition IV.2**: _If all sensors adopt the identical \\(M\\)-bit one-hot quantizer with design parameter \\(\\Phi\\), estimation of \\(\\theta\\) by using the original quantized data matrix \\(\\mathbf{U}\\) or only the mean one-hot vector \\(\\tilde{\\mathbf{v}}\\) can achieve the identical MSE lower bound, i.e.,_

\\[\\mathbb{E}[|\\theta-\\hat{\\theta}(\\mathbf{U})|^{2}]\\geq\\mathcal{L}_{\\text{onehot}}^{M,K}(\\Phi), \\tag{37}\\]
\\[\\mathbb{E}[|\\theta-\\hat{\\theta}(\\tilde{\\mathbf{v}})|^{2}]\\geq\\mathcal{L}_{\\text{onehot}}^{M,K}(\\Phi), \\tag{38}\\]

_where, with \\(L=2^{M}\\), the MSE lower bound is_

\\[\\mathcal{L}_{\\text{onehot}}^{M,K}(\\Phi)=\\mathbb{E}[\\theta^{2}]-\\sum_{i_{0}=0}^{K}\\sum_{i_{1}=0}^{K-i_{0}}\\cdots\\sum_{i_{L-2}=0}^{K-i_{0}-\\cdots-i_{L-3}}\\frac{\\mathbb{E}_{\\theta}^{2}\\left[\\theta\\,q_{\\Phi}^{i_{0},\\cdots,i_{L-1}}(\\theta)\\right]}{\\mathbb{E}_{\\theta}\\left[q_{\\Phi}^{i_{0},\\cdots,i_{L-1}}(\\theta)\\right]}, \\tag{39}\\]

_with_
Based on the pretest simulation, the maximum epoch number of 500 is deemed efficient enough to achieve the desired convergence of the model training under all scenarios. Besides, to ensure consistency throughout the simulation, the default values for both \\(K_{S}\\) and \\(K_{F}\\) are set to 250. However, it is important to note that certain figures explicitly indicate the usage of different values for \\(K_{S}\\) or \\(K_{F}\\). After the sequential training of \\(G_{\\Phi}\\) and \\(F_{\\Psi}\\), the entire trained system undergoes validation using an independent test set comprising 10,000 samples. The test set is generated separately from the training set, ensuring a rigorous evaluation of the system's performance and verifying that our model is not overfitted. The whole training uses the ADMM optimizer [31] with a constant learning rate \\(l_{r}=0.001\\). The implementation of the whole simulations is carried out using PyTorch 1.7.0 [31] on an NVIDIA 2080 Super Max-Q GPU. Under the aforementioned simulation environment, the training duration for the complete system is approximately 2 hours for the binary model, 4 hours for the 2-bit parallel models, and 10 hours for the 2-bit one-hot models. In the entire simulations, the DNN parameters are initialized by the widely adopted Kaiming Gaussian distribution method [32], which is the default setting in the Pytorch. Additionally, to ensure the reliability of the results, the simulation of each figure is repeated multiple times to verify their consistency. For comparisons, we implement the sine-quantization-maximum-likelihood-fusion (SQMLF) method [28], which was proved to be optimal for the estimation of a parameter with uniform distribution under the ideal noiseless scenario. In addition, we also examine the Posterior Cramer-Rao Lower Bound (PCRLB) [26], being the minimum MSE to be achieved by any unbiased estimation method, as the performance limit of the distributed estimation with binary quantization. ### _Asymptotically optimality and robustness of proposed algorithm_ First, we study the asymptotic optimality and robustness of the proposed method to the number of sensors. Fig. 8 plots Fig. 8: Probability controller \\(G_{\\Phi}\\) trained with different \\(K_{S}\\). Fig. 7: Multi-bit one-hot quantization and mean-one-hot-vector fusion. the binary probability controller \\(G_{\\Phi^{*}}\\) trained with different \\(K_{S}\\) under the noiseless observation scenario where local observations at sensors equal to the desired parameter. For the estimation of a parameter with uniform distribution under the noiseless scenario, the optimal probability controller function is proved to be \\(G_{\\text{sine}}(\\theta)=[1+\\sin(\\pi\\theta/2)]/2\\) used in the SQMLF method [26]. Therefore, the closer the trained \\(G_{\\Phi^{*}}\\) is to \\(G_{\\text{sine}}\\) under the noiseless scenario, the better quantitation performance and estimation performance are expected to be obtained. It is observed that with the increasing of \\(K_{S}\\), the trained \\(G_{\\Phi^{*}}\\) gradually approaches to the optimal \\(G_{\\text{sine}}\\), which implies the asymptotic optimality of the proposed method. Fig. 9 plots the estimation MSE of the proposed method as a function of the number of available sensors in the network, with different \\(K_{S}\\) being selected in the training of quantizer. 
It is observed that selecting a larger \\(K_{S}\\) in training improves the robustness of the proposed method to variations in the number of sensors in the practical network. For instance, when \\(K_{S}=10\\), although the estimation MSE is close to the PCRLB at the initial phase of the curve, it decreases more slowly and deviates further from the PCRLB as the number of sensors in the test increases. With \\(K_{S}\\geq 100\\), however, the MSE decreases linearly with the number of sensors and remains close to the PCRLB over the whole curve. This validates the robustness of the proposed FC design with the mean-fusion operation for accommodating various numbers of sensors in practice.

Fig. 9: Estimation MSE vs. the number of available sensors in the network.

To further evaluate the robustness of our method, we conducted additional simulations to assess its performance in scenarios with sudden sensor failures. We consider the situation where each sensor in the network randomly fails to transmit information to the FC with a probability denoted as \\(p_{f}\\). Fig. 10 illustrates the estimation MSE of our method under different values of \\(p_{f}\\). The simulation results reveal a consistent linear decrease in the estimation MSE of our proposed method across varying \\(p_{f}\\) values as the number of sensors increases. Notably, the MSE remains close to the PCRLB under all conditions. This result provides empirical evidence that our method is robust against sudden sensor failures and generalizes well to network variations.

Fig. 10: Robustness to the failure of sensors.

To investigate the robustness of our method to the neural network initialization, we conducted simulations of our proposed method with various initial setting parameters, including the number of layers \\(N\\) in the DNNs, the number of neurons \\(L_{G}\\) in each layer of \\(G_{\\Phi}\\), and the number of neurons \\(L_{F}\\) in each layer of \\(F_{\\Psi}\\). Fig. 11(a) plots the convergence results of the probability controller DNN \\(G_{\\Phi}\\) under different numbers of layers and neurons. It is observed that \\(G_{\\Phi}\\) converges to almost the same structure despite the variations in the DNN setting parameters. Furthermore, Fig. 11(b) portrays the estimation MSE of our method with different numbers of layers and neurons. Similarly, the estimation MSE curves of our method under diverse DNN setting parameters are almost the same, which verifies the robustness of our method to the neural network initialization. Besides, it is observed that the minimum DNN initial configuration yielding comparable results for our proposed method is \\(N=3\\), \\(L_{G}=10\\), \\(L_{F}=15\\). As depicted in Fig. 11, a slight increase in the MSE is observed when employing this minimum configuration compared with simulations using a larger number of neurons and layers. This observation implies that selecting numbers of neurons and layers smaller than the aforementioned minimum values would result in a more significant increase in the MSE of our method.

Fig. 11: Robustness to the neural network initialization.

### _Noise suppression ability_

In this section, we investigate the noise suppression ability and robustness of the proposed method with respect to the observation noise, considering both the stable noisy scenario, with the SNR constant during the training and test process, and the unstable scenario with fluctuating SNRs.
To investigate the robustness of our method to the neural network initialization, we conducted simulations of the proposed method with various initial settings, including the number of layers \(N\) in the DNNs, the number of neurons \(L_{G}\) in each layer of \(G_{\Phi}\), and the number of neurons \(L_{F}\) in each layer of \(F_{\Psi}\). Fig. 11(a) plots the convergence results of the probability controller DNN \(G_{\Phi}\) under different numbers of layers and neurons. It is observed that \(G_{\Phi}\) converges to almost the same structure despite the variations in the DNN setting parameters. Furthermore, Fig. 11(b) portrays the estimation MSE of our method with different numbers of layers and neurons. Similarly, the estimation MSE curves of our method under diverse DNN settings are almost identical, which verifies the robustness of our method to the neural network initialization. In addition, the minimum DNN configuration yielding comparable results for our proposed method is observed to be \(N=3\), \(L_{G}=10\), \(L_{F}=15\). As depicted in Fig. 11, a slight increase in the MSE is observed when employing this minimum configuration compared with simulations using larger numbers of neurons and layers. This observation implies that selecting numbers of neurons and layers smaller than these minimum values would result in a more significant increase in the MSE of our method.

Fig. 11: Robustness to the neural network initialization.

### _Noise suppression ability_

In this section, we investigate the noise suppression ability of the proposed method and its robustness to the observation noise, considering both the stable noisy scenario, in which the SNR is constant during the training and test processes, and the unstable scenario with fluctuating SNRs. Fig. 12 plots the noisy quantization probability of the proposed method, i.e., \(\gamma_{\Phi^{*}}(\theta)=\mathbb{E}_{X}[G_{\Phi^{*}}(X)|\theta]\) as defined in (8), under the stable noisy scenario. Similar to the noiseless scenario, the optimal \(\gamma_{\Phi^{*}}(\theta)\) should equal \(G_{\text{sine}}(\theta)\) [26]. In particular, if \(G_{\text{sine}}(X)\) is directly used as the probability controller at the sensor, its corresponding noisy quantization probability becomes \(\gamma_{\text{sine}}(\theta)=\mathbb{E}_{X}[G_{\text{sine}}(X)|\theta]\), which is no longer optimal for the noisy scenario. It is observed in Fig. 12 that under different SNRs, \(\gamma_{\Phi^{*}}(\theta)\) is closer to the optimal \(G_{\text{sine}}(\theta)\) than \(\gamma_{\text{sine}}(\theta)\). Moreover, \(\gamma_{\Phi^{*}}(\theta)\) is almost identical to \(G_{\text{sine}}(\theta)\) when the SNR is at least 4 dB, whereas \(\gamma_{\text{sine}}(\theta)\) does not become almost identical to \(G_{\text{sine}}(\theta)\) until the SNR reaches 16 dB. This validates the superiority of the proposed method in observation noise suppression.

Fig. 12: Noisy quantization probability of the proposed and SQMLF methods under different SNRs.
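The noisy quantization probability in (8) is easy to approximate by Monte Carlo integration. The sketch below is illustrative only: it assumes an additive Gaussian observation model \(X=\theta+n\) with \(\theta\sim\text{Uniform}[-1,1]\), sets the noise power from the SNR, and clips the input of \(G_{\text{sine}}\) to the parameter range. It reproduces the qualitative effect in Fig. 12: observation noise smooths \(\gamma_{\text{sine}}(\theta)\) away from the optimal \(G_{\text{sine}}(\theta)\) at low SNR.

```python
import numpy as np

rng = np.random.default_rng(1)

def g_sine(x):
    # Clipping to [-1, 1] is an assumption for handling out-of-range inputs.
    return 0.5 * (1.0 + np.sin(np.pi * np.clip(x, -1.0, 1.0) / 2.0))

def gamma_sine(theta, snr_db, n_mc=100_000):
    # gamma(theta) = E_X[G(X) | theta]; var(theta) = 1/3 for Uniform[-1, 1].
    noise_std = np.sqrt((1.0 / 3.0) * 10 ** (-snr_db / 10))
    x = theta + noise_std * rng.standard_normal(n_mc)
    return g_sine(x).mean()

theta = 0.5
for snr_db in (0, 4, 16):
    print(f"SNR={snr_db:2d} dB: gamma_sine={gamma_sine(theta, snr_db):.3f}, "
          f"optimal G_sine={g_sine(theta):.3f}")
```

At 16 dB the noise is weak and \(\gamma_{\text{sine}}\approx G_{\text{sine}}\); at low SNR the gap grows, which is the deviation the trained controller \(G_{\Phi^{*}}\) pre-compensates.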
Fig. 13(a) and Fig. 13(b) plot the estimation MSE of the proposed and SQMLF methods as a function of the number of sensors under the stable observation scenario. It is observed that although the SQMLF method performs nearly optimally in the ideal noiseless scenario, its performance degrades severely in the noisy scenario, and its estimation MSE decreases more slowly as the number of sensors increases. According to Fig. 12, this observation can be attributed to the fact that the deviation of the SQMLF method's noisy quantization probability from the optimal structure becomes more pronounced as the SNR decreases, consequently resulting in a greater performance degradation. In contrast, the estimation MSE of the proposed method under different SNRs remains close to the PCRLB and decreases linearly with respect to the number of sensors.

Fig. 13: Estimation MSE of the proposed and SQMLF methods under the stable observation scenario.

Next, we evaluate the robustness of the proposed method in the presence of unstable noisy observations with fluctuating SNRs. Fig. 14 plots the estimation MSE as a function of the SNR, with the number of sensors set to 250. The estimation MSE of the proposed method trained under different SNRs is tested and compared with the SQMLF method. It is observed that as the gap between the SNRs in test and training increases, the estimation MSE of the proposed method increases and exhibits a U-shaped curve with respect to the SNR. Consequently, in scenarios with very high SNRs, the performance of the proposed method trained under low SNRs is inferior to that of the SQMLF method. However, it is also observed that the proposed method trained under a higher SNR exhibits better robustness to the varying SNRs in the test. For instance, the proposed method trained under SNR = 12 dB outperforms the SQMLF method for all SNRs tested.

Fig. 14: Estimation MSE of the proposed method under the unstable observation scenario.

In practical systems, it is reasonable to account for the possibility of a sudden spike in sensor noise. We consider the scenario in which every sensor in the network randomly experiences a four-fold power spike in its observation noise, i.e., a 6 dB SNR degradation at the sensor, with a probability denoted as \(p_{s}\). Fig. 15 plots the estimation MSE of our method under different values of \(p_{s}\). It is observed that the estimation MSE of our proposed method remains close to the PCRLB under different numbers of sensors and values of \(p_{s}\). Notably, there is only a slight increase in the MSE when the network has a large number of sensors and a high spike probability such as \(p_{s}=0.3\). This result provides empirical evidence that our proposed method is robust against sudden spikes in sensor noise.

Fig. 15: Robustness to a sudden spike in sensor noise, with a four-fold power spike being considered.

### _Multi-bit quantization_

Finally, we investigate the performance of the proposed multi-bit parallel quantization and one-hot quantization schemes, selecting the multi-bit quantization dimension \(M=2\) for both models. Fig. 16 plots the 2-bit one-hot probability controller \(G_{\Phi^{*}}\) trained with different \(K_{S}\) under the noiseless scenario. It is observed that as \(K_{S}\) increases, the trained \(G_{\Phi^{*}}\) converges to a stable structure with a stable quantization probability distribution over the desired parameter \(\theta\).

Fig. 16: 2-bit one-hot probability controller \(G_{\Phi}\) trained with different \(K_{S}\).

Fig. 17 plots the estimation MSE of the 2-bit one-hot quantization scheme as a function of the total number of quantization bits from all sensors. It is observed that as \(K_{S}\) increases, the performance of the 2-bit one-hot quantization becomes asymptotically convergent and optimal, outperforming the performance limit of any binary-quantization-based estimation scheme. This result verifies the asymptotic optimality of the proposed method.

Fig. 17: Estimation MSE of the 2-bit one-hot quantization vs. the total number of quantization bits, with \(K_{F}=50\).

Fig. 18 plots the estimation MSE of both the 2-bit one-hot quantization and 2-bit parallel quantization schemes as a function of the total number of quantization bits from all sensors. The PCRLB of binary quantization with respect to the number of quantization bits is plotted as a comparison. It is observed that the estimation performance of both the one-hot and parallel quantization schemes outperforms the performance limit of any binary quantization scheme. Moreover, the one-hot quantization scheme exhibits better performance than the parallel one. This can be explained by the inclusion relationship between the two schemes: the set of quantization probability distributions realizable by the parallel quantizer is a subset of those realizable by the one-hot quantizer. Therefore, the performance ceiling of the one-hot quantization scheme is at least as good as that of the parallel scheme.

Fig. 18: Estimation MSE of the 2-bit one-hot quantization and 2-bit parallel quantization models, with \(K_{S}=K_{F}=50\).
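The inclusion relationship can be made concrete with a short sketch (\(M=2\), \(L=4\), illustrative probabilities). A parallel quantizer draws \(M\) independent bits, so its symbol distribution is always a product of per-bit Bernoullis, whereas a one-hot quantizer draws a single categorical symbol and can realize any distribution over the \(L\) symbols; mean-one-hot-vector fusion then reduces to an empirical symbol histogram at the FC.

```python
import numpy as np

rng = np.random.default_rng(2)
K, M = 250, 2
L = 2 ** M

# Parallel quantization: M independent bits per sensor (illustrative p_m).
p_bits = np.array([0.7, 0.2])
U = rng.random((K, M)) < p_bits              # K x M quantized data matrix

# One-hot quantization: one categorical symbol per sensor. The distribution
# below is not a product distribution, so no parallel quantizer realizes it.
q = np.array([0.5, 0.1, 0.1, 0.3])
symbols = rng.choice(L, size=K, p=q)
V = np.eye(L)[symbols]                       # K x L one-hot matrix

# Mean-one-hot-vector fusion: v_bar is the empirical symbol histogram,
# which the FC maps to an estimate of theta.
v_bar = V.mean(axis=0)
print("parallel per-bit means:", U.mean(axis=0))
print("fused one-hot vector v_bar:", v_bar, "(close to q)")
```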
## VI Conclusion

In this paper, we propose a joint model and data driven method for the quantization-based distributed estimation problem. First, for sensors with binary quantization and conditionally i.i.d. observations, the MSE lower bound for the distributed estimation is derived, and a binary probabilistic quantizer is designed to minimize this lower bound. The optimality of the mean-fusion operation at the FC for estimation under the MMSE criterion is proved, and a corresponding FC design is proposed. Considering the practical scenarios in which the distributions of both the desired parameter and the observation noise are unknown, or only the noise distribution is known, a joint model and data driven method is proposed to train the probability controller in the quantizer and the estimator in the FC as DNNs. By relaxing the binary quantization constraint at the sensors, the results are extended to the cases of multi-bit parallel quantization and one-hot quantization. Simulation results reveal that the proposed method outperforms state-of-the-art schemes under certain conditions.

## Appendix A Proof of Proposition III.1

The FC utilizes the quantized data \(\mathbf{u}=[u_{1},\cdots,u_{K}]^{T}\) from all sensors to estimate the desired parameter \(\theta\) as \(\hat{\theta}(\mathbf{u})\), and the estimator \(\hat{\theta}(\cdot)\) determines the estimation MSE for \(\theta\). According to [21], the MMSE estimator of \(\theta\) given \(\mathbf{u}\) is

\[\hat{\theta}_{\text{MMSE}}(\mathbf{u})=\mathbb{E}[\theta|\mathbf{u}]=\frac{\mathbb{E}_{\theta}[\theta p(\mathbf{u}|\theta)]}{\mathbb{E}_{\theta}[p(\mathbf{u}|\theta)]}, \tag{44}\]

and the achievable MSE in estimating \(\theta\) using \(\mathbf{u}\) is lower bounded by

\[\mathbb{E}[|\theta-\hat{\theta}(\mathbf{u})|^{2}]\geq\mathbb{E}_{\theta,\mathbf{u}}[|\theta-\hat{\theta}_{\text{MMSE}}(\mathbf{u})|^{2}]=\mathbb{E}_{\theta}[\theta^{2}]-\sum_{\mathbf{u}\in\mathcal{U}}\frac{\mathbb{E}_{\theta}^{2}[\theta p(\mathbf{u}|\theta)]}{\mathbb{E}_{\theta}[p(\mathbf{u}|\theta)]}, \tag{45}\]

where \(\mathcal{U}=\{0,1\}^{K}\) is the set of all possible results for \(\mathbf{u}\). When the quantized data from all sensors are conditionally i.i.d. given \(\theta\), the above results can be further simplified. Define \(\{\mathcal{U}_{k}\}_{k=0}^{K}\) as a sequence of non-overlapping subsets of \(\mathcal{U}\), with \(\mathcal{U}_{k}=\{\mathbf{u}\in\mathcal{U}|\frac{1}{K}\sum_{i=1}^{K}u_{i}=k/K\}\). It is easy to see that \(\mathcal{U}=\cup_{k=0}^{K}\mathcal{U}_{k}\) and \(|\mathcal{U}_{k}|=C_{K}^{k}\). Moreover, every \(\mathbf{u}\) belonging to the set \(\mathcal{U}_{k}\) has the identical conditional probability given \(\theta\), i.e.,

\[p(\mathbf{u}|\theta)=(\gamma(\theta))^{k}(1-\gamma(\theta))^{K-k},\quad\forall\mathbf{u}\in\mathcal{U}_{k}, \tag{46}\]

where \(\gamma(\theta)=p(u=1|\theta)\) denotes the conditional probability of any quantized bit being '1' given \(\theta\). By substituting (46) into (45) and utilizing \(|\mathcal{U}_{k}|=C_{K}^{k}\), the MSE lower bound in (45) is rewritten as

\[\mathbb{E}[|\theta-\hat{\theta}(\mathbf{u})|^{2}]\geq\mathbb{E}_{\theta}[\theta^{2}]-\sum_{k=0}^{K}\sum_{\mathbf{u}\in\mathcal{U}_{k}}\frac{\mathbb{E}_{\theta}^{2}[\theta p(\mathbf{u}|\theta)]}{\mathbb{E}_{\theta}[p(\mathbf{u}|\theta)]}=\mathbb{E}_{\theta}[\theta^{2}]-\sum_{k=0}^{K}C_{K}^{k}\frac{\mathbb{E}_{\theta}^{2}[\theta(\gamma(\theta))^{k}(1-\gamma(\theta))^{K-k}]}{\mathbb{E}_{\theta}[(\gamma(\theta))^{k}(1-\gamma(\theta))^{K-k}]}. \tag{47}\]

This completes the proof.
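As a numerical illustration, the bound (47) can be evaluated directly once a prior for \(\theta\) and a quantization probability \(\gamma(\theta)\) are fixed. The sketch below assumes \(\theta\sim\text{Uniform}[-1,1]\) and the noiseless controller \(\gamma=G_{\text{sine}}\); the grid integration and the choice \(K=50\) are implementation conveniences, not part of the proof.

```python
import numpy as np
from scipy.special import comb

theta = np.linspace(-1.0, 1.0, 20001)
d_theta = theta[1] - theta[0]
prior = np.full_like(theta, 0.5)                 # Uniform[-1, 1] density
gamma = 0.5 * (1.0 + np.sin(np.pi * theta / 2.0))

def prior_mean(f):
    # E_theta[f(theta)] via a simple Riemann sum over the grid.
    return np.sum(f * prior) * d_theta

K = 50
bound = prior_mean(theta ** 2)
for k in range(K + 1):
    w = gamma ** k * (1.0 - gamma) ** (K - k)    # p(u in U_k | theta) / C_K^k
    bound -= comb(K, k) * prior_mean(theta * w) ** 2 / prior_mean(w)
print(f"MSE lower bound (47) for K={K}: {bound:.3e}")
```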
## Appendix B Proof of Proposition III.2

According to [26], the Fisher information of \(\mathbf{u}\) given \(\theta\) is derived as

\[I_{1}(\theta)=\mathbb{E}_{\mathbf{u}}\left[-\frac{\partial^{2}\ln p(\mathbf{u}\mid\theta)}{\partial\theta^{2}}\right]=K\frac{(\gamma^{\prime}(\theta))^{2}}{\gamma(\theta)(1-\gamma(\theta))}, \tag{48}\]

where \(\gamma(\theta)\) is defined in (8). Based on (46), the conditional probability distribution of the average \(\bar{u}\) of all quantized data given \(\theta\) is computed as

\[p\left(\bar{u}=\frac{k}{K}\Big{|}\theta\right)=p(\mathbf{u}\in\mathcal{U}_{k}|\theta)=C_{K}^{k}(\gamma(\theta))^{k}(1-\gamma(\theta))^{K-k}, \tag{49}\]

for \(k=0,1,\cdots,K\). Then, by using (49), the Fisher information of \(\bar{u}\) given \(\theta\) is derived as (50). Similar to (44), the MMSE estimator of \(\theta\) using \(\bar{u}\) is obtained as \(\hat{\theta}_{\text{MMSE}}(\bar{u})=\mathbb{E}_{\theta}[\theta p(\bar{u}|\theta)]/\mathbb{E}_{\theta}[p(\bar{u}|\theta)]\). The achievable MSE lower bound for the estimation of \(\theta\) with \(\bar{u}\) is derived as (51), and is identical to the achievable MSE lower bound for estimating \(\theta\) with \(\mathbf{u}\) as derived in (7). (50) and (51) complete the proof.
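The equality between (48) and (50), i.e., that averaging the bits loses no Fisher information, can also be checked numerically. The sketch below uses illustrative values and finite-difference derivatives to compare the closed form \(K(\gamma^{\prime}(\theta))^{2}/[\gamma(\theta)(1-\gamma(\theta))]\) with the Fisher information of the binomially distributed average \(\bar{u}\).

```python
import numpy as np
from scipy.stats import binom

def gamma(theta):
    # Illustrative choice: the noiseless controller G_sine.
    return 0.5 * (1.0 + np.sin(np.pi * theta / 2.0))

theta0, K, h = 0.3, 100, 1e-5
g = gamma(theta0)
g_prime = (gamma(theta0 + h) - gamma(theta0 - h)) / (2 * h)
closed_form = K * g_prime ** 2 / (g * (1.0 - g))           # Eq. (48)

# Fisher information of u_bar: E[(d/dtheta log p(u_bar | theta))^2], computed
# exactly by summing over the K + 1 possible values of the binomial count.
k = np.arange(K + 1)
score = (binom.logpmf(k, K, gamma(theta0 + h))
         - binom.logpmf(k, K, gamma(theta0 - h))) / (2 * h)
fisher_u_bar = np.sum(binom.pmf(k, K, g) * score ** 2)     # Eq. (50)

print(f"closed form (48): {closed_form:.4f}, numerical (50): {fisher_u_bar:.4f}")
```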
## Appendix C Proof of Proposition IV.1

According to Proposition II.1, when all sensors adopt the identical \(M\)-bit parallel quantizer with design parameter \(\Phi\), the quantized data vectors \(\mathbf{u}_{1},\mathbf{u}_{2},\cdots,\mathbf{u}_{K}\in\{0,1\}^{M}\) from all \(K\) sensors are conditionally i.i.d., i.e.,

\[p(\mathbf{u}_{i}=\mathbf{u}|\theta)=p(\mathbf{u}_{j}=\mathbf{u}|\theta), \tag{52}\]

\[p(\mathbf{u}_{1},\cdots,\mathbf{u}_{K}|\theta)=\prod_{k=1}^{K}p(\mathbf{u}_{k}|\theta), \tag{53}\]

\(\forall i,j\in\{1,\cdots,K\},\forall\mathbf{u}\in\{0,1\}^{M}\). Then, it can be inferred from the parallel quantizer configuration in Fig. 6 that all entries of a quantized vector are conditionally independent given \(\theta\), i.e.,

\[p(\mathbf{u}_{k}|\theta)=\prod_{m=1}^{M}p([\mathbf{u}_{k}]_{m}|\theta), \tag{54}\]

for \(k=1,2,\cdots,K\). Based on (52), (53) and (54), it can be concluded that all entries in the quantized data matrix \(\mathbf{U}=[\mathbf{u}_{1}^{T},\cdots,\mathbf{u}_{K}^{T}]^{T}\) are conditionally independent given \(\theta\), i.e.,

\[p(\mathbf{U}|\theta)=\prod_{k=1}^{K}p(\mathbf{u}_{k}|\theta)=\prod_{k=1}^{K}\prod_{m=1}^{M}p([\mathbf{u}_{k}]_{m}|\theta). \tag{55}\]

Define \(\mathcal{U}=\{0,1\}^{K\times M}\) as the set of all possible results for \(\mathbf{U}\), along with a sequence of its non-overlapping subsets \(\mathcal{U}_{i_{1},\cdots,i_{M}}=\{\mathbf{U}\in\mathcal{U}|\frac{1}{K}\sum_{i=1}^{K}[\mathbf{U}]_{i,m}=\frac{1}{K}\sum_{i=1}^{K}[\mathbf{u}_{i}]_{m}=\frac{i_{m}}{K}\ \text{for}\ m=1,\cdots,M\}\) for \(i_{1},\cdots,i_{M}\in\{0,\cdots,K\}\). It is easy to see that \(\mathcal{U}=\cup_{i_{1}=0}^{K}\cdots\cup_{i_{M}=0}^{K}\mathcal{U}_{i_{1},\cdots,i_{M}}\) and \(|\mathcal{U}_{i_{1},\cdots,i_{M}}|=\prod_{m=1}^{M}C_{K}^{i_{m}}\). From (55), we have that if \(\mathbf{U}\in\mathcal{U}_{i_{1},\cdots,i_{M}}\), then

\[p(\mathbf{U}|\theta)=\prod_{m=1}^{M}(\gamma_{\phi_{m}}(\theta))^{i_{m}}(1-\gamma_{\phi_{m}}(\theta))^{K-i_{m}}, \tag{56}\]

where \(\gamma_{\phi_{m}}(\theta)=p_{\phi_{m}}([\mathbf{u}_{k}]_{m}=1|\theta)\), \(k=1,\cdots,K\). Define \(\bar{\mathbf{u}}=\frac{1}{K}\sum_{k=1}^{K}\mathbf{u}_{k}\in\{0,1/K,\cdots,1\}^{M}\) as the mean vector of the quantized data matrix. Based on (55) and (56), the conditional probability distribution of \(\bar{\mathbf{u}}\) given \(\theta\) is derived as (57), \(\forall\ i_{1},\cdots,i_{M}\in\{0,1,\cdots,K\}\). By utilizing (56) and \(|\mathcal{U}_{i_{1},\cdots,i_{M}}|=\prod_{m=1}^{M}C_{K}^{i_{m}}\), the achievable MSE lower bound for estimating \(\theta\) using \(\mathbf{U}\) is derived as (58), where \(\hat{\theta}_{\text{MMSE}}(\mathbf{U})=\mathbb{E}_{\theta}[\theta p(\mathbf{U}|\theta)]/\mathbb{E}_{\theta}[p(\mathbf{U}|\theta)]\) is the MMSE estimator for the estimation of \(\theta\) with \(\mathbf{U}\). From (57), the achievable MSE lower bound for estimating \(\theta\) with \(\bar{\mathbf{u}}\) is derived as (59), where \(\hat{\theta}_{\text{MMSE}}(\bar{\mathbf{u}})=\mathbb{E}_{\theta}[\theta p(\bar{\mathbf{u}}|\theta)]/\mathbb{E}_{\theta}[p(\bar{\mathbf{u}}|\theta)]\) is the corresponding MMSE estimator. By substituting (57) into (58) and (59), we complete the proof.

## Appendix D Proof of Proposition IV.2

Since the one-hot representation of binary information is invertible, estimating the desired parameter \(\theta\) with either the original quantized matrix \(\mathbf{U}\) or its one-hot representation \(\mathbf{V}\) achieves the identical estimation MSE, i.e.,

\[\mathbb{E}[|\theta-\hat{\theta}(\mathbf{U})|^{2}]=\mathbb{E}[|\theta-\hat{\theta}(\mathbf{V})|^{2}]. \tag{60}\]

Similar to the analysis in (52) and (53), the quantized vectors \(\mathbf{u}_{1},\cdots,\mathbf{u}_{K}\) are conditionally i.i.d. given \(\theta\), and so are their one-hot representations \(\mathbf{v}_{1},\cdots,\mathbf{v}_{K}\). Define

\[\mathcal{V}=\left\{\mathbf{V}\in\{0,1\}^{K\times L}\left|\sum_{l=0}^{L-1}[\mathbf{V}]_{k,l}=1,\forall\ k\in[K]\right.\right\}\]

as the set of all possible results for \(\mathbf{V}\), with \(L=2^{M}\). Define a sequence of non-overlapping subsets of \(\mathcal{V}\) as

\[\mathcal{V}_{i_{0},\cdots,i_{L-1}}=\left\{\mathbf{V}\in\mathcal{V}\left|\sum_{k=1}^{K}[\mathbf{V}]_{k,l}=i_{l}\ \text{for}\ l=0,\cdots,L-1\right.\right\},\]

for all non-negative integers \(i_{0},\cdots,i_{L-1}\) satisfying \(\sum_{l=0}^{L-1}i_{l}=K\). It is easy to see that \(\mathcal{V}=\cup_{i_{0},\cdots,i_{L-1}}\mathcal{V}_{i_{0},\cdots,i_{L-1}}\) and \(|\mathcal{V}_{i_{0},\cdots,i_{L-1}}|=\prod_{l=0}^{L-1}C_{K-i_{0}-\cdots-i_{l-1}}^{i_{l}}\).
The conditional probability distribution of \(\mathbf{V}\) given \(\theta\) is derived as

\[p(\mathbf{V}|\theta)=\prod_{l=0}^{L-1}(\gamma_{\Phi,l}(\theta))^{i_{l}}, \tag{61}\]

\(\forall\mathbf{V}\in\mathcal{V}_{i_{0},\cdots,i_{L-1}}\), where \(\gamma_{\Phi,l}(\theta)=\mathbb{E}_{X}[[G_{\Phi}(X)]_{l}|\theta]\). Define \(\bar{\mathbf{v}}=\frac{1}{K}\sum_{k=1}^{K}\mathbf{v}_{k}\) as the mean-one-hot-vector of \(\mathbf{U}\). Based on (61), the conditional probability distribution of \(\bar{\mathbf{v}}\) given \(\theta\) is derived as

\[p\left(\bar{\mathbf{v}}=\frac{1}{K}[i_{0},\cdots,i_{L-1}]^{T}\Big{|}\theta\right)=p\left(\mathbf{V}\in\mathcal{V}_{i_{0},\cdots,i_{L-1}}\Big{|}\theta\right)=\prod_{l=0}^{L-1}C_{K-i_{0}-\cdots-i_{l-1}}^{i_{l}}(\gamma_{\Phi,l}(\theta))^{i_{l}}, \tag{62}\]

for all non-negative integers \(i_{0},\cdots,i_{L-1}\) satisfying \(\sum_{l=0}^{L-1}i_{l}=K\). By utilizing (60) and (61), the achievable MSE lower bound for estimating \(\theta\) with \(\mathbf{U}\) is derived as (63), where \(\hat{\theta}_{\text{MMSE}}(\mathbf{V})=\mathbb{E}_{\theta}[\theta p(\mathbf{V}|\theta)]/\mathbb{E}_{\theta}[p(\mathbf{V}|\theta)]\) is the MMSE estimator for estimating \(\theta\) using \(\mathbf{V}\). The achievable MSE lower bound for estimating \(\theta\) using \(\bar{\mathbf{v}}\) is derived as (64), where \(\hat{\theta}_{\text{MMSE}}(\bar{\mathbf{v}})=\mathbb{E}_{\theta}[\theta p(\bar{\mathbf{v}}|\theta)]/\mathbb{E}_{\theta}[p(\bar{\mathbf{v}}|\theta)]\) is the corresponding MMSE estimator. By substituting (62) into (63) and (64), we complete the proof.

## References

* [1] M. He, C. Huang, and S. Jiang, "On probabilistic quantization and mean-value fusion design for distributed estimation in sensor networks," in _2022 IEEE/CIC International Conference on Communications in China (ICCC)_, 2022, pp. 1107-1112.
* [2] A. Kotras, Z. Wang, and A. Rodriguez, "Spatial modeling for risk assessment of extreme values from environmental time series: A Bayesian nonparametric approach," _Environmetrics_, vol. 23, no. 8, pp. 649-662, Dec. 2012.
* [3] J. P. French and S. R. Sain, "Spatio-temporal exceedance locations and confidence regions," _Ann. Appl. Stat._, vol. 7, no. 3, pp. 1421-1449, Sep. 2013.
* [4] F. S. Cattivelli and A. H. Sayed, "Diffusion LMS strategies for distributed estimation," _IEEE Trans. Signal Process._, vol. 58, no. 3, pp. 1035-1048, Mar. 2010.
* [5] I. D. Schizas, A. Ribeiro, and G. B. Giannakis, "Consensus in ad hoc WSNs with noisy links-part I: Distributed estimation of deterministic signals," _IEEE Trans. Signal Process._, vol. 56, no. 1, pp. 350-364, Jan. 2008.
* [6] F. S. Cattivelli, C. G. Lopes, and A. H. Sayed, "Diffusion recursive least-squares for distributed estimation over adaptive networks," _IEEE Trans. Signal Process._, vol. 56, no. 5, pp. 1865-1877, May 2008.
* [7] A. Ribeiro and G. Giannakis, "Bandwidth-constrained distributed estimation for wireless sensor networks-part I: Gaussian case," _IEEE Trans. Signal Process._, vol. 54, no. 3, pp. 1131-1143, Mar. 2006.
* [8] P. Venkitasubramaniam, G. Mergen, L. Tong, and A. Swami, "Quantization for distributed estimation in large scale sensor networks," in _2005 3rd International Conference on Intelligent Sensing and Information Processing_, Dec. 2005, pp. 121-127.
* [9] T. Wu and Q. Cheng, "One-bit quantizer design for distributed estimation under the minimax criterion," in _2010 IEEE 71st Vehicular Technology Conference_, May 2010, pp. 1-5.
* [10] A. Sani and A. Vosoughi, "Distributed vector estimation for power- and bandwidth-constrained wireless sensor networks," _IEEE Trans. Signal Process._, vol. 64, no. 15, pp. 3879-3894, Aug. 2016.
* [11] J.-J. Xiao, A. Ribeiro, Z.-Q. Luo, and G. Giannakis, "Distributed compression-estimation using wireless sensor networks," _IEEE Signal Process. Mag._, Special Issue on Distributed Signal Processing for Sensor Networks, vol. 23, no. 4, pp. 27-41, Jul. 2006.
* [12] J. Zhu, X. Lin, R. S. Blum, and Y. Gu, "Parameter estimation from quantized observations in multiplicative noise environments," _IEEE Trans. Signal Process._, vol. 63, no. 15, pp. 4037-4050, Aug. 2015.
* [13] M. El Gamal and L. Lai, "On rate requirements for achieving the centralized performance in distributed estimation," _IEEE Trans. Signal Process._, vol. 65, no. 8, pp. 2020-2032, Apr. 2017.
* [14] W.-M. Lam and A. R. Reibman, "Design of quantizers for decentralized estimation systems," _IEEE Trans. Commun._, vol. 41, no. 11, pp. 1602-1605, Nov. 1993.
* [15] J. Gubner, "Distributed estimation and quantization," _IEEE Trans. Inf. Theory_, vol. 39, no. 4, pp. 1456-1459, Jul. 1993.
* [16] A. Ribeiro and G. Giannakis, "Bandwidth-constrained distributed estimation for wireless sensor networks-part II: Unknown probability density function," _IEEE Trans. Signal Process._, vol. 54, no. 7, pp. 2784-2796, Jul. 2006.
* [17] Z.-Q. Luo, "Universal decentralized estimation in a bandwidth constrained sensor network," _IEEE Trans. Inf. Theory_, vol. 51, no. 6, pp. 2210-2219, Jun. 2005.
* [18] J. Li and G. AlRegib, "Rate-constrained distributed estimation in wireless sensor networks," _IEEE Trans. Signal Process._, vol. 55, no. 5, pp. 1634-1643, May 2007.
* [19] S. Ghazanfar-Rad and F. Labeau, "Formulation and analysis of LMS adaptive networks for distributed estimation in the presence of transmission errors," _IEEE Internet of Things Journal_, vol. 3, no. 2, pp. 146-160, 2015.
* [20] H. Poor, "Fine quantization in signal detection and estimation," _IEEE Trans. Inf. Theory_, vol. 34, no. 5, pp. 960-972, Sep. 1988.
* [21] T. A. Schonhoff and A. A. Giordano, _Detection and Estimation Theory and Its Applications_. Prentice Hall, 2006.
* [22] S. Lloyd, "Least squares quantization in PCM," _IEEE Trans. Inf. Theory_, vol. 28, no. 2, pp. 129-137, Mar. 1982.
* [23] J. Max, "Quantizing for minimum distortion," _IEEE Trans. Inf. Theory_, vol. 6, no. 1, pp. 7-12, Mar. 1960.
* [24] M. Shirazi and A. Vosoughi, "Bayesian Cramer-Rao bound for distributed estimation of correlated data with non-linear observation model," in _2014 48th Asilomar Conference on Signals, Systems and Computers_, Nov. 2014, pp. 1484-1488.
* [25] A. Sani and A. Vosoughi, "On distributed linear estimation with observation model uncertainties," _IEEE Trans. Signal Process._, vol. 66, no. 12, pp. 3212-3227, Jun. 2018.
* [26] H. Chen and P. K. Varshney, "Performance limit for distributed estimation systems with identical one-bit quantizers," _IEEE Trans. Signal Process._, vol. 58, no. 1, pp. 466-471, Jan. 2010.
* [27] S. Kar, H. Chen, and P. K. Varshney, "Optimal identical binary quantizer design for distributed estimation," _IEEE Trans. Signal Process._, vol. 60, no. 7, pp. 3896-3901, Jul. 2012.
* [28] A. Vempaty, H. He, B. Chen, and P. K. Varshney, "On quantizer design for distributed Bayesian estimation in sensor networks," _IEEE Trans. Signal Process._, vol. 62, no. 20, pp. 5359-5369, Oct. 2014.
* [29] X. Li, J. Guo, U. Rogers, and H. Chen, "Asymptotic optimal quantizer design for distributed Bayesian estimation," in _2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, Mar. 2016, pp. 3711-3715.
* [30] R. M. Gray and T. G. Stockham, "Dithered quantizers," _IEEE Trans. Inf. Theory_, vol. 39, no. 3, pp. 805-812, May 1993.
* [31] T. O'Shea and J. Hoydis, "An introduction to deep learning for the physical layer," _IEEE Trans. Cogn. Commun. Netw._, vol. 3, no. 4, pp. 563-575, Dec. 2017.
* [32] K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification," in _Proceedings of the IEEE International Conference on Computer Vision_, 2015, pp. 1026-1034.

Meng He (Member, IEEE) received the B.E. degree in communication engineering from the University of Electronic Science and Technology of China, Chengdu, China, in 2017, and the Ph.D. degree in computer and information engineering from The Chinese University of Hong Kong, Shenzhen, China, in 2023. He was a TPC member for IEEE GLOBECOM 2019-2022. He has been serving as a reviewer for IEEE Transactions on Wireless Communications and the Journal of Communications and Information Networks. His current research interests include full-duplex communications, distributed estimation in wireless sensor networks, and deep learning.

Ran Li (Member, IEEE) received the B.E. degree in communication engineering from the University of Electronic Science and Technology of China, Chengdu, China, in 2017, and the Ph.D. degree in computer and information engineering from The Chinese University of Hong Kong, Shenzhen, China, in 2023. His current research interests include reinforcement learning and resource allocation in wireless networks.

Chuan Huang (S'09-M'13) received his Ph.D. in Electrical Engineering from Texas A&M University, College Station, Texas, USA, in 2012. From August 2012 to July 2014, he was a Research Associate with Princeton University, Princeton, NJ, USA, and then a Research Assistant Professor with Arizona State University, Tempe, AZ, USA. He is currently an Associate Professor with The Chinese University of Hong Kong, Shenzhen. His current research interests include wireless communications and signal processing. He has been serving as an Editor for IEEE Transactions on Wireless Communications, IEEE Access, the Journal of Communications and Information Networks, and IEEE Wireless Communications Letters. He served as the Symposium Chair for IEEE GLOBECOM 2019 and IEEE ICCC 2019 and 2020.
Shulong Zhang received the B.E. degree in Control Science and Engineering from Harbin Institute of Technology, Harbin, China, in 2015, and the Master's degree in Control Science and Engineering from the National University of Defense Technology, Changsha, China. He is currently working at SF Technology Co., Ltd. as a system simulation engineer. His current research interests include digital twin and network planning.
# Implementing Sustainable Supply Chain Management: Reactive, Cooperative, and Dynamic Models

Dominik Zimon, Department of Management Systems and Logistics, Rzeszow University of Technology, 35-959 Rzeszow, Poland

Jonah Tyan, National Chengchi University, Taipei 116-05, Taiwan; [email protected]

Robert Sroufe, The Palumbo-Donahue School, Duquesne University, Pittsburgh, PA 15282, USA; [email protected]

## 1 Introduction

Sustainable development is an attempt to formulate a program integrating various levels of human action, often considered separately before, based on moral reflection regarding human responsibility for the environment. Considered in the context of supply chain management, sustainable development is a management concept extending beyond a supply chain's performance metrics of cost, time, and flexibility. The efforts to implement environmentally and socially sustainable performance supporting current and future generations greatly expand transparency in supply chain management into moral, economic, legal, social, and technical attributes of performance. Roy et al. [1] speak in a similar vein, recognizing that sustainable supply chain management (SSCM) addresses the management of the integration of economic and non-economic issues in a supply chain. Furthermore, SSCM explicitly integrates social and environmental dimensions with economic considerations into a triple bottom line (TBL), and includes both forward and reverse supply chains [2]. Some have extended the trend of integrated annual sustainability and financial reporting [3] to now call for integrated bottom line (IBL) performance [4], while building on the impacts green supply chains have on performance [5]. In the process of creating a sustainable development strategy for a firm, it is important to recognize and include links in the supply chain. Tuni et al. [6] concluded that companies beyond the focal firm are responsible for up to 80% of overall supply chain emissions. They take this further with the example of Marks & Spencer, which estimates that the environmental impact of its supply chain is 90%, with only 10% attributed to the focal firm. Precisely because of these emissions, environmental performance can no longer be adequately addressed at the single-company level. On the contrary, an integrated approach encompassing the supply chain is necessary. The implementation of Corporate Social Responsibility (CSR) in supply chains also appears important. CSR touches on questions of economy, society, and values, as well as relations with the environment. In this sense, CSR issues include the interests of various stakeholder groups, consumers, local communities, and the natural environment. Voluntary CSR initiatives can contribute to the attractiveness and credibility of enterprises, making these initiatives highly attractive among companies with good practices. The idea is to make business better, make voluntary commitments to the local community and the environment, and reduce the occurrence of negative phenomena. There is a growing body of literature regarding SSCM implementation. This includes studies such as TBL frameworks (see, for example, references [7; 8]), drivers and barriers of implementation [9], and intersections with the UN's Sustainable Development Goals [10].
In the field of TBL framework research, a conceptual framework/model is normally derived from conducting a systematic literature review or case study research. A total of 21 SSCM framework articles were identified from a sample of 311 over a period of 29 years (1990-2019) by means of a keyword search (i.e., SSCM + framework) through the Science Citation Index Expanded, Social Sciences Citation Index, and Scopus databases. The selected literature is further categorized as SSCM implementation model [11], SSCM conceptual model [12], SSCM performance model [13], and SSCM contextual factor [14], with 2, 10, 7, and 2 articles in each category, respectively. Analyzing the literature sample using these categories reveals that framework/model development in the area of SSCM implementation has gained less attention. These models are important as they help guide industries in taking a systematic approach to the adoption of SSCM practices [15]. We therefore argue that the examination and development of an implementation framework is a topic that will bridge a gap in the literature and add value to both practitioners and the emerging SSCM field of research. The purpose of this study is to develop an SSCM implementation framework grounded in a literature review and a categorization of published empirical SSCM practices. Even though some efforts have been made to conceptualize the SSCM implementation framework, we still have ample opportunities to understand this complex implementation topic. Bainipati and Panigrahi [11] develop a reduced implementation framework but confine their attention to barriers and risks in the decision making of alternatives for SSCM practices. Also, Luthra and Mangla [16] view adopting SSCM practices as a strategic matter, but they fail to formulate SSCM strategies at a framework level, instead treating implementation strategies as SSCM drivers such as "management involvement, support and commitment." Therefore, the connection between a sound implementation strategy and its corresponding practices remains unclear. This leads us to RQ1: In response to sustainability requirements, what sustainable supply chain strategies and practices appear in the literature? The gap between theories and field practices also exists in the SSCM context. Most SSCM conceptual frameworks are developed through literature review [12; 17], which may not represent implementation practices in the real world. In contrast, building an implementation framework through case studies seems more desirable as a way to resemble real-world practices. However, the scholarly SSCM literature remains a more credible source for conducting an implementation study. Thus, we arrive at RQ2: Are SSCM practices in the literature applicable in an empirical context? In building a framework, different forming logics can be used to structure the components. Ansari and Kant [12] conduct a comprehensive literature review and suggest that the most frequent reference logics used to build SSCM frameworks involve regulatory pressures/legal requirements, risk management, information transparency, and green purchasing. Meanwhile, some apparently critical SSCM logics, such as corporate sustainability strategy and sustainable process management, are used less often to construct such frameworks [12]. Gosling and colleagues [18] consider three SSCM strategies (i.e., reactive, contributive, and proactive) but apply supply chain leadership and learning to construct a conceptual SSCM framework.
Similarly, Luthra and Mangla [16] operationalize SSCM strategies as implementation drivers or barriers instead of as a matter of corporate strategy. Our study aims to build the SSCM implementation framework from the perspectives of corporate sustainability strategy and sustainable practices. Taking these discussions together, we formulate RQ3: What constitutes a generalizable SSCM implementation framework? In the discussion section, we try to answer the question (RQ4) of whether the reactive, cooperative, and dynamic models can also be used in European countries and in North and South America. Given the evolution of this field and emerging trends in environmental and social sustainability, we examine the critical reasons why companies implement SSCM practices. In searching for an evidence-based implementation model, we can propose a generalizable model. To help enable this model's development, we examine a global hub of supply chain activity within a Taiwanese business context to characterize SSCM practices that integrate issues of economic, environmental, and social performance. Our classification process allows for the development of multiple SSCM models: reactive, cooperative, and dynamic. The methods employed in this study will help to define general operating principles and basic guidelines for decision-makers considering the implementation of a specific model. The contribution of this work is important to the advancement of both practice and theory. It provides an understanding of SSCM implementation practices, enabling decision-makers to choose an optimal SSCM strategy. For researchers, insights from this study provide a foundation for the further development and improvement of the models, hypothesis development, and empirical testing of SSCM relationships. In this study, we first propose that a review of the relevant literature is necessary to gain an understanding of important practices and to ground theory development regarding SSCM. Next, we conduct multiple case studies to propose a basic generalizable SSCM implementation model, which is then further developed into more detailed models of implementation practices. To provide further insight and contributions to the field, we investigate the three primary research questions. To answer these questions, we next describe the research methodology in Section 2 and present our results in Section 3. In Section 4, we discuss the three major research questions in light of our findings, before providing conclusions and contributions in Section 5.

## 2 Research Methodology

Since the focus of this study is exploratory, the use of qualitative data collection methods will best allow the development and understanding of important SSCM practices and implementation insights [19; 20; 21; 22]. We conducted a literature review using content analysis to answer the research questions, as this is a common practice in SCM studies [23]. A literature review methodology provides a systematic and reproducible design for collecting and evaluating the extant body of scholarly works on the topic studied. Through summary and critical analysis of the selected literature, these methods provide a more comprehensive understanding of the topic and form the basis for answering research questions [24]. A good review structure and process are essential for producing quality results, developing theory, and drawing objective conclusions. Therefore, we adopted a literature review process approach [25; 26], as shown in Figure 1. In the exploration stage, we aim to achieve two objectives.
Firstly, we conduct a literature review to search for ideas and gaps in the SSCM implementation framework, which was discussed in the previous section. In particular, we were looking for strategic practices responding to sustainability requirements found within the SSCM framework literature. We identified key categories of the SSCM framework based on the literature. Then, we identified a plausible gap in the research and derived research questions to expand the understanding of the SSCM implementation field. Secondly, we conduct another narrative literature review with a focus on RQ1, as this provides a foundation and reference structure for conducting content analysis and theory development in the next stage, as suggested by Yin [27] (p. 40). The review of our findings is presented in the results section and serves as the input to step 5. Our objective was to develop a conceptual SSCM implementation framework based on the extant literature. Peer-reviewed journal articles represent a relevant unit of analysis for this study. In answering RQ2, we adopted a condensed set of SSCM literature by focusing on a specific and representable subset within the context of the Taiwanese SCM environment. At this stage, case study methods suggested by Yin [27] are a useful research strategy supporting theory development. This same approach has been used by several authors, e.g., Azevedo et al. [28] and Pagell and Wu [29], to develop SSCM theory. Therefore, we adopted a similar approach at this stage to conduct an analysis of multiple cases. Unlike typical case data collection through interviews and other documentation, we collected data from sets of case study related papers identified through a search of the literature. Then, the content analysis stage consisted of four interconnected and recursive steps:

1. Data collection: The data to be collected are defined in terms of inclusion and exclusion criteria. Additionally, the unit of analysis (i.e., the single paper) is defined.
2. Descriptive analysis: Formal aspects of the papers are assessed, for example, the number of publications. This forms the background for the theoretical analysis.
3. Category selection: Analytic categories or dimensions are selected to structure the literature review and analysis. In this study, the analytic categories are derived in the exploration stage.
4. Data evaluation: The data are analyzed and sorted according to the structural categories. Specifically, we assess category frequencies to suggest relevant SSCM practices for the corresponding category.

In step 3 of this study, we focused on Taiwanese SSCM practices as the real-world context, with SSCM practices published in Taiwanese journals as the primary sources of cases from which to induce an SSCM implementation framework. The selection of cases was based on the Airiti Library online database, one of the most comprehensive collections of scholarly journals in the Chinese research community, which was available to help conduct our targeted literature search (most selected papers were written in Chinese with English abstracts, and some papers were written in English). From the historical development perspective, the sustainable considerations of the supply chain first evolved into the green supply chain in the early 2000s, and then became more widely labeled as SSCM in the mid-2000s [30]. Next, using the keywords "Sustainable Supply Chain" and "Green Supply Chain" between the years 2000 and 2018, the search yielded 180 papers including journal papers, conference papers, and unpublished theses.
Figure 1: Literature review process framework.

Further criteria were used to determine which publications would be included in our study: (1) case study-based journal paper; (2) case descriptions in the Taiwanese context; (3) SSCM related implementation and practices. This resulted in 19 papers that specifically address SSCM implementation in Taiwan. In step 4, the data analysis consists of anticipatory conceptual model development and simultaneous data coding, reduction, display, and conclusions testing. Therefore, step 4 has a recursive connection with step 5. In particular, we use the derived categories from step 1 as primary categories to conduct descriptive analysis. Due to the diversity of data sources, a descriptive analysis of case studies must be guided by a protocol to specify relevant evidence in step 6. The protocol in this study specifies a minimum amount of content analysis in operational terms. For example, in the content analysis of the 19 papers, we focused on: (1) whether the paper describes SSCM practices; (2) how the practices within the paper respond to sustainability requirements; (3) whether there is an application of real-life SSCM practices in the field rather than a theoretical hypothesis; and (4) identification of the critical SSCM practices that are common across firms and industries. In step 7 of the interpretation stage, which addresses RQ3, we synthesized the findings in the Taiwanese context to develop a basic model composed of SSCM implementation strategies and corresponding practices. This model represents a plausible and practical SSCM implementation framework. In step 8, we then departed from the focused Taiwanese context to include a broader scope and developed three more detailed models. Finally, we developed a generalizable model with adaptive flexibility so that it can be applied by organizations globally.

## 3 Results

In this section, we first present the literature review findings on the categories of existing strategic SSCM practices. Next, we provide analysis results of SSCM implementation in the Taiwanese context to formulate a basic implementation model. Finally, three progressive models are developed to constitute SSCM implementation frameworks.

### The Categories and Practices of SSCM Implementation

From historical SCM development perspectives, sustainability requirements are a new business mandate for firms to balance their financial performance with environmental and social responsibilities [31]. According to their strategic priorities and available resources, organizations may take different strategic responses to sustainability and form a unique strategic SSCM model [32]. We propose an implementation framework consisting of three broad strategic responses (namely reactive, cooperative, and dynamic), which represent firms' business priorities and their underlying strategic mindsets. The conception of reactive, cooperative, and dynamic strategies was grounded in prior SSCM literature, including that of Azevedo et al. [29], Markman and Krause [33], Rebs et al. [34], and Walker and Jones [35]. These responses vary in organizations' strategic intent, ranging from passive to increasingly active, as well as in complexity, from less to more complicated. For example, dynamic strategies [36] take SSCM practices as dynamic capabilities that enable organizations to respond flexibly and adaptively to rapidly changing environments and achieve competitive advantage [37; 38].
In this sense, a dynamic strategy has a high level of active intention and adaptive capabilities. Considering that the implementation of SSCM practices is a long-term, complex, and complicated process, the proposed framework is expected to help organizations co-create supply chains and choose SSCM strategies tailored to their needs and aspirations [39; 40; 41]. An effective implementation framework, composed of both grounded theory and corresponding practices, can translate SSCM implementation theory into practice [42]. The proposed implementation framework in this study can explain various aspects of implementation, including SSCM drivers, connections with firms' strategies, and expected performance. For this study, SSCM practices refer to a set of business processes among supply chain partners guiding sustainable activities. These practices have been extensively examined in the literature and are typically categorized into upstream, focal company, and downstream practices. Practices related to upstream suppliers include green purchasing and raw material procurement, green packaging and transportation, material recycling, strategic supplier collaboration, and supplier sustainability assessment [43]. Practices pertaining to internal operations of the focal company include but are not limited to green product design, green process design and planning, green manufacturing and remanufacturing, waste management, emission reduction, and green packaging [9; 12; 29]. Practices related to downstream customers cover collaborative inventory management, green warehousing, green shipping and distribution, product recycling and reverse logistics, and corporate green image management [11; 18; 44]. Some practices spanning the whole supply chain consist of green product innovation and design [44], supply chain integration systems [45], the ISO 14001 environmental management system [46], and corporate social responsibility [47]. Based on the content review of the literature, we found that SSCM implementation practices typically extend existing SCM practices into the realm of sustainability [30]. Markman and Krause [33] argue that sustainable practices should satisfy two inseparable criteria: (1) they must enhance ecological health, meet ethical standards to enhance social justice, and improve economic vitality; and (2) they must prioritize the environment first, society second, and economics third. We adopt this line of thinking and suggest these criteria can be used to distinguish between typical SCM practices and SSCM practices. Combining conceptual strategic responses and sustainable practices, we were able to develop the SSCM categories and their corresponding practices outlined in Table 1. Our approach provides a foundation for further content analysis and model development.

### Why Look at SSCM Implementation in a Taiwanese Context

Taiwan, officially recognized as the Republic of China, shipped US$317.38 billion worth of goods around the globe in 2017, which earned Taiwan the rank of 17th worldwide for transportation commerce (Statista, 2018). It is worth noting that each product exported from Taiwan is associated with multiple global supply chains and some generalizability of practices. The development of Taiwan's global supply chain began in the early 1990s [48; 49], while the development of SSCM in Taiwan first appeared in the literature in early 2007 [43; 50; 51].
With Taiwan being involved in highly globalized supply chain development, each of the selected SSCM papers reflects certain successful field practices. Given the significant importance of the global supply chains involved with products originating in Taiwan, this country context provides an important area of study and an opportunity for further exploration. We argue that the development of SSCM implementation frameworks through Taiwanese cases is a viable and feasible research strategy. To achieve this aim, we next address RQ2 to determine whether the SSCM practices that appear in the literature are applied in business organizations, as reflected in the papers reviewed in this study. The first cycle of content analysis of the 19 published papers identifies topic areas, industry type, primary SSCM drivers, and strategic intent toward sustainability requirements. We have summarized the results in Table 2.

\begin{table}
\begin{tabular}{p{113.8pt} p{284.5pt}}
\hline \hline
**Category** & **Description of Practices** \\
\hline
Reactive & Adopting a minimum set of actions to comply with sustainable regulations and requirements such that organizations can focus on managing their economic performance. \\
\hline
Cooperative & Going beyond basic compliance and a myopic profitability goal, organizations adopt sustainability as collaborative actions toward environmental friendliness and social responsibility. \\
\hline
Dynamic & Embracing sustainability as part of the organization's vision to build dynamic capabilities and competitive advantage, while actualizing "environment-first, society-second, and economics-third" principles in practices. \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Categories and their description of strategic responses in sustainable supply chain management (SSCM).

\begin{table}
\begin{tabular}{c c c c c c c c}
\hline \hline
**ID** & **Author (Year)** & **Industry** & **Topic Area** & **SSCM Driver** & **Reactive** & **Cooperative** & **Dynamic** \\
\hline
1 & Chien \& Shih (2007) & Manufacturing (Electronics) & Financial and environmental performance & Regulatory and customers' pressure & Reduce emissions and waste; use non-hazardous and non-toxic materials & Green manufacturing to maintain competitiveness & \\
\hline
2 & Chiou et al. (2007) & Manufacturing (Information and Electronics) & Green supplier selection and assessment & Environmental and regulatory requirements & Apply green supplier assessment & Implement environmental management system & Improve corporate social responsibility \\
\hline
3 & Fang \& Lin (2007) & Manufacturing (TFT-LCD) & Building competitive advantage through green operations & Organizational vision and strategies & Comply with all environmental regulations & Work with suppliers to enhance environmental performance & Green operations to build competitive advantage \\
\hline
4 & Chen et al. (2008) & Manufacturing (Textile) & Maintaining market competitiveness in the green clothing supply chain & Environmental and customers' requirements & Comply with environmental regulations & Implement green manufacturing to maintain competitiveness & \\
\hline
5 & Han, Wang, \& Tzeng (2010) & Manufacturing (Printing) & Components of green supply chain implementation & Environmental requirements and organizational strategy & Implement material inspection system & Work with upstream suppliers to build a green supply chain & \\
\hline
6 & Liao \& Lin (2010) & Manufacturing (Consumer Electronics) & Enhancing sustainable performance & Regulatory requirements and organizational strategy & Comply with environmental regulations & Execute green manufacturing & Adopt green marketing and green design \\
\hline
7 & Lin, Wen, \& Lin (2011) & Manufacturing (Computer) & Enhancing sustainable performance & Regulatory requirements and global trends & Comply with regulatory requirements & Implement green manufacturing to enhance environmental performance & \\
\hline
8 & Tseng, Lai, \& Wang (2011) & Manufacturing (ICT) & Drivers and practices of GSCM & Environmental regulations & Comply with environmental regulations & Implement green manufacturing to ensure environmental performance & Assume corporate social responsibility \\
\hline
9 & Hsu \& Liang (2011) & Manufacturing (TFT-LCD) & Green procurement performance & Regulatory and customers' requirements & Comply with regulatory requirements & Collaborate with suppliers to enhance green procurement & \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Summary of evidence concerning focus topics, SSCM drivers and strategic intents [19; 43; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66].

All publications are related to manufacturing industries with the exception of one service organization, and the research topics focus on environmental performance, green supplier management, and competitiveness. The SSCM drivers among these cases are regulatory requirements, customers' requirements, organizational strategies, and social responsibility, which are consistent with extant SSCM research findings. For the next phase of the data analysis process, we assigned practices their corresponding bundle codes (i.e., R for reactive, C for cooperative, and D for dynamic). Given the progressive nature of the SSCM bundles, the reactive model is a subset of the cooperative model. Likewise, both the reactive and cooperative models are subsets of the dynamic model. The second cycle of content analysis of SSCM practices was used to verify our conjectures in the previous step, while simultaneously seeking answers to RQ2.
The protocol for this second round of data collection identifies all applicable SSCM practices using quantitative tabulations. The results of the content analysis are summarized in Table 3. Further examination of SSCM practices and adoption provides several observations and insights. First, the adoption of SSCM in Taiwan is consistent with historical accounts of global SSCM development since 2007 [8; 67]. This observation can be explained by the fact that Taiwan has close economic and technological connections with western markets [43]. Second, SSCM implementation has gone through two distinct stages. The first stage was mainly an extension of traditional SCM and ended around 2010; during this time, the implementation strategies were a combination of reactive and cooperative models. The second stage commenced in 2010 and shows a clear focus on a dynamic model, with SSCM practices such as green design and corporate social responsibility [52; 53]. The shift of focus toward the social dimension is related to a 2014 mandate requiring the inclusion of corporate social responsibility reporting in the annual reports of listed companies [67]. This finding is consistent with the concept that regulation is a strong driver of SSCM implementation. Third, academic research and publications lag behind the implementation practices used by practitioners and industries. A supplementary literature search in Google, Google Scholar, and the Airiti Library using the same search criteria described in the methodology produced 83,370, 701, and 180 hits, respectively. The articles found through the Google search are mainly contributions by industrial practitioners and local SSCM interest groups. The results of this search imply that the actual field adoption of SSCM in Taiwan is much more significant than what academic publications have been able to capture. In understanding the motives of SSCM implementation in Taiwan, the insights derived from the content analysis suggest that customer requirements and regulatory pressures are major drivers. In particular, the growing promotion of corporate social responsibility to public companies in Taiwan has shifted the SSCM implementation mindset from "reactive" to "dynamic", aligning with organizational visions and strategies. This implies that contemporary SSCM drivers are closely related to market competition and social responsibility, as reflected in recent literature [54; 55].

\\begin{table} \\begin{tabular}{c c c c c c c c c c c c c c c c c} \\hline \\hline **ID** & **Author (Year)** & **R1** & **R2** & **R3** & **R4** & **R5** & **C1** & **C2** & **C3** & **C4** & **C5** & **C6** & **C7** & **D1** & **D2** & **D3** \\\\ \\hline 1 & Chien and Shih (2007) & 1 & 1 & & & & & 1 & & & & & & & \\\\ 2 & Chiou et al. (2007) & 1 & 1 & 1 & & 1 & 1 & 1 & & 1 & & & & 1 & & \\\\ 3 & Fang and Lin (2007) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & & & \\\\ 4 & Chen et al. (2008) & 1 & 1 & 1 & 1 & & 1 & 1 & 1 & 1 & & 1 & & & \\\\ 5 & Han, Wang, and Tzeng (2010) & & & 1 & 1 & 1 & & 1 & 1 & 1 & & & & & \\\\ 6 & Liao and Lian (2010) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & & 1 & 1 & 1 & 1 \\\\ 7 & Lin, Wen, and Lin (2011) & 1 & 1 & 1 & 1 & & 1 & 1 & 1 & 1 & 1 & 1 & & \\\\ 8 & Tseng, Lai, and Wang (2011) & 1 & 1 & & & & & & 1 & 1 & 1 & 1 & 1 & & \\\\ 9 & Hsu and Liang (2011) & 1 & 1 & 1 & 1 & & & 1 & 1 & 1 & 1 & 1 & & & \\\\ 10 & Chen and Han (2012) & 1 & 1 & 1 & 1 & & 1 & 1 & 1 & 1 & 1 & & & 1 \\\\ 11 & Rau et al. (2013) & 1 & 1 & 1 & 1 & 1 & & & 1 & & & & & & \\\\ 12 & Tsai and Yen (2013) & 1 & 1 & & 1 & & 1 & & 1 & & & 1 & & & \\\\
13 & Chen et al. (2014) & 1 & 1 & 1 & & & 1 & 1 & 1 & & & 1 & 1 & 1 & \\\\ 14 & Wang and Yu (2014) & 1 & 1 & 1 & & & & 1 & 1 & 1 & 1 & 1 & 1 & \\\\ 15 & Liu (2015) & 1 & 1 & & 1 & & 1 & 1 & & & & 1 & & 1 & \\\\ 16 & Lee and Yu (2015) & 1 & 1 & & 1 & & 1 & 1 & & & & 1 & 1 & \\\\ 17 & Lo (2016) & & & & & 1 & 1 & & & & 1 & & & & \\\\ 18 & Tseng and Chang (2016) & & & & & 1 & 1 & & & & & & & 1 \\\\ 19 & Lin (2016) & 1 & 1 & 1 & & & & 1 & & & & & & 1 \\\\ \\multicolumn{2}{c}{**Subtotal**} & 16 & 14 & 12 & 13 & 9 & 7 & 8 & 13 & 12 & 5 & 10 & 10 & 5 & 8 & 2 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 3: Descriptive results of SSCM applications by strategic categories.

The industries examined across the 19 publications include the high-tech industry [49], traditional manufacturing [58], the clean energy industry [68], and the service industry [69]. Taken together, we argue that the proposed SSCM implementation practices (i.e., reactive, cooperative, and dynamic models) derived from the Taiwanese context are consistent with findings in the SSCM literature and establish a plausible basic framework for further generalization. Strategic intent toward sustainability begins to explain how the case organizations respond to sustainability requirements. The findings indicate that all of the studied organizations take reactive actions and certain cooperative actions, while fewer organizations engage in dynamic strategic actions. Next, we build connections between strategic categories and SSCM practices (i.e., assign practices to their corresponding categories) through continued content analysis. A foundation of the analysis is the concept of "bundles" from the operations literature: a bundle represents a set of best practices within a specific category that affects the firm's performance [52]. A two-step process was used to achieve the objective: (1) apply the principles suggested by Markman and Krause [33] to generate an SSCM practice pool from the literature review findings; (2) take the strategic intent derived from the cases as a guideline to sort practices into similar categories. One could argue that a given practice is not easily assigned to a single specific category, or that it belongs in multiple categories. We take the single-category view, as our objective is to build a basic reference model, and the categorization remains open to adjustment by decision-makers and their organizations. To this end, we categorize SSCM practices into three progressive bundles, as described in Table 4.

### General Implementation Model Development

It should be noted that there is no single, universal model. Every industry and every company operates in a specific environment and creates its own unique system of supply chain management. However, there are a number of universal phenomena that affect this sphere and that are important from the point of view of SSCM. They are commonly characterized as three models that combine economic objectives with the good of society and the environment. The choice to pursue a particular model depends on the strategic intent, resources, and capabilities of the companies that are co-creating these supply chains. This now leads us to unpack the different models, addressing RQ3 and the development of a general SSCM implementation framework.
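As a concrete illustration of the two-step coding and tabulation behind Tables 3 and 4, the minimal sketch below shows how binary practice codes can be tallied into column subtotals and rolled up into the three progressive bundles. It is only a sketch: the three papers and their codings are illustrative placeholders rather than the full data set, and the `bundle_of` helper is our own hypothetical construction, not part of any published protocol.

```python
# Minimal sketch (illustrative data only): tabulate binary SSCM practice
# codes per paper and roll them up into the three progressive bundles.

PRACTICES = [
    "R1", "R2", "R3", "R4", "R5",              # reactive bundle
    "C1", "C2", "C3", "C4", "C5", "C6", "C7",  # cooperative bundle
    "D1", "D2", "D3",                          # dynamic bundle
]

# Hypothetical codings for three of the reviewed papers
# (a code is present if the practice was observed in the case).
coded_papers = {
    "Chien and Shih (2007)": {"R1", "R2", "C3"},
    "Liao and Lian (2010)":  {"R1", "R2", "R3", "C1", "D1"},
    "Lin (2016)":            {"R1", "R2", "R3", "C2", "D3"},
}

# Column subtotals, as in the bottom row of Table 3.
subtotals = {p: sum(p in codes for codes in coded_papers.values())
             for p in PRACTICES}

def bundle_of(codes):
    """Assign a paper to its highest observed bundle; the bundles are
    nested, so a dynamic practice implies the lower levels as well."""
    if any(c.startswith("D") for c in codes):
        return "dynamic"
    if any(c.startswith("C") for c in codes):
        return "cooperative"
    return "reactive"

for paper, codes in coded_papers.items():
    print(f"{paper}: {bundle_of(codes)}")
print("subtotals:", subtotals)
```

Because the bundles are progressive, classifying each paper by its highest-level observed practice preserves the subset relationship noted earlier (reactive within cooperative within dynamic).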
\\begin{table} \\begin{tabular}{c c c} \\hline \\hline **Category** & **Coding** & **Selected SSCM Practices** \\\\ \\hline \\multirow{5}{*}{Reactive} & R1 & Waste, water, and air management \\\\ & R2 & Energy consumption and emissions reduction \\\\ & R3 & Procurement of non-hazardous and non-toxic materials \\\\ & R4 & Product recovery \\\\ & R5 & Supplier sustainability assessment \\\\ \\hline \\multirow{7}{*}{Cooperative} & C1 & Strategic supply chain collaboration \\\\ & C2 & ISO 14001 environmental management system \\\\ & C3 & Green manufacturing \\\\ \\cline{1-1} & C4 & Reverse logistics \\\\ \\cline{1-1} & C5 & Supply chain integration system \\\\ \\cline{1-1} & C6 & Green purchasing \\\\ \\cline{1-1} & C7 & Green shipping and distribution \\\\ \\hline \\multirow{3}{*}{Dynamic} & D1 & Green product innovation and design \\\\ \\cline{1-1} & D2 & Corporate social responsibility program \\\\ \\cline{1-1} & D3 & Corporate green image management \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 4: Strategic implementation models and selected SSCM practices.

The reactive model (see Figure 2) is the lightest implementation approach, used where external pressures are small and internal resources may be limited. In this model, the mindset toward SSCM is risk avoidance in an effort to comply with regulatory requirements, and a minimum set of SSCM practices is used. Suggested critical practices include waste, water, and air management; energy consumption and emissions reduction; procurement of non-hazardous and non-toxic materials; product recovery; and supplier sustainability assessment. Implementation of these practices includes:

* Limiting resource consumption; conservation efforts include the use of material-saving and energy-saving technologies.
* Using higher quality components, ensuring a longer product life.
* Optimizing packaging design: adopting efficient packaging design strategies, abiding by regulations, and managing the end-of-life of packaging material [70].
* Replacing harmful materials with less harmful or harmless materials and looking for substitutes among renewable resources.
* Eliminating toxic materials and reducing emissions.
* Cooperating with suppliers who follow the basic guidelines for sustainable development.
* Suspending cooperation with companies using unethical practices.

Implementing the guidelines in Figure 2 not only improves environmental performance but also has an impact on economic performance; this is supported by Petljak et al. [71]. However, when merely striving to meet basic legal requirements and comply with applicable standards, the impact of the reactive model on society and the ecosystem is relatively small. Here, the organizations co-creating supply chains focus on the economic aspects of creating the SSCM. The reactive model can be considered the basic form of SSCM and the foundation for implementing more dynamic practices of managing a sustainable supply chain. When these practices are part of the supply chain, managers and decision-makers should consider implementing more comprehensive development activities. With this model as a foundation, managers can next take on environmental performance and a more comprehensive approach to fulfilling social needs. This is affirmed by Beske-Janssen and coauthors [72], who emphasized that it is necessary to develop SSCM models and design supply chains that are less harmful or that even create new value for sustainable development.

Figure 2: Reactive model.
The cooperative model presented in Figure 3 builds on the reactive model. It represents a mindset shift from reactive to cooperative, in which the focal company views SSCM as a business opportunity instead of a reaction to external requirements. SSCM is therefore treated as a strategic business responsibility, with resources allocated for integrating SSCM practices with supply chain members and business processes. The strategy presented in this model utilizes the best practices from the reactive approach and extends them to additional aspects of implementation that have a significant impact on the external environment.

Figure 3: Cooperative model.

The most important improvements offered by the cooperative model include:

* Development of a common mission, vision, and goals. Supply chain management strategy is perceived as many mutually harmonized elements and processes, the most important of which are: a full focus on the needs of internal and external clients [73]; the development of basic values; the implementation of a standard accepted and recognized by the enterprises co-creating the supply chain; and the definition of transparent rules and management procedures oriented toward the client and the external environment. The policy of sustainable supply chain management is a strategic declaration for the organizations that co-create the supply chain and sets out actions for all enterprises in this chain. However, the model must be implemented and developed in such a way that the supply chain can achieve the set of defined objectives [74].
* Sustainable procurement. Sustainable procurement extends basic green purchasing to consider environmental and social aspects of diversity, safety, human rights, philanthropy, and local procurement [75].
* Implementation of the requirements of the ISO 14001 standard. The goal of this standard is to provide organizations co-creating the supply chain with guidelines for the development of an effective environmental management system that will facilitate the implementation of environmental and economic objectives [76]. It is also important that the standard is compatible with other management systems and supports them in an ecological context [77; 78]. We agree with Maletic et al. [79], who recognize that ISO 14001 can be an effective standard, with supporting systems, for pursuing sustainable development. The importance of ISO 14001 in the context of improving SSCM is supported by Chiarini [80] and by de Sousa Jabbour and co-authors [81], who claim that, based on the requirements of ISO 14001, a firm can develop a multi-stage model for building partnerships with suppliers in supply chains while also improving green efficiency. Wu et al. [82] likewise emphasize that enterprises in developing Asian countries treat ISO 14001 as a very important element of SSCM. It should be noted that ISO 14001 does not cover all aspects of sustainable development [83], so it must be properly developed to enable integration with this model and other business systems.
* Improvement of reverse logistics. Increased interest in trends such as closed-loop systems and reverse logistics involving products and materials is now part of how enterprises manage their environmental performance. Waste reduction has become one of the main areas of scientific interest in industrialized countries.
Due to legal, economic, and technical limitations, enterprises are looking for new solutions that would enable the reuse of products while capturing value in new ways [84]. That is why sustainability is part of solutions aimed at capturing value from used materials and re-introducing them into the value stream. The re-use of products is economically attractive due to the costs of disposal, the availability of raw resources, and new opportunities to create added value [85].
* Implementation of QMS standards. Quality management, on a large scale, is important for improving key supply chain management processes [86]. Quality management in the supply chain is inextricably linked to the enterprises co-creating the supply chain, customers, and external stakeholders. The ability to adapt to constantly changing market conditions and the effective use of innovative solutions for improving quality management systems will continue to be necessary to improve and integrate the basic areas of sustainable supply management.
* Lasting partnerships. The basis for increasing the competitiveness of the supply chain is wide-ranging integration and cooperation. This includes enabling the selection and implementation of a common strategy aimed at increasing the efficiency of the supply chain while considering the needs of individual enterprises [87]. Building sustainable relationships with suppliers is based on the principles of sustainable development and provides opportunities for innovation. A company can learn from its suppliers and cooperate with them on new initiatives in the area of sustainable development [88; 89].
* Structural, technological, and organizational preparation of production. There is a constant need to improve production technologies. Improvements include increasing productivity, raising quality levels, integrating processes, and reducing negative impacts on the environment. These activities should be targeted: embedding technology development in the overall strategy, analyzing life cycle impacts, and introducing constant supervision while implementing processes that increase staff competence.

The cooperative model is recommended for organizations that strive to simultaneously improve economic efficiency and implement sustainable solutions in the supply chain. Sustainable development, in this case, can begin with efforts to improve management practices for better economic and/or operational performance, which leads to better social performance. Conversely, the integration of social concerns into operations brings economic benefits to firms both directly and indirectly. This does not mean that firms necessarily focus on social performance before economic considerations, or vice versa. Rather, progression in sustainability can occur in parallel with performance, which influences improvements in other areas [90]. The guidelines presented in this model closely represent the approach adopted by European enterprises. This model would find support among the management staff of European enterprises and of companies operating in developing countries, as it promotes balanced solutions that improve supply chains both economically and socially. These considerations are confirmed by Wang and Dai [91], who find that internal SSCM practices have a positive impact on a firm's environmental performance and social performance. Moreover, environmental performance and social performance are positively related to economic performance.
It is worth emphasizing that Pagell and Wu [29], in their case study research, concluded that organizations that follow a sustainable supply chain strategy are successful in aligning their financial goals with environmental and social goals, and in ensuring transparency in their business processes. The model with the most potential for a positive impact on the environment, society, and ecosystems is the dynamic model (see Figure 4).

Figure 4: Dynamic model.

The dynamic model further extends the scope of problem-solving to ethical responsibility and value creation in a dynamic environment. Here, the focal company views SSCM as a new business opportunity. As a result, the focal company actively collaborates with its upstream suppliers and downstream customers on value creation processes. These dynamic SSCM practices include:

* Ethical leadership. Development of ethical management standards and implementation of solutions that mobilize employees to engage fully in the implementation of adopted strategies [19]. This view is shared by Wichmann et al. [92], who recognize that employee involvement is crucial to the proper implementation of SSCM practices.
* Green product innovation and design. Identification of environmental performance related to the product and taking it into account in the design process at the initial stage of development; reduction of component consumption; extensive use of recycling; searching for more environmentally friendly alternative components; and designing energy-saving products.
* Corporate social responsibility programs. These provide a positive influence on business culture, visible in the treatment of employees, organizational culture, relations with investors, contractors, subcontractors, and suppliers, as well as in relationships with local communities and public administrations.
* Corporate green image management. A green image has become an important element of supply chain strategy [93]. Providing stakeholders with reliable information on the environmental aspects of products produced in the supply chain is part of this. Image management also includes demonstrating care for the ecological performance of logistics processes.
* High-quality recycling. Recycling involves the recovery of materials without contamination, to serve as raw materials for subsequent production systems of the same or similar quality products [70]. This model leans toward circular supply chain management, a concept that minimizes the amount of waste and maximizes the possibility of material reuse [94; 95].

New performance metrics include the level of green design, community relations, and green image and marketing. With a focus on the social aspect of sustainability and innovative green design, the dynamic model can turn SSCM into a competitive advantage for a focal company in a changing business environment [96]. The dynamic model, when applied to supply chains, can bring about distinct value in both the business and social contexts. In the business context, by strengthening internal environmental management and social responsibility management, firms can improve both environmental and social performance. Firms working closely with suppliers can promote corporate environmental performance. The continuous improvement of environmental and social performance will ultimately improve economic performance. According to Dubey et al.
[97], sustainability can provide a significant competitive advantage and improved economic efficiency in the long run. In the social context, firms become complementary social units supporting national-level actions that promote the global sustainable development effort [10]. This idea is similar to the approach suggested by Markman and Krause [33], which recognizes that supply chain management, like any other business activity, must follow ethical standards to enhance ecological health and further social justice. When firms demonstrate ethical leadership by prioritizing the environment first, society second, and economics third, their sustainable supply chains assume an institutional role in achieving sustainable development goals. It should be noted that this is a long-term and complicated process that requires full involvement on the part of the supply chain enterprises co-creating value, and that must be brought down to the level of employees' understanding of this value within enterprises. The efforts undertaken in this model can result in a positive and multifaceted shift in supply chain practices that benefits society and consumers. This, in turn, translates into an improvement in the economic situation of an entire supply chain.

## 4 Discussion

Our contributions to the field from this study come from building on a foundation of information regarding sustainable supply chain strategies and best practices from a narrative literature review. We next discuss the answers and insights from our primary research questions. Our first question, RQ1, explores how firms respond to sustainability requirements by reviewing which supply chain practices appear in the literature. What we find is that firms take strategic actions to respond to sustainability requirements depending on their intents, available resources, and capabilities. When looking for primary categories of SSCM implementation, we find evidence that reactive, cooperative, and dynamic models best capture the implementation of SSCM and the broad array of specific actions firms have taken. Further insights come from exploring what we see happening within a global hub of supply chain activity, using a Taiwanese supply chain context to address RQ2, as we align field practices with the outcomes of the content analysis identified in the results section of this study. Based on the content analysis of 19 published studies, we can create a generalizable SSCM implementation model. Finally, we can provide more detailed information about SSCM implementation frameworks to help researchers and practitioners while addressing RQ3, which asks what constitutes a generalizable framework. It should be noted that although the models in this study have been developed on the basis of an analysis of enterprises operating in Taiwan, we believe they are generalizable for several reasons. First, in the exploration stage of this study, we conducted a search and review of the literature from mainstream journals, so the findings regarding strategies and practices uncover attributes of the most common practices, which are also international. Second, the basic implementation model derived from Taiwanese cases is consistent with the existing literature. Third, Taiwanese business development is global in its reach, with supply chain development that has been integrated with leading Western enterprises for over 30 years. Finally, the proposed models are flexible with respect to the specificity of an industry or area of operation.
Therefore, we believe organizations can successfully implement these practices within enterprises operating in different country contexts, such as Europe and the USA. Currently, in European countries, sustainable development is a source of innovation, especially organizational and technological innovation, which translates into increased profits and revenues. The perception of a supply chain as an environmentally friendly entity also contributes to a significant improvement of its image. Within the European Union (EU), more and more enterprises are and will be interested in implementing sustainable development practices [98]. For supply chains, this means taking actions toward innovation and transparency, which present new opportunities for development and for increasing competitiveness and attractiveness in the market. Striving to maintain ecological and social sustainability transforms the landscape of competition. For supply chain enterprises, this means changing how they think about products, technologies, procedures, processes, and business models [99]. The models developed in this study are a response to the needs of organizations operating globally, and throughout Europe, and can be applied to the integration of sustainability at three different levels. The reactive model is applicable to Eastern European countries, where infrastructural and technological development and research may be lagging. Supply chains operating under these conditions need a baseline understanding of their supply chain systems. A systems understanding can lead to better solutions, allowing companies to implement foundational practices for sustainable development in a relatively short time. Cooperative and dynamic models can find opportunities for advancement in Western European countries. In this global yet regional context, supply chains are flexible, adapt to new conditions, and can also carry out complex change management projects [16]. It is worth noting that Europeans increasingly understand that supply chains draw on the environment and on social and human capital in order to generate profit for owners and shareholders. This integrated understanding of sustainability and supply chains provides a path to creating jobs. In return, there are expectations that these supply chain systems will give much more back to society and the environment in which they function, that business will be run in a different, more balanced manner, and that this will eventually become a standard [100]. Meeting the expectations of the public, external stakeholders, and supply chains themselves in the areas outlined in this study will significantly support the development of the integrated bottom line (IBL), valuing environmental and social performance. North and South Americans can also find value in the models developed in this study. There are many opportunities for the application of reactive, cooperative, and dynamic models in the implementation of sustainability into supply chains. There is already a shift underway in the focus on environmental management and operations, moving from local optimization of environmental factors to consideration of the entire supply chain [53]. Sustainability is a timely topic that captures an increasing focus on performance from public interests, legislation, and competitive opportunity. For those analysts and stakeholders evaluating performance across industries, there are already hundreds of environmental, social, and governance (ESG) performance metrics.
See, for example, the MSCI Global Socrates, TruCost, CDP, and Bloomberg databases. These databases provide insight into publicly traded firms in the United States and information on their supply chains. Research involving corporate social responsibility reports shows how companies are already integrating environmental and social performance within internal operations and external supply chains, with institutional pressure as a leading driver of change [101]. Findings show different facets of environmental and social practices upstream and downstream in supply chains, and implementation frameworks can help with the further integration of sustainability into supply chains and business practices.

## 5 Conclusions

Sustainable supply chain strategies and practices have evolved. Where supply chains were a non-competitive, overlooked element of strategy before the 1970s, today they are a synergistic and dynamic part of corporate competitive advantage. For competitive Taiwanese firms, there are many challenges and hidden opportunities in recognizing and integrating SSCM. Suppliers are critical to the competitive success of firms and the success of implementation projects. The fact that future supplier performance is expected to continuously improve and to involve new attributes of sustainable performance adds to the complexity of the essential role of supply chain management professionals. These decision-makers will need to understand several foundational elements of SSCM. They must also have the skills to operationalize these elements through reactive, cooperative, and dynamic development of their organizations while working with their supply chain partners. Building on several premises, sustainability is a dynamic, systems approach to business management (Tables 3 and 4). Recognizing opportunities to integrate sustainability in an organization is an important part of management, as is understanding that the elements of SSCM, no matter the model applied, contribute to the creation of value and the success of larger systems, such as a supply chain, ecosystem services, and society. These foundational elements include the following: green purchasing, waste and water management, energy consumption and emission reduction, green manufacturing, product recovery, and reverse logistics. With growing awareness of environmental protection and dynamic business environments, firms can opt for more active strategies to realize the potential opportunities that sustainable business practices can provide. In this sense, our proposed SSCM implementation framework can be a useful guide for both researchers and practitioners. Limitations of this study include, but are not limited to, a qualitative approach to the identification of articles from a single country context, and content analysis and coding in an attempt to develop generalizable insights within one region of global supply chain management. Fortunately, a literature review provides a foundation for later model development and generalization. The scope of the study does not extend to the analysis of complex relationships between environmental, social, and financial performance, or to extensions of performance across other members of supply chains. We also do not test relationships to firm performance or internal social sustainability attributes, as we attempt to take a more integrated approach to the development of implementation models.
In the future, researchers can extend this study to a larger context and conduct empirical research based on the evolution of SSCM implementation. SSCM is an issue of managing perceptions, performance measurement, transparency, and trade-offs. Transforming the supply chain business model to include environmental and social sustainability performance is the corporate strategy of the future. Because many of a firm's impacts are likely to be in its supply chain, it makes sense to integrate the supply chain as early as possible by considering supply chains when making decisions about internal operations, new product development, and business model development. For researchers, understanding how SSCM implementation unfolds, and how leveraging SSCM provides impacts globally, offers opportunities for further research. To make SSCM a reality, technology providers, businesses, citizens, and policymakers need to collaborate to develop the right policies and infrastructure to drive environmental and social performance and economic growth, and to motivate sound business model changes that ensure the sustainability of manufacturers and our communities for future generations.

Author Contributions: Conceptualization, D.Z. and J.T.; methodology, J.T., D.Z. and R.S.; formal analysis, D.Z., J.T. and R.S.; data curation, D.Z. and J.T.; writing--original draft preparation, D.Z., J.T. and R.S.; writing--review and editing, D.Z., J.T. and R.S.; visualization, D.Z.; supervision, D.Z., J.T. and R.S.; project administration, D.Z.; funding acquisition, D.Z.

Funding: This research received no external funding.

Conflicts of Interest: The authors declare no conflict of interest.

## References

* Roy et al. (2018) Roy, V.; Schoenherr, T.; Charan, P. The thematic landscape of literature in sustainable supply chain management (SSCM): A review of the principal facets in SSCM development. _Int. J. Oper. Prod. Manag._**2018**, _38_, 1091-1124. [CrossRef]
* Elkington (2004) Elkington, J. Enter the triple bottom line. In _The Triple Bottom Line: Does It All Add Up?_; Henriques, A., Richardson, J., Eds.; Earthscan: London, UK, 2004.
* Eccles and Krzus (2014) Eccles, R.; Krzus, M. _The Integrated Reporting Movement: Meaning, Momentum, and Materiality_; Wiley: Hoboken, NJ, USA, 2014.
* Sroufe (2017) Sroufe, R. Integration and organizational change towards sustainability. _J. Clean. Prod._**2017**, _162_, 315-329. [CrossRef]
* Green et al. (2012) Green, K.; Zelbst, P.; Meacham, J.; Bhadauria, V. Green supply chain management practices: Impact on performance. _Supply Chain Manag._**2012**, _17_, 290-305. [CrossRef]
* Tuni et al. (2018) Tuni, A.; Rentizelas, A.; Duffy, A. Environmental performance measurement for green supply chains: A systematic analysis and review of quantitative methods. _Int. J. Phys. Distrib. Logist. Manag._**2018**, _48_, 765-793. [CrossRef]
* Carter and Rogers (2008) Carter, C.R.; Rogers, D.S. A framework of sustainable supply chain management: Moving toward new theory. _Int. J. Phys. Distrib. Logist. Manag._**2008**, _38_, 360-387. [CrossRef]
* Seuring and Muller (2008) Seuring, S.; Muller, M. From a literature review to a conceptual framework for sustainable supply chain management. _J. Clean. Prod._**2008**, _16_, 1699-1710. [CrossRef]
* Luthra et al. (2014) Luthra, S.; Garg, D.; Haleem, A. Green supply chain management: Implementation and performance--a literature review and some issues. _J. Adv. Manag. Res._**2014**, _11_, 20-46. [CrossRef]
* Costanza et al.
(2016) Costanza, R.; Daly, L.; Fioramonti, L.; Giovannini, E.; Kubiszewski, I.; Mortensen, L.F.; Wilkinson, R. Modelling and measuring sustainable wellbeing in connection with the UN Sustainable Development Goals. _Ecol. Econ._**2016**, _130_, 350-355. [CrossRef]
* Bahinipati and Panigrahi (2018) Bahinipati, B.K.; Panigrahi, S.S. A framework for sustainable supply chains: Evaluation of implementation barriers. _Int. J. Intell. Enterp._**2018**, _5_, 231-265. [CrossRef]
* Ansari and Kant (2017) Ansari, Z.N.; Kant, R. Exploring the Framework Development Status for Sustainability in Supply Chain Management: A Systematic Literature Synthesis and Future Research Directions. _Bus. Strategy Environ._**2017**, _26_, 873-892. [CrossRef]
* Qorri et al. (2018) Qorri, A.; Mujkic, Z.; Kraslawski, A. A conceptual framework for measuring sustainability performance of supply chains. _J. Clean. Prod._**2018**, _189_, 570-584. [CrossRef]
* Dubey et al. (2017) Dubey, R.; Gunasekaran, A.; Papadopoulos, T.; Childe, S.J.; Shibin, K.T.; Wamba, S.F. Sustainable supply chain management: Framework and further research directions. _J. Clean. Prod._**2017**, _142_, 1119-1130. [CrossRef]
* Seman et al. (2012) Seman, N.A.A.; Zakuan, N.; Jusoh, A.; Arif, M.S.M.; Saman, M.Z.M. Green supply chain management: A review and research direction. _Int. J. Manag. Value Supply Chain_ **2012**, _3_, 1-18. [CrossRef]
* Luthra and Mangla (2018) Luthra, S.; Mangla, S.K. When strategies matter: Adoption of sustainable supply chain management practices in an emerging economy's context. _Resour. Conserv. Recycl._**2018**, _138_, 194-206. [CrossRef]
* Chen and Kitsis (2017) Chen, I.J.; Kitsis, A.M. A research framework of sustainable supply chain management: The role of relational capabilities in driving performance. _Int. J. Logist. Manag._**2017**, _28_, 1454-1478. [CrossRef]
* Gosling et al. (2017) Gosling, J.; Jia, F.; Gong, Y.; Brown, S. The role of supply chain leadership in the learning of sustainable practice: Toward an integrated framework. _J. Clean. Prod._**2017**, _140_, 239-250. [CrossRef]
* Eisenhardt (1989) Eisenhardt, K.M. Building theories from case study research. _Acad. Manag. Rev._**1989**, _14_, 532-550. [CrossRef]
* Miles and Huberman (1984) Miles, M.B.; Huberman, A.M. _Qualitative Data Analysis: A Sourcebook of New Methods_; Sage Publications: Thousand Oaks, CA, USA, 1984.
* Mayring (2015) Mayring, P. Qualitative content analysis: Theoretical background and procedures. In _Approaches to Qualitative Research in Mathematics Education_; Bikner-Ahsbahs, A., Knipping, C., Presmeg, N., Eds.; Springer: New York, NY, USA, 2015.
* Sroufe et al. (2000) Sroufe, R.; Curkovic, S.; Montabon, F.; Melnyk, S.A. The new product design process and design for environment: "Crossing the chasm". _Int. J. Oper. Prod. Manag._**2000**, _20_, 267-291. [CrossRef]
* Gold et al. (2010) Gold, S.; Seuring, S.; Beske, P. Sustainable supply chain management and inter-organizational resources: A literature review. _Corp. Soc. Resp. Env. Manag._**2010**, _17_, 230-245. [CrossRef]
* Hart (2018) Hart, C. _Doing a Literature Review: Releasing the Research Imagination_; Sage: London, UK, 2018.
* Cooper (1998) Cooper, H.M. _Synthesizing Research: A Guide for Literature Reviews_; Sage: Thousand Oaks, CA, USA, 1998; Volume 2.
* Seuring (2008) Seuring, S.A. Assessing the rigor of case study research in supply chain management. _Supply Chain Manag._**2008**, _13_, 128-137. [CrossRef]
* Yin (2014) Yin, R.K.
_Case Study Research and Applications: Design and Methods_; Sage Publications: Thousand Oaks, CA, USA, 2014.
* Azevedo et al. (2011) Azevedo, S.G.; Carvalho, H.; Machado, V.C. The influence of green practices on supply chain performance: A case study approach. _Transp. Res. Part E Logist. Transp. Rev._**2011**, _47_, 850-871. [CrossRef]
* Pagell and Wu (2009) Pagell, M.; Wu, Z. Building a more complete theory of sustainable supply chain management using case studies of 10 exemplars. _J. Supply Chain Manag._**2009**, _45_, 37-56. [CrossRef]
* Tundys and Wisniewski (2018) Tundys, B.; Wisniewski, T. The Selected Method and Tools for Performance Measurement in the Green Supply Chain--Survey Analysis in Poland. _Sustainability_**2018**, _10_, 549. [CrossRef]
* Fonseca et al. (2018) Fonseca, L.M.; Domingues, J.P.; Pereira, M.T.; Martins, F.F.; Zimon, D. Assessment of Circular Economy within Portuguese Organizations. _Sustainability_**2018**, _10_, 2521. [CrossRef]
* Sroufe and Melnyk (2017) Sroufe, R.; Melnyk, S. _Developing Sustainable Supply Chains to Drive Value, Management Issues, Insights, Concepts, and Tools--Foundations_; Business Expert Press: New York, NY, USA, 2017; Volume I.
* Markman and Krause (2016) Markman, G.D.; Krause, D. Theory building surrounding sustainable supply chain management: Assessing what we know, exploring where to go. _J. Supply Chain Manag._**2016**, _52_, 3-10. [CrossRef]
* Rebs et al. (2019) Rebs, T.; Brandenburg, M.; Seuring, S. System dynamics modeling for sustainable supply chain management: A literature review and systems thinking approach. _J. Clean. Prod._**2019**, _208_, 1265-1280. [CrossRef]
* Walker and Jones (2012) Walker, H.; Jones, N. Sustainable supply chain management across the UK private sector. _Supply Chain Manag._**2012**, _17_, 15-28. [CrossRef]
* Teece et al. (1997) Teece, D.J.; Pisano, G.; Shuen, A. Dynamic capabilities and strategic management. _Strat. Manag. J._**1997**, _18_, 509-533. [CrossRef]
* Beske (2012) Beske, P. Dynamic capabilities and sustainable supply chain management. _Int. J. Phys. Distrib. Logist. Manag._**2012**, _42_, 372-387. [CrossRef]
* Markley and Davis (2007) Markley, M.J.; Davis, L. Exploring future competitive advantage through sustainable supply chains. _Int. J. Phys. Distrib. Logist. Manag._**2007**, _37_, 763-774. [CrossRef]
* Brandenburg and Rebs (2015) Brandenburg, M.; Rebs, T. Sustainable supply chain management: A modeling perspective. _Ann. Oper. Res._**2015**, _229_, 213-252. [CrossRef]
* Reefke et al. (2014) Reefke, H.; Ahmed, M.D.; Sundaram, D. Sustainable supply chain management--Decision making and support: The SSCM maturity model and system. _Glob. Bus. Rev._**2014**, _15_, 1S-125. [CrossRef]
* Wu and Pagell (2011) Wu, Z.; Pagell, M. Balancing priorities: Decision-making in sustainable supply chain management. _J. Oper. Manag._**2011**, _29_, 577-590. [CrossRef]
* (42) Nilsen, P. Making sense of implementation theories, models and frameworks. _Impl. Sci._**2015**, _10_, 53-60. [CrossRef] [PubMed]
* (43) Chiou, C.-Y.; Wang, P.-Y.; Chen, H.-C.; Yeh, C.-Y. Green Supplier Selection and Assessment in GSCM Using Analytic Hierarchy Process (AHP) for Information and Electronic Industry. _J. E-Bus._**2007**, _9_, 147-176.
* (44) Chiou, T.-Y.; Chan, H.K.; Lettice, F.; Chung, S.H. The influence of greening the suppliers and green innovation on environmental performance and competitive advantage in Taiwan. _Transp. Res. Part E Logist. Transp._**2011**, _47_, 822-836.
[CrossRef]
* (45) Vachon, S.; Klassen, R.D. Extending green practices across the supply chain: The impact of upstream and downstream integration. _Int. J. Oper. Prod. Manag._**2006**, _26_, 795-821. [CrossRef]
* (46) Arimura, T.H.; Darnall, N.; Katayama, H. Is ISO 14001 a gateway to more advanced voluntary action? The case of green supply chain management. _J. Envir. Econ. Manag._**2011**, _61_, 170-182. [CrossRef]
* (47) Ageron, B.; Gunasekaran, A.; Spalanzani, A. Sustainable supply management: An empirical study. _Int. J. Prod. Econ._**2012**, _140_, 168-182. [CrossRef]
* (48) Arntzen, B.C.; Brown, G.G.; Harrison, T.P.; Trafton, L.L. Global supply chain management at Digital Equipment Corporation. _Interfaces_**1995**, _25_, 69-93. [CrossRef]
* (49) Tyan, J.C.; Wang, F.K.; Du, T.C. An evaluation of freight consolidation policies in global third party logistics. _Omega_**2003**, _31_, 55-62. [CrossRef]
* (50) Chien, M.-K.; Shih, L.-H. Practices of Green Supply Chain Management: Pressures, Drives and Organization's Performance with Taiwanese Electrics and Electronics Industry. _Npust Hum. Soc. Sci. Res._**2007**, _1_, 72-98.
* (51) Fang, S.-C.; Lin, S.-L. Green Supply Chain Management as Competitive Advantage: A Perspective of Intellectual Capital. _Chaoyang Bus. Manag. Rev._**2007**, _6_, 79-102.
* (52) Shah, R.; Ward, P.T. Lean manufacturing: Context, practice bundles, and performance. _J. Oper. Manag._**2003**, _21_, 129-149. [CrossRef]
* (53) Linton, J.D.; Klassen, R.; Jayaraman, V. Sustainable supply chains: An introduction. _J. Oper. Manag._**2007**, _25_, 1075-1082. [CrossRef]
* (54) Chen, S.-S.; Chen, P.-Y.; Yu, M.; Chen, Y.-C. An empirical study on drivers affecting GSCM practices from the institutional theory. _Commer. Manag. Quart._**2014**, _15_, 247-278.
* (55) Liu, Y.-C. The Role of Internal Audit Department in Green Supply Chain: The Perspective of Stakeholder Theory. _Int. Audit. Quart._**2015**, _88_, 62-65.
* (56) Chen, Y.-H.; Lin, G.-M. Taiwan Stock Exchange Corporation Rules Governing the Preparation and Filing of Corporate Social Responsibility Reports by TWSE Listed Companies. _Secur. Serv. Rev._**2015**, _633_, 115-116.
* (57) Chen, L.M.; Han, F.N. The Construction of Assessment Indicators for Ecolabelling of Printing. _J. Cagst_**2012**, _34_, 16-40.
* (58) Chen, Y.-S. The driver of green innovation and green image--green core competence. _J. Bus. Eth._**2008**, _81_, 531-543. [CrossRef]
* (59) Hsu, C.H.; Liang, G.S. Constructing Performance Evaluation Model of Suppliers' Purchase in Green Supply Chain. _Marit. Quart._**2011**, _20_, 71-84.
* (60) Lee, C.W.; Yu, H.M. The Influence of Lean Manufacture and Green Supply Chain on Product Design. _J. Adv. Eng._**2015**, _10_, 163-168.
* (61) Liao, P.W.; Lian, S.B. The study discussion on green environmental issues: A case of after-sales maintenance services company in Taiwan Sony. _East-Asia Rev._**2010**, _467_, 15-31.
* (62) Lin, K.Y.; Wen, Y.F.; Lin, S.C. Evaluation of Green Supply Chain Management of PC component suppliers in Taiwan by Applying AHP Method. _J. Innov. Res. Dev._**2011**, _7_, 126-137.
* (63) Lo, S.M.S. How Supplier Management Practices Affect Exchange Cost when Going Green: An Empirical Study. _Manag. Rev._**2016**, _35_, 71-92.
* (64) Rau, H.; Lai, C.H.; Fang, Y.T.; Hsu, H.Y.; Kuo, T.C.; Shiang, W.J. Optimal Multi-stage Green Suppliers Selection with Consideration of Product Life Cycle. _J. Adv. Eng._**2013**, _8_, 151-159.
* (65) Tsai, S.; Yen, S.Y.
From the Perspectives of Policy and Industry to Analyze the EU Environmental Directives WEEE. _Bio-Ind. Tech. Manag. Rev._**2013**, _4_, 51-96.
* (66) ... the moderating role of information technology. _Commer. Manag. Quart._**2011**, _12_, 23-51.
* (67) Han, F.N.; Wang, S.M.; Tzeng, S.Y. Constructing the Essential Items of Green Supply Chain in Taiwan's Printing Industrials. _J. Cagst_**2010**, _12_, 122-147.
* (68) Lin, S.S. Realization of circular economy using green technology integration. _Econ. Outlook Biom._**2016**, _168_, 115-119.
* (69) Wang, Y.-F.; Yu, T.-A. Can We Implement Green Management in Restaurants? _J. Hosp. Tour._**2014**, _11_, 219-242.
* Kalmykova et al. (2018) Kalmykova, Y.; Sadagopan, M.; Rosado, L. Circular economy--From review of theories and practices to development of implementation tools. _Resour. Conserv. Recycl._**2018**, _135_, 190-201. [CrossRef]
* Petljak et al. (2018) Petljak, K.; Zulauf, K.; Stulec, I.; Seuring, S.; Wagner, R. Green supply chain management in food retailing: Survey-based evidence in Croatia. _Supply Chain Manag._**2018**, _23_, 1-15. [CrossRef]
* Beske-Janssen et al. (2015) Beske-Janssen, P.; Johnson, M.P.; Schaltegger, S. 20 years of performance measurement in sustainable supply chain management--what has been achieved? _Supply Chain Manag._**2015**, _20_, 664-680. [CrossRef]
* Lu et al. (2018) Lu, H.E.; Potter, A.; Sanchez Rodrigues, V.; Walker, H. Exploring sustainable supply chain management: A social network perspective. _Supply Chain Manag._**2018**, _23_, 257-277. [CrossRef]
* Zimon and Domingues (2018) Zimon, D.; Domingues, P. Proposal of a Concept for Improving the Sustainable Management of Supply Chains in the Textile Industry. _Fibers Text. East. Eur._**2018**, _128_, 8-12. [CrossRef]
* Yun et al. (2018) Yun, G.; Yalcin, M.G.; Hales, D.N.; Kwon, H.Y. Interactions in sustainable supply chain management: A framework review. _Int. J. Logist. Manag._**2018**, _30_, 1-35. [CrossRef]
* De Vries et al. (2012) De Vries, H.J.; Bayramoglu, D.K.; van der Wiele, T. Business and environmental impact of ISO 14001. _Int. J. Qual. Reliab. Manag._**2012**, _29_, 425-435. [CrossRef]
* Fonseca and Domingues (2018) Fonseca, L.M.; Domingues, J.P. Exploratory Research of ISO 14001:2015 Transition among Portuguese Organizations. _Sustainability_**2018**, _10_, 781. [CrossRef]
* Zimon (2017) Zimon, D. The Impact of Implementation of the Requirements of the ISO 14001 Standard for Creating Sustainable Supply Chains. _Qual. Access Success_**2017**, _18_, 99-101.
* Maletic et al. (2015) Maletic, M.; Podpecan, M.; Maletic, D. ISO 14001 in a corporate sustainability context: A multiple case study approach. _Manag. Envir. Qual. Int. J._**2015**, _26_, 872-890. [CrossRef]
* Chiarini (2012) Chiarini, A. Designing an environmental sustainable supply chain through ISO 14001 standard. _Manag. Environ. Qual. Int. J._**2012**, _24_, 16-33. [CrossRef]
* De Sousa Jabbour et al. (2014) De Sousa Jabbour, A.B.L.; Jabbour, C.J.C.; Latan, H.; Teixeira, A.A.; de Oliveira, J.H.C. Quality management, environmental management maturity, green supply chain practices and green performance of Brazilian companies with ISO 14001 certification: Direct and indirect effects. _Transp. Res. Part E Logist. Transp. Rev._**2014**, _67_, 39-51. [CrossRef]
* Wu et al. (2017) Wu, J.Z.; Santoso, C.H.; Roan, J. Key factors for truly sustainable supply chain management: An investigation of the coal industry in Indonesia. _Int. J. Logist.
Manag._**2017**, _28_, 1196-1217. [CrossRef]
* Pesce et al. (2018) Pesce, M.; Shi, C.; Critto, A.; Wang, X.; Marcomini, A. SWOT Analysis of the Application of International Standard ISO 14001 in the Chinese Context: A Case Study of Guangdong Province. _Sustainability_**2018**, _10_, 3196. [CrossRef]
* Mastrogiacomo et al. (2016) Mastrogiacomo, L.; Barravecchia, F.; Franceschini, F. Service recycling and ecosystems: An intriguing similarity. _Inter. J. Qual. Ser. Sci._**2016**, _8_, 555-562. [CrossRef]
* Meng et al. (2017) Meng, K.; Lou, P.; Peng, X.; Prybutok, V. Quality-driven recovery decisions for used components in reverse logistics. _Int. J. Prod. Res._**2017**, _55_, 4712-4728. [CrossRef]
* Yu and Huo (2018) Yu, Y.; Huo, B. Supply chain quality integration: Relational antecedents and operational consequences. _Supply Chain Manag._**2018**, _23_, 188-206. [CrossRef]
* Gualandris and Kalchschmidt (2014) Gualandris, J.; Kalchschmidt, M. Customer pressure and innovativeness: Their role in sustainable supply chain management. _J. Purch. Supply Manag._**2014**, _20_, 92-103. [CrossRef]
* Sinha and Anand (2018) Sinha, A.K.; Anand, A. Development of sustainable supplier selection index for new product development using multi criteria decision making. _J. Clean. Prod._**2018**, _197_, 1587-1596. [CrossRef]
* Miemczyk et al. (2012) Miemczyk, J.; Johnsen, T.E.; Macquet, M. Sustainable purchasing and supply management: A structured literature review of definitions and measures at the dyad, chain and network levels. _Supply Chain Manag._**2012**, _17_, 478-496. [CrossRef]
* Sroufe (2018) Sroufe, R. _Integrated Management: How Sustainability Creates Value for Any Business_; Emerald Publishing Limited: Bingley, UK, 2018.
* Wang and Dai (2018) Wang, J.; Dai, J. Sustainable supply chain management practices and performance. _Ind. Manag. Data Syst._**2018**, _118_, 2-21. [CrossRef]
* Wichmann et al. (2016) Wichmann, B.K.; Carter, C.R.; Kaufmann, L.; Wilson, J.R. Making environmental SCM initiatives work--Moving beyond the dyad to gain affective commitment. _J. Supply Chain Manag._**2016**, _52_, 21-40. [CrossRef]
* Garcia-Arca et al. (2014) Garcia-Arca, J.; Prado-Prado, J.C.; Gonzalez-Portela Garrido, A.T. "Packaging logistics": Promoting sustainable efficiency in supply chains. _Int. J. Phys. Distrib. Logist. Manag._**2014**, _44_, 325-346. [CrossRef]
* De Angelis et al. (2018) De Angelis, R.; Howard, M.; Miemczyk, J. Supply chain management and the circular economy: Towards the circular supply chain. _Prod. Plan. Contr._**2018**, _29_, 425-437. [CrossRef]
* Geissdoerfer et al. (2018) Geissdoerfer, M.; Morioka, S.N.; de Carvalho, M.M.; Evans, S. Business models and supply chains for the circular economy. _J. Clean. Prod._**2018**, _190_, 712-721. [CrossRef]
* Schulz and Flanigan (2016) Schulz, S.A.; Flanigan, R.L. Developing competitive advantage using the triple bottom line: A conceptual framework. _J. Bus. Ind. Mark._**2016**, _31_, 449-458. [CrossRef]
* Dubey et al. (2017) Dubey, R.; Gunasekaran, A.; Childe, S.J.; Papadopoulos, T.; Fosso Wamba, S. World class sustainable supply chain management: Critical review and further research directions. _Int. J. Logist. Manag._**2017**, _28_, 332-362. [CrossRef]
* Leszczynska (2018) Leszczynska, A. Sustainable supply chain capabilities--factors stimulating the processes and organisational performance. _Int. J. Sustain. Econ._**2018**, _10_, 263-282.
* Darkow et al.
(2015) Darkow, I.L.; Foerster, B.; von der Gracht, H.A. Sustainability in food service supply chains: Future expectations from European industry experts toward the environmental perspective. _Supply Chain Manag._**2015**, _20_, 163-178. [CrossRef]
* De Brito et al. (2008) De Brito, M.P.; Carbone, V.; Blanquart, C.M. Towards a sustainable fashion retail supply chain in Europe: Organisation and performance. _Int. J. Prod. Econ._**2008**, _114_, 534-553. [CrossRef]
* Tate et al. (2010) Tate, W.; Ellram, L.; Kirchoff, J. Corporate social responsibility reports: A thematic analysis related to supply chain management. _J. Supply Chain Manag._**2010**, _46_, 19-44. [CrossRef]
The purpose of this research is to propose a Sustainable Supply Chain Management (SSCM) implementation framework grounded in a literature review while categorizing practices adopted by firms and industries. Given the evolution of the SSCM field and emerging trends, we examine why and how companies implement SSCM practices within a country context. The research methods employed in this study include theory building from a review of the literature and synthesis of insights regarding the design of SSCM implementation frameworks using multiple cases in Taiwan. The review of the literature, content analysis, and findings provide new insights into designing an implementation model, and generalizable models for reactive, cooperative, and dynamic SSCM implementation. Practical implications include, but are not limited to, the generalization of implementation frameworks in supply chain management and opportunities to improve global practices. Our development of the conceptual framework complements existing theory by offering new knowledge on SSCM implementation practices. This study can help guide researchers, practitioners, and policymakers in future sustainability and supply chain management initiatives.

Keywords: implementation framework; sustainability; sustainable supply chain management; ethical leadership
**Holding the Cosmos in Your Hand: Developing 3D Modeling and Printing Pipelines for Communications and Research**

**Kimberly K. Arcand\\({}^{1}\\), Sara R. Price\\({}^{1}\\), Megan Watzke\\({}^{1}\\)**

\\({}^{1}\\)Chandra X-ray Observatory, Center for Astrophysics | Harvard & Smithsonian, Cambridge, MA, United States

\\({}^{*}\\)**Correspondence:** Sara R. Price [email protected]

**Keywords: 3D printing, 3D visualization, virtual reality, astrophysics, geophysics, science communication, inclusivity**

## 1 Introduction

This paper summarizes recent achievements made in astronomy, astrophysics, and space science using three-dimensional (3D) technology to help reach beyond expert audiences, and discusses best practices that can be shared with other sciences such as geophysics. This type of 3D work also allows for new media -- both virtual and in tangible physical form -- to help non-experts interact with and learn about these discoveries. We contend that the lessons learned from these space-based 3D projects could have broader impacts throughout science, technology, engineering, and mathematics (STEM) fields and provide educational benefits to a great number of people. Parallel advances in acquiring 3D data from Earth and space and in 3D printing mean that today technologists can translate digital data into something that we can touch and feel, expanding the ways we can understand and communicate science, including with diverse communities such as the blind or visually impaired (BVI) (Trotta et al., 2020). Currently, the number of astronomical and planetary geological objects that can be 3D printed is low compared to the vast range of objects being studied (Arcand et al., 2017). One practical challenge is making 3D modeling easier and more accessible as a research tool. This situation can be improved with the advent of enhanced data from the next generation of more technically advanced telescopes and sensors (Gerloni et al., 2018). Creating awareness among scientific researchers of 3D data representation is also essential to increasing this technique's frequency and convenience of use. This article describes recent innovative 3D processes and explores the developing pipelines and contributions of 3D visualization outputs in multimodal scientific study and communications. 3D printing, for example, is a potential place of interaction between terrestrial and space science through representations of discipline-specific and interdisciplinary data alike, aimed at both non-experts and experts who can use specific accessible outputs and products. Indeed, best practices for 3D printing of scientific data can and should be shared across traditionally separate disciplines. Astronomical and other scientific information can be shared through various human senses, including touch, smell, and even taste [14]. The use of, and meaning-making from, scientific visualizations and related expressions of data "depend strongly on who is viewing" [15, para. 4] or interpreting them. Consequently, accessibility and diversity are a critical area of emphasis for this topic, and we summarize current evidence of, and future needs for, the impact of 3D visualization on enhancing equity via modeling and printing. Our analysis briefly encompasses extended reality (XR) -- a suite of multisensory applications that typically involves augmented, virtual, and mixed reality formats [13] -- for diverse communities of users.
The COVID-19 pandemic has shown the potential for 3D modeling and printing - from making personal protective equipment such as respirators¹ and ventilators (e.g., [11]) to understanding the structure and impact of the coronavirus through 3D printing [12] and virtual reality (VR) (e.g., [13]). Although geophysics and astrophysics data typically cover very different phenomena, they can benefit from multidimensional data delivery to diverse audiences during times of disruption and beyond.

Footnote 1: https://medeng.jpl.nasa.gov/covid-19/respiators/, accessed 26 June 2020

## 2 3D Visualizations & Related Outputs

Learning about the mental manipulation of 3D images is an important step in the process of science education in both astronomy [15] and geology [14]. 3D modeling skills can form the basis of a framework for teaching astronomy and related topics in a manner that enables non-experts to use and understand the same visual and conceptual assets as professional astronomers [15]. Multidimensional models are also useful as teaching tools in geophysics to share the textures and geological arrangement of surfaces, including sedimentary basins, with students and the public [14]. Simultaneously, the ability to examine celestial or geophysical objects from multiple angles and viewpoints can help improve understanding of how scientific objects are structured or how the underlying physics works [16], making such models a valuable tool.

Once a 3D visualization has been created, there are multiple output possibilities, from interactive 3D digital models or physical 3D prints to immersive XR experiences [14]. 3D printing and other outputs involving touch can help encourage more "control" [17] and interactivity on behalf of users [15] than computerized 3D visualization usually allows. 3D printing is a process where a material such as plastic, metal, or an organic substance is added layer by layer to build up an object [12]. Recently, "on-demand processes" of 3D printing - typically characterized as fused deposition modeling - have become increasingly accessible and affordable for consumers [13], including those in libraries and related educational and community settings. This opens the opportunity to produce scientific or educational tools in these learning environments that may have been unachievable before [13]. On consumer scales, 3D printing is still rather new, but the production possibilities are far-reaching: applications in science range from plans for an on-demand and sustainable lunar base 3D printed from Moon dust [11] to medical 3D printing of skin cells to help treat burn victims [12].

The prominence of VR as an entertainment product in recent years has made such technologies more widely available for scientific communications and research in astronomy as well [1]. Due to the extensive time and technical effort often needed to develop VR applications for astronomical or geological objects, modifiable generic VR templates could be helpful for the scientific community and could help spark greater adoption for observational exploration [1]. VR is another 3D tool that can potentially be used by scientists for visualizing the immense amount of "big data" making up modern astronomy, in particular for data mining, detection, removal, and quick-look rendering [13, 14].
Interactive VR is also able to take advantage of human depth perception as an ergonomic factor that enables quick identification and characterization of complex structures [22]. Augmented reality (AR), which overlays digital elements onto a user's perception of the surrounding world, and VR can be important support tools in the processing of massive and complex scientific data sets. However, for any XR product, individual differences in the way people experience multimodal information should be taken into account [15].

### 2.1 Importance of Audience in Scientific Visualization: Inclusivity and Diversity

Astronomy and astrophysics [16] and geology and geophysics [17] are often recognized as highly visual fields, both historically and today. This can create challenges in sharing data from these disciplines with BVI audiences. Evaluative studies have shown benefits in using 3D astrophysical models [18, 19, 20, 21] for generating or positively impacting learning gains, inclusive practices, STEM identity, and mental visualization. Applications of 3D models in geology and geophysics have been demonstrated for use in museums and similar informal learning environments [15] and by other educators [16]. In the past several years, research has shown that 3D printed scientific data from astrophysics and geophysics with tactile features can help communicate with BVI participants across a spectrum of abilities [1] as well as with sighted people [17, 18], and can promote inclusivity more broadly [19, 20]. XR technologies in particular have been broadly shown to provide low-risk, high-impact virtual spaces that can accommodate physical barriers to interaction by providing learner-specific experiences [19], especially when crafted through universal design techniques [18, 19].

Human and interpersonal issues must be considered in order to enable meaningful use of 3D prints and dynamic virtual spaces for science communication with BVI and other audiences. Audience awareness, and the resulting development of visualizations with specific audience needs in mind, is critical for the development of inclusive practices in 3D visualization, facilitated by 3D printing and other emerging technologies. Under legal frameworks such as the Americans with Disabilities Act (ADA National Network, 2019) and the United Nations Convention on the Rights of Persons with Disabilities (2006), there are firm legal requirements to provide and maintain access to information, communication, and participatory opportunities for people with disabilities. Differently-abled populations need to be able to discover and share information in a way directly equivalent to how others complete such tasks (United Nations -- Disability, Department of Economic and Social Affairs, 2006). People with visual impairments, for example, have a specific spectrum of needs that can be affected by variables such as the timing and cause of the onset of their blindness (National Federation of the Blind & Finkelstein, 1994) or the amount of Braille literacy instruction received (National Federation of the Blind Jernigan Institute, 2009). Therefore, any effort should take into account how to effectively disseminate 3D prints and related Braille or tactile materials. Incurred costs can be problematic for organizations and individuals (Arcand et al., 2019; Beck-Winchatz & Riccobono, 2008; Weferling, 2007) as well as intended audiences, including BVI communities. These issues are particularly relevant during times of disruption and upheaval, as illustrated during the COVID-19 pandemic (McEvoy, 2020).
## 3 Key Examples of 3D Visualization in Planetary Geophysics and Astrophysics

Planetary geology and geophysics are particularly well suited areas of study for the development of 3D visualization outputs, including 3D models, 3D prints, and XR. 3D visualizations of the Earth can clarify geological phenomena for the public (Kyriakopoulos, 2019), while also supplementing instruction (Koelemeijer et al., 2019) and making it more accessible (Dolphin et al., 2019). In exoplanet research, studies on the habitability of exoplanet bodies like the TRAPPIST-1 system² (Jet Propulsion Laboratory, 2020; NASA, 2019) and synthetic exoplanet systems (Alesina et al., 2019) have led to headset-based VR experiences for expert and non-expert audiences. 3D models have been useful in the study of asteroids (Kim, 2018) and aspects of other planetary bodies such as the atmospheres of Venus (Korycansky et al., 2002) and Titan (Charnay et al., 2014), as well as landscape evolution (Cornet et al., 2017). A wealth of data on our nearest neighbors, the Moon and Mars, has resulted in extensive 3D mapping of their local topographies and geological structures (Edwards et al., 2005; Ellison, 2014; Gwinner et al., 2015; Lowe & Klump, 2013; Mars Exploration Rovers, 2014) and morphometric globes (Florinsky & Filippov, 2017), with 3D visualization outputs produced primarily for scientific analysis.

Footnote 2: http://www.spitzer.caltech.edu/vr, accessed 17 June 2020

In astronomy and astrophysics, data- and simulation-driven 3D printed objects include multiple object types across a range of scales. In high-energy astrophysics, supernova remnants have been particularly conducive to 3D modeling, encompassing such examples as Cassiopeia A (Arcand et al., 2017; Arcand et al., 2018; Arcand et al., 2019; DeLaney et al., 2010; Orlando et al., 2016), Tycho's supernova remnant (Chandra X-ray Observatory, 2019; Ferrand et al., 2019), Supernova 1987A (Arcand et al., 2017; Arcand et al., 2019; Orlando et al., 2018), and the Crab Nebula (Arcand et al., 2020; Summers et al., 2020). Beyond supernovae, other areas of 3D research and output range from star formation regions such as M16 (McLeod et al., 2015), a massive star system with colliding winds called Eta Carinae (Arcand et al., 2017; Madura et al., 2015; Madura, 2017; Steffen et al., 2014), the double star system V745 Sco (Arcand et al., 2019; Chandra X-ray Observatory, 2017), and protostellar jets like DG Tau (Chandra X-ray Observatory, 2020a; Orlando et al., 2019; Ustamujic et al., 2016; Ustamujic et al., 2018), to much larger structures including the South Pole Wall (Pomarede et al., 2020), the Cosmic Web (Diemer & Facio, 2017), and the Cosmic Microwave Background (Arcand et al., 2017; Clements et al., 2016).

Multidimensional renderings of astronomical and geological data in accessible formats can simplify the discovery of previously hidden or overlooked structures in objects and, through the presence of interactive features, can enable close-up views of data via a personalized perspective (Madura, 2017). This variety of technologies that have proven useful for accessible public engagement has helped to clarify the context of current observational science data (Arcand et al., 2017; Madura et al., 2015).
The catalog of data-driven astrophysical or geological models continues to grow (Hurt et al., 2019) and can be used in communicating contemporary science with non-expert audiences (Lowe & Klump, 2013). Additionally, artistic imaginings with scientific underpinnings (Pauwels, 2020), in contrast to data-driven or simulation-based 3D models, can tangibly provide perceptual value in communicating information that can be abstract, esoteric, or perhaps invisible to the human or robotic eye (Keefe et al., 2005), particularly in this area of multidimensional visualization.

One important factor for the accessibility of 3D models, particularly with non-experts, is the availability of open access or Creative Commons data. The National Aeronautics and Space Administration (NASA)³ and the Smithsonian Institution (Pearlman, 2020), for example, each maintain open access or public domain databases of 3D objects, ranging from supernova remnants to geological maps of the Apollo lunar landing sites. Open access and Creative Commons materials can reduce complexities or legalities regarding the adaptation of content to customized learning experiences for multiple audiences (Zhang et al., 2020). Flexible, customizable materials derived from openly accessible sources can potentially improve experiences for all learners, particularly those with special access requirements (Zhang et al., 2020).

Footnote 3: https://nasa3d.arc.nasa.gov/models, accessed 16 June 2020

## 4 Discussion

### 4.1 3D Visualization Pipelines

Once a 3D visualization has been created, there are myriad options for how to output that data into a usable product specific to an intended audience. This underlines the importance of understanding and establishing pipelines in creating 3D data sets, whether ultimately working with file types from .vtk to .obj to .stl to .unity3d or beyond, for scientific research or for scientific engagement; a small worked example of one such conversion step follows below. Narrative information should be provided to interpret context and scale.
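To make the idea of a file-format pipeline concrete, here is a minimal sketch (ours, not from the projects cited above) of one conversion step using the open-source Python library trimesh; the file name `nebula.obj` is a hypothetical placeholder for a mesh exported from a visualization tool.

```python
# Minimal sketch of one pipeline step: converting a mesh from .obj to .stl,
# a format commonly accepted by 3D-printing slicers. Assumes the open-source
# `trimesh` package is installed; "nebula.obj" is a hypothetical placeholder.
import trimesh

mesh = trimesh.load("nebula.obj", force="mesh")  # load the source geometry as a single mesh
print(mesh.is_watertight)                        # printable models should be watertight
mesh.export("nebula.stl")                        # write out a 3D-printable STL file
```

A check such as `is_watertight` matters in practice because most slicing software rejects meshes with holes or non-manifold edges before printing.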
### 4.2 XR Technology Examples Related to the Data Pipeline

Preliminary astrophysical VR applications of simulated worlds (such as Farr et al., 2009) through recent astronomical VR experiences as individual applications (e.g., Arcand et al., 2018; Chandra X-ray Observatory, 2020b; Ferrand & Warren, 2018; Russell et al., 2017) include artistically illustrated worlds, simulated data mapped to astronomical observations, and three-dimensional models derived from scientific observations. Users can comb Martian surfaces based on observational data from NASA's Jet Propulsion Laboratory (2017), travel to the TRAPPIST-1 exoplanets via scientifically influenced 3D artists' impressions that were converted into VR applications (NASA, 2019), and explore the solar surface based on observational data from the Hinode spacecraft (Hinode Science Center at NAOJ, 2018). Explorers equipped with emerging technologies can also virtually walk inside the remains of an exploded star (Arcand et al., 2018) or experience a spiral galaxy like NGC 3198 via radio data cubes (Ferrand et al., 2016). Computational models and simulations constrained by scientific observations also provide numerous options for VR application development, including simulation- and time-domain-based applications of Sagittarius A*, the supermassive black hole at our Galactic Center (Davelaar et al., 2018; Russell, 2017).

Expert analysis of scientific data in XR spaces can provide active and immersive simulations of astrophysical topics like photometry of the billions of stars in our Milky Way (Ramirez et al., 2019). At even greater scales, cosmological topics such as dark matter can be rendered in VR with particular attention to accessibility, for example haptic (vibrational) cues that work together with wheelchair use (Aviles, 2018). Current research into, and evaluation of, such immersive projects that take advantage of human perception in 3D spaces is investigating potential outcomes in enabling better detection and manipulation of large or complex data sets on behalf of the scientist (Ferrand et al., 2016). Additionally, in geology and geophysics there are projects spanning topographical maps (Woods et al., 2016), geological surveys (Westhead et al., 2013), and other ways to provide 3D data in a digestible way to "enable non-expert users to begin to interact with a complex science in an expert way which was not possible for previous generations" (Westhead et al., 2013, p. 189; see, e.g., Gerloni et al., 2018; Mathiesen et al., 2012; Trexler et al., 2018).

Data sonification is another example of a potential extension or output of 3D data sets. Sonification can help improve upon analysis of big data through vision alone by taking advantage of the unique capacities of sound to provide information from different dimensions simultaneously, with quick interaction or playback (Cooke et al., 2017). This strategy can work with XR in order to speedily evaluate features of 3D data (Ribeiro et al., 2012). Diaz-Merced (2013) demonstrated that listening function can be improved through targeted interventions as applied to such data sonification outputs. Beyond data sonification there are additional 3D visualization outputs and enhancements that can be considered to reach specific audiences, from holograms (Royal Astronomical Society, 2019) and haptic information (Isenberg, 2013; Trotta et al., 2020) to tactile tablets (Touch Graphics, Inc., 2015) and multisensory experiences (Najjar, 2020).

## 5 Conclusion

At this time, the COVID-19 pandemic continues to present monumental challenges that reach nearly every aspect of life. The pandemic does not spare the fields of science research, visualization, or engagement, but instead presents particular obstacles - and opportunities - to these areas. Some of the recent difficulties related to the use of 3D printed materials include reliance on tactile surfaces while combating a virus that can spread through surface contact (Centers for Disease Control and Prevention, 2020) and a reduction in tangible resources presently available to populations who need enhanced accessibility, such as BVI audiences (McEvoy, 2020). However, due to social distancing there is now a greater demand than ever before in modern times for remote learning and work. 3D astrophysical and geophysical models, prints, and virtual spaces could serve scientists, educators, learners, and others who are unable to spend their usual time in physical settings. We propose partnering with groups who both advocate for and engage with affected audiences so the science and engagement communities can best serve the needs of their audiences.

## Author Contributions

KA, as the principal investigator, provided the main points, literature references, and scientific topics, as well as technical writing for the article.
SP, as second author and research assistant, formatted the outline and citations, organized the literature review, and provided summative text on specific subsections. MW, as third author, provided detailed editing and drafting, gave input on overall organization, and wrote summary texts as needed.

## Funding

This paper was written with funding from NASA under contract NAS8-03060, with the authors working for the Chandra X-ray Observatory. NASA's Marshall Space Flight Center manages the Chandra program. The Smithsonian Astrophysical Observatory's Chandra X-ray Center controls science and flight operations from Cambridge and Burlington, Massachusetts.

## Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

## Acknowledgements

The authors gratefully acknowledge Lisa Frattare for her stellar copy editing skills. The authors also acknowledge the Chandra communications and public engagement group members, NASA's Universe of Learning, and the many scientists, technologists, and researchers who have contributed to the ever-growing library of 3D printed objects of our Universe.

## References

ADA National Network. (2019). _What is the Americans with Disabilities Act (ADA)?_ https://adata.org/learn-about-ada

Alesina, F., Cabot, F., Buchschacher, N., & Burnier, J. (2018, October 11-15). _Exoplanets data visualization in multi-dimensional plots using virtual reality in DACE_ [Paper presentation]. Astronomical Data Analysis Software and Systems XXVII, College Park, MD, United States. http://aspbooks.org/publications/523/025.pdf

Arcand, K., Edmonds, P., & Watzke, M. (2020, April). _X-ray Universe: Make a pulsar: Crab Nebula in 3D_. Chandra X-ray Center. https://chandra.cfa.harvard.edu/deadstar/crab.html

Arcand, K. K., Jiang, E., Price, S., Watzke, M., Sgouros, T., & Edmonds, P. (2018). Walking through an exploded star: Rendering supernova remnant Cassiopeia A into virtual reality. _Communicating Astronomy with the Public Journal, 1_(24), 17-24. https://www.capjournal.org/issues/24/24_17.php

Arcand, K. K., Jubett, A., Watzke, M., Price, S., Williamson, K. T. S., & Edmonds, P. (2019). Touching the stars: Improving NASA 3D printed data sets with blind and visually impaired audiences. _Journal of Science Communication, 18_(4), Article A01. https://doi.org/10.22323/2.18040201

Arcand, K., Watzke, M., DePasquale, J., Jubett, A., Edmonds, P., & DiVona, K. (2017). Bringing cosmic objects down to Earth: An overview of 3D modeling and printing in astronomy. _Communicating Astronomy with the Public Journal, 1_(22), 14-20. https://www.capjournal.org/issues/22/22_14.pdf

Argudo-Fernandez, M., Bonne, N., Krawczyk, C., Gupta, J., Rodriguez Quiroz, A., Longa-Pena, P., Colque-Saavedra, J. P., Boquien, M., Unda-Sanzana, E., Ortiz-Gil, A., Perez-Henao, A., Couto, G., & Martins, A. (2020). Highlights in the implementation of the AstroBVI project to increase quality education and reduce inequality in Latin America. _Communicating Astronomy with the Public Journal, 1_(27), 35-38. https://www.capjournal.org/issues/27/27_35.pdf

Aviles, R.
B. (2018). _Perception Ultra: Using virtual reality technology to visualize astronomical data_ [Undergraduate honors thesis, University of California, Riverside]. California Digital Library. [https://escholarship.org/content/qt5hh7h2s0/qt5hh7h2s0.pdf](https://escholarship.org/content/qt5hh7h2s0/qt5hh7h2s0.pdf) Baracaglia, E. & Vogt, F. P. A. (2020). E0102-VR: Exploring the scientific potential of Virtual Reality for observational astrophysics. _Astronomy & Computing_, 30, Article 100352. [https://doi.org/10.1016/j.ascom.2019.100352](https://doi.org/10.1016/j.ascom.2019.100352) Beck-Winchatz, B. & Riccobono, M.A. (2008). Advancing participation of blind students in science, technology, engineering, and math. _Advances in Space Research_, 42(11), 1855-1858. [https://doi.org/10.1016/i.asr.2007.05.080](https://doi.org/10.1016/i.asr.2007.05.080) Bonne, N. J., Gupta, J. A., Krawczyk, C. M., & Masters, K. (2018). Tactile Universe makes outreach feel good. _Astronomy & Geophysics_, 59(1), 1.30-1.33. [https://doi.org/10.1093/astrogeo/atv028](https://doi.org/10.1093/astrogeo/atv028) Brown, C. (2017). 3D printing set to revolutionize medicine. _Canadian Medical Association Journal, 189_(29), E973-E974. [https://doi.org/10.1503/cmai.1095442](https://doi.org/10.1503/cmai.1095442) Carneiro, C. D. R., Santos, K. M. dos, Lopes, T. R., Santos, F. C. dos, Silva, J. V. L. da, & Harris, A. L. N. de C. (2018). Three-dimensional physical models of sedimentary basins as a resource for teaching-learning of geology. _Terrace Didatica, 14_(4), 379-384. [https://doi.org/10.20396/td.v14i4.8654098](https://doi.org/10.20396/td.v14i4.8654098) Centers for Disease Control and Prevention (2020, April 7). _Coronavirus Disease 2019 (COVID-19): People with disabilities_. U.S. Department of Health and Human Services. [https://www.cdc.gov/coronavirus/2019-ncov/need-extra-precautions/people-with-disabilities.html](https://www.cdc.gov/coronavirus/2019-ncov/need-extra-precautions/people-with-disabilities.html) Chandra X-ray Observatory (2017, September 18). _V745 Sco: Two stars, three dimensions, and oodles of energy_ [Press release]. [https://chandra.si.edu/photo/2017/v745/index.html](https://chandra.si.edu/photo/2017/v745/index.html) Chandra X-ray Observatory (2019, October 17). _The clumpy and lumpy death of a star_ [Press release]. [https://chandra.si.edu/photo/2019/tvcho/](https://chandra.si.edu/photo/2019/tvcho/) Chandra X-ray Observatory. (2020a, January 29). _Stellar explosions and jets showcased in new three dimensional visualizations_ [Press release]. [https://chandra.harvard.edu/photo/2020/3dmodels/](https://chandra.harvard.edu/photo/2020/3dmodels/) Chandra X-ray Observatory. (2020b, June 2). _A new Galactic Center adventure in virtual reality_ [Press release]. [https://chandra.harvard.edu/photo/2020/qcenter/](https://chandra.harvard.edu/photo/2020/qcenter/)Chandrashekar, S. (2018, May 17). _GAAD: How virtual reality can transform the way people with disabilities learn_. Desire2Learn. [https://www.d2l.com/corporate/blog/gaad-virtual-reality-people-disabilities-learn/](https://www.d2l.com/corporate/blog/gaad-virtual-reality-people-disabilities-learn/) * Charnay et al. (2014) Charnay, B., Forget, F., Tobie, G., Sotin, C., & Wordsworth, R. (2014). Titan's past and future: 3D modeling of a pure nitrogen atmosphere and geological implications. _Icarus_, _241_, 269-279. [https://doi.org/10.1016/j.icarus.2014.07.009](https://doi.org/10.1016/j.icarus.2014.07.009) * Christian et al. (2015) Christian, C. A., Nota, A. 
Greenfield, P., Grice, N. & Shaheen, N. (2015). You can touch these! Creating 3d tactile representations of Hubble Space Telescope images. _Journal and Review of Astronomy Education and Outreach_, \\(3\\), 282. * Clements et al. (2016) Clements, D. L., Sato, S., & Fonseca, A. P. (2016). Cosmic sculpture: a new way to visualise the cosmic microwave background. _European Journal of Physics_, _38_(1), Article 015601. [https://doi.org/10.1088/0143-0807/38/115601](https://doi.org/10.1088/0143-0807/38/115601) * Comparato et al. (2007) Comparato, M., Beccani, U., Costa, A., Larsson, B., Garilli, B., Gheller, C., & Taylor, J. (2007). Visualization, exploration, and data analysis of complex astrophysical data. _Publications of the Astronomical Society of the Pacific_, _119_, 898-913. [https://iopscience.iop.org/article/10.1086/5213](https://iopscience.iop.org/article/10.1086/5213) * Cooke et al. (2017) Cooke, J., Diaz-Merced, W., Foran, G., Hannam, J., & Garcia, B. (2017). Exploring data sonification to enable, enhance, and accelerate the analysis of big, noisy, and multi-dimensional data: Workshop 9. _Proceedings of the International Astronomical Union_, _14_, 251-256. [https://doi.org/10.1017/S1743921318002703](https://doi.org/10.1017/S1743921318002703) * Cornet et al. (2017) Cornet, T., Fleurant, C., Seignovert, B., Cordier, D., Bourgeois, O., Le Mouelic, S., Rodriguez, S., & Lucas, A. (2017, April 23-28). _Dissolution on Saturn's moon Titan: a 3D karst landscape evolution model_ [Conference presentation abstract]. European Geosciences Union (EGU) General Assembly 2017, Vienna, Austria. [https://meetingoranizer.copernicus.org/EGU2017/EGU2017-12475.pdf](https://meetingoranizer.copernicus.org/EGU2017/EGU2017-12475.pdf) * Davelaar et al. (2018) Davelaar, J., Bronzwaer, T., Kok., D., Younsi, Z., Moscibrodzka, M., Falcke, H. (2018). Observing supermassive black holes in virtual reality. _Computational Astrophysics and Cosmology_, 5, Article 1. [https://doi.org/10.1186/s40668-018-0023-7](https://doi.org/10.1186/s40668-018-0023-7) * De Kestelier et al. (2015) De Kestelier, X., Dini, E., Cesareti, G., Colla, V., & Pambaguian, L. (2015). _The design of a lunar outpost_. Foster + Partners. [https://www.fosterandpartners.com/media/2634652/lunar](https://www.fosterandpartners.com/media/2634652/lunar) * DeLaney et al. (2010) DeLaney, T., Rudnick, L., Stage, M. D., Smith, J. D., Isensee, K., Rho, J., Allen, G. E., Gomez, H., Kozasa, T., & Houck, J. C. (2010). The three-dimensional structure of Cassiopeia A. _The Astrophysical Journal_, _725_(2), 2038-2058. [https://doi.org/10.1088/0004-637X/725/2/2038](https://doi.org/10.1088/0004-637X/725/2/2038) * Diaz-Merced (2014) Diaz-Merced, W. (2014, September 22). Making astronomy accessible for the visually impaired. _Voices_. [https://blogs.scientificamerican.com/voices/making-astronomy-accessible-for-the-visually-impaired/](https://blogs.scientificamerican.com/voices/making-astronomy-accessible-for-the-visually-impaired/) * Diaz-Merced et al. (2011) Diaz-Merced, W. L., Candey, R. M., Brickhouse, N., & Schneps, M. (2011). Sonification of astronomical data. _Proceedings of the International Astronomical Union_, \\(7\\), 133-136. [https://doi.org/10.1017/S1743921312000440](https://doi.org/10.1017/S1743921312000440)Diaz-Merced, W. L. (2013). _Sound for the exploration of space physics data_ [Doctoral dissertation, University of Glasgow]. Enlighten. _[http://theses.gla.ac.uk/5804/_](http://theses.gla.ac.uk/5804/_) Diemer, B. & Facio, I. (2017). 
The fabric of the Universe: Exploring the cosmic web in 3D prints and woven textiles. _Publications of the Astronomical Society of the Pacific_, 129, Article 058013. [https://doi.org/10.1088/1538-3873/aa6a46](https://doi.org/10.1088/1538-3873/aa6a46) Dolphin, G., Dutchak, A., Karchewski, B., & Cooper, J. (2019). Virtual field experiences in introductory geology: Addressing a capacity problem, but finding a pedagogical one. _Journal of Geoscience Education_, 67(2), 114-130. [https://doi.org/10.1080/10899995.2018.1547034](https://doi.org/10.1080/10899995.2018.1547034) Donalek, C., Djorgovski, S. G., Cioc, A., Wang, A., Zhang, J., Lawler, E., Yeh, S., Mahabal, A., Graham, A., & Drake, A. (2014). _Immersive and collaborative data visualization using virtual reality platforms_ [Preprint]. arXiv. [https://arxiv.org/ftp/arxiv/papers/1410/1410.7670.pdf](https://arxiv.org/ftp/arxiv/papers/1410/1410.7670.pdf) Edwards, L., Sims, M., Kunz, C., Lees, D., & Bowman, J. (2005, 12 Oct.). _Photo-realistic terrain modeling and visualization for Mars Exploration Rover science operations_ [Paper presentation]. 2005 IEEE International Conference on Systems, Man, and Cybernetics, Waikoloa, HI, USA. [https://doi.org/10.1109/ICSMC.2005.1571341](https://doi.org/10.1109/ICSMC.2005.1571341) Ellison, D. (2014, June 26). _Gale Crater_. NASA 3D Resources. [https://nasa3d.arc.nasa.gov/detail/qale-crater](https://nasa3d.arc.nasa.gov/detail/qale-crater) Eriksson, U. (2019). Disciplinary discernment: Reading the sky in astronomy education. _Physical Review Physics Education Research_, 15, Article 010133. [https://doi.org/10.1103/PhysRevPhysEducRes.15.010133](https://doi.org/10.1103/PhysRevPhysEducRes.15.010133) Eriksson, U., Linder, C., Airey, J., & Redfors, A. (2014). Who needs 3D when the Universe is flat? _Science Education_, 98(3), 412-442. [https://doi.org/10.1002/sce.21109](https://doi.org/10.1002/sce.21109) European Southern Observatory. (2019, September 13). _A dark tour of the Universe_ [Press release]. [https://www.eso.org/public/announcements/ann19045/](https://www.eso.org/public/announcements/ann19045/) Everett-Green, R. (2013, January 20). A 3-D machine that prints skin? How burn care could be revolutionized. _The Globe and Mail. [https://www.thedobeandmail.com/life/health-and-fitness](https://www.thedobeandmail.com/life/health-and-fitness) health/a-3-d-machine-that-prints-skin-how-burn-care-could-be- revolutionized/article7540819/_ Farr, W., Hut, P., Ames, J., & Johnson, A. (2009). An experiment in using virtual worlds for scientific visualization of self-gravitating systems. _Journal of Virtual Worlds Research_, 2(3). [https://www.learntechlip.org/p/177669/](https://www.learntechlip.org/p/177669/) Ferrand, G., English, J., & Irani, P. (2016, June 1). _3D visualization of astronomy data cubes using immersive displays_ [Paper presentation]. Canadian Astronomical Society Conference (CASCA), Winnipeg, Manitoba, Canada. [http://hci.cs.umanitoba.ca/assets/publication](http://hci.cs.umanitoba.ca/assets/publication) files/Gilles.pdf Ferrand, G., & Warren, D. (2018). Engaging the public with supernova and supernova remnant research using virtual reality. _Communicating Astronomy with the Public Journal_, 1(24), 25-31. [https://www.capjournal.org/issues/24/24](https://www.capjournal.org/issues/24/24) 25.php Ferrand, G., Warren, D. C., Ono, M., Nagataki, S., Ropke, F. K., & Seitenzahl, I. R. (2019). From supernova to supernova remnant: The three-dimensional imprint of a thermonuclear explosion. 
_The Astrophysical Journal_, 877(2). [https://doi.org/10.3847/1538-4357/ab1a3d](https://doi.org/10.3847/1538-4357/ab1a3d)Florinsky, I. V., & Filippov, S. V. (2017). A desktop system of virtual morphometric globes for Mars and the Moon. _Planetary and Space Science_, 137, 32-39. [https://doi.org/10.1016/j.pss.2017.01.005](https://doi.org/10.1016/j.pss.2017.01.005) Gerloni, I. G., Carchiolo, V., Vitello, F. R., Sciacca, E., Becciani, U., Costa, A., Riggi, S., Bonali, F. L., Russo, E., Fallati, L., Marchese, F., & Tibaldi, A. (2018, Sept. 9-12). _Immersive virtual reality for earth sciences_ [Paper presentation]. 2018 Federated Conference on Computer Science and Information Systems (FedCSIS), Poznan, Poland. [https://doi.org/10.15439/2018F139](https://doi.org/10.15439/2018F139) Grice, N., Christian, C., Nota, A., & Greenfield, P. (2015). 3D printing technology: A unique way of Making Hubble Space Telescope images accessible to non-visual learners. _JBIR: Journal of Blindness Innovation and Research_, 5(1). Retrieved from [https://nfb.org/imaages/](https://nfb.org/imaages/) nfb/publications/ibir/ibir15/ibir050101.html Grove, E. (2020, March 24). _3D anatomic modeling lab prints model of virus that causes COVID-19_ [Press release]. Mayo Clinic. [https://newsnetwork.mayoclinic.org/discussion/3d-anatomic-modeling-lab-prints-model-of-virus-that-causes-covid-19/](https://newsnetwork.mayoclinic.org/discussion/3d-anatomic-modeling-lab-prints-model-of-virus-that-causes-covid-19/) Gwinner, K., Hauber, E., Jaumann, R., Michael, G., Hoffman, H., & Heipke, C. (2015, April 12-17). _Global topography of Mars from High Resolution Stereo Camera (HRSC) multi-orbit data products: The first quadrangle (MC-11E) and the landing site areas of ExoMars_. [Conference presentation abstract]. European Geosciences Union (EGU) General Assembly 2015, Vienna, Austria. [https://meetinaporanizer.copernicus.org/EGU2015/EGU2015-13158.pdf](https://meetinaporanizer.copernicus.org/EGU2015/EGU2015-13158.pdf) Hasiuk, F. J., Harding, C., Renner, A. R., & Winer, E. (2017). TouchTerrain: A simple web-tool for creating 3D-printable topographic models. _Computers & Geosciences_, _109_, 25-31. [https://doi.org/10.1016/j.cageo.2017.07.005](https://doi.org/10.1016/j.cageo.2017.07.005) Hinode Science Center at NAOJ. (2018, July 9). _VR app \"Excursion to the Sun\" has been released!_ [Press release]. [https://hinode.nao.ac.jp/en/news/notice/vr-app-excursion-to-the-sun-has-been-released/](https://hinode.nao.ac.jp/en/news/notice/vr-app-excursion-to-the-sun-has-been-released/) Hurt, R., Wyatt, R., Subbarao, M., Arcand, K., Faherty, J. K., Lee, J., & Lawton, B. (2019). _Making the case for visualization_ [White paper]. arXiv. [https://arxiv.org/pdf/1907.10181.pdf](https://arxiv.org/pdf/1907.10181.pdf) Isenberg, T. (2013). Position paper: Touch interaction in scientific visualization. In P. Isenberg, S. Carpendale, T., Hesselman, T., Isenberg, T., & B. Lee (Eds.), _Proceedings of the Workshop on Data Exploration on Interactive Surfaces_ (pp. 24 -27). HAL. [https://hal.inria.fr/file/index/docid/781512/filename/Isenberg](https://hal.inria.fr/file/index/docid/781512/filename/Isenberg) 2011 TIS.pdf Jet Propulsion Laboratory (2020, January 24). _Operate a Great Observatory with new VR experience_ [Press release]. 
[http://www.spitzer.caltech.edu/news/2222-ssc2020-05-Operate-a-NASA-Great-Observatory-With-New-VR-Experience](http://www.spitzer.caltech.edu/news/2222-ssc2020-05-Operate-a-NASA-Great-Observatory-With-New-VR-Experience) Jet Propulsion Laboratory (2017, October 19). _Take a walk on Mars - in your own living room_ [Press release]. [https://www.nasa.gov/feature/jpl/take-a-walk-on-mars-in-your-own-living-room](https://www.nasa.gov/feature/jpl/take-a-walk-on-mars-in-your-own-living-room) Keefe, D. F., Karelitz, D. B., Vote, E. L., & Laidlaw, D. H. (2005). Artistic collaboration in designing VR visualizations. _IEEE Computer Graphics and Applications_, _25_(2), 18-23. [https://doi.org/10.1109/MCG.2005.34](https://doi.org/10.1109/MCG.2005.34)Kim, R. (2018, November 5). _Asteroid Vesta_. NASA 3D Resources. [https://nasa3d.arc.nasa.gov/detail/asteroid-vesta](https://nasa3d.arc.nasa.gov/detail/asteroid-vesta) * Koelemeijer et al. (2019) Koelemeijer, P., Winterbourne, J., Toussaint, R., & Zaroli, C. (2019, December 9-13). _3D printing the world: Developing geophysical teaching materials and outreach packages_ [Poster presentation]. AGU 2019 Fall Meeting, San Francisco, CA, United States. [https://doi.org/10.1002/essoar.10501627.1](https://doi.org/10.1002/essoar.10501627.1) * Korycansky et al. (2002) Korycansky, D. G., Zahnle, K. J., & Mac Low, M. -M. (2002). High-resolution simulations of asteroids into the Venusian atmosphere II: 3D Models. _Icarus_, _157(1)_, 1-23. [https://doi.org/10.1006/icar.2002.6795](https://doi.org/10.1006/icar.2002.6795) * Kyriakopoulos (2019) Kyriakopoulos, C. (2019). 3D printing: A remedy to common misconceptions about earthquakes. _Seismological Research Letters_, _90_(4): 1689-1691. [https://doi.org/10.1785/0220190121](https://doi.org/10.1785/0220190121) * an emerging field for science communication_ [Paper presentation]. 8th International Symposium on Archaeological Mining History, Reichelsheim Odenwald, Hesse, Germany. [https://www.researchgate.net/profile/Peter_Loewe/publication/237079004_3D_Printouts_of_geo_loical_structures_land_surfaces_and_human](https://www.researchgate.net/profile/Peter_Loewe/publication/237079004_3D_Printouts_of_geo_loical_structures_land_surfaces_and_human) interaction - an emerging field in science communication/links/00b7d51b5c26fc35fe000000/3D-Printouts-of-geolocal-structures-land-surfaces-and-human-interaction-an-emeraging-field-in-science-communication.pdf * Madura et al. (2015) Madura, T. I., Clementel, N., Gull, T. R., Kruip, C. J. H., & Paardekooper, J. -P. (2015). 3D printing meets computational astrophysics: deciphering the structure of n Carinae's inner colliding winds. _Monthly Notices of the Royal Astronomical Society_, _449_(4), 3780-3794. [https://doi.org/10.1093/mnras/stv422](https://doi.org/10.1093/mnras/stv422) * Madura (2017) Madura, T. I. (2017). A case study in astronomical 3-D printing: The mysterious \\(\\eta\\) Carinae. _Publications of the Astronomical Society of the Public_, _129_(975), Article 058011. [https://doi.org/10.1088/1538-3873/129/975/058011](https://doi.org/10.1088/1538-3873/129/975/058011) * Mars Exploration Rovers (2014) Mars Exploration Rovers (2014, April 17). _Replicating a rock on Mars_ [Press release]. NASA. [https://mars.nasa.gov/mer/newsroom/pressreleases/20140407a.html](https://mars.nasa.gov/mer/newsroom/pressreleases/20140407a.html) * Mathiesen et al. (2012) Mathiesen, D., Myers, T., Atkinson, I., Trevathan, J. (2012, Sept. 26-28). 
_Geological visualisation with augmented reality_ [Paper presentation]. 15th International Conference on Network-Based Information Systems, Melbourne, Victoria, Australia. [https://doi.org/10.1109/NBiS.2012.199](https://doi.org/10.1109/NBiS.2012.199) * McEvoy (2020) McEvoy, J. (2020, May 24). _With a Braille printing press in his garage, this Sonoma Teacher goes the extra mile_. KQED. [https://www.kqed.org/news/11818669/with-a-braille-printing-press-in-his-garage-this-sonoma-teacher-goes-the-extra-mile](https://www.kqed.org/news/11818669/with-a-braille-printing-press-in-his-garage-this-sonoma-teacher-goes-the-extra-mile) * McLeod et al. (2015) McLeod, A.F., Dale, J. E., Ginsburg, A., Ercolano, B., Gritschneder, M., Ramsay, S., & Testi, L. (2015). The Pillars of Creation revisited with MUSE: gas kinematics and high-mass stellar feedback traced by optical spectroscopy. _Monthly Notices of the Royal Astronomical Society_, _450_(1), 1057-1076. [https://doi.org/10.1093/mnras/stv680](https://doi.org/10.1093/mnras/stv680) * McMahon & Walker (2019) McMahon, D. D., & Walker, Z. (2019). Leveraging emerging technology to design an inclusive future with universal design for learning. _Center for Educational Policy Studies Journal_, _9_(3), 75-93. [https://doi.org/10.26529/cepsi.639](https://doi.org/10.26529/cepsi.639)Menke, K., Beckmann, J., & Weber, P. (2020). 39. Universal Design for Learning in augmented and virtual reality trainings. In S. L. Gronseck & E. M. Dalton (Eds.), _Universal access through inclusive instructional design_ (pp. 294-304). Routledge. [https://books.google.com/books?hl=en&lr=&id=EdeuDwAAQBAJ&oi=fnd&pq=PA294&dq=9.+Universal+Design+for+Learning+in+augmented+and+virtual+reality+trainings&ots=U6-kP9qAqR&siq=r8QNGpuOSUfzZJIP](https://books.google.com/books?hl=en&lr=&id=EdeuDwAAQBAJ&oi=fnd&pq=PA294&dq=9.+Universal+Design+for+Learning+in+augmented+and+virtual+reality+trainings&ots=U6-kP9qAqR&siq=r8QNGpuOSUfzZJIP) BlxuclV7O4#v=onepage&q=9.%20Universal%20Design%20for%20Learning%20in%20audimented%20and%20virtual%20reality%20trainings&f=false Molitch-Hou, M. (2020, May 5). _3D printing and COVID-19, May 5, 2020 update_. 3DPrint.com. [https://3dprint.com/266910/3d-printing-and-covid-19-may-5-2020-update/](https://3dprint.com/266910/3d-printing-and-covid-19-may-5-2020-update/) NASA (2019, June 21). _'NASA selfies' and TRAPPIST-1 VR apps now available_ [Press release]. [https://www.nasa.gov/feature/ipl/nasa-selfies-and-trapist-1-vr-apps-now-available](https://www.nasa.gov/feature/ipl/nasa-selfies-and-trapist-1-vr-apps-now-available) National Federation of the Blind & Finkelstein, D. (1994). _Common eye conditions and causes of blindness in the United States_. National Federation of the Blind. [https://www.nfb.org/imaages/nfb/publications/books/ifblind/ifbInd10.htm](https://www.nfb.org/imaages/nfb/publications/books/ifblind/ifbInd10.htm) National Federation of the Blind Jernigan Institute. (2009). _The Braille literacy crisis in America: Facing the truth, reversing the trend, empowering the blind_. National Federation of the Blind. [https://theblindquide.com/wp-content/uploads/2018/06/Braille-Literacy-Crisis-in-America-NFB-29-Mar-18.pdf](https://theblindquide.com/wp-content/uploads/2018/06/Braille-Literacy-Crisis-in-America-NFB-29-Mar-18.pdf) Neitzke Adamo, L., Criscione, J., Irizarry, P., Pagenkopf, L., & Hayden, D. (2019, December 9-13). 
_Utilizing photogrammetry and 3D printers to create inclusive natural history tours and activities for the visually impaired at the Rutgers Geology Museum_ [Conference presentation abstract]. American Geophysical Union, Fall Meeting 2019, San Francisco, CA, United States. [https://ui.adsabs.harvard.edu/abs/2019AGUFMED33A..02N/abstract](https://ui.adsabs.harvard.edu/abs/2019AGUFMED33A..02N/abstract) Najjar, R. (2020, February 15). _Extended Reality (XR) explained through the 5+1 senses_. Medium. [https://uxdesign.cc/xr-through-5-1-senses-f396acf8a9f](https://uxdesign.cc/xr-through-5-1-senses-f396acf8a9f) Olshannikova, E., Ometov, A., Koucheryavy, Y., & Olsson, T. (2015). Visualizing big data with augmented and virtual reality: challenges and research agenda. _Journal of Big Data, 2_. [https://doi.org/10.1186/s40537-015-0031-2](https://doi.org/10.1186/s40537-015-0031-2) Orlando, S., Miceli, M., Petruk, O., Ono, M., Nagataki, S., Aloy, M. A., Mimica, P., Lee, S. -H., Bocchino, F., Peres, G., Guarrasi, M. (2018). 3D MHD Modeling of the expanding remnant of SN 1987A. Role of magnetic field and non-thermal radio emission. _Astronomy & Astrophysics_, 622, Article A73. [https://doi.org/10.1051/0004-6361/201834487](https://doi.org/10.1051/0004-6361/201834487) Orlando, S., Miceli, M., Pumo, M. L., & Bocchino, F. (2016). Modeling SNR Cassiopeia A from the supernova explosion to its current age: The role of post-explosion anisotropies of ejecta. _The Astrophysical Journal_, _822_(1), _[https://doi.org/10.3847/0004-637X/822/1/22](https://doi.org/10.3847/0004-637X/822/1/22) Orlando, S., Pillitteri, I., Bocchino, F., Danicello, L., & Leonardi, L. (2019). 3DMAP-VR, a project to visualize 3-dimensional models of astrophysical phenomena in virtual reality. _Research Notes of the AAS, 3_(11). [https://iopscience.iop.org/article/10.3847/2515-5172/ab5966/meta](https://iopscience.iop.org/article/10.3847/2515-5172/ab5966/meta) Pauwels, L. (2020). On the nature and role of visual representations in knowledge production and science communication. In Lefsmollman, A., Dascal, M., & Gloning, T. (Eds.), _Science Communication: Book 17_. Handbooks of Communication Science [HoCS] (pp. 235-256). [https://books.google.com/books?hl=en&l=&id=firEbwAAQBAJ&oi=fnd&pq=PA235&ots=git1AERmVI4&sig=TXU5tw7fuRKcrUlkvSRJUJO6Lqz8#v=onepaae&q&f=falsePearlman](https://books.google.com/books?hl=en&l=&id=firEbwAAQBAJ&oi=fnd&pq=PA235&ots=git1AERmVI4&sig=TXU5tw7fuRKcrUlkvSRJUJO6Lqz8#v=onepaae&q&f=falsePearlman), R. Z. (2020, February 28). _Smithsonian Open Access launches with space artifact 2D and 3D images_. Space.com. _[https://www.space.com/smithsonian-open-access-space-artifacts.htmlPearson](https://www.space.com/smithsonian-open-access-space-artifacts.htmlPearson), D. (2020, April 2). _Virtual reality shows COVID-19 lungs in vivid detail_. Al in Healthcare. _[https://www.ain.healthcare/topics/diagnostic-imaqing/virtual-reality-covid-19-lungsPomarede](https://www.ain.healthcare/topics/diagnostic-imaqing/virtual-reality-covid-19-lungsPomarede), D., Tully, R. B., Graziani, R., Courtois, H. M., Hoffman, Y., & Lezmy, J. (2020). _Cosmicflows-3_: The South Pole Wall. _The Astrophysical Journal_, _897_(2), [https://doi.org/10.3847/1538-4357/ab9952Pound](https://doi.org/10.3847/1538-4357/ab9952Pound), K. S. (2019, December 9-13). _'Seeing' geology: Development of learning materials for visually impaired students_ [Conference presentation abstract]. American Geophysical Union, Fall Meeting 2019, San Francisco, CA. 
[https://ui.adsabs.harvard.edu/abs/2019AGUFMIN21B..16P/abstractRamirez](https://ui.adsabs.harvard.edu/abs/2019AGUFMIN21B..16P/abstractRamirez), E., Nunez, J. G., Hernandez, J., Salgado, J., Mora, A., Lammers, U., Merin, B., Baines, D., de Marchi, G., & Arviset, C. (2019). Analysis of astronomical data using VR: The Gaia Catalog in 3D. In P. J. Teuben, M. W. Pound, B. A. Thomas, & E. M. Warner (Eds.), _Astronomical Data Analysis Software and Systems XXVIII_. _ASP Conference Series_ (Vol. 523, pp. 21-24). Astronomical Society of the Pacific. _[http://aspbooks.org/publications/523/021.pdf_](http://aspbooks.org/publications/523/021.pdf_) Rector, T., Arcand, K., & Watzke, M. (2015). _Coloring the Universe_ (1st ed.). University of Alaska Press. Ribeiro, F., Florencio, D., Chou, P. A., & Zhang, Z. (2012, 17 -19 Sept). _Auditory augmented reality: Object sonification for the visually impaired_ [Paper presentation]. 2012 IEEE 14th International Workshop on Multimedia Signal Processing (MMSP), Banff, Alberta, Canada. _[https://doi.org/10.1109/MMSP.2012.6343462_](https://doi.org/10.1109/MMSP.2012.6343462_) Royal Astronomical Society (2019, July 1). _3-D holograms bringing astronomy to life_. Phys.org. [https://phys.org/news/2019-07-d-holograms-astronomy-life.htmlRussell](https://phys.org/news/2019-07-d-holograms-astronomy-life.htmlRussell), C. M. P. (2017). _360-degree videos: A new visualization technique for astrophysical simulations_ [Preprint]. arXiv. _[https://arxiv.org/pdf/1707.06954.pdf_](https://arxiv.org/pdf/1707.06954.pdf_) Russell, C. M. P., Wang, Q. D., & Cuadra, J. (2017). Modelling the thermal X-ray emission around the Galactic Centre from colliding Wolf-Rayet winds. _Monthly Notices of the Royal Astronomical Society_, _464_(4), 4958-4965. _[https://doi.org/10.1093/mnras/stw2584_](https://doi.org/10.1093/mnras/stw2584_) Smithsonian Institution. (2014, June 2). _Smithsonian Directive 215: Accessibility for people with disabilities_. _[https://airandspace.si.edu/rfp/exhibitions/files/i3-directive-215.pdf_](https://airandspace.si.edu/rfp/exhibitions/files/i3-directive-215.pdf_) Steffen, W. (2016, June 27). _Why 3-D is essential in astronomy outreach_. 3D Astrophysics. [https://3dastrophysics.wordpress.com/2016/06/27/whv-3-d-is-essential-in-astronomy-outreach/Steffen](https://3dastrophysics.wordpress.com/2016/06/27/whv-3-d-is-essential-in-astronomy-outreach/Steffen), W., Koning, N., Wenger, S., Morisset, C., & Magnor, M. (2011). Shape: A 3D modeling tool for astrophysics. _IEEE Transactions on Visualization and Computer Graphics_, _17_(4), 454-465. _[https://doi.org/10.1109/TVCG.2010.62_](https://doi.org/10.1109/TVCG.2010.62_)Steffen, W., Teodoro, M., Madura, T. I., Groh, J. H., Gull, T. R., Mehner, A., Corcoran, M. F., Daminelli, A., & Hamaguchi, K. (2014). The three-dimensional structure of the Eta Carinae Homunculus. _Monthly Notices of the Royal Astronomical Society_, _442(4)_, 3316-3328. [https://doi.org/10.1093/mnras/stu1088](https://doi.org/10.1093/mnras/stu1088) * A 3D multiwavelength visualization of the Crab Nebula_ [Conference presentation abstract]. 235th Meeting of the American Astronomical Society, Honolulu, HI, United States. [https://aas.org/sites/default/files/2020-01/AAS235-Meeting-Abstracts.pdf](https://aas.org/sites/default/files/2020-01/AAS235-Meeting-Abstracts.pdf) * Takagishi & Umezu (2017) Takagishi, K. & Umezu, S. (2017). Development of the improving process for the 3D printing structure. _Scientific Reports_, 7, Article 39852. 
[https://doi.org/10.1038/srep39852](https://doi.org/10.1038/srep39852) * Touch Graphics (2015) Touch Graphics, Inc. (2015). _Talking Tactile Tablet 2 (TTT)_. [http://touchgraphics.com/portfolio/ttt/](http://touchgraphics.com/portfolio/ttt/) * Trexler et al. (2018) Trexler, C. C., Morelan, A. E., Oskin, M. E., & Kreylos, O. (2018). Surface slip from the 2014 South Napa earthquake measured with structure from motion and 3-D virtual reality. _Geophysical Research Letters_, _45_, 5985-5991. [https://doi.org/10.1029/2018GL078012](https://doi.org/10.1029/2018GL078012) * Trotta (2018) Trotta, R. (2018). The hands-on Universe: Making sense of the Universe with all your senses. _Communicating Astronomy with the Public Journal_, _1(23)_, 20-25. [https://www.capjournal.org/issues/23/23](https://www.capjournal.org/issues/23/23) 20.pdf * Trotta et al. (2020) Trotta, R., Hajas, D., Camargo-Molina, J. E., Cobden, R., Maggioni, E., & Obrist, M. (2020). Communicating cosmology with multisensory metaphorical experiences. _Journal of Science Communication_, _19(2)_, Article N01. [https://doi.org/10.22323/2.19020801](https://doi.org/10.22323/2.19020801) * Disability (2006) United Nations - - Disability, Department of Economic and Social Affairs (2006). Convention on the Rights of Persons with Disabilities (CRPD), Article 21 - Freedom of opinion and expression, and access to information. Retrieved from [https://www.un.orq/development/desa/disabilities/](https://www.un.orq/development/desa/disabilities/) convention-on-the-rights-of-persons-with-disabilities/article-21-freedom-of-expression-and-opinion-and-access-to-information.html * Ustamujic et al. (2016) Ustamujic, S., Orlando, S., Bonito, R., Miceli, M., Gomez de Castro, A. I., & Lopez-Santiago, J. (2016). Formation of X-ray emitting stationary shocks in magnetized protostellar jets. _Astronomy & Astrophysics_, _596_, Article A99. [https://doi.org/10.1051/0004-6361/01628712](https://doi.org/10.1051/0004-6361/01628712) * Ustamujic et al. (2018) Ustamujic, S., Orlando, S., Bonito, R., Miceli, M., Gomez de Castro, A. I. (2018). Structure of X-ray emitting jets close to the launching site: from embedded to disk-bearing sources. _Astronomy & Astrophysics_, _615_, Article A124. [https://doi.org/10.1051/0004-6361/201732391](https://doi.org/10.1051/0004-6361/201732391) * Weferling (2007) Weferling, T. (2007). Astronomy for the blind and visually impaired: An introductory lesson. _Astronomy Education Review_, _1(5)_, 102-109. [https://doi.org/10.3847/AER2006006](https://doi.org/10.3847/AER2006006) * Westhead et al. (2013) Westhead, R. K., Smith, M., Shelley, W.A., Pedley, R.C., Ford, J., & Napier, B. (2013). Mobile spatial mapping and augmented reality applications for environmental geoscience. _Journal of Internet Technology and Secured Transactions (JITST)_, _2_(1-4), 185-190. [http://nora.nerc.ac.uk/id/eprint/504475/1/Mobile%20spatial%20mapping.pdf](http://nora.nerc.ac.uk/id/eprint/504475/1/Mobile%20spatial%20mapping.pdf)Woods, T. L., Reed, S., Hsi, S., Woods, J. A. & Woods, M. R. (2016). Pilot study using the augmented reality sandbox to teach topographic maps and surficial processes in introductory geology labs. _Journal of Geoscience Education_, _64_, 199-214. _[https://doi.org/10.5408/15-135.1_](https://doi.org/10.5408/15-135.1_) * Zhang et al. (2020) Zhang, X., Tlili, A., Nascimbeni, F., Burgos, D., Huang, R., Changa, T. -W., Jemni, M., Khribi, M., K. (2020). 
Accessibility within open educational resources and practices for disabled learners: a systematic literature review. _Smart Learning Environments_, \\(7\\), Article 1. [https://doi.org/10.1186/s40561-019-0113-2](https://doi.org/10.1186/s40561-019-0113-2).
Three-dimensional (3D) visualization has opened up a universe of possible scientific data representations. 3D printing has the potential to make seemingly abstract and esoteric data sets accessible, particularly through the lens of translating data into forms that can be explored in the tactile modality for people who are blind or visually impaired. This article will briefly review 3D modeling in astrophysics, astronomy, and planetary science, before discussing 3D printed astrophysical and planetary geophysical data sets and their current and potential applications with non-expert audiences. The article will also explore the prospective pipeline and benefits of other 3D data outputs in accessible scientific research and communications, including extended reality and data sonification.
# Denoising Diffusion Probabilistic Models in Six Simple Steps

**Richard E. Turner¹, Cristiana-Diana Diaconu¹, Stratis Markou¹, Aliaksandra Shysheya²**

¹Department of Engineering, University of Cambridge, UK
²Microsoft Research, Cambridge, UK

## 1 Preliminaries

Let's start by defining the problem that we're interested in solving.

**Problem definition.** We have a large amount of training data \(x\) that come from an underlying distribution \(q(x)\).¹ We want to fit a probabilistic model \(p(x)\) to this training data² which will approximate \(q(x)\), with the main aim being to sample new data \(x \sim p(\cdot)\).³

Footnote 1: To simplify the notation we will consider one-dimensional real-valued data, but the generalisation to \(D\)-dimensional data is straightforward. We will also assume that the data are zero mean and unit variance.

Footnote 2: In practice a model with latent variables \(z\) will be used, \(p(x, z)\), so \(p(x)\) is specified implicitly through an intractable marginalisation \(p(x) = \int p(x, z)\,\mathrm{d}z\).

Footnote 3: Note that we will want \(p(x)\) to be a better approximation of \(q(x)\) than the raw data distribution \(\frac{1}{N}\sum_{n=1}^{N}\delta(x - x_n)\); samples from \(p(x)\) should not simply regurgitate the training data and should be plausible new samples from \(q(x)\).

## 2 The Six Steps of the DDPM

We now break down the DDPM approach into six simple steps, each with a clear rationale and an associated design space.

### Augmentation

The goal of the first step of the DDPM is to turn the hard unsupervised generative modelling problem into a series of simple supervised regression problems. We can then leverage standard deep learning tools for supervised regression, which interpolate and generalise well, in order to learn a generative model.

**The Augmentation Conditions.** The conversion from a generative modelling problem into a supervised learning problem is achieved by augmenting the original training data with \(T\) additional fidelity levels. Specifically, we'll denote the augmented data as

\[x^{(0:T)} = [x^{(0)}, x^{(1)}, x^{(2)}, \ldots, x^{(T-1)}, x^{(T)}]\]

and we will set up the augmentation to meet the following conditions:

1. the highest fidelity data \(x^{(0)}\) will be the original training data (by construction),
2. the lowest fidelity level will be simple to sample from directly, i.e. \(p(x^{(T)})\) will have a simple form,
3. predicting a higher fidelity level from the fidelity level below it will be a simple regression problem, i.e. \(p(x^{(t-1)}|x^{(t)})\) is easy to model and learn.⁴

Footnote 4: One worry is that although each regression problem is simple to model and learn, small errors made at each stage could compound into large errors by the time we generate the data. However, in certain cases theory is available which guarantees that this does not happen [1, 2]. These papers bound the KL divergence between the data distribution and model marginal for certain augmentation processes, showing that the KL can be made arbitrarily small if a sufficient number of augmentations \(T\) are used.
Footnote 5: In this construction the augmentations \(x^{(1:T)}\) can be considered to be latent variables, so that \(p(x^{(0)}) = \int p(x^{(0:T)})\,\mathrm{d}x^{(1:T)}\).

**Relationship to neural auto-regressive models.** Neural auto-regressive models [1] construct a generative model by using the product rule to decompose a joint distribution into a set of low-dimensional conditional distributions

\[p(\mathbf{x}) = p(x_1)\,p(x_2|x_1)\,p(x_3|x_{1:2})\ldots p(x_D|x_{1:D-1}).\]

These low-dimensional conditional distributions \(p(x_d|x_{1:d-1})\) are now simple regression problems with inputs \(x_{1:d-1}\) and targets \(x_d\), for which neural networks are well suited. In this context, we can consider our setup as defining a neural auto-regressive model on an augmented dataset

\[p(x^{(0:T)}) = p(x^{(T)})\prod_{t=1}^{T} p(x^{(t-1)}|x^{(t:T)}) = p(x^{(T)})\prod_{t=1}^{T} p(x^{(t-1)}|x^{(t)}).\]

Here we have used the fact that, as the noising process is first-order Markov, the optimal denoising process is also first-order Markov and so longer-range dependencies can be discarded. In the multi-dimensional case, blocks of variables \(x^{(t-1)}\) (rather than single scalar variables) are generated in each auto-regressive step.

Concretely, the fidelity levels are produced by a first-order Markov, linear-Gaussian noising process, \(q(x^{(t)}|x^{(t-1)}) = \mathcal{N}(x^{(t)}; \lambda_t x^{(t-1)}, \sigma_t^2)\). The parameters of this augmentation process, \(\{\lambda_t, \sigma_t^2\}_{t=1}^{T}\), and the number of augmentation steps \(T\), need to be selected to fulfil conditions 2 and 3 above. Luckily the use of a linear-Gaussian augmentation strategy means that we can use analytic results to figure out how to do this. It turns out that there is some freedom, but one popular choice first sets \(\sigma_t^2 = 1 - \lambda_t^2\), which ensures that marginally the augmentations have zero mean and unit variance, i.e. \(\mathbb{E}_{q(x^{(t)})}[x^{(t)}] = 0\) and \(\mathbb{E}_{q(x^{(t)})}[(x^{(t)})^2] = 1\).⁹ This is known as a variance-preserving augmentation, and in this case the conditional distribution of an augmentation at level \(t\) given the data \(x^{(0)}\) is given by¹⁰

\[q(x^{(t)}|x^{(0)}) = \mathcal{N}\left(x^{(t)};\; \Big(\prod_{t'=1}^{t}\lambda_{t'}\Big) x^{(0)},\; 1 - \prod_{t'=1}^{t}\lambda_{t'}^2\right). \tag{1}\]

Footnote 9: To prove this, consider how the mean and variance of the previous state are affected when passing through the linear-Gaussian dynamics. A full derivation can be found in appendix 4.1.1.

Footnote 10: This can be shown by unrolling the auto-regressive dynamics. A full derivation can be found in appendix 4.1.2.

Notice that if we have lots of levels (\(T \rightarrow \infty\)) and select \(\lambda_t < 1\), then we will forget the initial condition (since \(\prod_{t'=1}^{T}\lambda_{t'} \to 0\)) and the marginal distribution of the lowest fidelity state takes a simple form, \(q(x^{(T)}) \rightarrow \mathcal{N}(x^{(T)}; 0, 1) = p(x^{(T)})\), which is a very easy distribution to sample from. So condition 2 can be approximately satisfied by having a large number of levels.
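As a concrete illustration of the variance-preserving augmentation, the following minimal NumPy sketch (our toy example, not code from any reference implementation; the schedule for \(\lambda_t\) is an arbitrary choice) generates the fidelity levels step by step and checks them against the direct marginal of equation (1).

```python
# Minimal sketch of the variance-preserving noising (augmentation) process.
# Any 0 < lambda_t < 1 with sigma_t^2 = 1 - lambda_t^2 fits the construction;
# the linspace schedule below is a hypothetical choice for illustration.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
lam = np.linspace(0.9999, 0.98, T)     # lambda_t slightly below 1 (condition 3)
sigma = np.sqrt(1.0 - lam**2)          # variance preserving: sigma_t^2 = 1 - lambda_t^2

x0 = rng.standard_normal(10_000)       # stand-in for whitened training data x^(0)

# Step-by-step augmentation: x^(t) = lambda_t * x^(t-1) + sigma_t * eps
x = x0.copy()
for t in range(T):
    x = lam[t] * x + sigma[t] * rng.standard_normal(x.shape)

# Direct marginal, equation (1): q(x^(T) | x^(0)) has mean prod(lam) * x0
lam_bar = np.prod(lam)
x_direct = lam_bar * x0 + np.sqrt(1.0 - lam_bar**2) * rng.standard_normal(x0.shape)

print(lam_bar)                          # ~1e-5: x^(T) has forgotten x^(0) (condition 2)
print(x.mean(), x.var())                # both marginals are close to N(0, 1)
print(x_direct.mean(), x_direct.var())
```

The near-zero value of \(\prod_t \lambda_t\) is exactly why \(q(x^{(T)})\) collapses to a standard Gaussian here, and the closed-form marginal is what makes training efficient: levels can be sampled directly without simulating the whole chain.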
Finally, we can satisfy condition 3 by noticing that if we add only a small amount of noise between levels (equivalently \\(\\lambda_{t}\\lesssim 1\\) so that \\(\\sigma_{t}^{2}=1-\\lambda_{t}^{2}\\gtrsim 0\\)) then performing the regression problem \\(p(x^{(t-1)}|x^{(t)})\\), which intuitively involves removing this small amount of noise from \\(x^{(t)}\\) to estimate \\(x^{(t-1)}\\) and providing an uncertainty estimate, will be simple (indeed in the limit \\(\\lambda_{t}\\to 1\\) the mapping is the identity \\(p(x^{(t-1)}|x^{(t)})\\rightarrow\\delta(x^{(t)}-x^{(t-1)})\\)).11

Footnote 11: Note that condition 2 (the limiting distribution should be simple to sample from) and condition 3 (the denoising process should be simple to model and learn) are in tension with one another. On the one hand, for a fixed number of fidelity levels \\(T\\), using \\(\\lambda_{t}\\approx 0\\) leads to limiting distributions that are very close to a standard Gaussian, but they make denoising much harder as lots of noise is added in each step. On the other hand using \\(\\lambda_{t}\\approx 1\\) makes denoising simpler, but the limiting distribution is further from a standard Gaussian.

Figure 1: The augmented data at different fidelity levels (left), the data generation or denoising process used at test time (centre), and the augmentation process that generates the different fidelity levels via the noising process for training (right).

### Supervised Step-wise Objective Function

We will initially consider the case where each of the regression problems has its own individual set of parameters \\(\\theta_{t-1}\\), i.e. \\(p(x^{(t-1)}|x^{(t)},\\theta_{t-1})\\). A simple way to train each regression model is to treat the augmented dataset just like a regular dataset and perform maximum-likelihood learning of the parameters \\[\\theta_{0:T-1}^{*}=\\operatorname*{arg\\,max}_{\\theta_{0:T-1}}\\;\\mathcal{L}(\\theta_{0:T-1})\\] where the log-likelihood of the parameters is \\[\\mathcal{L}(\\theta_{0:T-1})=\\mathbb{E}_{q(x^{(0:T)})}[\\log p(x^{(0:T)}|\\theta_{0:T-1})].\\] I.e. we find the parameters \\(\\theta_{0:T-1}\\) that make the augmented dataset as probable as possible.12

Footnote 12: Practically the expectation over \\(q(x^{(0:T)})\\) will be performed by averaging over lots of samples from our augmented training data set \\(x_{n}^{(0:T)}\\sim q(x^{(0:T)})\\): \\[\\mathcal{L}(\\theta_{0:T-1})=\\mathbb{E}_{q(x^{(0:T)})}[\\log p(x^{(0:T)}|\\theta_{0:T-1})]\\approx\\frac{1}{N}\\sum_{n=1}^{N}\\log p(x_{n}^{(0:T)}|\\theta_{0:T-1})\\]

Using the auto-regressive factorisation of the model, the log-likelihood decomposes into a sum over fidelity levels, \\[\\log p(x^{(0:T)}|\\theta_{0:T-1})=\\log p(x^{(T)})+\\sum_{t=1}^{T}\\log p(x^{(t-1)}|x^{(t)},\\theta_{t-1}).\\] Note that only the second term on the right hand side depends on the parameters and that each \\(\\theta_{t}\\) appears in a single term in the sum. So \\[\\theta_{t-1}^{*}=\\operatorname*{arg\\,max}_{\\theta_{t-1}}\\;\\mathcal{L}_{t-1}(\\theta_{t-1})\\;\\;\\text{where}\\;\\;\\mathcal{L}_{t-1}(\\theta_{t-1})=\\mathbb{E}_{q(x^{(t-1)},\\;x^{(t)})}[\\log p(x^{(t-1)}|x^{(t)},\\theta_{t-1})].\\] This involves simply fitting a regression model by maximum likelihood learning that maps between each fidelity level in the augmented training data set.13

Footnote 13: **Relationship to variational auto-encoders**. We have previously mentioned that our model can be considered to be a latent variable model with latents \\(x^{(1:T)}\\). The Evidence Lower Bound (ELBO) is often used to perform learning and inference in such models [Kingma and Welling, 2022]. It turns out that the DDPM can be presented in this way when the approximate posterior is fixed (not learned) and corresponds to the noising distribution. In this way it can be seen as an instance of a variational auto-encoder. This perspective, and the reasons why we do not think it should have a central place in the exposition, is explained in appendix 4.3.
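As a deliberately tiny instance of the step-wise objective, the sketch below fits an untied regressor for a single level by maximum likelihood. A linear-Gaussian model with a fixed variance stands in for the neural network (maximum likelihood then reduces to least squares); the schedule values and toy data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, N = 0.99, 50_000
x0 = rng.choice([-1.0, 1.0], size=N)                                 # toy data
Lam_prev = 0.9                                                       # stands in for Lambda_{t-1}
x_prev = Lam_prev * x0 + np.sqrt(1 - Lam_prev**2) * rng.standard_normal(N)    # x^(t-1)
x_next = lam * x_prev + np.sqrt(1 - lam**2) * rng.standard_normal(N)          # x^(t)

# Maximum-likelihood fit of mu_theta(x^(t)) = w * x^(t) + b (least squares),
# followed by the ML estimate of the fixed conditional variance.
A = np.stack([x_next, np.ones(N)], axis=1)
w, b = np.linalg.lstsq(A, x_prev, rcond=None)[0]
resid_var = np.mean((x_prev - (w * x_next + b)) ** 2)
print(w, b, resid_var)
```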
### Parameter Tying and a Joint Objective

Typically diffusion models have large numbers of steps \\(T\\). If we want to use flexible neural networks to parameterise each conditional distribution with \\(K\\) parameters each, then this leads to an explosion in the number of parameters (there will be \\(T\\times K\\) of them) and a large memory cost from instantiating the different models. In order to retain flexibility with a small set of parameters, we will instead share parameters across each of the regression problems. A simple way to do this is to build models that amortize across fidelity levels, i.e. the model takes in the fidelity level \\(t-1\\), a shared set of parameters \\(\\theta\\), and the previous level's variables \\(x^{(t)}\\) and outputs a distribution over the next fidelity level's variables \\(p(x^{(t-1)}|x^{(t)},\\theta,t-1)\\).14

Footnote 14: One example of amortization is in image modelling where \\(x^{(0)}\\) is an image. Here a convolutional neural network called a UNet is often used to map between levels, \\(x^{(t-1)}=\\text{UNet}(x^{(t)};\\theta_{t-1})\\). The level-specific parameters of the UNet \\(\\theta_{t-1}\\), the convolutional filters, are produced by modulating a global set of parameters \\(\\theta\\) in a level-dependent way, \\(\\theta_{t-1}=\\text{FiLM}(\\theta;t-1)\\). The FiLM modulation simply scales and shifts each parameter so \\(\\theta_{i,t-1}=\\kappa_{i}(t-1)\\theta_{i}+\\delta_{i}(t-1)\\). The scale \\(\\kappa_{i}(t-1)\\) and the shift \\(\\delta_{i}(t-1)\\) are themselves produced using a multi-layer perceptron which performs the amortisation by taking \\(t-1\\) as input in the form of a sinusoidal encoding.

The shared parameters are then trained on a weighted combination of the step-wise objectives,

\\[\\theta^{*}=\\operatorname*{arg\\,max}_{\\theta}\\sum_{t=1}^{T}w_{t-1}\\;\\mathcal{L}_{t-1}(\\theta)\\;\\;\\text{where}\\;\\;\\mathcal{L}_{t-1}(\\theta)=\\mathbb{E}_{q(x^{(t-1)},\\;x^{(t)})}[\\log p(x^{(t-1)}|x^{(t)},\\theta,t-1)].\\]

This new training objective appears expensive to compute at first sight as each gradient computation requires computation of \\(T\\) terms each of which involves a forward pass through a neural network. However, stochastic optimisation saves the day. We rewrite the sum over the levels \\(t\\) as an expectation over a uniform distribution over the integers \\(1\\dots T\\) \\[\\frac{1}{T}\\sum_{t=1}^{T}w_{t-1}\\;\\mathcal{L}_{t-1}(\\theta)=\\mathbb{E}_{t\\sim\\text{Uniform}(1,T)}\\left[w_{t-1}\\;\\mathcal{L}_{t-1}(\\theta)\\right].\\]
Footnote 15: As we will see later, often a Gaussian distribution is used for \\(p(x^{(t-1)}|x^{(t)},\\theta,t-1)\\) and only the mean is learned. In this case the weights allow the user to have independent control over the width of the conditional distributions during generation and the way each level trades off against one another in the cost. If the variances are learned, it is not clear whether it is necessary to use these weights, i.e. the maximum likelihood setting \\(w_{t}=1\\) might be sufficient, and even when the variances are fixed some authors use the unweighted objective [Kingma et al., 2021].

Footnote 16: Alternatively, these weights can be interpreted as performing unweighted maximum-likelihood learning on a modified augmented data set. See figure 2 for a description of this modified augmentation procedure.

Now, at each step, we can sample a level \\(t\\) at random and compute the gradient just on this level to form an unbiased estimate of the full gradient, in much the same way as we subsample training data in stochastic gradient ascent. This computation is as expensive as a single step of gradient-based learning for a single level in the original untied model. Even better, practically, this new model can learn faster than the untied model as the parameter tying means that information from level \\(t\\) helps learn better models for \\(t^{\\prime}\\neq t\\). For example, the regression problems at adjacent levels \\(t+1\\) and \\(t-1\\) will be very similar. In this way, diffusion becomes very scalable and quick to train.
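The training loop implied by this construction is compact. Below is a minimal sketch with a small fully-connected network amortised over levels by concatenating a normalised \\(t\\) to its input; the architecture, schedule and toy data are illustrative stand-ins rather than practical choices (real image models use UNets with sinusoidal time embeddings, as in footnote 14).

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
lambdas = torch.sqrt(1 - betas)
Lam = torch.cumprod(lambdas, dim=0)     # Lambda_t = prod_{t'<=t} lambda_t'

net = torch.nn.Sequential(torch.nn.Linear(2, 128), torch.nn.ReLU(),
                          torch.nn.Linear(128, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):
    x0 = torch.randint(0, 2, (256, 1)).float() * 2 - 1   # toy +/-1 data
    t = torch.randint(1, T + 1, (256, 1))                # subsample one level per example
    eps = torch.randn_like(x0)
    xt = Lam[t - 1] * x0 + torch.sqrt(1 - Lam[t - 1] ** 2) * eps   # sample q(x^(t)|x^(0))
    # One shared network amortised over levels; here it is trained to predict
    # the added noise (the epsilon-parameterisation discussed later), so the
    # per-level Gaussian log-likelihood reduces to a squared error.
    pred = net(torch.cat([xt, t.float() / T], dim=1))
    loss = ((eps - pred) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```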
### Gaussian Regression Model: Variance Reduction

We now need to select a model to perform the regression. Since the noising process is Gaussian and only a very small amount of noise is added at each step, a common choice for the denoising process is to also use a Gaussian17

Footnote 17: Amazingly, the optimal regression in a sensibly defined limit as \\(T\\rightarrow\\infty\\) has this same form (Anderson, 1982; Song et al., 2021), justifying the choice of this family.

\\[p(x^{(t-1)}|x^{(t)},t-1,\\theta)=\\mathcal{N}\\left(x^{(t-1)};\\mu_{\\theta}(x^{(t)},t-1),\\sigma_{\\theta}^{2}(x^{(t)},t-1)\\right).\\]

Here the mean of the Gaussian \\(\\mu_{\\theta}(x^{(t)},t-1)\\) and the variance \\(\\sigma_{\\theta}^{2}(x^{(t)},t-1)\\) depend on the data at the previous fidelity level \\(x^{(t)}\\) in a non-linear way and will be specified using a neural network. Choosing this simple regression model has another crucial advantage. It allows us to reduce the Monte Carlo noise in the training objective by performing one of the averages over the augmented training data analytically rather than by sampling.18 This trick means we effectively train the model on an infinite set of augmentations.

Footnote 18: This trick relates to Rao-Blackwellisation and the local reparameterisation trick for Bayesian neural networks.

To see how to do this, we first compute the key term in the maximum-likelihood objective function using this model class \\[\\mathcal{L}_{t-1}(\\theta)=-\\frac{1}{2}\\mathbb{E}_{q(x^{(t-1)},\\;x^{(t)})}\\left[\\frac{(x^{(t-1)}-\\mu_{\\theta}(x^{(t)},t-1))^{2}}{\\sigma_{\\theta}^{2}(x^{(t)},t-1)}+\\log 2\\pi\\sigma_{\\theta}^{2}(x^{(t)},t-1)\\right]\\] Practically, the expectations with respect to \\(q(x^{(t-1)},\\;x^{(t)})\\) can be computed using samples from our augmented data set. A naive approach samples a data point from \\(x^{(0)}\\sim q(x^{(0)})\\) and then samples the augmented versions from \\(x^{(t-1)},\\;x^{(t)}\\sim q(x^{(t-1)},\\;x^{(t)}|x^{(0)})\\) (which is a simple Gaussian distribution). However, a smarter approach performs the expectation over \\(x^{(t-1)}\\) analytically using the law of nested conditional expectations \\[\\mathbb{E}_{q(x^{(t-1)},\\;x^{(t)})}(f(x^{(t-1)},\\;x^{(t)}))=\\mathbb{E}_{q(x^{(0)},\\;x^{(t-1)},\\;x^{(t)})}[f(x^{(t-1)},\\;x^{(t)})]=\\mathbb{E}_{q(x^{(0)},\\;x^{(t)})}[\\mathbb{E}_{q(x^{(t-1)}|x^{(0)},\\;x^{(t)})}[f(x^{(t-1)},\\;x^{(t)})]]\\] The inner expectation here is analytic when the noising and denoising processes are Gaussian, yielding19

Footnote 19: The formula for the inner expectation is closely related to the KL divergence between two Gaussians.

\\[\\mathcal{L}_{t-1}(\\theta)=-\\frac{1}{2}\\mathbb{E}_{q(x^{(0)},\\;x^{(t)})}\\left[\\frac{(\\mu_{t-1|0,t}-\\mu_{\\theta}(x^{(t)},t-1))^{2}+\\sigma_{t-1|0,t}^{2}}{\\sigma_{\\theta}^{2}(x^{(t)},t-1)}+\\log 2\\pi\\sigma_{\\theta}^{2}(x^{(t)},t-1)\\right]\\]

Here \\[q(x^{(t-1)}|x^{(0)},\\;x^{(t)})=\\mathcal{N}(x^{(t-1)};\\mu_{t-1|0,t},\\sigma_{t-1|0,t}^{2}),\\quad\\mu_{t-1|0,t}=a^{(t-1)}x^{(0)}+b^{(t-1)}x^{(t)}, \\tag{2}\\] where the mean is a linear combination of \\(x^{(0)}\\) and \\(x^{(t)}\\); the expressions for \\(a^{(t-1)}\\), \\(b^{(t-1)}\\) and \\(\\sigma_{t-1|0,t}^{2}\\) are given in appendix 4.1.3.

Figure 2: Weighted augmentation. The original augmentation scheme (left) starts by sampling a data point at random and then generates \\(T\\) progressively noisier versions. In the weighted augmentation scheme (right), we sample a fidelity level in proportion to its weight \\(w_{t}\\) and then sample pairs of points from adjacent fidelity levels from the marginal. In this way, some levels will have more data points than others and will make a larger contribution to the training loss.
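The conditional quantities \\(a^{(t-1)}\\), \\(b^{(t-1)}\\) and \\(\\sigma_{t-1|0,t}^{2}\\) in equation (2) are straightforward to compute. The sketch below evaluates them from the expressions in appendix 4.1.3 and checks them against conditional Monte Carlo estimates; the schedule and values are illustrative assumptions.

```python
import numpy as np

def posterior_coeffs(lambdas, t):
    """Coefficients of q(x^(t-1) | x^(0), x^(t)) for a variance-preserving
    schedule (appendix 4.1.3)."""
    Lam_t = np.prod(lambdas[:t])
    Lam_tm1 = np.prod(lambdas[:t - 1])
    beta_t = 1 - lambdas[t - 1] ** 2
    a = Lam_tm1 * beta_t / (1 - Lam_t ** 2)                      # multiplies x^(0)
    b = (1 - Lam_tm1 ** 2) * lambdas[t - 1] / (1 - Lam_t ** 2)   # multiplies x^(t)
    var = (1 - Lam_tm1 ** 2) * beta_t / (1 - Lam_t ** 2)
    return a, b, var

rng = np.random.default_rng(3)
lambdas = np.sqrt(1 - np.linspace(1e-4, 0.05, 100))
t, x0, xt_value = 60, 0.7, 0.5
a, b, var = posterior_coeffs(lambdas, t)

# Monte Carlo check: sample (x^(t-1), x^(t)) given x^(0), keep pairs with
# x^(t) close to xt_value, and compare the conditional moments.
Lam_tm1 = np.prod(lambdas[:t - 1])
x_prev = Lam_tm1 * x0 + np.sqrt(1 - Lam_tm1**2) * rng.standard_normal(2_000_000)
x_t = lambdas[t - 1] * x_prev + np.sqrt(1 - lambdas[t - 1]**2) * rng.standard_normal(x_prev.shape)
sel = np.abs(x_t - xt_value) < 0.01
print(x_prev[sel].mean(), a * x0 + b * xt_value)   # conditional means agree
print(x_prev[sel].var(), var)                      # conditional variances agree
```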
### Parameterising the Model

We have said that a neural network will be used to parameterise \\(\\mu_{\\theta}(x^{(t)},t-1)\\) and \\(\\sigma_{\\theta}^{2}(x^{(t)},t-1)\\). One option would be to use a standard architecture and to parameterise the mean and variance directly without modification. However, we can do better than this by building in an inductive bias that will help the network operate across the different fidelity levels. There are many ways to do this but here we will highlight two options each for the mean and the variance.

**Mean Option 1: \\(x^{(0)}\\)-parameterisation**. First we take inspiration from \\(q(x^{(t-1)}|x^{(0)},\\ x^{(t)})\\). Specifically, consider the conditional mean (equation 2) which tells us how to compute the best guess for \\(x^{(t-1)}\\) if we know \\(x^{(t)}\\) and \\(x^{(0)}\\), \\[\\mu_{t-1|0,t}=a^{(t-1)}x^{(0)}+b^{(t-1)}x^{(t)}.\\] We could therefore ask the network to predict the clean data at each step, \\(\\hat{x}_{\\theta}^{(0)}(x^{(t)},t-1)\\approx x^{(0)}\\), and use this to compute the mean \\[\\mu_{\\theta}(x^{(t)},t-1)=a^{(t-1)}\\hat{x}_{\\theta}^{(0)}(x^{(t)},t-1)+b^{(t-1)}x^{(t)}.\\] In this way the network is always asked to predict the same thing (the original data) regardless of the augmentation level \\(t\\) we are operating at, which reduces how much it has to be adapted from one level to the next. Notice that this construction also builds in a sensible linear residual connection to \\(x^{(t)}\\) and a rescaling factor. The approach of estimating the clean data from noisy versions relates to the denoising auto-encoder [Vincent et al., 2008].21 Notice that the denoising process now depends on the parameters of the noising process through \\(a^{(t-1)}\\) and \\(b^{(t-1)}\\).

Footnote 21: **Relationship to the denoising auto-encoder**. The denoising auto-encoder performs representation learning by training a network \\(h(\\cdot)\\) to denoise corrupted data \\(x^{(0)}+\\epsilon\\) via minimising \\(\\mathbb{E}_{q(x^{(0)})q(\\epsilon)}\\left[\\left(x^{(0)}-h(x^{(0)}+\\epsilon)\\right)^{2}\\right]\\). We can compare this to our case. As we have based the parameterisation of \\(\\mu_{\\theta}(x^{(t)},t-1)\\) on the form of the conditional mean \\(\\mu_{t-1|0,t}\\), the squared difference in the objective function now has a simple form \\[\\mathbb{E}_{q(x^{(0)},\\ x^{(t)})}\\bigg{[}(\\mu_{t-1|0,t}-\\mu_{\\theta}(x^{(t)},t-1))^{2}\\bigg{]}=\\left(a^{(t-1)}\\right)^{2}\\mathbb{E}_{q(x^{(0)},\\ x^{(t)})}\\bigg{[}(x^{(0)}-\\hat{x}_{\\theta}^{(0)}(x^{(t)},t-1))^{2}\\bigg{]}.\\] So our network also tries to denoise each \\(x^{(t)}\\) into \\(x^{(0)}\\) as in the denoising auto-encoder (for more information see the Variational Diffusion Model objective in the next section).

**Mean Option 2: \\(\\epsilon\\)-parameterisation**. Alternatively, we can ask the network to estimate the noise which has been added to \\(x^{(0)}\\) to produce \\(x^{(t)}\\). Using equation 1 we can write22 \\[x^{(t)}=c^{(t)}x^{(0)}+d^{(t)}\\epsilon^{(t)}\\ \\ \\text{where}\\ \\epsilon^{(t)}\\sim\\mathcal{N}(0,1),\\] so a network \\(\\hat{\\epsilon}_{\\theta}(x^{(t)},t-1)\\) that estimates the noise implies an estimate of the clean data, \\(\\hat{x}_{\\theta}^{(0)}(x^{(t)},t-1)=(x^{(t)}-d^{(t)}\\hat{\\epsilon}_{\\theta}(x^{(t)},t-1))/c^{(t)}\\), which can be substituted into the expression for the mean above to give \\[\\mu_{\\theta}(x^{(t)},t-1)=\\frac{a^{(t-1)}}{c^{(t)}}\\left(x^{(t)}-d^{(t)}\\hat{\\epsilon}_{\\theta}(x^{(t)},t-1)\\right)+b^{(t-1)}x^{(t)}.\\]23

Footnote 22: Here we have defined \\[c^{(t)}=\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}\\ \\ \\text{and}\\ \\ d^{(t)}=\\sqrt{1-\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}^{2}}.\\]

Footnote 23: Substituting both parameterisations into the squared error shows that \\((\\mu_{t-1|0,t}-\\mu_{\\theta}(x^{(t)},t-1))^{2}=(a^{(t-1)}d^{(t)}/c^{(t)})^{2}(\\epsilon^{(t)}-\\hat{\\epsilon}_{\\theta}(x^{(t)},t-1))^{2}\\), a fact used by the simplified objective below. Estimating the noise is also closely related to denoising score matching (Song and Ermon, 2019), since the added noise is proportional to the (negative) score of \\(q(x^{(t)}|x^{(0)})\\).
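Footnote 23's identity is easy to verify numerically. The sketch below substitutes the \\(\\epsilon\\)-parameterised mean into the squared error and checks that it reduces to a rescaled error on the noise; the schedule, level and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
lambdas = np.sqrt(1 - np.linspace(1e-4, 0.05, 100))
t = 40
c_t = np.prod(lambdas[:t])
d_t = np.sqrt(1 - c_t ** 2)
lam_t, Lam_tm1 = lambdas[t - 1], np.prod(lambdas[:t - 1])
a = Lam_tm1 * (1 - lam_t ** 2) / (1 - c_t ** 2)
b = (1 - Lam_tm1 ** 2) * lam_t / (1 - c_t ** 2)

x0, eps, eps_hat = 0.9, rng.standard_normal(), rng.standard_normal()
x_t = c_t * x0 + d_t * eps                               # equation (1) in sample form
mu_target = a * x0 + b * x_t                             # conditional mean, equation (2)
mu_theta = a * (x_t - d_t * eps_hat) / c_t + b * x_t     # epsilon-parameterised mean
print((mu_target - mu_theta) ** 2,
      (a * d_t / c_t) ** 2 * (eps - eps_hat) ** 2)       # the two sides agree
```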
Learning of the variance \\(\\sigma_{\\theta}^{2}(x^{(t)},t-1)\\) has been found to be challenging [Nichol and Dhariwal, 2021].24 Therefore it is typically set to a constant value which is independent of \\(x^{(t)}\\).

Footnote 24: It is perhaps surprising, in light of the two extreme cases discussed below, that the following option does not work reasonably: \\(\\sigma_{\\theta}^{2}(x^{(t)},t-1)=\\rho_{\\theta}(x^{(t)},t-1)\\sigma_{t}^{2}+(1-\\rho_{\\theta}(x^{(t)},t-1))\\sigma_{t-1|0,t}^{2}\\), where \\(\\rho_{\\theta}(x^{(t)},t-1)\\) is a neural network with a sigmoid final layer.

**Variance Option 1**. One option is to set it equal to the variance of the noising process, \\[\\sigma_{\\theta}^{2}(x^{(t)},t-1)=\\sigma_{t}^{2}.\\] This is optimal when the data themselves are drawn from a unit Gaussian25 or when there are an infinite number of augmentations \\(T\\to\\infty\\) [Anderson, 1982]. However, generally this variance will be overly large.

Footnote 25: This can be seen intuitively by realising that if \\(x^{(0)}\\sim\\mathcal{N}(0,1)\\) then the DDPM is initialised with its invariant distribution and so running it from \\(x^{(T)}\\sim\\mathcal{N}(0,1)\\) to \\(x^{(0)}\\) is equivalent to running it from \\(x^{(0)}\\) to \\(x^{(T)}\\).

**Variance Option 2**. An alternative option again takes inspiration from the conditional distribution \\(q(x^{(t-1)}|x^{(0)},\\ x^{(t)})\\). Specifically, equation 2 tells us the variance of \\(x^{(t-1)}\\) if we know \\(x^{(t)}\\) and \\(x^{(0)}\\). We can set the denoising variance to be equal to this, \\[\\sigma_{\\theta}^{2}(x^{(t)},t-1)=\\sigma_{t-1|0,t}^{2}.\\] This is optimal when the data only take a single value (i.e. it is always known). So generally this will under-estimate the variance.

### Putting it all together and choosing the schedule

The DDPM framework we have specified above comprises a family of data augmentations, associated objective functions, and models. On the augmentation side, we have to select a set of augmentation coefficients \\(\\{\\lambda_{t}\\}_{t=1}^{T}\\). On the objective function side we must choose the weights \\(\\{w_{t}\\}_{t=1}^{T}\\) (or use an unweighted objective \\(w_{t}=1\\)). On the modelling side, we can select how we parameterise the mean \\(\\mu_{\\theta}(x^{(t)},t-1)\\), whether we let the variance \\(\\sigma_{\\theta}^{2}(x^{(t)},t-1)\\) depend on \\(x^{(t)}\\), and what parameterisation we use for it. Below we will cover two popular choices for the objective function before discussing choices for the augmentation coefficients.

**Simplified DDPM objective**. First we consider the simplified training objective in Ho et al. (2020).
This is recovered from the framework described above when we make the following choices: we use the \\(\\epsilon\\)-parameterisation of \\(\\mu_{\\theta}(x^{(t)},t-1)\\), set \\(\\sigma_{\\theta}^{2}(x^{(t)},t-1)=\\sigma_{t}^{2}\\) and then use \\(w_{t-1}=\\left(\\frac{\\sigma_{t}\\,c^{(t)}}{a^{(t-1)}d^{(t)}}\\right)^{2}\\) so that the weights cancel with the terms multiplying the noise variables (see note 23). We then drop constant terms in the objective that do not depend on the parameters \\(\\theta\\) to yield,

\\[\\mathcal{L}(\\theta)=-\\frac{T}{2}\\mathbb{E}\\left[\\left(\\epsilon^{(t)}-\\hat{\\epsilon}_{\\theta}^{(t)}(\\underbrace{\\Lambda_{t}x^{(0)}+\\sqrt{1-\\Lambda_{t}^{2}}\\epsilon^{(t)}}_{x^{(t)}},t-1)\\right)^{2}\\,\\right]\\]

where the expectation is over \\(t\\sim\\text{Uniform}(1,T)\\) and \\((x^{(0)},\\ x^{(t)})\\sim q(x^{(0)},\\ x^{(t)})\\), and we have used the shorthand \\(\\Lambda_{t}=\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}\\). We have written the input to the network out in full to make clear that training involves adding noise to clean data and inputting this corrupted data into a neural network which then tries to estimate the noise that has been added. As mentioned in note 23, this training procedure is closely related to denoising score matching (Song and Ermon, 2019).

**Variational Diffusion Model objective**. Second we consider one of the discrete time objectives in Kingma et al. (2021).26 This is recovered when we make the following choices: we use the \\(x^{(0)}\\)-parameterisation of \\(\\mu_{\\theta}(x^{(t)},t-1)\\), set \\(\\sigma_{\\theta}^{2}(x^{(t)},t-1)=\\sigma_{t-1|0,t}^{2}\\) and use the unweighted objective \\(w_{t-1}=1\\). Again we drop terms that do not depend on the parameters, finding

\\[\\mathcal{L}(\\theta)=-\\frac{T}{2}\\mathbb{E}\\left[\\left(\\mathsf{SNR}(t-1)-\\mathsf{SNR}(t)\\right)\\left(x^{(0)}-\\hat{x}_{\\theta}^{(0)}(x^{(t)},t-1)\\right)^{2}\\right] \\tag{3}\\]

where the expectation is over \\(t\\sim\\text{Uniform}(1,T)\\) and \\((x^{(0)},\\ x^{(t)})\\sim q(x^{(0)},\\ x^{(t)})\\). Notice, as mentioned earlier, that this setup relates to the denoising auto-encoder (Vincent et al., 2008).

Footnote 26: Although this objective plays a central role in Kingma et al. (2021), in practice they use the \\(\\epsilon\\)-parameterisation for the experiments.

Here we have defined a signal to noise ratio (\\(\\mathsf{SNR}\\)27) of level \\(t\\) as the variance in \\(x^{(t)}\\) coming from the raw data \\(x^{(0)}\\) divided by the variance of the noise contribution to \\(x^{(t)}\\), \\[\\text{SNR}(t)=\\frac{\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}^{2}}{1-\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}^{2}}.\\] Weighting by the difference in the SNRs is natural -- steps where there is a large change in the SNR get more weight.
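The two objectives therefore differ only in how levels are weighted. The short sketch below computes the (positive) VDM weights \\(\\mathsf{SNR}(t-1)-\\mathsf{SNR}(t)\\) for an illustrative schedule, showing that low-noise levels receive far more weight than high-noise ones, whereas the simplified objective flattens every level to an unweighted error on the noise.

```python
import numpy as np

T = 200
lambdas = np.sqrt(1 - np.linspace(1e-4, 0.05, T))
Lam = np.cumprod(lambdas)
snr = Lam**2 / (1 - Lam**2)    # SNR(t) for t = 1..T

vdm_w = -np.diff(snr)          # SNR(t-1) - SNR(t) for t = 2..T (positive)
print(vdm_w[:3])               # early, low-noise levels dominate
print(vdm_w[-3:])              # late, high-noise levels contribute little
```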
**Choosing the schedule**. Finally, we have to select a set of augmentation coefficients \\(\\{\\lambda_{t}\\}_{t=1}^{T}\\). This is somewhat of a dark art, with choices including using a linear schedule for the augmentation noise \\(1-\\lambda_{t}^{2}=\\beta_{1}+(t-1)\\beta_{2}\\) [Ho et al., 2020], quarter-cosine schedules for \\(\\Lambda_{t}\\) [Nichol and Dhariwal, 2021], and linear spacing according to log-SNR [Kingma et al., 2021]. One piece of theory we have available to us to guide this choice comes from taking the limit \\(T\\to\\infty\\) of equation 3. In this limit, the loss does not depend on the schedule -- only the signal to noise ratios at the end-points \\(t=1\\) and \\(T\\to\\infty\\) matter [Kingma et al., 2021].29 So, in this limit, all that matters is the SNR at the start and end of the augmentation and the objective is invariant to the particular path between them. However, in practice, when approximating the loss via sampling, different schedules can lead to estimators with different variances, and strategies to learn schedules which minimise this variance have been developed [Kingma et al., 2021].

Footnote 29: See appendix 4.2 for full details. Roughly, the difference in SNRs turns into a derivative w.r.t. time and the discrete uniform distribution turns into a uniform density. Since the SNR is monotonically decreasing by construction (as we add more and more noise as \\(t\\) increases) we can safely reparameterise from time to SNR-level via \\(u=\\text{SNR}(t)\\) and use the chain rule to give, \\[\\mathcal{L}(\\theta)\\to-\\frac{1}{2}\\int_{u=\\text{SNR-min}}^{\\text{SNR-max}}\\mathbb{E}_{x}\\left[\\left(x^{(0)}-\\hat{x}_{\\theta}^{(0)}(x^{(u)},u)\\right)^{2}\\right]\\text{d}u\\] Here SNR-max is the SNR at level \\(t=1\\) and it is set by \\(\\lambda_{1}\\), and SNR-min is the SNR at level \\(T\\to\\infty\\) (and would typically be 0). We have used overloaded notation so that \\(\\hat{x}_{\\theta}^{(0)}(x^{(u)},u)=\\hat{x}_{\\theta}^{(0)}(x^{(t(u))},t(u)-1)\\).

## 3 Conclusion

So, to conclude, the DDPM can be broken down into the following steps with the following rationale and design choices:

1. **augmentation scheme**: turns generative modelling into a sequence of simple regression problems, choices include Gaussian auto-regression with variance preserving or variance exploding formulations, blurring etc.
2. **step-wise objective**: used to train the non-linear regression models at each step, typically the choice is to use the maximum-likelihood fitting objective.
3. **parameter tying and weighted objective**: to reduce the number of parameters of the model and to accelerate training, this requires choosing an architecture which can be shared across augmentation levels.
4. **selecting the denoising model**: specification of the regression model architecture, this involves selecting between Gaussian versus non-Gaussian models, with Gaussian models allowing analytic averaging over augmentation data. Other choices for the Gaussian model include whether to use fixed variances, input dependent variances, or correlated noise models.
5. **parameterisation**: selecting how to use the neural network to parameterise the probabilistic model to make it simple for the network to operate across augmentation levels, choices include the \\(\\epsilon\\) and \\(x^{(0)}\\) parameterisations, and also a choice for the variances of the regression models.
6. **combining the above choices and setting a schedule**: finally we have to put everything together and select a set of augmentation parameters which lead to low-variance estimates of the training objective.

Footnote 27: In more detail we can take the relationship between \\(x^{(t)}\\) and the clean data (equation 1) and use this to decompose its variance, \\[x^{(t)}=x^{(0)}\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}+\\epsilon_{t}\\left(1-\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}^{2}\\right)^{1/2}\\] \\[\\text{var}(x^{(t)})=\\text{var}\\left(x^{(0)}\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}\\right)+\\text{var}\\left(\\epsilon_{t}\\left(1-\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}^{2}\\right)^{1/2}\\right)=\\underbrace{\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}^{2}}_{\\text{signal variance}}+\\underbrace{1-\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}^{2}}_{\\text{noise variance}}.\\] These are the two terms that are used to define the SNR. The SNR arises in the objective since \\[\\frac{\\left(a^{(t-1)}\\right)^{2}}{\\sigma_{\\theta}^{2}(x^{(t)},t-1)}=\\frac{\\left(a^{(t-1)}\\right)^{2}}{\\sigma_{t-1|0,t}^{2}}=\\text{SNR}(t-1)-\\text{SNR}(t).\\]

## 4 Appendices

### 4.1 Results for Gaussian AR(1) processes

This section lays out the rather simple, but laborious algebra that underlies the results that the DDPM uses for the Gaussian AR(1) processes. We start by reminding the reader of the noising process here. The process is a joint distribution over the augmented data set \\[q(x^{(0:T)})=q(x^{(0)})\\prod_{t=1}^{T}q(x^{(t)}|x^{(t-1)})\\] The highest fidelity level is initialised from the data distribution \\(x^{(0)}\\sim q(x^{(0)})\\) where we have assumed that the data have zero mean \\(\\mathbb{E}_{q(x^{(0)})}[x^{(0)}]=0\\) and unit variance \\(\\mathbb{E}_{q(x^{(0)})}[(x^{(0)})^{2}]=1\\).
Practically, as we are assuming a large data set, we can use the empirical mean and standard deviation to normalise the data. The noising process can then be defined in terms of conditional probability distributions \\[q(x^{(t)}|x^{(t-1)})=\\mathcal{N}(x^{(t)};\\lambda_{t}x^{(t-1)},\\sigma_{t}^{2}),\\] or in terms of samples \\[x^{(t)}=\\lambda_{t}x^{(t-1)}+\\sigma_{t}\\epsilon_{t}\\ \\ \\text{where}\\ \\epsilon_{t}\\sim\\mathcal{N}(0,1).\\] We will use both forms in the derivations that follow.

#### 4.1.1 Variance preserving Gaussian AR(1) processes

We will now show that using \\(\\sigma_{t}^{2}=1-\\lambda_{t}^{2}\\) will ensure that every variable \\(x^{(t)}\\) will be zero mean and unit variance. This is called the variance preserving AR(1) process. We will use a recursion to show this -- remember that we already know the data have zero mean and unit variance so we start by considering \\(x^{(1)}\\). It's relatively simple to show the mean of this variable is zero as it is formed from a linear combination of zero-mean variables \\[\\mathbb{E}_{q(x^{(1)})}[x^{(1)}]=\\mathbb{E}_{q(x^{(0)},\\epsilon_{1})}[\\lambda_{1}x^{(0)}+\\sigma_{1}\\epsilon_{1}]=\\lambda_{1}\\mathbb{E}_{q(x^{(0)})}[x^{(0)}]+\\sigma_{1}\\mathbb{E}_{q(\\epsilon_{1})}[\\epsilon_{1}]=0\\] Now consider the variance of \\(x^{(1)}\\) \\[\\mathbb{E}_{q(x^{(1)})}[(x^{(1)})^{2}]=\\mathbb{E}_{q(x^{(0)},\\epsilon_{1})}[(\\lambda_{1}x^{(0)}+\\sigma_{1}\\epsilon_{1})^{2}]=\\lambda_{1}^{2}\\mathbb{E}_{q(x^{(0)})}[(x^{(0)})^{2}]+2\\sigma_{1}\\lambda_{1}\\mathbb{E}_{q(x^{(0)},\\epsilon_{1})}[x^{(0)}\\epsilon_{1}]+\\sigma_{1}^{2}\\mathbb{E}_{q(\\epsilon_{1})}[\\epsilon_{1}^{2}]=\\lambda_{1}^{2}+2\\sigma_{1}\\lambda_{1}\\mathbb{E}_{q(x^{(0)})}[x^{(0)}]\\mathbb{E}_{q(\\epsilon_{1})}[\\epsilon_{1}]+\\sigma_{1}^{2}=\\lambda_{1}^{2}+\\sigma_{1}^{2}.\\] So if the variance of \\(x^{(1)}\\) is set to unity, then \\(1=\\lambda_{1}^{2}+\\sigma_{1}^{2}\\) which means \\(\\sigma_{1}^{2}=1-\\lambda_{1}^{2}\\). Now that \\(x^{(1)}\\) has zero mean and unit variance we can apply precisely the same argument outlined above to \\(x^{(2)}\\) finding \\(\\sigma_{2}^{2}=1-\\lambda_{2}^{2}\\), and so on. So, \\(\\sigma_{t}^{2}=1-\\lambda_{t}^{2}\\) leads to the variance preserving process.
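This recursion is easy to confirm numerically, even for markedly non-Gaussian data. A quick Monte Carlo sketch under an illustrative schedule:

```python
import numpy as np

# Check of the variance preserving property: with sigma_t^2 = 1 - lambda_t^2,
# every x^(t) keeps zero mean and unit variance, whatever the data distribution.
rng = np.random.default_rng(5)
lambdas = np.sqrt(1 - np.linspace(1e-4, 0.05, 100))
x = rng.choice([-1.0, 1.0], size=500_000)   # zero-mean, unit-variance data
for lam in lambdas:
    x = lam * x + np.sqrt(1 - lam**2) * rng.standard_normal(x.shape)
print(x.mean(), x.var())                    # approximately 0 and 1
```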
#### 4.1.2 Conditioning the Gaussian AR(1) Process on an initial value

We will now show that, for a variance preserving process where \\(\\sigma_{t}^{2}=1-\\lambda_{t}^{2}\\), when we condition the process on an initial value \\(x^{(0)}\\) the distribution over \\(x^{(t)}\\) is given by \\[q(x^{(t)}|x^{(0)})=\\mathcal{N}\\left(x^{(t)};\\left(\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}\\right)x^{(0)},1-\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}^{2}\\right)\\] To show this, we can unroll the sampling formula for the process \\[x^{(t)}=\\lambda_{t}x^{(t-1)}+\\sigma_{t}\\epsilon_{t}=\\lambda_{t}(\\lambda_{t-1}x^{(t-2)}+\\sigma_{t-1}\\epsilon_{t-1})+\\sigma_{t}\\epsilon_{t}=\\lambda_{t}(\\lambda_{t-1}(\\lambda_{t-2}x^{(t-3)}+\\sigma_{t-2}\\epsilon_{t-2})+\\sigma_{t-1}\\epsilon_{t-1})+\\sigma_{t}\\epsilon_{t}\\] If we keep unrolling back to \\(x^{(0)}\\) we will get \\[x^{(t)}=\\left(\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}\\right)x^{(0)}+\\sigma_{t}\\epsilon_{t}+\\lambda_{t}\\sigma_{t-1}\\epsilon_{t-1}+\\lambda_{t}\\lambda_{t-1}\\sigma_{t-2}\\epsilon_{t-2}+\\lambda_{t}\\lambda_{t-1}\\lambda_{t-2}\\sigma_{t-3}\\epsilon_{t-3}+\\ldots+\\left(\\prod_{t^{\\prime}=2}^{t}\\lambda_{t^{\\prime}}\\right)\\sigma_{1}\\epsilon_{1}.\\] Remembering that the variances of independent variables, such as \\(\\epsilon_{1:t}\\), add, we have \\[q(x^{(t)}|x^{(0)})=\\mathcal{N}\\left(x^{(t)};\\left(\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}\\right)x^{(0)},\\sigma_{t|0}^{2}\\right)\\] where the conditional variance is given by \\[\\sigma_{t|0}^{2}=\\sigma_{t}^{2}+\\lambda_{t}^{2}\\sigma_{t-1}^{2}+\\lambda_{t}^{2}\\lambda_{t-1}^{2}\\sigma_{t-2}^{2}+\\lambda_{t}^{2}\\lambda_{t-1}^{2}\\lambda_{t-2}^{2}\\sigma_{t-3}^{2}+\\ldots+\\left(\\prod_{t^{\\prime}=2}^{t}\\lambda_{t^{\\prime}}^{2}\\right)\\sigma_{1}^{2}\\] Now we substitute in the variance preserving condition \\(\\sigma_{t}^{2}=1-\\lambda_{t}^{2}\\) and the magic happens: the sum telescopes away to leave only the end points \\[\\sigma_{t|0}^{2}=(1-\\lambda_{t}^{2})+\\lambda_{t}^{2}(1-\\lambda_{t-1}^{2})+\\lambda_{t}^{2}\\lambda_{t-1}^{2}(1-\\lambda_{t-2}^{2})+\\lambda_{t}^{2}\\lambda_{t-1}^{2}\\lambda_{t-2}^{2}(1-\\lambda_{t-3}^{2})+\\ldots+\\left(\\prod_{t^{\\prime}=2}^{t}\\lambda_{t^{\\prime}}^{2}\\right)(1-\\lambda_{1}^{2})=1-\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}^{2}\\]

#### 4.1.3 Conditioning on an initial value and the fidelity level below

We will now show that \\[q(x^{(t-1)}|x^{(0)},\\;x^{(t)})=\\mathcal{N}(x^{(t-1)};\\mu_{t-1|0,t},\\sigma_{t-1|0,t}^{2})\\] where \\[\\mu_{t-1|0,t}=a^{(t-1)}x^{(0)}+b^{(t-1)}x^{(t)}=\\frac{\\left(\\prod_{t^{\\prime}=1}^{t-1}\\lambda_{t^{\\prime}}\\right)\\left(1-\\lambda_{t}^{2}\\right)}{1-\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}^{2}}x^{(0)}+\\frac{\\left(1-\\prod_{t^{\\prime}=1}^{t-1}\\lambda_{t^{\\prime}}^{2}\\right)\\lambda_{t}}{1-\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}^{2}}x^{(t)}\\] \\[\\sigma_{t-1|0,t}^{2}=\\frac{\\left(1-\\prod_{t^{\\prime}=1}^{t-1}\\lambda_{t^{\\prime}}^{2}\\right)(1-\\lambda_{t}^{2})}{1-\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}^{2}}.\\] We can prove these results by first applying Bayes' rule to decompose \\(q(x^{(t-1)}|x^{(0)},\\ x^{(t)})\\) into a product of two distributions that we know the form of already \\[q(x^{(t-1)}|x^{(0)},x^{(t)})\\propto
q(x^{(t-1)}|x^{(0)})q(x^{(t)}|x^{(t-1)})\\] \\[=\\mathcal{N}\\left(x^{(t-1)};\\left(\\prod_{t^{\\prime}=1}^{t-1}\\lambda_{t^{\\prime}}\\right)x^{(0)},1-\\prod_{t^{\\prime}=1}^{t-1}\\lambda_{t^{\\prime}}^{2}\\right)\\times\\mathcal{N}\\left(x^{(t)};\\lambda_{t}x^{(t-1)},1-\\lambda_{t}^{2}\\right).\\] We will now make use of the following identity (which can be shown by comparing the quadratic and linear terms in \\(x\\) inside the exponential on both left and right hand sides) \\[\\mathcal{N}(x;\\mu_{1},\\sigma_{1}^{2})\\times\\mathcal{N}(\\mu_{2};ax,\\sigma_{2}^{2})\\propto\\mathcal{N}\\left(x;\\mu=\\sigma^{2}\\left(\\frac{\\mu_{1}}{\\sigma_{1}^{2}}+a\\frac{\\mu_{2}}{\\sigma_{2}^{2}}\\right),\\sigma^{2}=\\frac{\\sigma_{1}^{2}\\sigma_{2}^{2}}{\\sigma_{2}^{2}+a^{2}\\sigma_{1}^{2}}\\right)\\] and we make the following identifications: \\(x=x^{(t-1)}\\), \\(\\mu_{1}=\\left(\\prod_{t^{\\prime}=1}^{t-1}\\lambda_{t^{\\prime}}\\right)x^{(0)}\\), \\(\\sigma_{1}^{2}=1-\\prod_{t^{\\prime}=1}^{t-1}\\lambda_{t^{\\prime}}^{2}\\), \\(\\mu_{2}=x^{(t)}\\), \\(a=\\lambda_{t}\\), and \\(\\sigma_{2}^{2}=1-\\lambda_{t}^{2}\\). Substituting in \\[\\sigma_{t-1|0,t}^{2}=\\sigma^{2}=\\frac{\\sigma_{1}^{2}\\sigma_{2}^{2}}{\\sigma_{2}^{2}+a^{2}\\sigma_{1}^{2}}=\\frac{\\left(1-\\prod_{t^{\\prime}=1}^{t-1}\\lambda_{t^{\\prime}}^{2}\\right)\\left(1-\\lambda_{t}^{2}\\right)}{\\left(1-\\lambda_{t}^{2}\\right)+\\lambda_{t}^{2}\\left(1-\\prod_{t^{\\prime}=1}^{t-1}\\lambda_{t^{\\prime}}^{2}\\right)}\\] \\[\\mu_{t-1|0,t}=\\mu=\\sigma^{2}\\left(\\frac{\\mu_{1}}{\\sigma_{1}^{2}}+a\\frac{\\mu_{2}}{\\sigma_{2}^{2}}\\right)=\\sigma^{2}\\left(\\frac{\\left(\\prod_{t^{\\prime}=1}^{t-1}\\lambda_{t^{\\prime}}\\right)x^{(0)}}{1-\\prod_{t^{\\prime}=1}^{t-1}\\lambda_{t^{\\prime}}^{2}}+\\lambda_{t}\\frac{x^{(t)}}{1-\\lambda_{t}^{2}}\\right)\\] which simplifies down to the expressions given at the start of this section.

#### 4.1.4 Relating my notation to standard diffusion notation

In this section we relate my notation to the notation which is more commonly used in DDPM papers \\[\\alpha_{t}=\\lambda_{t}^{2}\\qquad\\beta_{t}=1-\\lambda_{t}^{2}\\qquad\\bar{\\alpha}_{t}=\\prod_{t^{\\prime}=1}^{t}\\alpha_{t^{\\prime}}=\\prod_{t^{\\prime}=1}^{t}\\lambda_{t^{\\prime}}^{2}=\\Lambda_{t}^{2}.\\]

### 4.2 Reparameterising the loss using the SNR

Here we take the formula for the variational diffusion loss given in equation 3, slightly generalised to include weights, and take the limit as \\(T\\to\\infty\\).
\\[\\mathcal{L}(\\theta)=-\\frac{1}{2}\\sum_{t=1}^{T}\\mathbb{E}_{x}\\left[w_{t-1}\\left(\\text{SNR}(t-1)-\\text{SNR}(t)\\right)\\left(x^{(0)}-\\hat{x}_{\\theta}^{(0)}(x^{(t)},t-1)\\right)^{2}\\right]=-\\frac{1}{2}\\sum_{t=1}^{T}\\mathbb{E}_{x}\\left[w_{t-1}\\frac{\\text{SNR}(t-1)-\\text{SNR}(t)}{1/T}\\left(x^{(0)}-\\hat{x}_{\\theta}^{(0)}(x^{(t)},t-1)\\right)^{2}\\right]1/T\\to-\\frac{1}{2}\\int_{t=1}^{T}\\mathbb{E}_{x}\\left[w(t-1)\\left(-\\frac{\\text{dSNR}(t)}{\\text{d}t}\\right)\\left(x^{(0)}-\\hat{x}_{\\theta}^{(0)}(x^{(t)},t-1)\\right)^{2}\\right]\\text{d}t \\tag{4}\\]

Since the SNR is monotonically decreasing by construction (as we add more and more noise as \\(t\\) increases) we can safely reparameterise using \\(u=\\text{SNR}(t)\\) and use the chain rule to give \\[\\mathcal{L}(\\theta)\\to-\\frac{1}{2}\\int_{u=\\text{SNR-min}}^{\\text{SNR-max}}\\mathbb{E}_{x}\\left[w(u)\\left(x^{(0)}-\\hat{x}_{\\theta}^{(0)}(x^{(t)},u)\\right)^{2}\\right]\\mathrm{d}u \\tag{5}\\] Here SNR-max is the SNR at level \\(t=1\\) and it is set by \\(\\lambda_{1}\\), and SNR-min is the SNR at level \\(T\\to\\infty\\), which is set by \\(\\Lambda_{T}=\\prod_{t=1}^{T}\\lambda_{t}\\) (and would typically be close to 0). We have used overloaded notation so that e.g. \\(w(u)=w(t(u)-1)\\). Crucially, this equation does not depend on the schedule -- only the starting and ending signal to noise ratios.

### 4.3 Relationship to the Evidence Lower Bound (ELBO)

The DDPM can be viewed as a latent variable model for observed data \\(x^{(0)}\\) with latents \\(x^{(1:T)}\\). The marginal probability density of a data point is given by \\(p(x^{(0)}|\\theta)=\\int p(x^{(0:T)}|\\theta)\\;\\mathrm{d}x^{(1:T)}\\). As this expression measures how probable each data point is under the model, it would make for a sensible training objective for the model parameters \\(\\theta\\), e.g. using maximum-likelihood training which would average the log-density over the training data.30 Unfortunately the marginalisation over the latents is intractable, so instead one works with a lower bound,

Footnote 30: The average log-likelihood of the parameters with respect to the observed data \\(\\mathbb{E}_{q(x^{(0)})}\\left[\\log p(x^{(0)}|\\theta)\\right]\\) would arguably be a preferable objective to the per-step average log-likelihoods used above, \\(\\mathbb{E}_{q(x^{(t-1)},\\;x^{(t)})}\\left[\\log p(x^{(t-1)}|x^{(t)},\\theta)\\right]\\), as it measures the quality of the entire model on the observed data, rather than measuring the quality of part of the model on augmented data.

\\[\\mathbb{E}_{q(x^{(0)})}\\left[\\log p(x^{(0)}|\\theta)\\right]\\geq\\mathbb{E}_{q(x^{(0)})}\\left[\\text{ELBO}(x^{(0)},\\theta,q(x^{(1:T)}))\\right].\\]

The ELBO is formed by subtracting a KL divergence term from the intractable log-likelihood. The KL between two densities \\(q(x)\\) and \\(p(x)\\) is defined as \\[\\text{KL}(q(x)||p(x))=\\int q(x)\\log\\frac{q(x)}{p(x)}\\;\\mathrm{d}x\\geq 0.\\] The KL is non-negative and takes the value zero when \\(q(x)=p(x)\\). In our case, the second argument of the KL will be selected so that when the KL is subtracted from the log-likelihood, the resulting ELBO is tractable. It turns out that choosing it to be the posterior distribution over the latent variables given the observed variables, \\(p(x^{(1:T)}|x^{(0)})\\), leads to a simple form. We will talk about the form of \\(q(x^{(1:T)})\\) in a moment (for one thing, we will let it depend on the data \\(x^{(0)}\\)), but at this point it can be considered arbitrary.
With this setup, the ELBO can be written as follows \\[\\text{ELBO}(x^{(0)},\\theta,q(x^{(1:T)}))=\\log p(x^{(0)}|\\theta)-\\text{KL}(q(x^{(1:T)})||p(x^{(1:T)}|x^{(0)},\\theta))=\\mathbb{E}_{q(x^{(1:T)})}\\left[\\log\\frac{p(x^{(0:T)}|\\theta)}{q(x^{(1:T)})}\\right].\\] Notice that the choices made above for the KL and the distributions within it result in us only requiring the joint distribution for the model, \\(p(x^{(0:T)}|\\theta)=p(x^{(T)})\\prod_{t=1}^{T}p(x^{(t-1)}|x^{(t)},\\theta)\\), which is how we specified the model in the first place - there is no longer a requirement for us to compute intractable marginalised quantities. Typically, variational inference then involves computing an estimate of the average ELBO by simple Monte Carlo sampling, \\[\\mathbb{E}_{q(x^{(0:T)})}\\left[\\log\\frac{p(x^{(0:T)}|\\theta)}{q(x^{(1:T)})}\\right]\\approx\\frac{1}{N}\\sum_{n=1}^{N}\\log\\frac{p(x_{n}^{(0:T)}|\\theta)}{q(x_{n}^{(1:T)})}\\;\\;\\text{where}\\;\\;x_{n}^{(0:T)}\\sim q(x^{(0:T)}).\\] This estimator can then be optimised with respect to the parameters \\(\\theta\\). Normally the approximate posterior \\(q(x^{(1:T)})=q(x^{(1:T)}|\\phi)\\) would be parameterised and optimised alongside the generative model so that the bound remains as tight as possible at each point, thereby minimising the discrepancy between the ELBO and the log-likelihood (Kingma and Welling, 2022). After the optimisation the approximate posterior depends on the data \\(q(x^{(1:T)}|\\phi(x^{(0)}))\\). This is not the approach used in the DDPM.

Instead, to recover the DDPM, we use an approximate posterior distribution which depends on the data, but whose parameters are fixed. Specifically, this approximate posterior is equal to the noising process \\[q(x^{(1:T)}|x^{(0)},\\phi)=\\prod_{t=1}^{T}q(x^{(t)}|x^{(t-1)},\\lambda_{t})=\\prod_{t=1}^{T}\\mathcal{N}(x^{(t)};\\lambda_{t}x^{(t-1)},1-\\lambda_{t}^{2}).\\] So this approximate posterior distribution depends on the data as the noising process is initialised at \\(x^{(0)}\\), but its parameters \\(\\phi=\\lambda_{1:T}\\) are not learned. We can now show that the ELBO objective is equivalent to the unweighted DDPM training objective. First note that \\[\\mathbb{E}_{q(x^{(0)})}\\left[\\text{ELBO}(x^{(0)},\\theta,q(x^{(1:T)}|x^{(0)},\\phi))\\right]=\\underbrace{\\mathbb{E}_{q(x^{(0:T)}|\\phi)}\\left[\\sum_{t=1}^{T}\\log p(x^{(t-1)}|x^{(t)},\\theta)\\right]}_{\\text{unweighted DDPM loss}}+c(\\phi)\\] where the constant \\(c(\\phi)\\) collects the average entropy of the augmented variables together with the term \\(\\mathbb{E}[\\log p(x^{(T)})]\\), neither of which depends on the model parameters \\(\\theta\\): \\[c(\\phi)=\\mathbb{E}_{q(x^{(0:T)}|\\phi)}\\left[\\log p(x^{(T)})\\right]-\\mathbb{E}_{q(x^{(0:T)}|\\phi)}\\left[\\log\\left(q(x^{(1:T)}|x^{(0)},\\phi)\\right)\\right].\\] As such optimisation of the ELBO w.r.t. \\(\\theta\\) is equivalent to optimising the unweighted DDPM objective.

#### 4.3.1 Why I don't like the ELBO perspective

I don't particularly like the ELBO perspective for the following reasons:

1. The ELBO perspective immediately suggests generalisations to the DDPM which do not work well, including learning the approximate posterior (i.e. learning the noising process) and using a non-linear noising process parameterised by a neural network. Both of these extensions are natural from the ELBO and VAE perspectives as they will lead to tighter bounds on the log-likelihood.
The fact that they do not work well is a challenge to the utility of the ELBO perspective.31

Footnote 31: Part of the reason that variational methods do not work well in general though is that often the optimisation takes us to parameter settings \\(\\theta\\) where the bound is tight (\\(q\\) is accurate) rather than where the underlying likelihood is high (\\(\\theta\\) is close to the underlying maximum likelihood parameter). So, counterintuitively, there are cases where simpler approximate posteriors perform better, e.g. an approximation which is equally tight across the parameter space is optimal from a parameter-learning perspective (see figure 3 and Turner and Sahani (2011) for more information). This could be used to justify a fixed and simple \\(q\\), but this is a nuanced justification.

Figure 3: Here's a toy example where the log-likelihood is shown in black with an optimum \\(\\theta^{*}\\). Below it are two different variational lower bounds (ELBOs) formed from using different approximate posteriors \\(q_{1}\\) and \\(q_{2}\\). These two bounds have different optima \\(\\theta_{1}\\) and \\(\\theta_{2}\\). The first bound is tighter to the log-likelihood, but its optimum is biased towards large parameter values as the bound is tighter there. The second bound is looser, but as it is equally loose everywhere, its optimum coincides with the log-likelihood. For this reason, simpler families of approximate posterior distribution can sometimes outperform more complex ones as their tightness can be less parameter dependent.

2. In many practical situations, authors use the weighted version of the DDPM loss \\(w_{t}\\neq 1\\) to improve performance. In which case, the clean correspondence to the ELBO is lost (the objective is no longer a bound) and again the ELBO does not usefully constrain the design space. So, not only does the ELBO suggest generalisations that do not perform well (point 1 above), it is also incompatible with generalisations that do perform well.

3. If you fix the approximate posterior to a simple distribution and are just learning the generative model, then the ELBO perspective does not bring substantial additional technical insights or utility (although see point 5 below for an exception). However, it comes at the cost of needing to know about KL divergences, bounds etc. In this light, I prefer the simpler explanation given in the main body of this note.

4. In the normal exposition of probabilistic modelling, latent variables are a hierarchy of high- and low-level variables that underlie the data. For example, for images they might comprise the objects present in a scene, their pose, position and properties, the lighting conditions, camera position and so on. These types of variables necessitate complex learned inference networks \\(q\\). Fixing the inference network and using a simple linear Gaussian form would make no sense if the goal is to extract such variables. However, here in the DDPM the latent variables have quite a different character -- they are just noise corrupted versions of the data rather than meaningful high-level representations. It is remarkable from the standard latent variable modelling perspective that latent variables which are representationally meaningless can be used to produce a good generative model. This feature of the DDPM means connections to latent variable modelling are conceptually dissonant, although mathematically accurate.
5. In general an ELBO formed with a linear Gaussian approximate posterior would be extremely loose and a poor optimisation target. This will be true for the DDPM for small to moderate numbers of noise levels \\(T\\). Remarkably, in the limit \\(T\\to\\infty\\) though, there exist non-linear generative models that are universal approximators which also have optimal Gaussian AR(1) approximate posteriors [Anderson, 1982]. It seems likely in this limit that the ELBO will be tight to the log-likelihood, although we do not know of a proof at this time. This behaviour is very surprising, and essential to understand why the variational approximation is sensible, but again this requires substantial technical explanation and the continuous time version of the DDPM.

**Acknowledgements.** We would like to thank Jose Miguel Hernandez-Lobato for helpful feedback. Richard E. Turner is supported by Microsoft, Google, Amazon, ARM, Improbable and EPSRC grant EP/T005386/1 together with Aliaksandra Shysheya.

## References

* Anderson (1982) Brian D.O. Anderson. Reverse-time diffusion equation models. _Stochastic Processes and their Applications_, 12(3):313-326, 1982. ISSN 0304-4149. doi: https://doi.org/10.1016/0304-4149(82)90051-5. URL https://www.sciencedirect.com/science/article/pii/0304414982900515.
* Benton et al. (2024) Joe Benton, Valentin De Bortoli, Arnaud Doucet, and George Deligiannidis. Nearly \\(d\\)-linear convergence bounds for diffusion models via stochastic localization, 2024.
* Chen et al. (2023) Sitan Chen, Sinho Chewi, Jerry Li, Yuanzhi Li, Adil Salim, and Anru R. Zhang. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions, 2023.
* Ho et al. (2020) Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In _Proceedings of the 34th International Conference on Neural Information Processing Systems_, NIPS'20, Red Hook, NY, USA, 2020. Curran Associates Inc. ISBN 9781713829546.
* Kingma and Welling (2022) Diederik P Kingma and Max Welling. Auto-encoding variational bayes, 2022.
* Kingma et al. (2021) Diederik P. Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models. _ArXiv_, abs/2107.00630, 2021. URL https://api.semanticscholar.org/CorpusID:235694314.
* Larochelle and Murray (2011) Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In Geoffrey Gordon, David Dunson, and Miroslav Dudik, editors, _Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics_, volume 15 of _Proceedings of Machine Learning Research_, pages 29-37, Fort Lauderdale, FL, USA, 11-13 Apr 2011. PMLR. URL https://proceedings.mlr.press/v15/larochelle11a.html.
* MacKay (2003) David J. C. MacKay. _Information Theory, Inference, and Learning Algorithms_. Cambridge University Press, 2003.
* Nichol and Dhariwal (2021) Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models.
In Marina Meila and Tong Zhang, editors, _Proceedings of the 38th International Conference on Machine Learning_, volume 139 of _Proceedings of Machine Learning Research_, pages 8162-8171. PMLR, 18-24 Jul 2021. URL https://proceedings.mlr.press/v139/nichol21a.html.
* Song and Ermon (2019) Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alche-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/file/3001ef25740745a371a96dcd947c7d93-Paper.pdf.
* Song et al. (2021) Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In _9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021_. OpenReview.net, 2021. URL https://openreview.net/forum?id=PXTIG12RRHS.
* Turner and Sahani (2011) R. E. Turner and M. Sahani. Two problems with variational expectation maximisation for time-series models. In D. Barber, T. Cemgil, and S. Chiappa, editors, _Bayesian Time series models_, chapter 5, pages 109-130. Cambridge University Press, 2011.
* Vincent et al. (2008) P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. In _International Conference on Machine Learning proceedings_. 2008.
TerraSAR-X and RapidEye data for the parameterisation of relational characteristics of urban ATKIS DLM objects

S. Hese, M. Lindner, M. Voltersen, C. Berger
Friedrich-Schiller University Jena, Dept. of Geography, 07737 Jena, Germany - [email protected]

## 1 Introduction

With the start of the two German satellite missions TerraSAR-X (2007) and RapidEye (2008), new opportunities for a wide range of remote sensing data applications unfolded. The synthetic aperture radar (SAR) data provided by the TerraSAR-X mission are acquired at a spatial resolution of up to 1 m. RapidEye offers multispectral imagery at 6.5 m spatial resolution in five spectral bands. Amongst many other potential applications, the complementary nature of the two satellite missions may be of particular interest for studying different aspects of the urban environment. The present work is part of the ENVILAND-2 research project (http://www.enviland-2.uni-jena.de/), which investigates the synergy between SAR and multispectral satellite data for ENVIronment and LAND use applications. The urban work package focuses on the combined analysis of TerraSAR-X and RapidEye data and is split into three main steps. In the first phase, urban land cover units are mapped by means of feature-level fusion techniques. In the second phase, an object-based analysis of urban land use structures is carried out by taking the spatial relations between individual elements of the urban land cover into account. Finally, the derived information is used to develop new spatial indicators relevant to support urban planning. Data from TerraSAR-X and RapidEye are used together with very high resolution (VHR) multispectral imagery, object height information obtained from airborne laser scanning (ALS) and different types of vector data. This paper focuses on the development of a transferable, object-based rule set for the derivation of urban land use structures at block level and improved measures of imperviousness. The data basis consists of RapidEye and TerraSAR-X imagery, as well as height information from a LiDAR nDSM (normalized Digital Surface Model) and object boundaries from ATKIS (Official Topographic Cartographic Information System) vector data for a study area in the city of Rostock (Mecklenburg-Vorpommern), Germany. The ATKIS vector data used denote the Basis-DLM (Digital Base Landscape Model). It describes topographic objects of the landscape and includes information about position, shape and characteristic attributes of the mapped objects defined in the ATKIS object catalogue. These objects are differentiated into point, line and area objects. DLM information is available Germany-wide for every federal state. Regarding data accuracy, the DLM of Mecklenburg-Vorpommern refers to the topographic map TK 1:10000. Since 2004, data capture has been based on high-resolution Digital Ortho Photos (DOPs). Currently there are about 130 object types which characterize the landscape structure by the transportation network, waters and boundaries of, e.g., residential, industrial, commercial and forest areas. The mapping date for the study area of Rostock relates to the year 2008. Consistent mapping of impervious areas and housing density is required for various environmental parameters. Soil sealing can be described as the isolation of the Earth's surface, primarily caused by anthropogenic influences (Burghardt, 2006).
In Europe, about three quarters of the population live in cities and sealed areas take up nine percent of the continent (Scalenghe and Ajmone-Marsan, 2009). Currently there is no reliable and consistent method for the extensive mapping of impervious areas, although the environmental consequences of soil sealing are important. By isolating the soil, its permeability, infiltration rate and groundwater level are reduced, and the surface runoff increases. The hydrological cycle is thus affected by impervious areas. Furthermore, soil sealing has effects on energy and heat transfer between the soil and the atmosphere. Especially in urban areas, periods of heat stress are associated with phenomena like the Urban Heat Island effect (Katzschner et al., 2009). Additionally, the degree of compactness of the buildings (geometry, height, orientation) affects wind speed and ventilation within urban environments (Klysik and Fortuniak, 1999). Hence the urban microclimate is subject to the degree of soil sealing and urban imperviousness (but also to building height, building density and orientation). Decreasing land consumption and the effective handling of soil as an important resource are among the important tasks for city planners (Thunig et al., 2010). For that reason an advanced imperviousness intensity index was generated by combining object-relational horizontal sealing attributes and vertical (3D) building structure indices. This imperviousness measure also takes building height and neighbourhood object information into account and is therefore referred to as a "Build-up Intensity Metrics". ## 2 Data and tests The area of investigation is the city of Rostock in Germany. Rostock is located at the Baltic Sea. It has a total population of 200,000 inhabitants and comprises an area of about 180 km\\({}^{2}\\), which makes it the largest city in the federal state of Mecklenburg-Western Pomerania. Rostock is characterized by a large variety of urban land use categories including core areas, industrial and commercial sites, single and double family houses, row houses, allotment gardens and many other urban land use types. The case studies presented in this paper are based on one or more of the following datasets: (1) a TerraSAR-X scene, (2) RapidEye imagery, (3) a normalized digital surface model (nDSM) and (4) the digital landscape model (DLM) for the city of Rostock. An overview of the TerraSAR-X and RapidEye data is given in Table 1. The TerraSAR-X dataset was recorded in high resolution spotlight mode on June 8, 2010. Data pre-processing comprised radiometric calibration, multi-looking, geocoding and topographic normalization. Moreover, speckle filtering was applied to the SAR scene using the Enhanced Lee algorithm (Lopes et al., 1990) with a 3\\(\\times\\)3 kernel size. The effective resolution of the geocoded and despeckled multi-look intensity (MLI) image was 2 m. The multispectral RapidEye scene was collected over Rostock on June 6, 2010. After the selection of 24 well-distributed Ground Control Points (GCPs), the imagery was orthorectified with the help of a Digital Elevation Model (DEM) covering the area. Orthorectification resulted in a geocoded multispectral image with a geometric resolution of 6.5 m. An atmospheric correction was applied to the data using ATCOR algorithms. The nDSM was derived from Light Detection and Ranging (LiDAR) data provided by the city of Rostock. The LiDAR data were acquired in 2006 with 2 m spatial resolution.
The DEM is subtracted from the DSM in order to obtain a dataset with information on the height of objects relative to the ground (i.e., the nDSM). The DLM of Rostock is a vector dataset describing urban land use categories on a per-parcel basis. It is part of the official topographic cartographic information system (ATKIS) in Germany and is used in a modified form as a reference in the context of the classification of urban land use structures. As the last step of preprocessing, all raster datasets were coregistered to digital ortho photos (DOPs) provided by the city of Rostock. ## 3 Methods ### Urban Land Use Structure The classification of various land cover units forms the basis of this analysis. An object-based land cover classification is implemented that uses feature level fusion to combine the information of all available input datasets. In addition to spectral information, shape and context image features are employed to characterize and extract specific land cover objects. The different land use structures are then determined by quantifying typical horizontal combinations and constellations of the extracted land cover classes and land cover proportions. Accuracy assessment is carried out utilizing the available ATKIS information. Because of their detailed differentiation concerning land use, the existing DLM classes had to be harmonized. The DLM classes cemetery, grove, heath and park, for example, were aggregated into the land use structure class "green space" due to similarities of their definitions and their appearance. A typical feature combination is explained exemplarily for the land use structure class "residential". Three representative indicators for the prevalent land use are identified: single family houses, linear buildings and space enclosing building structures. All residential buildings are characterized by the land cover classes for dark or red roofs. Bright roofs are rare. Furthermore, single-family houses usually have an approximately square shape and cover only a small area. Linear building types are characterized by a rectangular shape, a distinctive length to width ratio and a specific maximum area. For the land cover classification the parametric Jeffries-Matusita distance is utilized as a separability measure. Buildings are characterized by low NDVI values and high mean object heights. For the differentiation of the various roof types the multispectral RapidEye data lead to the best results. However, the discrimination of red and dark roofs suffers from very poor separability. Figure 2 shows some of the employed features for the class differentiation of the three roof types and clarifies the feature values for the best separability or the least overlap. Water and shadows are differentiated using, in addition, TerraSAR-X coherence (VV) based statistics, assuming that the water surface is not dominated by larger waves. Water is therefore classified by its surface shape characteristics and not by its spectral properties. We accept that this could be a disadvantage for the transferability of the concept and for the overall robustness. \\begin{tabular}{l l l l} \\hline \\multicolumn{2}{l}{TerraSAR-X} & \\multicolumn{2}{l}{RapidEye} \\\\ \\hline Acquisition date & 2010-06-08 & Acquisition date & 2010-06-06 \\\\ Acquisition mode & HR Spotlight & Cloud cover & 0 \\% \\\\ Incidence angle & 44\\({}^{\\circ}\\) & Incidence angle & 10.6\\({}^{\\circ}\\) \\\\ Effective res. & 2 m (MLI) & Geometric res. & 6.5 m (orthorectified) \\\\ Radiometric res. & 32 bit float & Radiometric res. & 16 bit unsigned \\\\ Frequency & X-band (9.6 GHz) & Spectral bands & \\\\ Wavelength & 3 cm & Blue & 440-510 nm \\\\ Polarization (mode) & HH, VV (dual) & Green & 520-590 nm \\\\ Pass direction & Descending & Red & 630-685 nm \\\\ Looking direction & Right & Red Edge & 690-730 nm \\\\ Scene extent & 5\\(\\times\\)10 km & Near Infrared & 760-850 nm \\\\ \\hline \\end{tabular} Table 1: Overview of the TerraSAR-X and RapidEye datasets. For the classification of land use classes, shape, area and image object context features are used. Single-family houses are characterized by high values for the feature _density_. Furthermore, the object area has to be between 100 and 400 m\\({}^{2}\\) and the relative area of the roof type sub-objects represents almost completely red or dark roofs. Linear building objects are determined by using a _rectangular fit_, _width_ and _length/width_ approximation. The _rectangular fit_ shows high values, the width is limited to between 12 m and 30 m, and the object length must not exceed six times the width. Additionally, the object area has to be below 1500 m\\({}^{2}\\) and the relative area of the roof type sub-objects likewise represents almost completely red or dark roofs. Making use of ATKIS objects in this analysis, the number and proportion of the land cover classes found are used to derive the residential land use structures. Generally, for all residential land use structures at least 20 % of each ATKIS object has to be classified as vegetation land cover. Furthermore, every ATKIS object and each classified building object needs a certain proportion of red or dark roofs, which are considered as potential residential buildings. For single family houses and linear buildings, every ATKIS object requires more than 2 land use indicator objects in each case for the assignment to a residential land use structure. Figure 3 shows the classification result for all derived land use structure classes. An overall accuracy of 63 % was achieved. ### Impervious Build-Up Intensity Mapping For the assessment of growth dynamics and climatic characteristics there is a need not only to map impervious areas but to quantify the urban build-up intensity. In this work this is done by connecting an object-relational horizontal impervious attribute and vertical building structure information to generate a build-up intensity index. A pan-sharpened QuickBird image together with a normalized Digital Surface Model generated from LiDAR data is used to derive land cover information for the city of Rostock (Germany) based on an object-based classification. After correcting the classified impervious areas for street tree crown shadowing (which corrects the underestimation of imperviousness in urban areas), the index was created by combining building density, sealing degree, vegetation fraction and the vertical gross sealing of every building (Figure 4). Building density is calculated from (1) the building coverage area, (2) the distance and number of buildings, and (3) the mean vertical gross sealing. The latter (3) is described by the ratio of the footprint and the gross floor area value for every building. The developed concept is then tested with RapidEye data at a lower spatial resolution. Some minor rule set adaptations were necessary to generate satisfying classification results, mainly with regard to brightness and NDVI values; a sketch of one possible reading of the index follows below.
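The exact arithmetic behind the build-up intensity metrics is not given in this paper, so the following Python fragment is only a hypothetical sketch of one plausible reading: each component is z-score standardised, which is consistent with the reported index range of roughly -2.5 to 1.3, and the components are averaged with equal weights. The function name, the equal weighting and the direction of the vegetation term are assumptions made for illustration.

```python
import numpy as np

def build_up_intensity(footprint_area, gross_floor_area,
                       sealing_degree, vegetation_fraction, building_density):
    """Hypothetical reconstruction of the build-up intensity metrics:
    z-score each component and average them with equal weights."""
    footprint_area = np.asarray(footprint_area, dtype=float)
    gross_floor_area = np.asarray(gross_floor_area, dtype=float)
    # Vertical gross sealing: ratio of gross floor area to footprint (a floors proxy).
    vertical_gross_sealing = gross_floor_area / footprint_area

    def z(v):  # standardisation, consistent with index values around [-2.5, 1.3]
        v = np.asarray(v, dtype=float)
        return (v - v.mean()) / v.std()

    # More vegetation lowers the intensity, hence the negated z-score.
    components = [z(vertical_gross_sealing), z(sealing_degree),
                  -z(vegetation_fraction), z(building_density)]
    return np.mean(components, axis=0)
```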
As shown in Figure 4, the build-up intensity metrics of Rostock vary from low values for buildings in the surrounding regions to high values for sealed areas such as the city centre. The lowest intensity values, ranging from -2.5 to -1, appear within allotment areas because of their large vegetation fraction, sparse impervious surface areas and very low gross sealing. Slightly higher values are derived for single-family house settlements. Apartment complexes show mean build-up intensities of about -0.6, caused by high gross floor areas associated with low building densities. The highest build-up intensity values, up to 1.3, are generated for industrial areas and particularly for the city centre. ## 4 Conclusions The study presented in this work developed a concept to derive urban land use structure and advanced metrics to quantify imperviousness/3D build-up based on satellite data and ALS data. The use of ATKIS DLM information showed that Earth observation based quantification of urban land use is not necessarily comparable with the DLM information. There is, however, clearly a potential to derive functional properties from the structure and system of land cover objects with the aid of very high spatial resolution data. Urban object neighbourhood and density measures as well as form and shape parameterisations are important for a quantification of the urban land use properties. TerraSAR-X, with its monotemporal coverage in this study, was only used for water versus shadow differentiation. Although spatial sharpening of RapidEye data with TerraSAR-X data was originally planned, the overall use of the TerraSAR-X data in this study was limited to selected features for better land cover class separability. ## References * Burghardt, W., 2006. Soil sealing and soil properties related to sealing. _Geological Society, London, Special Publications_, 266, pp. 117-124. * Katzschner, L., Maas, A., Schneider, A., 2009. Das städtische Mikroklima: Analyse für die Stadt- und Gebäudeplanung. _Bauphysik_, 31(1), pp. 18-24. * Klysik, K., Fortuniak, K., 1999. Temporal and spatial characteristics of the urban heat island of Lodz, Poland. _Atmospheric Environment_, 33(24-25), pp. 3885-3895. * Lopes, A., Touzi, R., Nezry, E., 1990. Adaptive speckle filters and scene heterogeneity. _IEEE Transactions on Geoscience and Remote Sensing_, 28(6), pp. 992-1000. * Scalenghe, R., Ajmone-Marsan, F., 2009. The anthropogenic sealing of soils in urban areas. _Landscape and Urban Planning_, 90(1), pp. 1-10. ## 5 Acknowledgement This work is partly based on RapidEye data provided by the RapidEye Science Archive (RESA) of DLR. Financial support was provided through the ENVILAND-2 project (DLR). Figure 4: Urban Build-Up intensity index for the buildings of Rostock derived from RapidEye data.
This work presents a multi-sensor data analysis concept for the parameterisation of urban land use in comparison to ATKIS DLM reference objects (digital landscape model). An object-based top-down approach is implemented and the potential of multi-sensor data for a primary urban land cover object classification is assessed. Urban land use structure is derived based on relational features applied to land cover objects and compared to an aggregated DLM class legend. For a better imperviousness description an advanced imperviousness measure, the build-up impervious intensity ratio, is developed that takes building height and derived floor area (based on LiDAR data) and the amount of vegetation within a search radius of every building into account. The developed concept is parameterized with test areas in Rostock and transferability is investigated with data coverage in Cologne (Germany). The work is linked to the urban work package of the DLR funded ENVILAND-2 project that aims to develop operational concepts for the use of TerraSAR-X and RapidEye data in urban mapping scenarios. Keywords: urban land use, object-based image analysis, RapidEye, TerraSAR-X, ENVILAND-2, DLM
# Sensitivity Study of WRF Simulations over Tanzania for Extreme Events during Wet and Dry Seasons Abubakar Lungo 1Central Forecasting Office, Tanzania Meteorological Agency, Dar es Salaam 16103, Tanzania; [email protected] Sangil Kim 2Department of Mathematics, Pusan National University, Busan 46241, Korea; [email protected] Meiyan Jiang 2Department of Mathematics, Pusan National University, Busan 46241, Korea; [email protected] Giphil Cho 3Finance-Fishery-Manufacture Industrial Mathematics Center on Big Data, Pusan National University, Busan 46241, Korea; [email protected] Yongkuk Kim 4Correspondence: [email protected]; Tel.: +82-51-510-2209 Received: 29 March 2020; Accepted: 30 April 2020; Published: 2 May 2020 ## 1 Introduction Rainfall is an important climate phenomenon that affects social and economic activities in Tanzania. The economy of Tanzania is mainly dependent on rain-fed agriculture, which is highly vulnerable to the amount and distribution of rainfall [1]. On the other hand, excessive rainfall also has a negative influence on socioeconomic activities in Tanzania, as it can lead to flooding, loss of life, and damage to property. In Tanzania, rainfall has large temporal and spatial variability [2]. Therefore, accurate forecasting is necessary. The Tanzania Meteorological Agency (TMA) is a government agency that acts as the country's authoritative voice for weather and climate-related issues. Among the major responsibilities of the TMA is to provide weather forecasts at all scales to the public. In order to issue forecasts, Tanzania uses numerical weather prediction (NWP) products provided by global centers obtained through the WMO's Severe Weather Forecasting Demonstration Project (SWFDP), namely GFS (Global Forecasting System, USA), ECMWF (European Centre for Medium-Range Weather Forecasts, European countries), and UKGM (United Kingdom Global Model, United Kingdom). The TMA also has access to NWP products from other global centers, including the Unified Model (UM) of Korea and ARPEGE of Meteo-France. There are also regional centers that provide model products to other countries, including Tanzania. These include the UM from the Regional Specialized Meteorological Centre (RSMC) in Pretoria, South Africa; Aire Limitee Adaptation dynamique Developpement InterNational (ALADIN) from the RSMC for tropical cyclones in the Southwest Indian Ocean in Reunion, France; and WRF from the RSMC in Nairobi, Kenya. At the national level, the TMA also runs a number of regional models, including WRF, the Consortium for Small-scale Modeling (COSMO) model, and WaveWatch III. As the country is located in a tropical region, the main type of precipitation that affects Tanzania is convective precipitation. This includes showers, thundershowers, hailstones, and sometimes tornadoes or waterspouts. Convective precipitation in the tropics is sometimes difficult to resolve with global models, partly because it occurs at small spatial and temporal scales. In order to capture this type of precipitation, an adequate mesoscale model with an appropriate combination of settings is needed. However, most of the rainfall events were not well forecasted by the current global and regional models. This implies that more suitable numerical model settings are needed to improve the accuracy of the forecasts produced and issued to users and the public.
In this study, the skill of the numerical mesoscale WRF model in forecasting precipitation over Tanzania and the sensitivity of the results to selected physical parameterization schemes and horizontal grid spacing (resolution) during the wet and dry seasons were assessed. Currently, the available settings of the WRF NWP model do not provide accurate forecasts that meet customer satisfaction for all seasons; thus, this research was conducted to improve the quality of the forecast. In addition, following the development and availability of new versions and settings of the WRF system worldwide, this research was conducted to test the sensitivities of the new options for the physical parameterization of clouds in the model. This study provides WRF model settings in terms of resolution and physical parameterization schemes (microphysics and cumulus physics) that best resolve weather systems during the wet and dry seasons over Tanzania. The main objective of this study was to simulate and model the rainfall distribution over Tanzania using the WRF model. To achieve this main objective, a number of tasks (specific objectives) were conducted. These included the following: (i) to conduct numerical experiments using the WRF model at different horizontal resolutions and to determine the accuracy of the model, and (ii) to conduct experiments with different physical parameterization schemes (convective schemes and cloud microphysics schemes) to determine the sensitivity of the model. This paper comprises four sections. Section 1 provides the background of the study area, the rationale of the study, and its objectives. Section 2 discusses the research methodology of the study. A description of the domain size experiment and the evaluation of different physical schemes are given in this section. Section 3 describes the results of the study following the sensitivity tests of the model performance of different physical schemes in forecasting rainfall events. Section 4 includes the summary and remarks and provides recommendations for further studies. ## 2 Materials and Methods ### Study Area Tanzania is an East African nation covering 947,303 km\\({}^{2}\\), located between 1\\({}^{\\circ}\\) S and 12\\({}^{\\circ}\\) S and between 28\\({}^{\\circ}\\) E and 42\\({}^{\\circ}\\) E (Figure 1a). Because Tanzania is located in a tropical area, the weather is controlled more by precipitation than by temperature and surface wind, which do not vary much throughout the year. As Tanzania is located in an equatorial climatic region, it often receives a large amount of precipitation during the wet season (October to May), while the dry season lasts from June to September each year. Tanzania's rainfall pattern is divided into two types, namely bimodal rainfall pattern areas (areas receiving two rainfall seasons in one year) and unimodal rainfall pattern areas (areas receiving one rainfall season in one year), as shown in Figure 1b. In the bimodal areas, there is a short rainfall season from October to December and a long rainfall season from March until May each year, while in unimodal areas there is one long rainfall season from November to May each year. For the wet season, the cases selected for this study included the heavy rain event of 20-22 December 2011. These were very heavy rainfall events due to a strengthening of the inter tropical convergence zone (ITCZ), which is the main rainfall mechanism in the country.
It is difficult to predict the onset and location of heavy rainfall events caused by the ITCZ, and most of the available global models were unable to correctly resolve the rainfall. For the dry season, the events of 22-25 July 2014 were used. In this month, although it is drier and cooler in Tanzania, the coastal locations receive prolonged periods of light rainfall due to easterly waves and ridging high-pressure systems. The elevated areas and mountainous areas also normally receive precipitation during this month. These rains are always stratiform in nature, resulting from layered clouds, such as stratus, stratocumulus, and altostratus or nimbostratus clouds. ### Rainfall Events and Their Observations Rainfall events include 20-22 December 2011 for the wet season and 22-25 July 2014 for the dry season. #### 2.2.1 Heavy Rain Event of 20-22 December 2011 During this heavy rain event, the well-established ITCZ was the dominant weather system. Several locations in the country received heavy rainfall due to the ITCZ. Most of the available global models and the national WRF model forecasts initialized at 0000Z on the 19th for these three days showed almost no sign of heavy precipitation over the coastal and central regions, and they forecasted less precipitation over the southern sector of the country. The 24-h forecast issued by the TMA expected only light showers for 20 December 2011. Figure 2a shows satellite images that indicated deep convective clouds along the coastal, central, western, and northern areas of Tanzania [3]. During this period, the Northern Hemisphere high-pressure cells, namely the Azores and the Arabian ridge, remained intense. The Southern Hemisphere high-pressure cells, namely Mascarene and St. Helena, shown in Figure 2b,c, were relatively weak. This resulted in a southward movement of the ITCZ towards Tanzania. Figure 1: Location of Tanzania in Africa (Image source: citiesandplaces.com) (**a**) and the rainfall zones in Tanzania (**b**). #### 2.2.2 Rain Event of 22-25 July 2014 For this event, the dominant weather system was the ridging high. A ridging high is a weather system that occurs in southeastern Africa when the region is dominated by a high-pressure system from the south that extends a ridge northward along the coast, as shown in Figure 3. Normally, a ridging high is a low-level system and appears during the Southern Hemisphere's winter (the dry season in most areas). It is considered one of the off-season rainfall mechanisms. In most cases, ridging highs are associated with strong maritime (moist) low-level winds, as the high-pressure systems are situated behind the frontal system, thereby causing a strong pressure gradient and strong winds. As these strong, moist, low-level winds encounter topographic barriers, owing to the nature of the topography of Figure 2: (**a**) Day natural color RGB (VIS0.6, VIS0.8, VIS1.6) satellite imagery of Eastern Africa at 07:12 UTC on 21 December 2011 showing the cloud development zone (indicated in red), indicating the presence of the inter tropical convergence zone (ITCZ) (Image source: EUMETSAT); (**b**) MSLP pattern on 20–22 December 2011 showing a low-pressure area (red dashed line) over Tanzania; and (**c**) 700 hPa mean wind-flow pattern on 20–22 December 2011 showing the location of the inter tropical convergence zone (blue dashed line) over Tanzania. (Image source: NCEP Reanalysis Data).
southeastern Africa, they deposit a large amount of moisture just before the topographic barriers; as the area is dominated by the high-pressure system, a cluster of low-level stratiform (layered) clouds forms and results in light rainfall that can last for a prolonged period of time, such as an entire day, and causes flooding on the ground. The EUMETSAT satellite imagery indicates the presence of clusters of layered clouds oriented north-south along the East African coast, advected from the east over the Indian Ocean (image not shown). The presence of layered clouds advected from the east indicates the easterly wave pattern [4]. The synoptic-scale mean sea-level pressure analysis showed that the area was dominated by the ridge (high pressure) and the cold, dry southerly flow, thereby indicating that the clouds were layered (stratiform). #### 2.2.3 Observations During the events described in the previous subsections, daily rainfall data were observed by the TMA's station network (Figure 4). These data were observed from both automatic and manned stations. The data contained 24-h accumulated rainfall as observed at 06:00 UTC on each of the event days. The station observational data were considered the most accurate available data for the case selection and for the verification of the forecasts by the WRF model in this study. Therefore, the rainfall accumulation data from a total of 42 stations were used for the verification of the model performance. Figure 3: Map of the southern area of Africa showing the 925 hPa geopotential height pattern overlaid on wind (kt) and relative humidity (RH) fields, portraying the ridging high-pressure system on 22 July 2014. The colorbar represents RH (Image generated with GrADS using NCEP Reanalysis Data). ### Default Model Configuration At the TMA central forecasting office, WRF model version 3.8.1 [5] was configured for East Africa and the Lake Victoria Basin. The domain was configured in the Mercator map projection with a central latitude of 10\\({}^{\\circ}\\) S and longitude of 36\\({}^{\\circ}\\) E. It had a one-way nested domain configuration consisting of an outer domain with 15 km resolution, as shown in Figure 5a, and an inner domain with 5 km resolution, as shown in Figure 5b. The outer domain and inner domain had 200 \\(\\times\\) 200 and 235 \\(\\times\\) 160 grid points, respectively. Both had 32 vertical levels with a model top at 100 hPa. Also, in order for the mesoscale model to run, initial and boundary condition data were needed. GFS initial and boundary condition data were downloaded from the NOAA Global Model GFS and were used to initialize the mesoscale WRF model in this study. Figure 4: The Tanzania Meteorological Agency (TMA)’s station network. The circles indicate the locations of both automatic and manned stations. (Image source: Tanzania Meteorological Authority). For physical parameterization, the default model used the same configuration that is utilized by the WRF-ARW model run by the TMA, Tanzania. The physical parameterization included the multiscale Kain-Fritsch cumulus physics [6; 7], the YSU scheme for the planetary boundary layer (PBL) [8], Lin et al. [9] cloud microphysics, the Unified Noah land surface model [10], and the RRTM [11] and Goddard schemes for longwave and shortwave radiation, respectively. Table 1 summarizes the default model configuration. The initial and boundary conditions for the domain were obtained from the GFS global model [12] with 0.5 degree horizontal resolution.
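As a hedged illustration only: the default configuration above corresponds roughly to the following WRF &physics namelist option codes. These codes follow the publicly documented WRF users' guide for version 3.x rather than the paper itself, so they are assumptions that should be verified against the exact 3.8.1 release.

```python
# Assumed mapping from the Table 1 schemes to WRF &physics namelist codes.
default_physics = {
    "mp_physics": 2,         # Lin et al. cloud microphysics
    "cu_physics": 11,        # multiscale Kain-Fritsch (outer domain; 0 disables it on the 5 km nest)
    "bl_pbl_physics": 1,     # YSU planetary boundary layer
    "sf_sfclay_physics": 1,  # revised MM5 Monin-Obukhov surface layer
    "sf_surface_physics": 2, # unified Noah land surface model
    "ra_lw_physics": 1,      # RRTM longwave radiation
    "ra_sw_physics": 2,      # Goddard shortwave radiation
}

# Alternatives swapped in for the sensitivity experiments (Section 2.4).
experimental_options = {
    "mp_physics": {"Kessler": 1, "WDM6": 16},
    "cu_physics": {"Betts-Miller-Janjic": 2,
                   "Grell ensemble (Grell-Devenyi)": 93,
                   "new simplified Arakawa-Schubert": 14},
}
```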
\\begin{table} \\begin{tabular}{c c c} \\hline \\hline **Configuration** & **Outer Domain** & **Inner Domain** \\\\ \\hline WRF model version & & 3.8.1 \\\\ \\hline Horizontal grids & \\(200\\times 200\\) & \\(235\\times 160\\) \\\\ \\hline Grid spacing (km) & 15 & 5 \\\\ \\hline Vertical grid & & 32 layer/top 100 hPa \\\\ \\hline Integration time (s) & 90 & 30 \\\\ \\hline Radiation physics & RRTM scheme for long wave radiation and Goddard scheme for shortwave radiation \\\\ \\hline Cloud microphysics & Lin et al. scheme \\\\ \\hline Surface layer physics & Revised MM5 Monin–Obukhov scheme \\\\ \\hline Land surface physics & Unified Noah LSM \\\\ \\hline Planetary boundary layer & YSU scheme \\\\ \\hline Land use and topography data & 30 s/USGS \\\\ \\hline Cumulus physics & Multiscale Kain–Fritsch scheme & N/A \\\\ \\hline Initial condition & Global Forecasting System model forecast fields (50 km resolution, NCEP) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: Default model configuration and physical parameterization. Figure 5: (a) Domain 1 (the outer domain); (b) Domain 2 (the inner domain, as indicated by the black rectangle in domain 1). ### Experimental Model Configuration The experimental model configuration used was the same configuration as the control settings, except for the grid spacing, cloud microphysics, and cumulus physics. Table 2 summarizes the experimental model configuration, and research design configuration. Apart from the 15/5 km outer and inner domain resolution, there were also 12/4 km resolution and 9/3 km resolution for outer and inner domains. For each model resolution, the cloud microphysics and cumulus physics schemes were comprised of seven different experimental configurations including the default model configuration in Table 1 such that the Kessler and WRF double-moment 6-class [13] were used for microphysics schemes and the Betts-Miller-Janic [14], ensemble Grell [15; 16], and new simplified Arakawa [17] schemes were used for the cumulus physics schemes. Note that for each rainfall event, there were a total of 21 runs seen in Table 2, and that the simulation 1 has the same model configuration of the default seen in Table 1. There are various parameterization schemes for microphysics including the WRF single-moment 5-class microphysics [18], single-moment 6-class microphysics [19], and cumulus parameterizations including old simplified Arakawa-Schubert [20], Tiedtke [21], new Tiedtke [22], and multiscale KF [23]. However, there are too many combinations to investigate, so we only considered selected the combinations that we are interested in. ### Analysis Procedure To assess the skill of the model, contingency tables and the associated scores were used to represent the relationship between the forecasted and observed rainfall [24]. This technique consisted of a two-dimensional contingency table that displayed the discrete joint distribution of the forecasted and observed events in terms of their frequency of occurrence. In order to create the contingency table, a threshold value was first determined; this value had to be exceeded by the observation or the forecast for it to be termed as an event. Once each value of the data set was classified as an event or nonevent, they were distributed according to the intersection between forecast and observation, as indicated in Table 3. 
\\begin{table} \\begin{tabular}{c c c c} \\hline \\hline \\multirow{2}{*}{**Experiment**} & \\multicolumn{2}{c}{**Resolution of Outer/Inner Domains (km)**} & \\multirow{2}{*}{**Microphysics Scheme**} & \\multirow{2}{*}{**Cumulus Physics Scheme**} \\\\ \\hline Simulation 11(default) & & Lin et al. & Multiscale Kain–Fritsch \\\\ Simulation12 & & WDM6 scheme & Betts–Miller–Janic scheme \\\\ Simulation 13 & & Kessler scheme & Betts–Miller–Janic scheme \\\\ Simulation 14 & 15/5 & WDM6 scheme & Ensemble Grell scheme \\\\ Simulation 15 & & Kessler scheme & Ensemble Grell scheme \\\\ Simulation 16 & & WDM6 scheme & New simplified Arakawa \\\\ Simulation 17 & & Kessler scheme & New simplified Arakawa \\\\ \\hline Simulation 21 & & Lin et al. & Multiscale Kain–Fritsch \\\\ Simulation 22 & & WDM6 scheme & Betts-Miller-Janic scheme \\\\ Simulation 23 & & Kessler scheme & Betts–Miller-Janic scheme \\\\ Simulation 24 & 12/4 & WDM6 scheme & Ensemble Grell scheme \\\\ Simulation 25 & & Kessler scheme & Ensemble Grell scheme \\\\ Simulation 26 & & WDM6 scheme & New simplified Arakawa \\\\ Simulation 27 & & Kessler scheme & New simplified Arakawa \\\\ \\hline Simulation 31 & & Lin et al. & Multiscale Kain–Fritsch \\\\ Simulation 32 & & WDM6 scheme & Betts–Miller–Janic scheme \\\\ Simulation 33 & & Kessler scheme & Betts–Miller–Janic scheme \\\\ Simulation 34 & 9/3 & WDM6 scheme & Ensemble Grell scheme \\\\ Simulation 35 & & Kessler scheme & Ensemble Grell scheme \\\\ Simulation 36 & & WDM6 scheme & New simplified Arakawa \\\\ Simulation 37 & & Kessler scheme & New simplified Arakawa \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: Experimental summary of sensitivity studies for resolution, microphysics, and cumulus physics. The configuration of simulation \\(11\\) is the same as that of the default model seen in Table 1. From Table 3, the following indexes were calculated to assess the scores of the experiments. These included probability of detection (POD), frequency bias (BIAS), proportion correct (PC), threat score (TS), and false alarm ratio (FAR), which was calculated using the following formulas: \\[\\begin{array}{l}\\text{POD}=\\frac{\\text{hit}}{\\text{hit}+\\text{miss}},\\text{ BIAS}=\\frac{\\text{hit}+\\text{false alarm}}{\\text{hit}+\\text{miss}},\\\\ \\text{PC}=\\frac{\\text{hit}+\\text{miss}+\\text{false alarm}+\\text{correct rejection}}{\\text{hit}+\\text{false alarm}+\\text{miss}},\\text{ FAR}=\\frac{\\text{false alarm}}{\\text{hit}+\\text{false alarm}}\\end{array}\\] POD, also known as hit rate, is a statistical measure that examines events by measuring the proportion of observed events that were correctly forecasted. It ranges between 0 and 1, whereby 0 represents poor skill and 1 represents perfect skill. BIAS is another statistical measure that compares the frequency of forecasts (forecasted yes) to the frequency of actual occurrences (observed yes). It ranges between 0 and infinity, whereby 1 represents unbiased, BIAS \\(>\\) 1 represents over-forecasting, and BIAS \\(<\\) 1 represents under-forecasting. PC gives the fraction of all forecasts that were correct. It is sensitive to both hits and correct rejections. The value ranges between 0 and 1 and the perfect score is 1. TS, also known as the critical success index (CSI), is a statistical index that is widely used as a performance measure of rare events (e.g., rainfall). It ranges between 0 and 1, whereby TS = 1 represents perfect skill and TS = 0 represents poor skill. 
FAR gives the fraction of forecast events that were observed to be nonevents. The value varies between 0 and 1 and the perfect score is 0. In order to compare the performance of different model configurations, a combined skill score was employed [25]. Such a skill score summarizes the above statistical scores, which can be used together when there are many model results for different experiments, and it gives the reader an overall evaluation and a direct understanding of all the statistical scores. Thus, we slightly modified the combined skill score such that the modified skill score employs the following scores: PC, POD, TS, and FAR. The purpose of the modified skill score below is to aid understanding of the overall performance and to summarize the results for a final decision. \\[\\text{Combined score}=\\frac{\\text{PC}+\\text{POD}+\\text{TS}+(1-\\text{FAR})}{4}\\] where the values of PC, POD, and TS are always positive and range between 0 and 1. The FAR range is also between 0 and 1; thus, the value of 1-FAR is always positive and between 0 and 1. By its definition, the modified skill score therefore always ranges between 0 and 1. Note that the statistical skill scores are used here to assess overall performance in order to determine the best model configurations. To summarize all of the model performance results, we utilized the Taylor diagram for the combined skill score and the frequency bias (BIAS) to represent the overall performance of the model rainfall predictions [26]; a code sketch of these metrics follows below.
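The following is a minimal sketch of these verification metrics and the modified combined skill score, assuming station-wise arrays of 24-h accumulated rainfall; the variable names are ours, and guards for empty contingency categories are omitted for brevity.

```python
import numpy as np

def contingency_scores(forecast, observed, threshold):
    """Skill scores from the 2x2 contingency table (Table 3)."""
    f = np.asarray(forecast) >= threshold   # forecasted event (10 mm wet, 1 mm dry)
    o = np.asarray(observed) >= threshold   # observed event
    hit = np.sum(f & o)
    false_alarm = np.sum(f & ~o)
    miss = np.sum(~f & o)
    correct_rejection = np.sum(~f & ~o)

    pod = hit / (hit + miss)
    bias = (hit + false_alarm) / (hit + miss)
    pc = (hit + correct_rejection) / (hit + miss + false_alarm + correct_rejection)
    ts = hit / (hit + miss + false_alarm)
    far = false_alarm / (hit + false_alarm)
    cs = (pc + pod + ts + (1.0 - far)) / 4.0  # modified combined skill score
    return {"POD": pod, "BIAS": bias, "PC": pc, "TS": ts, "FAR": far, "CS": cs}
```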
## 3 Numerical Results This section describes the numerical results for model horizontal resolution and physical parameterization. To do so, we first show the Taylor diagram to evaluate the overall performance of all 21 simulations in Table 2. Then, we analyze the detailed results for the model horizontal resolution and then the effects of physical parameterization. Note that we only show the results with an event threshold of 10 mm or above for the wet season and 1 mm or above for the dry season. \\begin{table} \\begin{tabular}{c c c} \\hline \\hline \\multirow{2}{*}{**Forecast**} & \\multicolumn{2}{c}{**Observed Rainfall Exceeding Threshold Value**} \\\\ \\cline{2-3} & **Yes** & **No** \\\\ \\hline Yes & Hit & False alarm \\\\ \\hline No & Miss & Correct Rejection \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 3: Contingency table. The threshold for the wet and the dry seasons is 10 mm and 1 mm, respectively. Figure 6 shows the results of the combined score and BIAS for the wet season and the dry season, respectively. Note that the blue contour is related to the azimuthal angle, and the black dashed contours are the radial distance from the origin. Hence, when the model agrees with the observations, the values of both the combined score and BIAS for the model lie nearest to 1. The diagram for the different resolution and physical parameterization schemes indicated that models with the same resolution but different physics showed different performances for both seasons, and predicted the two seasons differently. However, it is still notable that the square (9/3 km resolution) and triangle (15/5 km resolution) shapes are located closest to the value of 1. In addition, relatively good skill scores were shown by the models with the colors black, magenta, and yellow, corresponding to the physics schemes of simulations 1, 4, and 7, respectively. To show more details of the model performances, Figure 7 shows the combined score and BIAS for different resolutions in both the wet and dry seasons. The combined scores showed that the models with high resolution performed relatively better during both seasons, while during the dry season, the models with the coarsest resolution also performed well. The model BIAS score indicates whether the models systematically favor producing precipitation. Among the different combinations of model physical parameterizations, the BIAS for the models with multiscale Kain-Fritsch cumulus physics together with Lin et al. microphysics showed comparatively more accurate scores than the others for both seasons. Overall, according to the Taylor diagram and the statistical skill scores of the model simulations for both wet and dry seasons, it was clear that the model outputs with the finest resolution showed more acceptable predictability. Figure 6: Taylor diagram displaying the combined skill score and BIAS for the (**a**) wet season and (**b**) dry season. The square, circle, and triangle shapes represent the resolutions of 9/3 km, 12/4 km, and 15/5 km, respectively. The numbers with the colors black, blue, green, magenta, red, orange, and yellow represent, in order, simulations using the same physical parameterization from simulation 1 to simulation 7. The blue contour represents the azimuthal angle, and the black dashed contour is the radial distance from the origin. ### Model Resolution Effect In this section, we simulated rainfall over Tanzania for both seasons to examine the impact of horizontal grid resolution on model predictability. It is generally assumed that models with higher resolution have higher skill than those with lower resolution in resolving small-scale convective systems. This is because most convective clouds occur at a small horizontal scale ranging from several hundred meters to a few kilometers. In this study, this behavior was clearly found for the wet season, but a significant difference was not found for the dry season. Figure 8 shows that during the wet season, all the resolution settings, i.e., 15/5, 12/4 and 9/3 km grid spacing, produced reasonably good forecasts in terms of the spatial coverage of the rainfall systems. Increased resolution especially showed improvements in the spatial coverage when compared to the low-resolution simulations. This is because during the wet season, when most of the precipitation activities over Tanzania are convective in nature, finer horizontal grid spacing improves the skill of the forecast. This behavior was also found in other studies [27]. That is, convective systems take place in small areas and are highly influenced by local features like topography and land cover, so higher resolution models provide better skill in predicting predominantly convective weather systems. For the dry season, however, there was no visible pattern regarding improvement in the skill of the model as the model resolution was increased over Tanzania, as shown in Figure 9. This may have been due to the fact that during the dry season, most of the precipitation activities were stratiform in nature due to synoptic-scale systems (ridging highs and easterly waves). That is, the weather systems dominating the country during the dry season were mainly stratiform, occupying large areas and insubstantially affected by local features like topography and land cover.
Thus, increased resolution may have little impact in predicting stratiform systems. This is similar to the results for Figure 7: Combined skill score (**a**,**c**) and BIAS (**b**,**d**) for the wet season (**a**,**b**) and the dry season (**c**,**d**). The horizontal axes represent the discrete microphysical schemes corresponding to simulation 1 to simulation 7. The left, middle, and right bars in each experiment (SIM 1, SIM 2, …, SIM 7) represent the skill score values for resolutions of 9/3 km, 12/4 km, and 15/5 km, respectively. the European region [28]. Even though the model resolutions did not show a big difference in predicting rainfall during the dry season, the results indicated a relatively positive effect for the dry season. Overall, analysis of the rainfall predictions from the experiments showed that increasing model resolution had an overall positive impact on rainfall prediction for both seasons. Note that the precipitation fields with 15/5 and 9/3 km resolution are alike, while they differ from that with 12/4 km. That is, the precipitation locations and their magnitudes are not linearly related to the model resolutions. It is generally known that global models overestimate light precipitation and underestimate heavy rainfall due to their low horizontal resolution compared to the scale of the precipitation core [29]. Also, in some regional studies, the heavy rainfall cores at different resolutions showed slight shifts to different locations and the magnitude of precipitation was underestimated [30; 31]. This difference might be an effect of the initial conditions, which are significant for short-range forecasts, and higher forecast skill could be achieved either by generating an initial condition via data assimilation schemes or by expanding the region used for the outer domain. In this study, however, the unknown nonlinearity between model resolution and the rainfall core has not yet been articulated, and an improvement in forecast skill should be pursued in a subsequent study. Figure 8: Total precipitation amount during the wet season on 20 December 2011. (**a**) Satellite observation from airmass RGB (WV6.2–WV7.3, IR9.7–IR10.8, WV6.2) imagery (image source: EUMETSAT); and 24-h accumulated model rainfall forecasts for (**b**) 15/5 km resolution; (**c**) 12/4 km resolution; (**d**) 9/3 km resolution. The colorbars represent the total precipitation amount (mm). ### Impact of Physical Parameterization The statistical evaluation methods, POD, PC, TS, FAR, and BIAS, for rainfall occurrence and amount are very important. In the previous subsection, the models with the finest resolution revealed better and similar rainfall predictability relative to the models with coarser resolution for the wet and dry seasons, respectively. In this section, we evaluated the effect of the physical parameterization for simulations 31, 32, …, 37, the models that were configured with 3-km resolution for the inner domain and 9-km resolution for the outer domain, as shown in Table 2. In order to make the statistical results more significant, we added the 5-8 May 2017 heavy rain event for the wet season and the 20-23 August 2017 rain event for the dry season. Table 4 shows averages of the statistical skill scores for the wet and dry seasons for those simulations, indicated by simulation 1, simulation 2, …, simulation 7. For example, POD is a statistical measure that relates the likelihood of detection of the meteorological event by the forecast.
It is calculated as the ratio of the number of correctly forecasted events to the total number of observed events (hit/[hit + miss]). In Table 4, it is evident that for the wet season, the default setting (simulation 1), which comprised the Lin et al. microphysics and multiscale Kain-Fritsch cumulus physics schemes, had the highest POD (0.59), followed by simulation 4 (Kessler microphysics and new simplified Arakawa cumulus physics schemes), which had an average POD of 0.44. On the other hand, for the dry season, the POD of the events was generally very low. This may have been due to the fact that there were few reports to verify, as most of the stations reported precipitation of less than 1 mm, especially in coastal areas where the number of observation stations is not sufficient. The other statistical scores showed slightly different results, such that the highest scores of PC, TS, and FAR, and the most unbiased BIAS, for the wet (dry) season were obtained by simulation 1 (simulation 5), simulation 1 (simulation 2), simulation 6 (simulations 5, 6, 7), and simulation 1 (simulation 1), respectively. Figure 9: Total precipitation amount for the dry season on 26 July 2014. (**a**) Day natural color RGB VIS0.6 VIS0.8 NIR1.6 satellite observation at 10:15 UTC (image source: EUMETSAT); and 24-h accumulated model rainfall forecasts for (**b**) 15/5 km resolution; (**c**) 12/4 km resolution; (**d**) 9/3 km resolution. The colorbar represents the total precipitation amount (mm). To assess the overall performance in order to determine the best model configurations, the mean absolute distance error (MDE) is defined by \\[\\text{MDE}=\\frac{(1-\\text{CS})+\\left|1-\\text{BIAS}\\right|}{2}\\] where CS denotes the combined score. Note that an MDE value closest to 0 is the perfect skill score: MDE measures the average magnitude of the absolute difference between the model skill scores and the value 1, where a combined score of 1 and a BIAS of 1 represent perfect skill and an unbiased forecast, respectively. Thus, a smaller value of MDE indicates better model predictability. The purpose of the MDE score is to aid understanding of the overall performance and to summarize the results for a final decision, as illustrated in the sketch below.
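A small sketch of the MDE computation; plugging in the wet-season values of simulation 1 from Table 4 reproduces the tabulated value and serves as a sanity check.

```python
def mean_distance_error(cs, bias):
    """MDE: mean absolute distance of (CS, BIAS) from the perfect values (1, 1).
    Smaller is better; 0 corresponds to a perfect combined score and unbiased forecasts."""
    return ((1.0 - cs) + abs(1.0 - bias)) / 2.0

# Wet-season simulation 1 from Table 4: CS = 0.58, BIAS = 0.79.
print(mean_distance_error(cs=0.58, bias=0.79))  # 0.315, reported as 0.31 in Table 4
```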
The overall ranking of the model simulations for the wet season showed that the default simulation (simulation 1) was first, followed by simulation 2, while simulation 6 showed the poorest skill. For the dry season, the overall ranking of the model simulations showed that simulation 1 was first, followed by simulation 2. Meanwhile, simulation 6 and simulation 7 showed the poorest skill because the FAR index was not assessable for these simulations, as they produced almost no forecast events. ## 4 Conclusions and Remarks The performance of different schemes on the precipitation systems during the wet and dry seasons over Tanzania is evaluated in this study. Tanzania is located in the tropics, where during the wet season the dominant weather system is the ITCZ, which produces deep convective systems, including thunderstorms. During the dry season, the weather systems dominating the country are mainly stratiform, which occupy a large area and are not affected by local features such as topography and land cover. In order to evaluate the skill of the model predictability for different resolutions and various parameterizations, each model simulation was ranked based on the statistical indexes explained in the above section. For each index, the simulations were ranked for each season, and then the average rank of each simulation was calculated based on all the statistical indexes combined. \\begin{table} \\begin{tabular}{c c c c c c c c c c c c c c c} \\hline \\hline & \\multicolumn{7}{c}{**Wet Season**} & \\multicolumn{7}{c}{**Dry Season**} \\\\ \\cline{2-15} & **sim1** & **sim2** & **sim3** & **sim4** & **sim5** & **sim6** & **sim7** & **sim1** & **sim2** & **sim3** & **sim4** & **sim5** & **sim6** & **sim7** \\\\ \\hline POD & **0.59** & 0.42 & 0.34 & 0.44 & 0.28 & 0.18 & 0.33 & 0.38 & **0.40** & 0.28 & 0.24 & 0.17 & 0.00 & 0.06 \\\\ PC & **0.71** & 0.68 & 0.65 & 0.65 & 0.59 & 0.61 & 0.63 & 0.75 & 0.81 & 0.80 & 0.80 & **0.84** & 0.80 & 0.81 \\\\ TS & **0.28** & 0.22 & 0.16 & 0.18 & 0.09 & 0.10 & 0.13 & 0.13 & **0.20** & 0.13 & 0.12 & 0.14 & 0.00 & 0.05 \\\\ FAR & 0.25 & 0.15 & 0.18 & 0.24 & 0.33 & **0.00** & 0.28 & 0.63 & 0.48 & 0.56 & 0.56 & **0.00** & **0.00** & **0.00** \\\\ \\hline CS & **0.58** & 0.54 & 0.49 & 0.51 & 0.41 & 0.47 & 0.45 & 0.41 & 0.48 & 0.41 & 0.40 & **0.54** & 0.45 & 0.48 \\\\ BIAS & **0.79** & 0.50 & 0.41 & 0.60 & 0.41 & 0.18 & 0.45 & **1.09** & 0.77 & 0.58 & 0.51 & 0.17 & 0.00 & 0.06 \\\\ \\hline MDE & **0.31** & 0.48 & 0.55 & 0.45 & 0.59 & 0.67 & 0.55 & **0.34** & 0.37 & 0.50 & 0.55 & 0.65 & 0.78 & 0.73 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 4: Average of the statistical skill scores for the wet and dry seasons for each simulation. CS and MDE stand for combined score and mean absolute distance error, respectively. The bold numbers indicate the highest or most unbiased scores of the statistical skills for each season. Generally, all the resolution settings, which were 15/5, 12/4, and 9/3 km grid spacing, showed reasonably good forecasting in terms of the spatial coverage of the rainfall systems. Increased resolution (i.e., 9/3 km grid spacing) showed a greater improvement in the spatial coverage than the low-resolution simulations (i.e., 15/5 km grid spacing), especially during the wet season when the weather system was dominated by convective systems [32]. Convective systems occur in small areas and are highly influenced by local features, such as topography and land cover; thus, the higher resolution models are better at predicting predominantly convective weather systems. For the dry season, there was no visible pattern regarding improvement in the skill of the model as the model resolution was increased over Tanzania. As noted above, the dominant weather systems during the dry season are stratiform in nature, always occupying a relatively large area and thus not being affected by local-scale features. Thus, increased resolution had little impact in predicting stratiform systems. The default simulation containing the Lin et al. microphysics scheme with the multiscale Kain-Fritsch cumulus physics scheme showed greater success at resolving weather systems during the wet season. This microphysics scheme is able to take into consideration deep convective systems and clouds in all phases, i.e., liquid, ice, and vapor processes; in this research, it was also shown to be better at resolving weather systems during the wet season. Following the findings of this study, it is recommended that they be adopted for operational forecasting at the TMA. In addition, further research should be conducted with other physical parameterizations, such as radiation physics and planetary boundary layer physics, among others.
Even with these findings, further research should be conducted to improve the physical equations and better resolve the weather systems in the country and the region as a whole. A.L. and S.K. conceived and designed the experiments and wrote the paper. G.C. and M.J. analyzed the data. M.J. visualized the data. Y.K. edited the paper. All authors have read and agreed to the published version of the manuscript. This work was supported by a 2-Year Research Grant of Pusan National University. The authors thank the editor and reviewers who provided constructive and thoughtful comments to improve this paper. The authors declare no conflict of interest. ## References * (1) Kijazi, A.L.; Reason, C.J.C. Relationships between intraseasonal rainfall variability of coastal Tanzania and ENSO. _Theor. Appl. Climatol._ **2005**, _82_, 153-176. [CrossRef] * (2) Ogallo, L. Rainfall variability in Africa. _Mon. Weather Rev._ **1979**, _107_, 1133-1139. [CrossRef] * (3) Futyan, J.M.; Del Genio, A.D. Deep convective system evolution over Africa and the tropical Atlantic. _J. Clim._ **2007**, _20_, 5041-5059. [CrossRef] * (4) Kijazi, A.L.; Reason, C.J.C. Analysis of the 2006 floods over northern Tanzania. _Int. J. Climatol._ **2009**, _29_, 955-970. [CrossRef] * (5) Skamarock, W.C.; Klemp, J.B.; Dudhia, J.; Gill, D.O.; Barker, D.M.; Wang, W.; Powers, J.G. _A Description of the Advanced Research WRF Version 3_; Technical Report; National Center for Atmospheric Research: Boulder, CO, USA, 2005; p. 113. [CrossRef] * (6) Kain, J.S. The Kain-Fritsch convective parameterization: An update. _J. Appl. Meteorol._ **2004**, _43_, 170-181. [CrossRef] * (7) Zheng, Y.; Alapaty, K.; Herwehe, J.A.; Del Genio, A.D. Improving High-Resolution Weather Forecasts Using the Weather Research and Forecasting (WRF) Model with an Updated Kain-Fritsch Scheme. _Mon. Weather Rev._ **2016**, _144_, 833-860. [CrossRef] * (8) Hong, S.-Y.; Noh, Y.; Dudhia, J. A new vertical diffusion package with an explicit treatment of entrainment processes. _Mon. Weather Rev._ **2006**, _134_, 2318-2341. [CrossRef] * (9) Lin, Y.L.; Farley, R.D.; Orville, H.D. Bulk parameterization of the snow field in a cloud model. _J. Clim. Appl. Meteorol._ **1983**, _22_, 1065-1092. [CrossRef] * (10) Tewari, M.; Chen, F.; Wang, W.; Dudhia, J.; LeMone, M.A.; Mitchell, K.; Ek, M.; Gayno, G.; Wegiel, J.P.; Cuenca, R.H.; et al. Implementation and verification of the unified Noah land surface model in the WRF model. _Bull. Am. Meteorol. Soc._ **2004**, _27_, 2165-2170. [CrossRef] * (11) Mlawer, E.J.; Taubman, S.J.; Brown, P.D.; Iacono, M.J.; Clough, S.A. Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave. _J. Geophys. Res. Atmos._ **1997**, _102_, 16663-16682. [CrossRef] * (12) NCEP. _The GFS Atmosphere Model_; Note 442; NCEP: Washington, DC, USA, 2003; p. 14. Available online: [http://www.emc.ncep.noaa.gov/officenotes/newernotes/on442.pdf](http://www.emc.ncep.noaa.gov/officenotes/newernotes/on442.pdf) (accessed on 1 May 2020). * (13) Lim, K.-S.S.; Hong, S.-Y. Development of an effective double-moment cloud microphysics scheme with prognostic cloud condensation nuclei (CCN) for weather and climate models. _Mon. Weather Rev._ **2010**, _138_, 1587-1612. [CrossRef] * (14) Janjic, Z.I. The Step-Mountain Eta Coordinate Model: Further developments of the convection, viscous sublayer, and turbulence closure schemes. _Mon. Weather Rev._ **1994**, _122_, 927-945.
[CrossRef] * _Grell and Freitas (2014)_ Grell, G.A.; Freitas, S.R. A scale and aerosol aware stochastic convective parameterization for weather and air quality modeling. _Atmos. Chem. Phys._**2014**, _14_, 5233-5250. [CrossRef] * _Grell and Devenyi (2002)_ Grell, G.A.; Devenyi, D. A generalized approach to parameterizing convection combining ensemble and data assimilation techniques. _Geophys. Res. Lett._**2002**, _29_, 38-1-38-4. [CrossRef] * _Han and Pan (2011)_ Han, J.; Pan, H.-L. Revision of convection and vertical diffusion schemes in the NCEP Global Forecast System. _Weather Forecast._**2011**, _26_, 520-533. [CrossRef] * _Hong et al. (2004)_ Hong, S.-Y.; Dudhia, J.; Chen, S.-H. A revised approach to ice microphysical processes for the bulk parameterization of clouds and precipitation. _Mon. Weather Rev._**2004**, _132_, 103-120. [CrossRef] * _Hong and Lim (2006)_ Hong, S.; Lim, J. The WRF single-moment 6-class microphysics scheme (WSM6). _J. Korean Meteorol. Soc._**2006**, _42_, 129-151. * _Pan and Wu (1995)_ Pan, H.-L.; Wu, W.-S. Implementing a mass flux convective parameterization package for the NMC medium-range forecast model. _NMC Off. Note._**1995**, _409_. Available online: [https://repository.library.noaa.gov/view/noaa/11429](https://repository.library.noaa.gov/view/noaa/11429) (accessed on 1 May 2020). * _Tiedtke and M.A. Comprehensive mass flux scheme for cumulus parameterization in large-scale models. _Mon. Weather Rev._**1989**, _117_, 1779-1800. [CrossRef] * _Zhang et al. (2011)_ Zhang, C.; Wang, Y.; Hamilton, K. Improved representation of boundary layer clouds over the southeast Pacific in ARW-WRF using a modified Tiedtke cumulus parameterization scheme. _Mon. Weather Rev._**2011**, _139_, 3489-3513. [CrossRef] * _Kain and Fritsch (1998)_ Kain, J.S.; Fritsch, J.M. Multiscale convective overturning in mesoscale convective systems: Reconciling observations, simulations, and theory. _Mon. Weather Rev._**1998**, _126_, 2254-2273. [CrossRef] * _Pearson (1904)_ Pearson, K. _On the Theory of Contingency and Its Relation to Association and Normal Correlation_; Drapers' Company Research Memoirs Biometric Series I; Cambridge University Press: Cambridge, UK, 1904. * _Rodrigo et al. (2018)_ Rodrigo, C.; Kim, S.; Jung, I.H. Sensitivity study of WRF numerical modeling for forecasting heavy rainfall in Sri Lanka. _Atmosphere_**2018**, \\(9\\), 378. [CrossRef] * _Taylor and Fitsch (2001)_ Taylor, K.E. Summarizing multiple aspects of model performance in a single diagram. _J. Geophys. Res._**2001**, _106_, 7183-7192. [CrossRef] * _Dawson and Palmer (2015)_ Dawson, A.; Palmer, N.T. Simulating weather regimes: Impact of model resolution and stochastic parameterization. _Clim. Dyn._**2015**, _44_, 2177-2193. [CrossRef] * _Rolfova et al. (2014)_ Rolfova, Z.; Farda, A.; Kysely, J. Effects of Horizontal Resolution of Regional Climate Model Simulations on Convective and Stratiform Precipitation. In Proceedings of the 14th EMS Annual Meeting Abstracts, Prague, Czech Republic, 6-10 October 2014. * _Ebert et al. (2003)_ Ebert, E.E.; Damrath, U.; Wergen, W.; Baldwin, M.E. The WGNE assessment of short-term quantitative precipitation forecasts. _Bull. Am. Meteorol. Soc._**2003**, _84_, 481-492. [CrossRef] * _Jang and Hong (2014)_ Jang, J.; Hong, S.Y. Quantitative forecast experiment of a heavy rainfall event over Korea in a global model: Horizontal resolution versus lead time issues. _Meteorol. Amos. Phys._**2014**, _124_, 113-127. [CrossRef]* _Jee et al. (2016)_ Jee, J.B.; Kim, S. 
Sensitivity study on high-resolution numerical modeling of static topographic data. _Atmosphere_ **2016**, _7_, 86. [CrossRef] * (32) Wang, X.; Steinle, P.; Seed, A.; Xiao, Y. The sensitivity of heavy precipitation to horizontal resolution, domain size, and rain rate assimilation: Case studies with a convection-permitting model. _Adv. Meteorol._ **2016**, 1-20. [CrossRef]
Precipitation prediction is important for mitigating the effects of drought and floods on various social and economic activities. This research aims to improve forecasting skill over Tanzania by identifying suitable combinations of physical parameterization schemes and horizontal grid spacings for the Weather Research and Forecasting (WRF) model in daily forecasting. The performance of different schemes on the precipitation systems of the wet and dry seasons over Tanzania was evaluated through sensitivity tests of the WRF model at different horizontal resolutions and with different physical parameterization schemes (convective and cloud microphysics). The results showed that finer grid spacing improved forecasts during the wet season but had little significant impact during the dry season. Model simulations combining the Lin et al. microphysics with the multiscale Kain-Fritsch scheme were the most successful during both seasons; these combinations are therefore recommended for resolving weather systems over Tanzania in wet and dry season simulations. keywords: heavy rainfall; precipitation forecasting; wet season; dry season; WRF Model; Tanzania
# Semi-Supervised Domain Adaptation for Wildfire Detection JooYoung Jang\\({}^{1,2}\\) Youngseo Cha\\({}^{1}\\) Jisu Kim\\({}^{1}\\) SooHyung Lee\\({}^{1}\\) Geonu Lee\\({}^{1}\\) Minkook Cho\\({}^{1}\\) Young Hwang\\({}^{1}\\) Nojun Kwak\\({}^{2}\\) \\({}^{1}\\)Alchera, South Korea \\({}^{2}\\)Seoul National University, South Korea [email protected], {ys.cha, js.kim, sh.lee, gu.lee}@alcherainc.com, [email protected] corresponding author: Nojun Kwak ([email protected]) ## 1 Introduction Wildfires contribute to and are exacerbated by global warming, leading to significant economic losses and ecological damage (Sathishkumar et al., 2023; Lindsey and Dahlman, 2020). Such impacts can be mitigated through the early detection of wildfires, enabling firefighters to intervene promptly. For effective mitigation, wildfire detection systems must achieve high accuracy and maintain low false positive rates (Ranadive et al., 2022). However, applying fire detection in real-world scenarios presents challenges, including a domain shift between the training and testing environments that can degrade detection performance (Yoo et al., 2022). Even worse, acquiring a large volume of labeled data for the target domain is particularly challenging for infrequent events like wildfires (Kim et al., 2022). To address these challenges, this paper introduces a new protocol, semi-supervised domain adaptation (SSDA), for wildfire detection. To the best of our knowledge, this is the first paper to apply SSDA to the object detection task. As depicted in Fig. 1, SSDA combines the semi-supervised learning (SSL) and unsupervised domain adaptation (UDA) settings: it uses a large amount of source domain images together with a minimal set of labeled target images and a substantial corpus of unlabeled target images. The SSDA setting is practical for real-world applications considering labeling cost and performance (Yu and Lin, 2023). This work makes two primary contributions. First, we introduce new labels for wildfire detection tasks, increasing label diversity thirtyfold compared to the existing labels within the HPWREN dataset, as in Table 1. We treat the previously publicly available labels as the source domain and the newly labeled set proposed in this paper as the target domain. Second, we present a novel approach for learning the translational variance characteristics of wildfires, the Location-Aware Semi-Supervised Domain Adaptation (LADA) framework, which integrates Coordinate Convolution (Liu et al., 2018) with a scale-aware Faster R-CNN (Chen et al., 2021b). Our results demonstrate enhanced performance across various SSDA protocols compared with a current state-of-the-art UDA framework (Hoyer et al., 2023). **Related work.** SSDA approaches seek to reduce the domain gap between source and target with a consistency regularization loss (Yu and Lin, 2023) or with a domain-mixing loss based on CutMix augmentation (Chen et al., 2021a). However, neither of those methods was applied to the object detection task. Our method uses a consistency regularization loss with masked augmentation, similar to (Hoyer et al., 2023). ## 2 Methods ### Proposed Dataset In this paper, we propose a refined set of labels for the HPWREN (2000) dataset. This dataset serves as a benchmark for wildfire detection tailored to individual researchers. However, direct application of this benchmark for research encounters two primary obstacles. First, the diversity and quality of the labels are limited, leading to potential overfitting issues.
Specifically, the dataset comprises only 609 labeled images across 9 scenes. Second, the practice of labeling smoke with separate bounding boxes demands considerable time and effort for annotation. We have discovered that merging these bounding boxes not only simplifies the labeling process but also improves detection performance, as illustrated in Fig. 2. Detailed results are presented in Section 3. To address these challenges, we introduce a new benchmark for semi-supervised domain adaptation in object detection, detailed in Table 1. Inspired by Li et al. (2023), we propose three protocols, 0.5%, 1.0%, and 3.0%, representing the ratio of labeled data in the target domain relative to the total dataset. In this benchmark, the source domain comprises the 9 sub-directories with labels available on the HPWREN (2000) homepage, while 274 sub-directories are designated as the target domain. This configuration results in a domain shift, as the source and target domains do not share a common environment. ### Location-Aware semi-supervised Domain Adaptation **Preliminary.** In this study, we tackle the challenge of early wildfire detection by leveraging object detection frameworks (Chen et al., 2021b). Image samples are denoted by \\(\\mathbf{x}_{s}=(x_{i})_{i=1}^{N_{s}}\\), \\(\\mathbf{x}_{tl}=(x_{i})_{i=1}^{N_{tl}}\\), and the corresponding bounding-box labels \\(\\mathbf{y}_{s}=(y_{i})_{i=1}^{N_{s}}\\), \\(\\mathbf{y}_{tl}=(y_{i})_{i=1}^{N_{tl}}\\) are utilized as input to the model, where \\(N_{s}\\) and \\(N_{tl}\\) are the numbers of labeled samples for the source and target domains, respectively. Each label consists of a class and a bounding box, \\(y_{i}=(c,x,y,w,h)\\), where \\(c\\), \\((x,y)\\), \\(w\\), and \\(h\\) represent the class index, center point, width, and height of the box. In addition, pseudo bounding-box labels \\(\\mathbf{u}=(u_{i})_{i=1}^{N_{tu}}\\) are constructed when the confidence score is greater than the upper threshold, \\(p_{u}>\\tau_{u}\\), or smaller than the lower threshold, \\(p_{u}<\\tau_{l}\\), where \\(\\tau_{u}\\), \\(\\tau_{l}\\), and \\(N_{tu}\\) denote the upper confidence threshold, the lower confidence threshold, and the number of unlabeled target samples, respectively. Figure 1: Semi-supervised domain adaptation. **Pseudo Labeling.** We utilize a large amount of unlabeled target data with a teacher-student paradigm (Liu et al., 2021) augmented with the Masked Image Consistency (MIC) loss (Hoyer et al., 2023). Built upon that, we changed the pseudo-label filter to also retain reliable background images, identified by very low probability scores, so that background images contribute to training, as shown in Equation 1, where \\(\\hat{p}_{i}\\) denotes the score of the \\(i\\)-th unlabeled target sample. We used \\(\\tau_{u}=0.8,\\tau_{l}=0.05\\) for all of our experiments. We find this especially helpful for the 0.5% and 1.0% SSDA protocols, which lack highly reliable positive images. Further information is in Appendix C. **Translation Variance Features.** Wildfires typically do not occur in certain areas, such as the sky or lakes, and they often show location-dependent shapes. For instance, they seldom occur in the upper portion of the images, which is predominantly occupied by the sky, while many of them have a conical shape, expanding vertically. In order to utilize such characteristics, the Coordinate Convolution layer (Liu et al., 2018) was incorporated into both the Feature Pyramid Network (FPN) (Lin et al., 2017) and the Region Proposal Network (RPN) (Ren et al., 2015), as illustrated in Fig. 3.
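To make the location encoding concrete, the following is a minimal sketch of a CoordConv-style layer (our own PyTorch reconstruction for illustration; the actual layers used inside LADA's FPN and RPN may differ in detail):

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Conv2d preceded by two extra channels holding normalized x/y coordinates.

    A minimal sketch of the CoordConv idea (Liu et al., 2018); not the
    released LADA implementation.
    """

    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        # +2 input channels for the x and y coordinate maps.
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        # Coordinate maps normalized to [-1, 1], broadcast over the batch.
        ys = torch.linspace(-1.0, 1.0, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))

# Example: a drop-in replacement for a 3x3 conv in an FPN lateral branch.
layer = CoordConv2d(256, 256, kernel_size=3, padding=1)
out = layer(torch.randn(2, 256, 64, 64))  # -> shape (2, 256, 64, 64)
```

The only change relative to a plain convolution is the two extra input channels, which is why the computational overhead of encoding location is minimal.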
In more detail, the coordinate convolution layer embeds coordinate information through two extra channels, \\(x\\) and \\(y\\), concatenated to the inputs of the original convolution layers; the newly added channels enable the model to capture such translational variance features at a minimal computational cost. We did not add coordinate convolution layers to the backbone, as suggested in Liu et al. (2018), since doing so did not show good performance. **Location Aware Domain Adaptation.** A comprehensive diagram of our training process is depicted in Fig. 4. The student model is trained using both supervised and unsupervised losses, whereas the teacher model is periodically updated through an exponential moving average. The unsupervised losses consist of a masked image consistency loss, a pseudo-label-based cross-entropy loss, and an adversarial loss. The first encourages consistent outputs between masked and original images, which allows the model to make robust predictions even on randomly masked inputs. The pseudo-labeling loss, on the other hand, makes use of the large amount of unlabeled target images for supervised-style training. Finally, the adversarial loss aligns the source and target domain features in the backbone at three levels in order to reduce the domain gap. More training details are available in Appendix B. \\begin{table} \\begin{tabular}{|c|c|c|c|} \\hline & **Previous labels** & **Proposed labels** & **Total HPWREN** \\\\ \\hline **Number of sub-directories** & 9 & 283 & 342 \\\\ \\hline **Number of images** & 609 & 2,575 & 27,174 \\\\ \\hline \\end{tabular} \\end{table} Table 1: Number of sub-directories and labels for the HPWREN dataset \\begin{table} \\begin{tabular}{|c|c|c|c|c|c|} \\hline & **source** & **target 0.5\\%** & **target 1.0\\%** & **target 3.0\\%** & **target val** \\\\ \\hline **foreground images** & 309 & 44 & 94 & 257 & 451 \\\\ \\hline **background images** & 300 & 58 & 111 & 359 & 630 \\\\ \\hline **total images** & 609 & 102 & 205 & 616 & 1,081 \\\\ \\hline \\end{tabular} \\end{table} Table 2: Semi-supervised domain adaptation protocols for wildfire detection \\[u_{i}=\\begin{cases}\\text{foreground pseudo label},&\\hat{p}_{i}>\\tau_{u}\\\\ \\text{background},&\\hat{p}_{i}<\\tau_{l}\\end{cases} \\tag{1}\\] ## 3 Results Our model is trained in two stages. In the first stage, training exclusively utilizes source data. In the subsequent stage, a combination of labeled source data, labeled target data, and unlabeled data is employed. We conducted an initial comparison between the original HPWREN labels and our proposed labeling approach. As detailed in Table 3, our approach, which utilizes merged bounding boxes, improves mean Average Precision @0.5:0.95 (mAP) by up to 10.9 points over the original labels. Based on these results, we advocate for the adoption of our merged bounding box labeling strategy in wildfire detection. As presented in Table 4, our model surpasses the performance of the model proposed by Chen et al. (2021b) in the source-only protocol. This protocol exclusively utilizes the source dataset for training and subsequently validates the model on the target validation dataset. Our model also outperforms Hoyer et al. (2023) in the semi-supervised domain adaptation protocols. The results indicate that our proposed method captures the translational variance features of wildfires well, leading to better generalization performance. Figure 3: The Location-Aware semi-supervised Domain Adaptation network. We omit the second-stage regression and classification heads for simplicity.
\\begin{table} \\begin{tabular}{|c|c|c|c|} \\hline & **0.5\\%** & **1.0\\%** & **3.0\\%** \\\\ \\hline **original label** & 1.5/7.0 & 2.8/12.9 & 7.9/29.6 \\\\ \\hline **merged label** & 7.9/24.0 & 10.2/31.5 & 18.8/48.4 \\\\ \\hline \\end{tabular} \\end{table} Table 3: Comparison between the original and proposed labeling policies (mAP / mAP@0.5) Figure 2: Original HPWREN labeled image (left) vs. proposed labeled image (right) ## 4 Conclusion In this paper, we propose a novel benchmark utilizing semi-supervised domain adaptation for object detection, designed to benefit both academia and industry. Our labeling approach introduces a diversity that is thirtyfold greater than that of existing wildfire benchmarks and presents a new labeling policy tailored for wildfire detection. Furthermore, we establish a robust baseline for this benchmark, named LADA (Location-Aware Semi-Supervised Domain Adaptation), distinguished by its capability to capture translational variance features pertinent to wildfire detection. ## 5 Acknowledgements This work was supported by NRF grant (2022R1A5A7026673) funded by MSIT, Korean Government. \\begin{table} \\begin{tabular}{|c|l|c|c|c|} \\hline **Type** & **Methods** & \\multicolumn{3}{c|}{**Labeled target images**} \\\\ & & **0.5\\%** & **1.0\\%** & **3.0\\%** \\\\ \\hline Source-only & SADA (Chen et al., 2021b) & 6.9/21.9 & 9.7/28.7 & 17.8/48.0 \\\\ & LADA (ours) & **7.9/24.0** & **10.2/31.5** & **18.8/48.4** \\\\ \\hline SSDA & SADA (Hoyer et al., 2023) & 9.7/27.3 & 12.3/34.9 & 20.4/53.0 \\\\ & LADA (ours) & **10.0/29.1** & **14.0/38.0** & **20.9/52.3** \\\\ \\hline \\end{tabular} \\end{table} Table 4: Comparison of source-only and SSDA results (mAP / mAP@0.5) Figure 4: Overall diagram of our training process. We also use background images for training. ## References * Chen et al. (2021a) Shuaijun Chen, Xu Jia, Jianzhong He, Yongjie Shi, and Jianzhuang Liu. Semi-supervised domain adaptation based on dual-level domain mixing for semantic segmentation. _CVPR_, 2021a. * Chen et al. (2021b) Yuhua Chen, Haoran Wang, Wen Li, Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Scale-aware domain adaptive faster r-cnn. _IJCV_, 2021b. * Hoyer et al. (2023) Lukas Hoyer, Dengxin Dai, Haoran Wang, and Luc Van Gool. Masked image consistency for context-enhanced domain adaptation. _CVPR_, 2023. * HPWREN (2000) HPWREN. The high performance wireless research and education network. [http://hpwren.ucsd.edu/](http://hpwren.ucsd.edu/), 2000. * Kim et al. (2022) JongMok Kim, JooYoung Jang, Seunghyeon Seo, Jisoo Jeong, Jongkeun Na, and Nojun Kwak. MUM: Mix image tiles and unmix feature tiles for semi-supervised object detection. _CVPR_, 2022. * Li et al. (2023) Jichang Li, Guanbin Li, and Yizhou Yu. Adaptive betweenness clustering for semi-supervised domain adaptation. _TIP_, 2023. * Lin et al. (2017) Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. _CVPR_, 2017. * Lindsey and Dahlman (2020) Rebecca Lindsey and LuAnn Dahlman. Climate change: Global temperature. _Available online: climate.gov_, 2020. * Liu et al. (2018) Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, and Jason Yosinski. An intriguing failing of convolutional neural networks and the coordconv solution. _NIPS_, 2018. * Liu et al.
(2021) Yen-Cheng Liu, Chih-Yao Ma, Zijian He, Chia-Wen Kuo, Kan Chen, Peizhao Zhang, Bichen Wu, Zsolt Kira, and Peter Vajda. Unbiased teacher for semi-supervised object detection. _arXiv preprint arXiv:2102.09480_, 2021. * Ranadive et al. (2022) Omkar Ranadive, Jisu Kim, Serin Lee, Youngseo Cha, Heechan Park, Minkook Cho, and Young Hwang. Image-based early detection system for wildfires. _NIPS_, 2022. * Ren et al. (2015) Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. _NIPS_, 2015. * Sathishkumar et al. (2023) Veerappampalayam Eswaramoorthy Sathishkumar, Jaehyuk Cho, Malliga Subramanian, and Obuli Sai Naren. Forest fire and smoke detection using deep learning-based learning without forgetting. _Fire Ecology_, 2023. * Yoo et al. (2022) Jayeon Yoo, Inseop Chung, and Nojun Kwak. Unsupervised domain adaptation for one-stage object detector using offsets to bounding box. _ECCV_, 2022. * Yu and Lin (2023) Yu-Chu Yu and Hsuan-Tien Lin. Semi-supervised domain adaptation with source label adaptation. _CVPR_, 2023. ## Appendix A Dataset This section gives more information about the HPWREN dataset and how the source and target domains were composed. Each image file is named according to Equation 2: \\[YYYYMMDD\\_fireName\\_cameraName \\tag{2}\\] We defined the domain shift based on Equation 2 and split the train and validation sets accordingly. We split the target data into a validation set and a train set at a 5%/95% ratio by random sampling, and we further randomly sampled 0.5%, 1.0%, and 3.0% target labeled datasets from the 95% target train split for the respective semi-supervised domain adaptation protocols. The sub-directories for the source and target domains are summarized in Tables 5 to 16. We note, however, that users could also split the data according to a customized domain shift scenario; as an example, we illustrate defining domains by the **cameraName** field of Equation 2, summarized in Tables 17 to 19. An example of our proposed labels is shown in Fig. 5. \\begin{table} \\begin{tabular}{c c c c} \\hline \\hline **Scene Description** & **\\# of Imgs** & **Scene Description** & **\\# of Imgs** \\\\ \\hline 20160722\\_FIRE\\_mg-s-iqeye & 41 & 20170708\\_Whittier\\_syp-n-mobo-c & 81 \\\\ 20160722\\_FIRE\\_mw-e-mobo-c & 81 & 20170708\\_Whittier\\_syp-n-mobo-m & 80 \\\\ 20161113\\_FIRE\\_bl-n-mobo-c & 81 & 20170711\\_FIRE\\_bl-e-mobo-c & 81 \\\\ 20161113\\_FIRE\\_bm-n-mobo-c & 81 & 20170711\\_FIRE\\_bl-s-mobo-c & 81 \\\\ 20161113\\_FIRE\\_bm-w-mobo-c & 81 & 20170711\\_FIRE\\_bm-s-mobo-c & 64 \\\\ 20170519\\_FIRE\\_rm-w-mobo-c & 81 & 20170711\\_FIRE\\_sdsc-e-mobo-c & 81 \\\\ 20170520\\_FIRE\\_lp-s-iqeye & 81 & 20170711\\_FIRE\\_sm-n-mobo-c & 81 \\\\ 20170520\\_FIRE\\_om-s-mobo-c & 55 & 20170713\\_FIRE\\_smer-tcs8-mobo-c & 77 \\\\ 20170520\\_FIRE\\_pi-smo-c & 81 & 20170722\\_FIRE\\_bm-n-mobo-c & 81 \\\\ 20170520\\_FIRE\\_pi-w-mobo-c & 81 & 20170722\\_FIRE\\_hp-e-mobo-c & 81 \\\\ 20170609\\_FIRE\\_sm-n-mobo-c & 81 & 20170722\\_FIRE\\_mg-n-iqeye & 81 \\\\ 20170613\\_FIRE\\_bh-w-mobo-c & 81 & 20170722\\_FIRE\\_so-s-mobo-c & 81 \\\\ 20170613\\_FIRE\\_hp-n-mobo-c & 81 & 20170807\\_FIRE\\_bh-n-mobo-c & 78 \\\\ 20170625\\_BBM\\_bm-n-mobo & 81 & 20170821\\_FIRE\\_lo-s-mobo-c & 81 \\\\ 20170625\\_FIRE\\_mg-s-iqeye & 81 & 20170826\\_FIRE\\_tp-s-mobo-c & 81 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 6: Scenes composed for target domain dataset by equation 2 (Part.
1) \\begin{table} \\begin{tabular}{c c c c} \\hline \\hline **Scene Description** & **\\# of Imgs** & **Scene Description** & **\\# of Imgs** \\\\ \\hline 20160604\\_FIRE\\_rm-n-mobo-c & 81 & 20160604\\_FIRE\\_smer-tcs3-mobo-c & 81 \\\\ 20160619\\_FIRE\\_lp-e-iqeye & 41 & 20160619\\_FIRE\\_om-e-mobo-c & 81 \\\\ 20160619\\_FIRE\\_pi-s-mobo-c & 81 & 20160711\\_FIRE\\_ml-n-mobo-c & 81 \\\\ 20160718\\_FIRE\\_lp-n-iqeye & 41 & 20160718\\_FIRE\\_mg-s-iqeye & 41 \\\\ 20160718\\_FIRE\\_mw-e-mobo-c & 81 & & \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 5: Scenes composed for source domain dataset by equation 2 \\begin{table} \\begin{tabular}{l l l l} \\hline **Scene Description** & **\\# of Imgs** & **Scene Description** & **\\# of Imgs** \\\\ \\hline 20170901\\_FIRE\\_om-s-mobo-c & 81 & 20171017\\_FIRE\\_smer-tcs3-mobo-c & 78 \\\\ 20170927\\_FIRE\\_smer-tcs9-mobo-c & 81 & 20171021\\_FIRE\\_pi-e-mobo-c & 81 \\\\ 20171010\\_FIRE\\_hp-n-mobo-c & 81 & 20171026\\_FIRE\\_rm-m-n-mobo-c & 81 \\\\ 20171010\\_FIRE\\_hp-w-mobo-c & 81 & 20171026\\_FIRE\\_smer-tcs8-mobo-c & 81 \\\\ 20171010\\_FIRE\\_rm-e-mobo-c & 81 & 20171207\\_FIRE\\_bh-n-mobo-c & 81 \\\\ 20171016\\_FIRE\\_sdsc-e-mobo-c & 81 & 20171207\\_FIRE\\_bh-w-mobo-c & 77 \\\\ 20171017\\_FIRE\\_smer-tcs3-mobo-c & 78 & 20171207\\_FIRE\\_smer-tcs8-mobo-c & 81 \\\\ 20171021\\_FIRE\\_pi-e-mobo-c & 81 & 20171207\\_Lilac\\_rm-s-mobo & 81 \\\\ 20171026\\_FIRE\\_rm-m-n-mobo-c & 81 & 20180504\\_FIRE\\_bh-n-mobo-c & 81 \\\\ 20171026\\_FIRE\\_smer-tcs8-mobo-c & 81 & 20180504\\_FIRE\\_rm-m-n-mobo-c & 81 \\\\ 20171207\\_FIRE\\_bh-n-mobo-c & 81 & 20180504\\_FIRE\\_smer-tcs10-mobo-c & 81 \\\\ 20171207\\_FIRE\\_bh-w-mobo-c & 77 & 20180504\\_FIRE\\_smer-tcs8-mobo-c & 81 \\\\ 20171207\\_FIRE\\_smer-tcs8-mobo-c & 81 & 20180517\\_FIRE\\_rm-m-n-mobo-c & 81 \\\\ 20171207\\_Lilac\\_rm-s-mobo & 81 & 20180517\\_FIRE\\_smer-tcs10-mobo-c & 81 \\\\ 20180504\\_FIRE\\_bh-n-mobo-c & 81 & 20180522\\_FIRE\\_rm-e-mobo-c & 81 \\\\ 20180504\\_FIRE\\_rm-m-n-mobo-c & 81 & 20180602\\_Alison\\_sp-s-mobo-c & 81 \\\\ 20180504\\_FIRE\\_smer-tcs10-mobo-c & 81 & 20180602\\_Alison\\_sp-w-mobo-c & 81 \\\\ 20180504\\_FIRE\\_smer-tcs8-mobo-c & 81 & 20180602\\_FIRE\\_rm-m-n-mobo-c & 81 \\\\ 20180517\\_FIRE\\_rm-n-mobo-c & 81 & 20180602\\_FIRE\\_smer-tcs8-mobo-c & 81 \\\\ 20180602\\_FIRE\\_smer-tcs9-mobo-c & 81 & 20180603\\_FIRE\\_bl-s-mobo-c & 81 \\\\ \\hline \\end{tabular} \\end{table} Table 7: Scenes composed for target domain dataset by equation 2 (Part. 
2) \\begin{table} \\begin{tabular}{l l l l} \\hline **Scene Description** & **\\# of Imgs** & **Scene Description** & **\\# of Imgs** \\\\ \\hline 20180603\\_FIRE\\_rm-w-mobo-c & 81 & 20180606\\_FIRE\\_lo-s-mobo-c & 81 \\\\ 20180603\\_FIRE\\_smer-tcs8-mobo-c & 81 & 20180606\\_FIRE\\_ml-s-mobo-c & 81 \\\\ 20180603\\_FIRE\\_smer-tcs9-mobo-c & 81 & 20180606\\_FIRE\\_pi-e-mobo-c & 81 \\\\ 20180603\\_FIRE\\_sm-n-mobo-c & 81 & 20180611\\_fallbrook\\_rm-w-mobo-c & 81 \\\\ 20180603\\_FIRE\\_sm-w-mobo-c & 81 & 20180612\\_FIRE\\_rm-w-mobo-c & 81 \\\\ 20180605\\_FIRE\\_rm-m-w-mobo-c & 81 & 20180612\\_FIRE\\_smer-tcs9-mobo-c & 81 \\\\ 20180605\\_FIRE\\_smer-tcs9-mobo-c & 81 & 20180614\\_Bridle\\_hp-n-mobo-c & 81 \\\\ 20180614\\_FIRE\\_hp-s-mobo-c & 68 & 20180704\\_Benton\\_hp-n-mobo-c & 81 \\\\ 20180614\\_Hoppe\\_wc-e-mobo-c & 81 & 20180706\\_FIRE\\_sm-e-mobo-c & 81 \\\\ 20180706\\_FIRE\\_sm-n-mobo-c & 70 & 20180706\\_FIRE\\_wc-e-mobo-c & 69 \\\\ 20180706\\_West\\_lp-n-mobo-c & 81 & 20180717\\_otay\\_om-s-mobo-c & 81 \\\\ 20180718\\_FIRE\\_syp-w-mobo-c & 81 & 20180719\\_Skyline\\_sp-n-mobo-c & 81 \\\\ 20180720\\_Cinnamon\\_wc-e-mobo-c & 81 & 20180720\\_FIRE\\_syp-w-mobo-c & 81 \\\\ 20180723\\_FIRE\\_tp-e-mobo-c & 81 & 20180725\\_Cranston\\_hp-n-mobo-c & 81 \\\\ \\hline \\end{tabular} \\end{table} Table 8: Scenes composed for target domain dataset by equation 2 (Part. 3) \\begin{table} \\begin{tabular}{l l l l} \\hline \\hline **Scene Description** & **\\# of Imgs** & **Scene Description** & **\\# of Imgs** \\\\ \\hline 20190620\\_FIRE\\_rmm-w-mobo-c & 81 & 20190715\\_MLOSouth1\\_lo-s-mobo-c & 81 \\\\ 20190620\\_FIRE\\_smer-tcs9-mobo-c & 72 & 20190715\\_MLOSouth2\\_lo-s-mobo-c & 81 \\\\ 20190629\\_FIRE\\_hp-n-mobo-c & 57 & 20190715\\_MLOSouth3\\_lo-s-mobo-c & 81 \\\\ 20190712\\_CottonwoodFire\\_lp-s-mobo-c & 81 & 20190716\\_FIRE\\_bl-s-mobo-c & 70 \\\\ 20190712\\_FIRE\\_om-e-mobo-c & 81 & 20190716\\_FIRE\\_mg-n-mobo-c & 68 \\\\ 20190712\\_RockHouse\\_wc-e-mobo-c & 79 & 20190716\\_FIRE\\_so-w-mobo-c & 72 \\\\ 20190714\\_MLOSouth\\_lo-s-mobo-c & 81 & 20190716\\_Meadowfire\\_hp-n-mobo-c & 70 \\\\ 20190714\\_PinosSouth\\_pi-s-mobo-c & 81 & 20190716\\_Riverfire\\_rm-w-mobo-c & 80 \\\\ 20190717\\_FIRE\\_lp-n-mobo-c & 81 & 20190717\\_FIRE\\_pi-w-mobo-c & 81 \\\\ 20190728\\_Dehesa\\_lp-n-mobo & 80 & 20190728\\_FIRE\\_om-n-mobo-c & 79 \\\\ 20190728\\_FIRE\\_sp-n-mobo-c & 81 & 20190801\\_Caliente\\_om-w-mobo & 81 \\\\ 20190803\\_OtaySouth\\_lp-s-mobo & 79 & 20190803\\_OtaySouth\\_om-s-mobo & 79 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 10: Scenes composed for target domain dataset by equation 2 (Part. 
5) \\begin{table} \\begin{tabular}{l l l l} \\hline \\hline **Scene Description** & **\\# of Imgs** & **Scene Description** & **\\# of Imgs** \\\\ \\hline 20180725\\_Cranston\\_sp-e-mobo-c & 81 & 20180806\\_FIRE\\_mg-s-mobo-c & 78 \\\\ 20180725\\_FIRE\\_smer-tcs10-mobo-c & 81 & 20180806\\_FIRE\\_vo-w-mobo-c & 81 \\\\ 20180726\\_FIRE\\_so-n-mobo-c & 81 & 20180806\\_Holy\\_sp-s-mobo-c & 72 \\\\ 20180726\\_FIRE\\_so-w-mobo-c & 81 & 20180806\\_Holy\\_sp-s-mobo-m & 73 \\\\ 20180727\\_FIRE\\_bh-n-mobo-c & 81 & 20180809\\_FIRE\\_bh-s-mobo-c & 80 \\\\ 20180727\\_FIRE\\_bh-s-mobo-c & 81 & 20180809\\_FIRE\\_bl-e-mobo-c & 81 \\\\ 20180727\\_FIRE\\_bl-e-mobo-c & 81 & 20180809\\_FIRE\\_mg-w-mobo-c & 81 \\\\ 20180727\\_FIRE\\_mg-w-mobo-c & 81 & 20180813\\_FIRE\\_bl-n-mobo-c & 81 \\\\ 20180728\\_FIRE\\_rm-w-mobo-c & 81 & 20180813\\_FIRE\\_mg-w-mobo-c & 81 \\\\ 20180728\\_FIRE\\_smer-tcs9-mobo-c & 81 & 20180827\\_Holyflareup\\_sp-e-mobo-c & 81 \\\\ 20180910\\_FIRE\\_smer-tcs8-mobo-c & 81 & 20180919\\_FIRE\\_rm-e-mobo-c & 81 \\\\ 20181112\\_house\\_wc-n-mobo-c & 71 & 20190529\\_94Fire\\_lp-s-mobo-c & 81 \\\\ 20190529\\_94Fire\\_om-n-mobo-c & 81 & 20190610\\_FIRE\\_bh-w-mobo-c & 81 \\\\ 20190610\\_Pauma\\_bh-w-mobo-c & 80 & 20190610\\_Pauma\\_bh-w-mobo-m & 80 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 9: Scenes composed for target domain dataset by equation 2 (Part. 4) \\begin{table} \\begin{tabular}{l l l l} \\hline \\hline **Scene Description** & **\\# of Imgs** & **Scene Description** & **\\# of Imgs** \\\\ \\hline 20190803\\_Sage\\_om-n-mobo & 73 & 20190814\\_FIRE\\_om-e-mobo-c & 79 \\\\ 20190805\\_FIRE\\_sp-e-mobo-c & 77 & 20190814\\_FIRE-pi-s-mobo-c & 80 \\\\ 20190809\\_PinosSouth\\_pi-s-mobo & 41 & 20190825\\_FIRE-smer-tcs8-mobo-c & 80 \\\\ 20190810\\_SantaFire\\_rm-w-mobo & 81 & 20190825\\_FIRE\\_sm-w-mobo-c & 75 \\\\ 20190813\\_FIRE\\_69bravo-e-mobo-c & 81 & 20190826\\_FIRE\\_pi-s-mobo-c & 80 \\\\ 20190813\\_Topanga\\_69bravo-n-mobo & 81 & 20190826\\_FIRE\\_rm-wr-mobo-c & 80 \\\\ 20190814\\_Border\\_lp-s-mobo & 80 & 20190826\\_FIRE\\_smer-tcs9-mobo-c & 80 \\\\ 20190827\\_FIRE\\_so-w-mobo-c & 81 & 20190829\\_FIRE\\_bl-n-mobo-c & 81 \\\\ 20190829\\_FIRE\\_pi-e-mobo-c & 81 & 20190913\\_FIRE\\_lp-n-mobo-c & 80 \\\\ 20190829\\_FIRE\\_rm-w-mobo-c & 81 & 20190915\\_FIRE\\_rm-rn-mobo-c & 78 \\\\ 20190829\\_FIRE\\_smer-tcs8-mobo-c & 76 & 20190922\\_FIRE\\_ml-w-mobo-c & 81 \\\\ 20190924\\_FIRE\\_bl-s-mobo-c & 79 & 20190924\\_FIRE\\_hp-s-mobo-c & 80 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 11: Scenes composed for target domain dataset by equation 2 (Part. 
6) \\begin{table} \\begin{tabular}{l l l l} \\hline \\hline **Scene Description** & **\\# of Imgs** & **Scene Description** & **\\# of Imgs** \\\\ \\hline 20190924\\_FIRE\\_lo-w-mobo-c & 79 & 20191001\\_FIRE\\_om-s-mobo-c & 60 \\\\ 20190924\\_FIRE\\_lp-n-mobo-c & 72 & 20191001\\_FIRE\\_rm-w-mobo-c & 81 \\\\ 20190924\\_FIRE\\_ml-w-mobo-c & 80 & 20191001\\_FIRE\\_smer-tcs9-mobo-c & 80 \\\\ 20190924\\_FIRE\\_pi-w-mobo-c & 79 & 20191003\\_FIRE\\_om-s-mobo-c & 77 \\\\ 20190924\\_FIRE\\_sm-n-mobo-c & 76 & 20191003\\_FIRE\\_rm-wr-mobo-c & 81 \\\\ 20190924\\_FIRE\\_wc-e-mobo-c & 72 & 20191003\\_FIRE\\_smer-tcs9-mobo-c & 77 \\\\ 20190924\\_FIRE\\_wc-s-mobo-c & 70 & 20191005\\_FIRE\\_bm-e-mobo-c & 79 \\\\ 20190925\\_FIRE\\_wc-e-mobo-c & 81 & 20191005\\_FIRE\\_hp-s-mobo-c & 81 \\\\ 20190925\\_FIRE\\_wc-s-mobo-c & 81 & 20191005\\_FIRE\\_vo-n-mobo-c & 77 \\\\ 20190930\\_FIRE\\_om-s-mobo-c & 80 & 20191005\\_FIRE\\_wc-e-mobo-c & 79 \\\\ 20191001\\_FIRE\\_bh-w-mobo-c & 79 & 20191005\\_FIRE\\_wc-n-mobo-c & 78 \\\\ 20191001\\_FIRE\\_lp-s-mobo-c & 80 & 20191006\\_FIRE\\_lo-s-mobo-c & 79 \\\\ 20191001\\_FIRE\\_om-e-mobo-c & 79 & 20191006\\_FIRE\\_lo-w-mobo-c & 80 \\\\ 20191006\\_FIRE\\_lp-e-mobo-c & 72 & 20191006\\_FIRE\\_lp-n-mobo-c & 73 \\\\ 20191006\\_FIRE\\_lp-s-mobo-c & 73 & & \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 12: Scenes composed for target domain dataset by equation 2 (Part. 7) \\begin{table} \\begin{tabular}{c c c c} \\hline \\hline **Scene Description** & **Imgs** & **Scene Description** & **Imgs** \\\\ \\hline 20191006\\_FIRE\\_ml-w-mobo-c & 81 & 20200226\\_FIRE\\_rm-e-mobo-c & 81 \\\\ 20191006\\_FIRE\\_om-n-mobo-c & 78 & 20200304\\_FIRE\\_rm-w-mobo-c & 81 \\\\ 20191006\\_FIRE\\_om-s-mobo-c & 77 & 20200306\\_FIRE\\_mlo-n-mobo-c & 81 \\\\ 20191006\\_FIRE\\_pi-s-mobo-c & 78 & 20200306\\_FIRE\\_ml-s-mobo-c & 81 \\\\ 20191007\\_FIRE\\_lp-s-mobo-c & 81 & 20200306\\_FIRE\\_pi-n-mobo-c & 81 \\\\ 20191007\\_FIRE\\_om-s-mobo-c & 81 & 20200521\\_FIRE\\_om-n-mobo-c & 81 \\\\ 20191007\\_FIRE\\_sm-s-mobo-c & 81 & 20200521\\_FIRE\\_om-s-mobo-c & 81 \\\\ 20191030\\_CopperCanyon\\_om-s-mobo-c & 81 & 20200521\\_FIRE\\_om-w-mobo-c & 81 \\\\ 20191030\\_CopperCanyon\\_om-s-mobo-m & 81 & 20200521\\_VEGFGGMT\\_bm-s-mobo-c & 81 \\\\ 20200202\\_FIRE\\_hp-w-mobo-c & 81 & 20200521\\_VEGFGGMT\\_ml-w-mobo-c & 81 \\\\ 20200205\\_FIRE\\_hp-w-mobo-c & 81 & 20200521\\_VEGFGGMT\\_wc-e-mobo-c & 81 \\\\ 20200206\\_FIRE\\_ml-s-mobo-c & 81 & 20200529\\_StructFire\\_we-e-mobo-c & 80 \\\\ 20200601\\_WILLDAND-DRILLS\\_mlo-e-mobo-c & 81 & 20200608\\_FIRE\\_rm-w-mobo-c & 81 \\\\ 20200601\\_WILLDAND-DRILLS\\_ml-s-mobo-c & 81 & 20200611\\_skyline\\_lp-n-mobo-c & 81 \\\\ 20200601\\_WILLDAND-DRILLS\\_ml-s-mobo-c & 81 & 20200614\\_DrumCanyon\\_syp-w-mobo-c & 81 \\\\ 20200601\\_WILLDAND-DRILLS\\_om-e-mobo-c & 81 & 20200615\\_Rainbow\\_rm-e-mobo-c & 81 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 13: Scenes composed for target domain dataset by equation 2 (Part. 
8) \\begin{table} \\begin{tabular}{c c c c} \\hline \\hline **Scene Description** & **Imgs** & **Scene Description** & **Imgs** \\\\ \\hline 20200618\\_FIRE\\_om-w-mobo-c & 81 & 20200727\\_Border11Fire\\_om-e-mobo-c & 75 \\\\ 20200705\\_FIRE\\_bm-w-mobo-c & 81 & 20200727\\_Border11Fire\\_om-e-mobo-m & 75 \\\\ 20200705\\_FIRE\\_wc-n-mobo-c & 81 & 20200806\\_BorderFire\\_lp-s-mobo-c & 81 \\\\ 20200709\\_Tripp\\_hp-n-mobo-c & 81 & 20200806\\_BorderFire\\_om-e-mobo-c & 81 \\\\ 20200712\\_VSSBonhommeRichard\\_sm-w-mobo-c & 81 & 20200806\\_SpringsFire\\_lp-w-mobo-c & 62 \\\\ 20200727\\_Border11Fire\\_lp-s-mobo-c & 75 & 20200806\\_SpringsFire\\_lp-w-mobo-m & 62 \\\\ 20200807\\_AppleFire-backfire-operation\\_hp-n- & 81 & 20200806\\_SpringsFire\\_om-n-mobo-c & 65 \\\\ 20200808\\_OliveFire\\_wc-e-mobo-c & 74 & 20200806\\_SpringsFire\\_om-n-mobo-m & 62 \\\\ 20200812\\_LakeFire\\_dwpgm-n-mobo-c & 81 & 20200806\\_SpringsFire\\_sm-e-mobo-c & 65 \\\\ 20200813\\_Ranch2Fire\\_marconi-n-mobo-c & 73 & 20200813\\_SkylineFire\\_sp-n-mobo-c & 75 \\\\ 20200813\\_Ranch2Fire\\_sjh-n-mobo-c & 78 & 20200813\\_VictoriaFire\\_lp-n-mobo-c & 70 \\\\ 20200813\\_Ranch2Fire\\_wilson-e-mobo-c & 77 & 20200822\\_BrattonFire\\_lp-e-mobo-c & 81 \\\\ 20200822\\_BrattonFire\\_lp-s-mobo-c & 81 & 20200828\\_BorderFire\\_om-w-mobo-c & 80 \\\\ 20200822\\_SloaneFire\\_lp-n-mobo-c & 81 & 20200828\\_BorderFire\\_sm-s-mobo-c & 81 \\\\ 20200823\\_OakFire\\_pi-e-mobo-c & 81 & 20200829\\_inside-Mexico\\_cp-s-mobo-c & 81 \\\\ 20200829\\_inside-Mexico\\_mo-s-mobo-c & 81 & 20200831\\_FIRE\\_wc-n-mobo-c & 180 \\\\ 20200905\\_ValleyFire\\_cp-s-mobo-c & 0 & 20200905\\_ValleyFire\\_lp-n-mobo-c & 73 \\\\ 20200905\\_ValleyFire\\_pi-w-mobo-c & 75 & 20200905\\_ValleyFire\\_sm-e-mobo-c & 71 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 14: Scenes composed for target domain dataset by equation 2 (Part. 9) ## Appendix B Training details In Equation 3, \\(L^{S},L^{M},L^{A},L^{C}\\) refer to the supervised loss, the masked image consistency (MIC) loss, the adversarial loss, and the consistency loss, respectively. More information can be found in the baseline method of Hoyer et al. (2023), since we use the same losses. In the first, source-only stage, we used only supervised learning. Our backbone is ResNet-50 with ImageNet-pretrained weights. We trained for 10 epochs with a step-decay learning rate schedule, a batch size of 16, and an SGD optimizer with 0.9 momentum and 0.0005 weight decay. In the second stage, we also trained for 10 epochs with the same hyperparameter setup as the first stage, except that we added the semi-supervised learning losses. For strong augmentation, we split each image into 32x32 blocks and masked them with a ratio of 0.5. We used an EMA rate of 0.9 and \\(\\lambda^{M}=0.5\\), with confidence score thresholds \\(\\tau_{u}=0.8,\\tau_{l}=0.05\\). The final weights of the first stage are used as the initial weights of the second. The overall hyperparameters are summarized in Table 20. \\[\\min_{\\theta_{s}}\\frac{1}{N_{s}}\\sum_{k=1}^{N_{s}}L_{k}^{S}+\\frac{1}{N_{t}}\\sum_{k=1}^{N_{t}}\\lambda^{M}L_{k}^{M}+\\frac{1}{N_{t}+N_{s}}\\sum_{k=1}^{N_{t}+N_{s}}(\\lambda^{A}L_{k}^{A}+\\lambda^{C}L_{k}^{C}) \\tag{3}\\] ## Appendix C Impact of using background images for semi-supervised domain adaptation Background images are especially helpful for the 0.5% and 1.0% protocols, where pseudo labels with high confidence scores are especially lacking, as shown in Table 21. We define an image as background when it contains no object with a confidence score higher than 0.05.
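To illustrate the filtering rule of Equation 1 and the background definition above, here is a small sketch (a hypothetical reconstruction of ours; the function and variable names are not from any released code):

```python
import torch

TAU_U, TAU_L = 0.8, 0.05  # upper/lower confidence thresholds from Appendix B

def filter_pseudo_labels(boxes: torch.Tensor, scores: torch.Tensor):
    """Split teacher detections into positive pseudo labels and a background flag.

    boxes:  (N, 4) predicted boxes for one unlabeled target image
    scores: (N,)   corresponding confidence scores
    Returns (positive_boxes, is_background); a sketch of Eq. (1), not the
    actual LADA implementation.
    """
    positives = boxes[scores > TAU_U]             # reliable foreground pseudo labels
    is_background = bool((scores < TAU_L).all())  # no object above the lower threshold
    return positives, is_background

# Usage: background images contribute a "no object" supervisory signal,
# which is most helpful for the 0.5% and 1.0% protocols.
boxes = torch.tensor([[10., 10., 50., 40.], [5., 5., 20., 20.]])
scores = torch.tensor([0.91, 0.03])
pos, bg = filter_pseudo_labels(boxes, scores)  # pos: 1 box, bg: False
```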
\\begin{table} \\begin{tabular}{r c c c} \\hline \\hline **Scene Description** & **Imgs** & **Scene Description** & **Imgs** \\\\ \\hline 20200911\\_FIRE\\_lp-e-mobo-c & 81 & 20200930\\_inMexico\\_lp-s-mobo-c & 81 \\\\ 20200911\\_FIRE\\_mlo-s-mobo-c & 81 & 20200930\\_inMexico\\_om-e-mobo-c & 81 \\\\ 20200911\\_FIRE\\_pi-s-mobo-c & 81 & 20201003\\_structurefire\\_bh-w-mobo-c & 80 \\\\ 20200930\\_BoundaryFire\\_wc-e-mobo-c & 81 & 20201003\\_structurefire\\_bm-w-mobo-c & 74 \\\\ 20200930\\_DeLuze\\_rrm-w-mobo-c & 61 & 20201013\\_FIRE\\_cp-s-mobo-c & 79 \\\\ 20201105\\_Roundfire\\_lp-s-mobo-c & 80 & 20201105\\_Roundfire\\_om-e-mobo-c & 81 \\\\ 20201105\\_Roundfire\\_pi-s-mobo-c & 81 & 20201127\\_Hawkfire\\_pi-w-mobo-c & 81 \\\\ 20201202\\_BondFire-nightime\\_sp-w-mobo-c & 75 & 20201205\\_typical-range-fire\\_sclm-e-mobo-c & 81 \\\\ 20201202\\_WillowFire-nightime-near-CDF- & 73 & 20201206\\_JEEP-ON-FIRE\\_om-w-mobo-c & 70 \\\\ HQ\\_lp-w-mobo-c & & & \\\\ 20201202\\_WillowFire-nightime-near-CDF- & 77 & 20201207\\_La\\_bh-s-mobo-c & 81 \\\\ HQ\\_om-n-mobo-c & & & \\\\ 20201202\\_WillowFire-nightime-near-CDF- & 77 & 20201208\\_FIRE\\_om-s-mobo-c & 80 \\\\ HQ\\_sm-n-mobo-c & & & \\\\ 20201216\\_ChaparralFire\\_lp-w-mobo-c & 81 & 20201216\\_ChaparralFire\\_om-n-mobo-c & 81 \\\\ 20201216\\_ChaparralFire\\_pi-w-mobo-c & 81 & & \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 15: Scenes composed for target domain dataset by equation 2 (Part. 10) \\begin{table} \\begin{tabular}{c c c c} \\hline \\hline **Scene Description** & **\\# of Imgs** & **Scene Description** & **\\# of Imgs** \\\\ \\hline 20201216\\_ChaparralFire\\_sm-n- & 81 & 20210113\\_Borderfire\\_mlo-s- & 81 \\\\ mobo-c & mobo-c & mobo-c & 81 \\\\ 20201223\\_Creekfire\\_bh-w-mobo-c & 81 & 20210113\\_Borderfire\\_pi-s-mobo-c & 81 \\\\ 20210107\\_Miguelfire\\_om-w-mobo-c & 80 & 20210115\\_Bontifaire\\_tp-w-mobo-c & 71 \\\\ 20210110\\_Borderfire\\_lp-s-mobo-c & 80 & 20210204\\_FIRE\\_tp-s-mobo-c & 81 \\\\ 20210209\\_FIRE\\_hp-e-mobo-c & 78 & 20210302\\_FIRE\\_lp-e-mobo-c & 81 \\\\ 20210209\\_FIRE\\_tp-w-mobo-c & 77 & 20210302\\_FIRE\\_lp-e-mobo-m & 81 \\\\ 20210319\\_FIRE\\_om-n-mobo-c & 81 & 20210711\\_FIRE\\_wc-e-mobo-c & 81 \\\\ 20200906-BobcatFire-wilson-e- & 82 & 20210810-Lyonsfire-housefire-lp- & 64 \\\\ mobo-c & & n-mobo-c & \\\\ 20220210-EmeraldFire-marconi- & 82 & 20220210-EmeraldFire-signal-s- & 82 \\\\ w-mobo-c & mobo-c & mobo-c & \\\\ 20220210-EmeraldFire-stgo-w- & 82 & 20220214-PrescribedFire-pi-n- & 82 \\\\ mobo-c & mobo-c & mobo-c & \\\\ 20220302-Jimfire-0921-stgo-e- & 81 & 20220302-Jimfire-0921-stgo-s- & 81 \\\\ mobo-c & mobo-c & mobo-c & \\\\ 20220302-Jimfire-1101-stgo-e- & 81 & 20220405-fire-in-Fallbrook-rm-s- & 81 \\\\ mobo-c & mobo-c & mobo-c & \\\\ 20220405-fire-in-Fallbrook-rm-s- & 82 & 20220622-HighlandFire-wc-n- & 81 \\\\ mobo-m & mobo-c & mobo-c & \\\\ 20220622-HighlandFire-wc-n- & 82 & 20220713-Longestarfire-om-w- & 72 \\\\ mobo-c & mobo-c & mobo-c & \\\\ 20220727-Casenfire-bm-s-mobo-c & 82 & 20220831-Border32fire-pi-s-mobo-c & 65 \\\\ 20220831-Border32fire-pi-s- & 66 & 20220905-FairviewFire-bh-n- & 81 \\\\ mobo-m & mobo-c & & \\\\ 20220905-FairviewFire-smer- & 82 & 20220905-FairviewFire-stgo-e- & 81 \\\\ tcs3-mobo-c & mobo-c & mobo-c & \\\\ 20220905-FairviewFire-tp-w- & 81 & 20221116-Willowfire-om-n- & 81 \\\\ mobo-c & mobo-c & mobo-c & \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 16: Scenes composed for target domain dataset by equation 2 (Part. 
11) \\begin{table} \\begin{tabular}{c c c c} \\hline \\hline **Scene Description** & **\\# of Imgs** & **Scene Description** & **\\# of Imgs** \\\\ \\hline rm-n-mobo-c & 81 & smer-tcs3-mobo-c & 81 \\\\ lp-e-iqeye & 41 & om-e-mobo-c & 81 \\\\ pi-s-mobo-c & 81 & ml-n-mobo-c & 81 \\\\ lp-n-iqeye & 41 & mg-s-iqeye & 41 \\\\ mw-e-mobo-c & 81 & & \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 17: Scenes composed for source domain dataset by only cameraName ## Appendix D Relation between labeled-image and unlabeled-image ratio Since there is a dataset imbalance between labeled and unlabeled images, we studied the best ratio of labeled to unlabeled data within a mini-batch. As shown in Table 22, using 80% unlabeled images per mini-batch gave the best result. This is as expected, since there are more than 10 times as many unlabeled images as labeled ones. We found that training with 90% unlabeled images per mini-batch did not converge, which we attribute to a lack of supervisory signal in the early phase. We therefore report our results using an 80% unlabeled ratio per mini-batch. \\begin{table} \\begin{tabular}{l l l l} \\hline \\hline **Scene Description** & **\\# of Imgs** & **Scene Description** & **\\# of Imgs** \\\\ \\hline & & bl-n-mobo-c & 243 \\\\ bm-n-mobo-c & 162 & bm-w-mobo-c & 236 \\\\ rm-w-mobo-c & 1193 & lp-s-iqeye & 81 \\\\ om-s-mobo-c & 834 & pi-w-mobo-c & 478 \\\\ sm-n-mobo-c & 628 & bh-w-mobo-c & 559 \\\\ hp-n-mobo-c & 694 & bm-n-mobo & 81 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 18: Scenes composed for target domain dataset by only CameraName (Part 1.) Figure 5: Example of our labeled dataset \\begin{table} \\begin{tabular}{l l} \\hline \\hline **Scene Description** & **\\# of Imgs** \\\\ \\hline spp-n-mobo-c & 81 \\\\ bl-e-mobo-c & 243 \\\\ bl-s-mobo-c & 227 \\\\ smer-tcs8-mobo-c & 719 \\\\ mg-n-iqeye & 81 \\\\ bh-n-mobo-c & 402 \\\\ tp-s-mobo-c & 162 \\\\ hp-w-mobo-c & 243 \\\\ pi-e-mobo-c & 405 \\\\ smer-tcs10-mobo-c & 243 \\\\ sp-w-mobo-c & 156 \\\\ ml-s-mobo-c & 324 \\\\ wc-e-mobo-c & 939 \\\\ lp-n-mobo-c & 756 \\\\ sp-n-mobo-c & 237 \\\\ sp-e-mobo-c & 239 \\\\ so-w-mobo-c & 234 \\\\ mg-w-mobo-c & 243 \\\\ mg-s-mobo-c & 78 \\\\ sp-s-mobo-m & 73 \\\\ om-n-mobo-c & 704 \\\\ mg-n-mobo-c & 68 \\\\ om-w-mobo & 81 \\\\ om-s-mobo & 79 \\\\ pi-s-mobo & 41 \\\\
69bravo-e-mobo-c & 81 \\\\ ml-w-mobo-c & 323 \\\\ wc-s-mobo-c & 151 \\\\ vo-m-mobo-c & 77 \\\\ sm-s-mobo-c & 162 \\\\ mlo-n-mobo-c & 81 \\\\ om-w-mobo-c & 546 \\\\ mlo-s-mobo-c & 324 \\\\ om-e-mobo-m & 75 \\\\ lp-w-mobo-m & 62 \\\\ sm-e-mobo-m & 63 \\\\ marconi-n-mobo-c & 73 \\\\ wilson-e-mobo-c & 159 \\\\ sclm-e-mobo-c & 81 \\\\ lp-e-mobo-m & 81 \\\\ signal-s-mobo-c & 82 \\\\ stgo-e-mobo-c & 243 \\\\ rm-s-mobo-c & 81 \\\\ wc-n-mobo-m & 82 \\\\ smer-tcs3-mobo-m & 82 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 19: Scenes composed for target domain dataset by only CameraName (Part 2.) \\begin{table} \\begin{tabular}{l c} \\hline **Config** & **Value** \\\\ \\hline Optimizer & SGD \\\\ Optimizer momentum & 0.9 \\\\ Weight decay & 1e-4 \\\\ Domain Adaptation rate & 2.5e-3 \\\\ Warmup epochs & 0.333 \\\\ Training epochs & 10 \\\\ EMA decay & 0.9 \\\\ \\(\\tau_{u}\\) & 0.8 \\\\ \\(\\tau_{l}\\) & 0.05 \\\\ \\(\\lambda^{M}\\) & 0.5 \\\\ \\(\\lambda^{A}_{ins}\\) & 1e-1 \\\\ \\(\\lambda^{A}_{img}\\) & 2.5e-2 \\\\ \\(\\lambda^{C}_{ins}\\) & 1e-2 \\\\ \\(\\lambda^{C}_{img}\\) & 2.5e-3 \\\\ \\hline \\end{tabular} \\end{table} Table 20: Hyperparameters for training LADA with the proposed SSDA \\begin{table} \\begin{tabular}{|c|c|c|c|} \\hline & **ssda 0.5\\%** & **ssda 1.0\\%** & **ssda 3.0\\%** \\\\ \\hline **LADA (ours)** & **10.0** & **14.0** & 20.4 \\\\ \\hline **LADA (without background images)** & 7.9 & 13.9 & **20.9** \\\\ \\hline \\end{tabular} \\end{table} Table 21: Effect of background images on LADA under the SSDA protocols \\begin{table} \\begin{tabular}{|c|c|c|c|c|} \\hline & **0.5** & **0.6** & **0.7** & **0.8** \\\\ \\hline **SSDA-0.5\\%** & 25.2 & 27.2 & 27.0 & **27.3** \\\\ \\hline \\end{tabular} \\end{table} Table 22: Performance for different ratios of unlabeled images in a mini-batch
Recently, both the frequency and intensity of wildfires have increased worldwide, primarily due to climate change (Sathishkumar et al., 2023). In this paper, we propose a novel protocol for wildfire detection, leveraging semi-supervised domain adaptation for object detection, accompanied by a corresponding dataset designed for use by both academics and industries. Our dataset encompasses 30 times more diverse labeled scenes than the current largest benchmark wildfire dataset, HPWREN (2000), and introduces a new labeling policy for wildfire detection. Inspired by Liu et al. (2018), we propose a robust baseline, Location-Aware Object Detection for Semi-Supervised Domain Adaptation (LADA), utilizing a teacher-student based framework (Liu et al., 2021) capable of extracting translational variance features characteristic of wildfires. Using only 1% of the labeled data in the target domain, our framework significantly outperforms the source-only baseline by a notable margin of 3.8% in mean Average Precision on the HPWREN wildfire dataset. Our dataset is available at [https://github.com/BloomBerry/LADA](https://github.com/BloomBerry/LADA).
# Constraining the density dependence of nucleon symmetry energy with heavy-ion reactions and its astrophysical impact

24th Winter Workshop on Nuclear Dynamics, South Padre, Texas, USA, April 5-12, 2008

Bao-An Li\\({}^{1}\\), Lie-Wen Chen\\({}^{2}\\), Che Ming Ko\\({}^{3}\\), Plamen G. Krastev\\({}^{1}\\) and Aaron Worley\\({}^{1}\\) \\({}^{1}\\) Department of Physics, Texas A&M University-Commerce, Commerce, Texas 75429-3011, USA \\({}^{2}\\) Institute of Theoretical Physics, Shanghai Jiao Tong University, Shanghai 200240, P.R. China \\({}^{3}\\) Cyclotron Institute and Department of Physics, Texas A&M University, College Station, Texas 77843-3366, USA

Keywords: Symmetry Energy, Equation of State, Neutron-Rich Nuclear Matter, Neutron Stars, Gravitational Waves. PACS: 21.65.Cd, 21.65.Ef, 25.70.-z, 21.30.Fe, 21.10.Gv, 21.60.-c.

## 1 EOS of neutron-rich nuclear matter partially constrained by heavy-ion reactions

The EOS of isospin asymmetric nuclear matter can be written within the well-known parabolic approximation as \\[E(\\rho,\\delta)=E(\\rho,\\delta=0)+E_{\\rm sym}(\\rho)\\delta^{2}+{\\cal O}(\\delta^{4}), \\tag{1}\\] where \\(\\delta\\equiv(\\rho_{n}-\\rho_{p})/(\\rho_{p}+\\rho_{n})\\) is the isospin asymmetry, with \\(\\rho_{n}\\) and \\(\\rho_{p}\\) denoting the neutron and proton densities, respectively; \\(E(\\rho,\\delta=0)\\) is the EOS of symmetric nuclear matter, and \\(E_{\\rm sym}(\\rho)\\) is the density-dependent nuclear symmetry energy. The latter is very important for many interesting astrophysical problems [1, 2], the structure of rare isotopes [3] and heavy-ion reactions [4, 5, 6, 7, 8]. However, the density dependence of the nuclear symmetry energy has been the most uncertain part of the EOS of neutron-rich matter. Fortunately, comprehensive analyses of several isospin effects, including isospin diffusion [9, 10] and isoscaling [11] in heavy-ion reactions and the size of the neutron skin in heavy nuclei [12], have allowed us to constrain the density dependence of the symmetry energy at sub-saturation densities to lie approximately between \\(31.6(\\rho/\\rho_{0})^{0.69}\\) and \\(31.6(\\rho/\\rho_{0})^{1.05}\\) MeV, labelled by \\(x=0\\) and \\(x=-1\\), respectively, in the lower panel of Fig. 1 [13, 14]. While these constraints are only valid at sub-saturation densities and still suffer from some uncertainties, they represent significant progress in the field compared to the earlier situation. Further progress is expected from both the parity-violating electron scattering experiments [15] at the Jefferson Lab, which will help pin down the low-density part of the symmetry energy, and heavy-ion reactions with high-energy radioactive beams at several facilities, which will help constrain the high-density behavior of the symmetry energy [8]. For many astrophysical studies, the EOS is usually expressed in terms of the pressure as a function of density and isospin asymmetry. Shown in Fig. 1 are the pressures for two extreme cases: symmetric matter (upper panel) and pure neutron matter (lower panel). The green area in the density range of \\(2-4.6\\rho_{0}\\) is the experimental constraint on the pressure \\(P_{0}\\) of symmetric nuclear matter extracted by Danielewicz, Lacey and Lynch from analyzing the collective flow data from relativistic heavy-ion collisions [6].
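For orientation, the two constrained parameterizations quoted above can be evaluated directly; the following short sketch (ours, for illustration only) prints the resulting sub-saturation bounds in MeV:

```python
# A minimal sketch (ours) evaluating the constrained symmetry energy bounds
# quoted above, E_sym(rho) ~ 31.6 (rho/rho0)^gamma MeV, with gamma = 0.69
# (x = 0) and gamma = 1.05 (x = -1). Valid only at sub-saturation densities.

def e_sym(u: float, gamma: float) -> float:
    """Symmetry energy in MeV as a function of u = rho/rho0."""
    return 31.6 * u ** gamma

for u in (0.5, 0.8, 1.0):
    lo, hi = sorted((e_sym(u, 0.69), e_sym(u, 1.05)))
    print(f"rho/rho0 = {u:.1f}: E_sym between {lo:.1f} and {hi:.1f} MeV")
```

Both parameterizations coincide at saturation density by construction, and the band between them widens away from \\(\\rho_{0}\\), which is what drives the uncertainty in the pure neutron matter pressure discussed next.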
It is seen that results from mean-field calculations using the phenomenological momentum-dependent (MDI) interaction [16], the Dirac-Brueckner-Hartree-Fock approach with the Bonn B potential (DBHF) [17], and the variational calculations by Akmal, Pandharipande, and Ravenhall (APR) [18] are all consistent with this constraint. For pure neutron matter, the pressure is \\(P_{\\rm PNM}=P_{0}+\\rho^{2}dE_{\\rm sym}/d\\rho\\) and depends on the density dependence of the nuclear symmetry energy. Since the constraints on the symmetry energy from terrestrial laboratory experiments are only available for densities less than about \\(1.2\\rho_{0}\\), as indicated by the green and red squares in the lower panel, in contrast to the constraint on the EOS of symmetric nuclear matter, which is only available at much higher densities, the most reliable estimate of the EOS of neutron-rich matter can be obtained by extrapolating the underlying model EOS for symmetric matter and the symmetry energy from their respective density ranges to all densities. Shown by the shaded black area in the lower panel is the resulting best estimate of the pressure of high-density pure neutron matter, based on the predictions from the MDI interaction with \\(x=0\\) and \\(x=-1\\) as the lower and upper bounds on the symmetry energy and the flow-constrained EOS of symmetric nuclear matter. As one expects, and consistent with the estimate in Ref. [6], the estimated error bars of the high-density pure neutron matter EOS are much wider than the uncertainty range of the EOS of symmetric nuclear matter. For the four interactions indicated in the figure, the predicted EOSs cannot be distinguished by the estimated constraint on high-density pure neutron matter.

Figure 1: (Color online) Pressure as a function of density for symmetric (upper panel) and pure neutron (lower panel) matter. The green area in the upper panel is the experimental constraint on symmetric matter. The corresponding constraint on the pressure of pure neutron matter, obtained by combining the flow data and an extrapolation of the symmetry energy functionals constrained below \\(1.2\\rho_{0}\\) by the isospin diffusion data, is the shaded black area in the lower panel. Results taken from Refs. [6, 19].

In the following, the astrophysical consequences of this partially constrained EOS of neutron-rich matter for the mass-radius correlation, moment of inertia, and the elliptical deformation and gravitational radiation of (rapidly) rotating neutron stars are briefly discussed. More details of our studies on these topics can be found in Refs. [19, 20, 21, 22, 23].

## 2 Nuclear constraints on the mass-radius correlation, moment of inertia, elliptical deformation and gravitational radiation of rapidly rotating neutron stars

The partially constrained EOS of neutron-rich nuclear matter has important ramifications for the properties of neutron stars. As a first example, in Fig. 2 we show the mass-radius correlations for the two fastest rotating neutron stars known as of today. These pulsars spin at 716 Hz [24] and 1122 Hz [25], respectively. However, based only on the observational data available so far, their properties have not yet been fully understood. The analysis of their properties based on the EOS and symmetry energy constrained by terrestrial laboratory data is thus especially interesting. Setting the observed frequency of the pulsar as the Kepler frequency, corresponding to the highest possible frequency for a star before it starts to shed mass at the equator, one can obtain an estimate of its maximum radius as a function of mass \\(M\\), \\[R_{\\rm max}(M)=\\chi\\left(\\frac{M}{1.4M_{\\odot}}\\right)^{1/3}\\ {\\rm km}, \\tag{2}\\] with \\(\\chi=20.94\\) for rotational frequency \\(\\nu=716\\) Hz and \\(\\chi=15.52\\) for \\(\\nu=1122\\) Hz.

Figure 2: (Color online) Gravitational mass versus equatorial radius for neutron stars rotating at \\(\\nu=716\\) Hz and \\(\\nu=1122\\) Hz. Taken from Ref. [22].
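Equation (2) is straightforward to evaluate; the sketch below (our own worked example, using the quoted \\(\\chi\\) values) illustrates how the allowed radius shrinks at the higher spin frequency:

```python
# A worked evaluation of Eq. (2) (our own sketch): the maximum equatorial
# radius allowed when the observed spin is interpreted as the Kepler
# frequency, using the chi values quoted in the text.

def r_max_km(mass_solar: float, chi: float) -> float:
    """Eq. (2): R_max in km for a star of given mass (in solar masses)."""
    return chi * (mass_solar / 1.4) ** (1.0 / 3.0)

for nu_hz, chi in ((716, 20.94), (1122, 15.52)):
    for m in (1.4, 2.0):
        print(f"nu = {nu_hz} Hz, M = {m} Msun: R_max = {r_max_km(m, chi):.2f} km")
```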
The maximum radii are shown with the dotted lines in Fig. 2. It is seen that the range of allowed masses supported by a given EOS for rapidly rotating neutron stars becomes narrower than that of static configurations. This effect becomes stronger with increasing frequency and depends upon the EOS. Since predictions from the \\(x=0\\) and \\(x=-1\\) EOSs represent the limits of the neutron star models consistent with the experimental data from terrestrial nuclear laboratories, one can predict that the mass of the neutron star rotating at 1122 Hz is between 1.7 and 2.1 solar masses [22]. Another interesting example is the gravitational radiation expected from elliptically deformed pulsars. Gravitational waves (GWs) are tiny disturbances in space-time and are a fundamental, although not yet directly confirmed, prediction of General Relativity. Gravitational wave astrophysics would open an entirely new non-electromagnetic window on the Cosmos, making it possible to probe physics that is hidden or dark to current electromagnetic observations [26]. Elliptically deformed pulsars are among the primary possible sources of GWs. Very recently the LIGO and GEO collaborations have set upper limits on the GWs expected from 78 radio pulsars [27]. Gravitational waves are characterized by a strain amplitude \\(h_{0}\\) which can be written as \\[h_{0}=\\chi\\frac{\\Phi_{22}\\nu^{2}}{r}, \\tag{3}\\] with \\(\\chi=\\sqrt{2048\\pi^{5}/15}\\,G/c^{4}\\). In the above equation, \\(r\\) is the distance between the pulsar and the detector, and \\(\\Phi_{22}\\) is the quadrupole moment of the mass distribution. For slowly rotating neutron stars, one has [28] \\[\\Phi_{22,max}=2.4\\times 10^{38}\\ {\\rm g\\ cm^{2}}\\left(\\frac{\\sigma}{10^{-2}}\\right)\\left(\\frac{R}{10\\ {\\rm km}}\\right)^{6.26}\\left(\\frac{1.4M_{\\odot}}{M}\\right)^{1.2}. \\tag{4}\\] In the above expression, \\(\\sigma\\) is the breaking strain of the neutron star crust, which is rather uncertain at present and lies in the wide range \\(\\sigma=[10^{-5}-10^{-2}]\\) [29]. In our estimate, we use the maximum breaking strength, i.e. \\(\\sigma=10^{-2}\\). In Fig. 3 we display the GW strain amplitude, \\(h_{0}\\), as a function of stellar mass for three selected millisecond pulsars which are relatively close to Earth (\\(r<0.4\\) kpc) and have rotational frequencies below 300 Hz. It is interesting to note that the predicted \\(h_{0}\\) is above the design sensitivity of the LIGO detector.

Figure 3: Gravitational-wave strain amplitude as a function of the neutron star mass. The error bars between the \\(x=0\\) and \\(x=-1\\) EOSs provide a limit on the strain amplitude of the gravitational waves to be expected from these neutron stars, and show a specific case for stellar models of \\(1.4M_{\\odot}\\). Taken from Ref. [19].
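To connect Eqs. (3) and (4) numerically, the following sketch (ours; the pulsar spin frequency and distance are illustrative assumptions, not the actual pulsars of Fig. 3) evaluates the strain amplitude in SI units:

```python
import math

# A numerical sketch of Eqs. (3)-(4) with illustrative parameters (ours).
G, C = 6.674e-11, 2.998e8
CHI = math.sqrt(2048 * math.pi ** 5 / 15) * G / C ** 4  # prefactor of Eq. (3)

def phi22_max(mass_solar, radius_km, sigma=1e-2):
    """Maximum quadrupole moment of Eq. (4), converted from g cm^2 to kg m^2."""
    phi_cgs = 2.4e38 * (sigma / 1e-2) * (radius_km / 10.0) ** 6.26 * (1.4 / mass_solar) ** 1.2
    return phi_cgs * 1e-7  # 1 g cm^2 = 1e-7 kg m^2

KPC = 3.086e19                 # metres per kiloparsec
nu, r = 250.0, 0.25 * KPC      # assumed spin frequency (Hz) and distance
h0 = CHI * phi22_max(1.4, 10.0) * nu ** 2 / r
print(f"h0 ~ {h0:.1e}")        # of order 1e-25 to 1e-24 for such parameters
```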
The error bars in Fig. 3 between the \\(x=0\\) and \\(x=-1\\) EOSs provide a constraint on the _maximal_ strain amplitude of the gravitational waves emitted by the millisecond pulsars considered here. The specific case shown in the figure is for neutron star models of \\(1.4M_{\\odot}\\). Depending on the exact rotational frequency, distance to the detector, and details of the EOS, the _maximal_ \\(h_{0}\\) is in the range \\(\\sim[0.4-1.5]\\times 10^{-24}\\). These estimates do not take into account the uncertainties in the distance measurements. They should also be regarded as upper limits, since the quadrupole moment (Eq. (4)) has been calculated with \\(\\sigma=10^{-2}\\) (where \\(\\sigma\\) can go as low as \\(10^{-5}\\)). To emit GWs a pulsar must have a quadrupole deformation. The latter is normally characterized by the ellipticity, which is related to the neutron star maximum quadrupole moment \\(\\Phi_{22}\\) and the moment of inertia via [28] \\[\\epsilon=\\sqrt{\\frac{8\\pi}{15}}\\frac{\\Phi_{22}}{I_{zz}}. \\tag{5}\\] For slowly rotating neutron stars, one can use the following empirical relation [30]: \\[I_{zz}\\approx(0.237\\pm 0.008)MR^{2}\\left[1+4.2\\frac{M\\,{\\rm km}}{M_{\\odot}R}+90\\left(\\frac{M\\,{\\rm km}}{M_{\\odot}R}\\right)^{4}\\right]. \\tag{6}\\] This expression is shown to hold for a wide class of equations of state which do not exhibit considerable softening and for neutron star models with masses above \\(1M_{\\odot}\\) [30]. Fig. 4 displays the neutron star moment of inertia (left panel) and ellipticity (right panel). It is interesting to mention that a fiducial value of \\(I_{zz}=10^{45}\\) g cm\\({}^{2}\\) is normally assumed in the literature. Our calculations indicate that \\(I_{zz}\\) is strongly mass dependent; this observation is consistent with previous calculations. Moreover, the ellipticity decreases with increasing mass, and its magnitude is above the lowest upper limit of \\(4\\times 10^{-7}\\) estimated for PSR J2124-3358 [27].
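A companion sketch (again ours, assuming a 12 km radius typical of the models in Fig. 2) evaluates Eqs. (5) and (6) for a \\(1.4M_{\\odot}\\) star:

```python
import math

# A sketch (ours) evaluating the empirical moment of inertia of Eq. (6) and
# the ellipticity of Eq. (5) for an assumed 1.4 Msun, 12 km star. Units: kg, m.
MSUN = 1.989e30

def izz(mass_solar, radius_km):
    """Empirical I_zz of Eq. (6), central coefficient 0.237, in kg m^2."""
    x = mass_solar / radius_km  # the dimensionless (M/Msun)(km/R) ratio
    return 0.237 * mass_solar * MSUN * (radius_km * 1e3) ** 2 * (1 + 4.2 * x + 90 * x ** 4)

# Phi_22 from Eq. (4) with sigma = 1e-2, converted to kg m^2 (1 g cm^2 = 1e-7 kg m^2).
phi22 = 2.4e38 * (12.0 / 10.0) ** 6.26 * (1.4 / 1.4) ** 1.2 * 1e-7

i = izz(1.4, 12.0)
eps = math.sqrt(8 * math.pi / 15) * phi22 / i  # Eq. (5)
print(f"I_zz ~ {i:.2e} kg m^2 (~{i * 1e7:.1e} g cm^2), epsilon ~ {eps:.1e}")
```

For these assumed parameters the sketch returns \\(I_{zz}\\) close to the fiducial \\(10^{45}\\) g cm\\({}^{2}\\) and an ellipticity of a few \\(\\times 10^{-7}\\), consistent with the discussion above.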
Figure 4: Neutron star moment of inertia (left panel) and ellipticity (right panel). Taken from Ref. [19].

Interestingly, essentially all of these observables depend strongly on the EOS of neutron-rich matter. In particular, the MDI EOSs, adopting the same symmetric-matter EOS but different density dependences of the symmetry energy, set useful nuclear boundaries for these gravitational-wave observables.

In summary, the heavy-ion physics community has made significant progress in constraining the EOS of neutron-rich nuclear matter in recent years. In particular, comprehensive analyses of several isospin effects, including the isospin diffusion and isoscaling in heavy-ion reactions and the size of neutron skins in heavy nuclei, have allowed us to constrain the symmetry energy at sub-saturation densities to lie approximately between \\(31.6(\\rho/\\rho_{0})^{0.69}\\) MeV and \\(31.6(\\rho/\\rho_{0})^{1.05}\\) MeV. While the currently existing data have only allowed us to constrain the symmetry energy, and thus the EOS of neutron-rich matter, within a narrow range, this already helps to put some useful constraints on several interesting observables in astrophysics, such as the mass-radius correlation, the moment of inertia, and the elliptical deformation and gravitational radiation of (rapidly) rotating neutron stars. With the parity-violating electron scattering experiments and heavy-ion reactions with high-energy radioactive beams, it will be possible in the future to map out accurately the entire density dependence of the symmetry energy.

## Acknowledgments

This work was supported in part by the US National Science Foundation under Grants No. PHY-0652548, PHY-0757839 and PHY-0457265, the Research Corporation under Award No. 7123, the Advanced Research Program of the Texas Coordinating Board of Higher Education under grant no. 003565-0004-2007, the Welch Foundation under Grant No. A-1358, the National Natural Science Foundation of China under Grant Nos. 10575071 and 10675082, the MOE of China under project NCET-05-0392, the Shanghai Rising-Star Program under Grant No. 06QA14024, the SRF for ROCS, SEM of China, and the National Basic Research Program of China (973 Program) under Contract No. 2007CB815004.

## References

* [1] J.M. Lattimer and M. Prakash, Phys. Rep. **333**, 121 (2000); Astrophys. J. **550**, 426 (2001); Science **304**, 536 (2004).
* [2] A.W. Steiner, M. Prakash, J.M. Lattimer and P.J. Ellis, Phys. Rep. **411**, 325 (2005), nucl-th/0410066.
* [3] B.A. Brown, Phys. Rev. Lett. **85**, 5296 (2000).
* [4] B.A. Li, C.M. Ko and W. Bauer, topical review, Int. J. Mod. Phys. E **7**, 147 (1998).
* [5] _Isospin Physics in Heavy-Ion Collisions at Intermediate Energies_, Eds. B.A. Li and W. Udo Schröder (Nova Science Publishers, Inc., New York, 2001).
* [6] P. Danielewicz, R. Lacey and W.G. Lynch, Science **298**, 1592 (2002).
* [7] V. Baran, M. Colonna, V. Greco and M. Di Toro, Phys. Rep. **410**, 335 (2005).
* [8] B.A. Li, L.W. Chen and C.M. Ko, Phys. Rep. (2008), in press, arXiv:0804.3580 [nucl-th].
* [9] M.B. Tsang _et al._, Phys. Rev. Lett. **92**, 062701 (2004).
* [10] T.X. Liu _et al._, Phys. Rev. C **76**, 034603 (2007).
* [11] D. Shetty, S.J. Yennello and G.A. Souliotis, Phys. Rev. C **75**, 034602 (2007).
* [12] A.W. Steiner and B.A. Li, Phys. Rev. C **72**, 041601(R) (2005).
* [13] L.W. Chen, C.M. Ko and B.A. Li, Phys. Rev. Lett. **94**, 032701 (2005).
* [14] B.A. Li and L.W. Chen, Phys. Rev. C **72**, 064611 (2005).
* [15] C.J. Horowitz _et al._, Phys. Rev. C **63**, 025501 (2001).
* [16] C.B. Das, S. Das Gupta, C. Gale and B.A. Li, Phys. Rev. C **67**, 034611 (2003).
* [17] P.G. Krastev and F. Sammarruca, Phys. Rev. C **74**, 025808 (2006).
* [18] A. Akmal, V.R. Pandharipande and D.G. Ravenhall, Phys. Rev. C **58**, 1804 (1998).
* [19] P.G. Krastev, B.A. Li and A. Worley, arXiv:0805.1973 [astro-ph].
* [20] B.A. Li and A.W. Steiner, Phys. Lett. B **642**, 436 (2006).
* [21] P.G. Krastev and B.A. Li, Phys. Rev. C **76**, 055804 (2007).
* [22] P.G. Krastev, B.A. Li and A. Worley, Astrophys. J. **676**, 1170 (2008).
* [23] A. Worley, P.G. Krastev and B.A. Li, arXiv:0801.1653 [astro-ph]; Astrophys. J. (2008), in press.
* [24] J.W.T. Hessels, S.M. Ransom, I.H. Stairs, P.C.C. Freire, V.M. Kaspi and F. Camilo, Science **311**, 1901 (2006).
* [25] P. Kaaret, J. Prieskorn _et al._, Astrophys. J. **657**, L97 (2007).
* [26] E.E. Flanagan and S.A. Hughes, New J. Phys. **7**, 204 (2005).
* [27] B. Abbott _et al._ [LIGO Scientific Collaboration], Phys. Rev. Lett. **94**, 181103 (2005); Phys. Rev. D **76**, 042001 (2007).
* [28] B.J. Owen, Phys. Rev. Lett. **95**, 211101 (2005).
* [29] B. Haskell, N. Andersson, D.I. Jones and L. Samuelsson, Phys. Rev. Lett. **99**, 231101 (2007).
* [30] J.M. Lattimer and B.F. Schutz, Astrophys. J. **629**, 979 (2005).
# Assessing Protected Area Zoning Effectiveness With Remote Sensing Data: The Case of Nahuel Huapi National Park, Argentina

Maria Daniela Rivarola, Jacob Dein, Daniel Simberloff and Hannah Victoria Herrero

## Introduction

Leverington et al. (2010a) summarized different approaches used to assess PA effectiveness as follows: Coverage-studies biodiversity representation within PAs, also known as Gap Analysis (Scott et al., 1993; Armenteras & Villarreal, 2003; Chape et al., 2005; Rodriguez-Cabal et al., 2008); Broadscale Outcomes-compares environmental changes within and outside protected areas, generally using remotely sensed data (Nagendra et al., 2013; Herrero et al., 2016); Protected Area Management Effectiveness assessments (PAME)-uses the scoring framework developed by the IUCN (Hockings, 2003; Coad et al., 2015) or a similar approach; and lastly, Detailed Monitoring-generally reports animal population trends, vegetation conditions, or socioeconomic impacts of a particular PA (Barnes et al., 2016; Geldmann et al., 2018). These different approaches are complementary, as each addresses the outcome from a singular perspective (Hockings et al., 2006).

The term "protected area" comprises areas known by different names, often referring to different types of management. The International Union for Conservation of Nature (IUCN) recognizes six management categories, organized from more to less strict as follows: Ia-Strict nature reserve, Ib-Wilderness area, II-National park, III-Natural monument or feature, IV-Habitat/species management area, V-Protected landscape or seascape, and VI-Protected area with sustainable use of natural resources (Dudley, 2013). While extractive use is forbidden or minimal in categories I to IV, the restrictions are reduced in categories V and VI. The idea that a stricter management category will yield better habitat preservation has been widely explored in a variety of environments, with contrasting results. A study conducted in Bolivia, Costa Rica, Indonesia, and Thailand found that, although deforestation was generally lower inside the strictest PAs, it was unclear whether this outcome was related to the management category or instead to the remote location of most of the strict PAs (Ferraro et al., 2013). The Royal Chitwan National Park in Nepal allows use by and involvement of local residents, while Celaque National Park in Honduras has a more traditional management approach in which local residents do not participate in this endeavor. Although the deforestation rate was lower inside both PAs, the regeneration and conservation of the buffer zones surrounding the PAs were better where local residents were involved, suggesting that regulated use of PAs could be more effective in the long run (Nagendra et al., 2004). Fire incidence in tropical forests located in strict PAs in Latin America and Asia was lower than in the unprotected area; however, multi-use PAs on these continents sustained an even lower fire incidence (Nagendra, 2008). While most studies looked at entire units belonging to one particular management category, the division of a PA into zones with differing categories of protection has been less studied (Hull et al., 2011).
A common management approach is subdividing a PA into more than one management category, resulting in more strictly regulated areas surrounded by less regulated categories that act as buffer zones between the most highly protected area and the unprotected area (Geneletti & van Duren, 2008). This is the situation in Nahuel Huapi National Park (NHNP), located in the northern portion of the temperate forests of Patagonia, Argentina. This protected area, originally created in 1922 as a National Park (formerly Southern National Park, then Nahuel Huapi National Park; NP, category II IUCN), was later subdivided into National Reserve (NR; category VI IUCN) and Strict Nature Reserve (SNR; category Ia IUCN) (Rivarola et al., 2021a). The National Reserve was established in 1970 as a buffer zone between the National Park (west) and the unprotected area (east); furthermore, most of the private properties that existed at the time were included in this new (lower) category, allowing for regulated extractive use. Finally, in 1990 pristine and remote areas within the National Park were declared Strict Nature Reserve, with no human intervention allowed (Martin & Chehebar, 2001; Rivarola et al., 2021a).

NHNP has one city and two small towns on its borders, accounting for a total population of approximately 170,000 people. This region had a low human population density for most of the 20th century, a situation that changed in the 1980s. The population growth rate between 1980 and 1991 reached 101.58%, and it peaked again between 2001 and 2005 (74%), resulting in an unplanned and unregulated urban expansion in which social and economic inequity are evident, pressing on natural resources in a complex manner (Madariaga, 2007). The economic development of this region was historically based on agriculture, livestock, and logging but later switched to tourism, the main source of income nowadays (Schlüter, 1994; Nunez & Vejsbjerg, 2010).

Three biomes are protected by NHNP: high Andes, Patagonian temperate forests, and steppe, with Patagonian temperate forests accounting for the largest extent (Monjeau et al., 2005). These forests have been isolated from other temperate forests since the mid-Tertiary Period (Axelrod et al., 1991; Villagran and Hinojosa, 1997), and, as a result, 90% of the woody species are endemic (Arroyo et al., 1996) and the region is characterized by one of the highest known rates of plant-animal mutualisms (Aizen & Ezcurra, 1998). Despite the existence of multiple PAs incorporating Patagonian forests in Chile and Argentina (Armesto et al., 1998; Burkart, 2005), more than one-third of the Patagonian forests have been lost since the arrival of Europeans in the 19th century (Tecklin et al., 2002).

The assessment of NHNP effectiveness in preserving its biodiversity is crucial. Integral and pluralistic approaches are needed in order to assess PA performance (Caro et al., 2009). Three of the effectiveness assessment methods suggested by Leverington et al. (2010a) have been implemented in this PA. The first was a PAME assessment, which found that NHNP management falls in the fairly satisfactory category (Rusch, 2002). Secondly, a coverage study concluded that the hotspot of Patagonian biodiversity is not fully covered by the current PAs (Rodriguez-Cabal et al., 2008).
Lastly, detailed monitoring of the small mammal community of NHNP concluded that there is no clear evidence that a stricter category preserves this community better, with the exception of the endemic marsupial _Dromiciops gliroides_ (Rivarola et al., 2021b). The fourth type of PA effectiveness assessment, broadscale outcome, has not yet been performed, and it is the main goal of the present study.

Ecological applications of satellite remote sensing (SRS) can potentially improve environmental management by providing verifiable and standardized data at large temporal and spatial scales (Pettorelli et al., 2014). For decades, access to SRS data was very expensive, limiting its use in many countries and regions. The shift toward free SRS databases provided the opportunity for better and wider use of these data (Woodcock et al., 2008). Concomitantly, advances in processing methods have allowed such data to be used in a more comprehensive manner (Hansen and Loveland, 2012). An increasing number of studies have used SRS to assess PA broadscale outcomes, facilitating the evaluation of large areas that would not have been possible to assess from the ground (Nagendra et al., 2004; Buchanan et al., 2008; Wiens et al., 2009). Among all sensors, Landsat provides the longest consistently calibrated data set registering surface changes, going back to 1972 (Markham and Helder, 2012), and is thus an excellent source of data for habitat monitoring, allowing detection of habitat fragmentation and disturbances in PAs (Nagendra et al., 2013).

Several vegetation indices have been developed in order to draw inferences about vegetation structure, photosynthetic capacity, and leaf water content, among other ecological data. The Normalized Difference Vegetation Index (NDVI) is the most widely used index and is defined as the difference between the spectral reflectance in the near-infrared (NIR) and red (RED) wavelengths divided by their sum, where NIR and RED are the light reflected by the vegetation in the NIR and RED wavelength bands, respectively (Gandhi et al., 2015; Yengoh et al., 2016). This index is based on the fact that chlorophyll absorbs RED, while the mesophyll disperses NIR (Pettorelli et al., 2005), and its values range between \\(-1\\) and \\(+1\\), where negative values correspond to unvegetated areas (Myneni et al., 1995). The index is highly sensitive to changes in canopy photosynthetic activity, and such changes can be used as an early warning of habitat modification (Leisher et al., 2013; Nagendra et al., 2013). In terrestrial ecosystems, the amount and distribution of vegetation directly influence the abundance and distribution of resident and migrant animals; thus, NDVI is a valuable tool not only for assessing photosynthetic activity but also for inferring overall ecosystem status at large spatial and temporal scales (Pettorelli et al., 2005). Time series analysis of NDVI has been used to assess PA effectiveness, allowing researchers to differentiate between seasonal and yearly changes (Waylen et al., 2014; Herrero et al., 2016; Southworth et al., 2016; Herrero et al., 2020). The research reported here explores the vegetation status of NHNP among its three protection categories along with a neighboring unprotected area.
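Because NDVI is central to everything that follows, a minimal sketch of the index itself may help; the function below implements the (NIR - RED) / (NIR + RED) ratio for two reflectance arrays, with the band arrays and toy values being placeholders rather than data from this study.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - RED) / (NIR + RED), in [-1, 1]; NaN where both bands are 0."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    out = np.full(denom.shape, np.nan)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Toy example: dense canopy reflects strongly in NIR and absorbs RED,
# so the first pixel scores high (~0.8) and the second low (~0.09).
print(ndvi(np.array([0.45, 0.30]), np.array([0.05, 0.25])))
```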
We aimed to assess the effectiveness of the three levels of protection in NHNP during the 21st century by comparing NDVI time series from 2000 to 2020 for areas under each level of protection to those for "matching" (or "apples-to-apples") unprotected areas, in order to reduce variation associated with different land characteristics (Joppa and Pfaff, 2011). Variation in NDVI can be associated with variables including land cover, precipitation, temperature, and elevation; therefore, we incorporated these factors in our analysis. Using NDVI as a comprehensive metric for all change in vegetation aimed to identify overall trends while serving to suggest influential land change processes for further analysis.

## Methods

### Study Area

Nahuel Huapi National Park is located between parallels 40°08′18″ and 41°35′19″ South and longitudes 71°50′52″ and 71°04′45″ West (**Figure 1**). It is bordered in the west by Chile, in the north by Lanin National Park, in the east by the Patagonian steppe (a small area of which is included within the NR), and in the south by the Manso river. Its total area of 7,172.61 km\\({}^{2}\\) is subdivided into different management categories. The eastern 2,253.8 km\\({}^{2}\\) are designated National Reserve (NR), IUCN category VI. This area contains several private properties, where authorized livestock and logging are frequent. Furthermore, this category allows more intensive use and the development of infrastructure related to tourism (hotels, ski resorts).

Figure 1: Map of Nahuel Huapi National Park. Levels of protection are indicated by different colors.

The western 4,918.81 km\\({}^{2}\\) are designated National Park (NP), IUCN category II (WDPA, 2021). There are fewer private properties in this region, and although livestock and logging are forbidden by law in a category II PA, these activities are still common. More extensive use is also allowed, such as campsites and low-impact developments. In 1990, pristine areas within the National Park were declared Strict Nature Reserve (SNR, IUCN category Ia), covering 755.25 km\\({}^{2}\\); later, in 1994, a new subdivision of the NP was proclaimed Wilderness Natural Reserve (WNR, IUCN category Ib). Information regarding the decision process that led to the selection of the areas included under the strictest category level is unavailable, other than that they were pristine areas, remotely located, with difficult or no ground access. Consequently, several are located at higher elevations (where conflicts with human uses are minimal) or in areas accessible only by boat. This category does not allow any use other than patrolling and scientific research (Rivarola et al., 2021a). In this study, we investigated the NR, NP, and SNR, insofar as we lacked access to spatial data that included the WNR. Additionally, we evaluated 2,423.8 km\\({}^{2}\\) of unprotected area located south of the Manso river, using the same NHNP longitudinal range (**Figure 1**).

NHNP lies within the Valdivian Ecoregion, where the High Andean, Patagonian Forests, and Steppe biomes are represented (Burkart et al., 1999). It has a mean annual precipitation of 1,800 mm, with a marked west-east gradient (from above 2,000 mm to approximately 200 mm) owing to the shadow effect of the Andes Mountains (Cabrera, 1976). Most of NHNP is covered by forest dominated by evergreen or deciduous species of the genus _Nothofagus_ (_N. pumilio_--Lenga, _N. antarctica_--Ñire, _N. dombeyi_--Coihue, _N. betuloides_--Guindo, and _N.
nitida_--Coihue de Chiloé) and _Araucaria araucana_ (Araucaria) in the northern area, with _Austrocedrus chilensis_ (cypress) along the eastern fringe, in the ecotone between forest and steppe (Cabrera, 1976).

### Data Collection

To quantify NDVI change in the study area in relation to significant environmental variables, we collected satellite imagery, precipitation, and temperature data from available remote sensing products for the years 2000-2020, leveraging the resources of Google Earth Engine (Gorelick et al., 2017). While limited remote sensing data are available before 2000, sufficient data available starting in the year 2000 allowed us to analyze vegetation change one decade after the strict reserves were first established. In addition to remote sensing data, we also acquired fire history (Mermoz et al., 2005), land cover provided by CIEFAP (Centro de Investigación y Extensión Forestal Andino Patagónico), and tourism data from APN (Administración de Parques Nacionales).

We created cloudless composites using Landsat data by selecting the "greenest" pixel from all scenes captured between December 1 and March 1 of each summer season. We selected only summer data because the presence of evergreen and deciduous forests prevents us from evaluating photosynthetic activity in fall and winter, while the frequent presence of snow during the spring would also bias our analysis. The greenest pixel was taken to be that with the highest NDVI, which we computed for each available scene during each season. At least 25 scenes were available for each season. Composites before 2013 were created using Landsat 7 scenes (Landsat 7 Collection 1 Tier 1 TOA Reflectance, courtesy of the U.S. Geological Survey), and composites from 2013 onward were created using Landsat 8 (Landsat 8 Collection 1 Tier 1 TOA Reflectance, courtesy of the U.S. Geological Survey). Selecting the greenest pixel allowed for the best inter-season NDVI comparisons. However, while the method works well for creating cloudless composites in areas with vegetation present, pixels containing clouds are often selected over areas with low NDVI values, such as water bodies, urban areas, and areas above the tree line. Because these areas contained little vegetation (e.g., sparse ground vegetation growing above the tree line) or no vegetation (water bodies, rocks, bare ground), they were irrelevant to our analysis, and we masked them from the resulting composites to exclude them from analysis. The mask was taken from land cover data provided to us by CIEFAP and shown in **Figure 2**.

Figure 2: Land cover of NHNP and the neighboring area.

We collected precipitation data within the study area from the Global Precipitation Measurement (GPM) Integrated Multi-satellite Retrievals for GPM (IMERG) dataset (Huffman et al., 2019). IMERG provides total precipitation for every month at 0.1° \\(\\times\\) 0.1° (approximately 11.1 \\(\\times\\) 11.1 km) spatial resolution and calibrates multiple satellite estimates with ground sources.

Figure 4: NDVI values over time recorded for a total of 375 random points equally distributed among the four levels of protection and four dominant tree species (_Nothofagus dombeyi_, _N. antarctica_, _N. pumilio_, and _Austrocedrus chilensis_).
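The greenest-pixel compositing described above can be expressed compactly in the Google Earth Engine Python client. The sketch below is a minimal version under stated assumptions: the Landsat 7 Collection 1 TOA asset ID and band names (B4 = NIR, B3 = red) follow the product cited above, while the rectangle is only a rough approximation of the park's bounding coordinates, not the actual study-area geometry.

```python
# Minimal greenest-pixel summer composite with the Earth Engine Python client.
import ee

ee.Initialize()

# Placeholder geometry: approximate NHNP bounding box from the coordinates
# given in the Study Area section (west, south, east, north).
region = ee.Geometry.Rectangle([-71.85, -41.59, -71.08, -40.14])

def add_ndvi(image):
    # NDVI = (NIR - RED) / (NIR + RED); B4/B3 are the Landsat 7 NIR/red bands.
    return image.addBands(image.normalizedDifference(['B4', 'B3']).rename('NDVI'))

def summer_composite(year):
    """Greenest-pixel composite for the austral summer starting in `year`."""
    col = (ee.ImageCollection('LANDSAT/LE07/C01/T1_TOA')
           .filterBounds(region)
           .filterDate(f'{year}-12-01', f'{year + 1}-03-01')
           .map(add_ndvi))
    # qualityMosaic keeps, per pixel, the observation with the highest NDVI.
    return col.qualityMosaic('NDVI')

composite_2000 = summer_composite(2000)
```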
Figure 3: Variation in mean NDVI at different levels of protection (0: unprotected area, 1: National Reserve, 2: National Park, 3: Strict Nature Reserve). Horizontal lines on top indicate statistical differences between levels of protection (pairs).

We collected temperature data within the study area from the MOD11A1 V6 product derived from data collected by satellites equipped with the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument (Wan et al., 2015). The data provide daily (one daytime and one nighttime) land surface temperature estimates at 1 \\(\\times\\) 1 km resolution. We computed the mean daytime temperature for every year (January 1-December 31) for use in the analysis.

### Data Analysis

We calculated the mean NDVI per austral summer from 2000 to 2020 at each level of protection (SNR, NP, NR, Unprotected). We evaluated the synergistic effect of protection and time on changes in mean NDVI using an analysis of covariance (ANCOVA), followed by a post hoc Bonferroni adjustment test. Level of protection and year (2000-2020) were the categorical and continuous variables, respectively.

Since differences in NDVI among the levels of protection do not necessarily mean healthier vegetation, but rather could be related to differences in dominant species cover (e.g., the NP mostly dominated by _N. dombeyi_ vs. the NR mostly dominated by _A. chilensis_), precipitation, elevation, or temperature, in a second step we selected random points in each of the Unprotected, NR, NP, and SNR levels. Using the available data regarding dominant species cover across NHNP (Margutti & Arosteguy, 2019), we selected 25 random points per area dominated by one of the four most common dominant tree species, _N. dombeyi_, _N. antarctica_, _N. pumilio_, and _A. chilensis_, accounting for a total of 100 random points each for the NP, NR, and Unprotected area, and 75 random points for the SNR, because no area is dominated by _A. chilensis_ within this category (**Figure 2**). For these 375 random points we extracted the elevation, along with the NDVI, temperature, and precipitation values from 2000 to 2020. For each random point we ran a linear regression in which we evaluated NDVI change over time, resulting in a total of 375 linear regressions. From each linear regression we extracted the slope, as it provided us with information regarding the general trend of NDVI change per level of protection. Lastly, we ran an ANOVA test to evaluate whether the slopes (indicating NDVI change over time) varied among the four levels of protection, followed by a Tukey HSD test.

_Nothofagus dombeyi_, _N. antarctica_, _N. pumilio_, and _A. chilensis_ respond differently to disturbances such as wildfire, drought, livestock, and windstorms (Raffaele et al., 2014). To investigate whether these species experienced different trends among the levels of protection, we ran a two-way ANOVA test, grouping by level of protection and dominant species and using as the dependent variable the 375 slopes from the random points mentioned above, followed by a Bonferroni test.

Previous studies have indicated that precipitation plays an important role in determining NDVI (Herrero et al., 2020). Other characteristics, such as elevation and temperature, might also affect NDVI values.

Figure 5: Comparison of the NDVI increases over time, indicated as positive slope values, among the different levels of protection (DF = 3, 368; F = 4.366; p = 0.005). 0: unprotected area, 1: National Reserve, 2: National Park, and 3: Strict Nature Reserve.

In order to reduce the number of physical variables used to explain changes in NDVI, we used Pearson's
product-moment correlation to evaluate the correlation between precipitation and elevation (t = 2.16, df = 7,810, \\(p\\)-value = 0.03) and the correlation between precipitation and temperature (t = \\(-\\)36.20, df = 7,810, \\(p\\)-value \\(<\\) 2.2e-16). We ran a linear regression to evaluate whether annual precipitation in the studied area affected its NDVI. We transformed the NDVI values using a Box-Cox transformation to satisfy the normality assumption, then evaluated by ANCOVA whether the different levels of protection differed in annual precipitation.

To identify areas where changes in NDVI over the 20-year period were more notable, we filtered those pixels where values changed by more than 1 standard deviation (1 SD = 0.05) in either the positive or negative direction. We ran a linear regression to evaluate whether increases in NDVI were related to the level of protection, and a second linear regression to analyze the relation between NDVI decreases and level of protection. As mentioned above, total area differs among levels of protection; to standardize the measure among them, we used the percent of area (of each level of protection) where NDVI changed by more than 1 SD. While an increase in NDVI is associated with vegetation growth, this does not necessarily mean that natural vegetation is thriving; in fact, several undesired landscape modifications can induce that change, such as land abandonment and agricultural expansion (Pan et al., 2018). On the other hand, a decrease in NDVI in areas dominated by a particular tree species could reflect disturbances with negative effects on that species. To investigate this situation further, we conducted an ANOVA test to evaluate whether decreases in NDVI were related to the type of forest. We performed the statistical analyses using R 4.0.3 (R Core Team, 2014).

Figure 6: Comparison of the NDVI increases over time, indicated as positive slope values, for each of the four most common dominant tree species, among the different levels of protection. 0: unprotected area, 1: National Reserve, 2: National Park, and 3: Strict Nature Reserve.

The study area is highly visited during the summer months, when the risk of wildfire is higher owing to low seasonal precipitation. Campsites distributed in the NP, the NR, and the neighboring unprotected area receive thousands of visitors, increasing the risk of wildfires. We performed a colocation analysis to determine whether fires were more likely to occur near designated campsites. Colocation analysis yields a colocation quotient for each fire centroid, where values less than 1 indicate isolation and values greater than 1 indicate spatial correlation (Leslie & Kronenfeld, 2011). We performed the analysis using the Spatial Statistics toolbox in ESRI ArcGIS Pro version 2.8, using the four nearest neighbors. Fire data were provided by INIBIOMA (Instituto de Investigaciones en Biodiversidad y Medioambiente) and campsite data were provided by NHNP.

## Results

We performed an ANCOVA to determine the effect of level of protection on NDVI after controlling for time (years). There was a statistically significant difference in NDVI between the groups (F (3,91) = 170.43, \\(p<0.0001\\)). Mean NDVI in the Strict Nature Reserve (0.724 \\(\\pm\\) 0.003) significantly exceeded that in the National Park (0.711 \\(\\pm\\) 0.003), the Unprotected area (0.707 \\(\\pm\\) 0.003), and the National Reserve (0.642 \\(\\pm\\) 0.003), \\(p<0.001\\) (**Figure 3**).
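The per-point trend analysis described under Data Analysis was run in R 4.0.3; the Python sketch below is a functional equivalent, not the authors' script. It extracts the slope of each point's NDVI-versus-year regression and compares the slopes across protection levels with a one-way ANOVA followed by Tukey's HSD. All input arrays here are synthetic stand-ins for the 375 random points.

```python
# Per-point NDVI trend slopes, ANOVA across protection levels, and Tukey HSD.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

years = np.arange(2000, 2021)

def point_slopes(ndvi: np.ndarray) -> np.ndarray:
    """OLS slope of NDVI vs. year for each random point (one row per point)."""
    return np.array([stats.linregress(years, row).slope for row in ndvi])

# Synthetic stand-in data: 375 points x 21 summers with a mild greening trend.
rng = np.random.default_rng(0)
ndvi = 0.65 + 0.002 * (years - 2000) + rng.normal(0, 0.02, size=(375, 21))
level = np.repeat(['Unprotected', 'NR', 'NP', 'SNR'], [100, 100, 100, 75])

slopes = point_slopes(ndvi)

# One-way ANOVA on the slopes across the four protection levels...
groups = [slopes[level == g] for g in np.unique(level)]
print(stats.f_oneway(*groups))

# ...followed by Tukey's HSD pairwise comparisons, as in the study design.
print(pairwise_tukeyhsd(slopes, level))
```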
Linear regressions based on the 375 random points showed an overall NDVI increase over time (**Figure 4**). We analyzed differences in slope among protection categories by ANOVA. The greening process (NDVI increase over time) was significantly higher in the unprotected area than in the National Reserve (\\(p\\) = 0.022), the National Park (\\(p\\) = 0.01), and the Strict Nature Reserve (\\(p\\) = 0.03) (**Figure 5**). We further investigated whether the dominant tree species had different trends of NDVI change over time in each protection category with a two-way ANOVA. The interaction term between level of protection and dominant tree species was not significant (DF = 8, 357; F = 0.283; \\(p\\) = 0.971), indicating that the tree species did not follow different trends over time in different protection categories (**Figure 6**). Nevertheless, the "Group" term indicated that different species did follow different trends without accounting for the protection level (DF = 3, 357; F = 4.036; \\(p\\) = 0.008). We then computed a pairwise comparison using a Bonferroni test and found that the trend of NDVI change over time was more pronounced in _N. dombeyi_ than in _N. pumilio_ (\\(p\\) = 0.0009), with no differences among the other species.

Precipitation and temperature were negatively correlated (t = \\(-\\)36.198, df = 7,810, \\(p\\)-value \\(<\\) 2.2e-16, r = \\(-\\)0.38), and areas with higher temperature had lower NDVI. On the other hand, precipitation and elevation were positively correlated (t = 2.1616, df = 7,810, \\(p\\)-value = 0.03, r = 0.02). We further analyzed the effects of precipitation on NDVI, since it was the physical variable that explained the highest fraction of the variation. We found a positive correlation between annual precipitation and NDVI (F (1,82) = 17.87, \\(p\\) = 6.1e-05). We conducted an ANCOVA in order to determine whether annual precipitation has changed in the studied area and, furthermore, whether this change is consistent among the different levels of protection. We found an overall decrease in annual precipitation (F (1,79) = 38.428, \\(p\\) = 2.42e-08), although this decrease differed among levels of protection (F (3,79) = 21.469, \\(p\\) = 2.88e-10) (**Figure 7**).

Figure 7: Annual precipitation change for each level of protection. prot0: unprotected area, prot1: National Reserve, prot2: National Park, prot3: Strict Nature Reserve.

In addition, a post-hoc Bonferroni analysis showed that the estimated mean annual precipitation in the National Reserve was significantly lower than in the other three categories and that the estimated mean precipitation in the Strict Nature Reserve was higher than in the unprotected area (**Table 1; Figure 8**).

Table 1: Results from a Bonferroni test indicating differences in annual precipitation change between protection categories. Statistical significance is indicated by ***.

| Levels compared | DF | t | p |
| --- | --- | --- | --- |
| Unprotected vs. NR | 79 | 4.28 | 5.16e-5 *** |
| Unprotected vs. NP | 79 | -1.66 | 0.100 |
| Unprotected vs. SNR | 79 | -3.35 | 1.24e-3 *** |
| NR vs. NP | 79 | -5.97 | 7.07e-8 *** |
| NR vs. SNR | 79 | -7.63 | 4.5e-1 *** |
| NP vs. SNR | 79 | -1.69 | 0.006 |

Areas where NDVI changed by more than 1 standard deviation (both positive and negative) are indicated in **Figure 9**.
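The greater-than-1-SD filtering reported above reduces to a simple per-pixel mask. A minimal sketch, assuming the 2000 and 2020 composites are available as NumPy arrays with NaN outside the land-cover mask (the array and function names are illustrative):

```python
import numpy as np

SD = 0.05  # 1 SD of NDVI change, as reported in the text

def change_masks(start: np.ndarray, end: np.ndarray, sd: float = SD):
    """Boolean masks of pixels that greened or browned by more than 1 SD."""
    delta = end - start
    greening = delta > sd     # NDVI increased by more than 1 SD
    browning = delta < -sd    # NDVI decreased by more than 1 SD
    return greening, browning

def percent_area(mask: np.ndarray, valid: np.ndarray) -> float:
    """Percent of valid (unmasked) area flagged, to standardize across
    protection categories of different total extent."""
    return 100.0 * mask[valid].sum() / valid.sum()
```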
The percent of greening area in the unprotected area significantly exceeded that in the NR (Adj. R-squared = 0.435, F (4, 91) = 19.26, \\(p\\) = 1.666e-11), while no significant differences between or among the other categories were found (**Figure 10**). On the contrary, the same analysis applied to the percent of area with a decreased NDVI yielded no relationship with years or level of protection (Adj. R-squared = 0.001, F (4, 91) = 1.039, \\(p\\) = 0.392) (**Figure 11**). To evaluate whether negative changes in NDVI were more frequent in forests dominated by a particular tree species, we conducted an ANOVA. We found a significant difference between the number of pixels with a decreased NDVI and the type of forest (F (4,15) = 3.922, \\(p\\) = 0.023). Forests dominated by _N. pumilio_ contained larger proportional areas with a decreased NDVI than did mixed forests (\\(p\\) = 0.0339) and forests dominated by shrub species (\\(p\\) = 0.026) (**Figure 12**).

As shown in **Figure 13** and **Table 2**, the colocation analysis suggests that some fires were more likely to occur near campsites than if they were randomly distributed. However, none of the quotients is statistically significant, which could be due to the low number of both fires (n = 23) and campsites (n = 28). Overall, approximately 1.7% of the land within the study area burned between 2000 and 2020.

## Discussion

Remote sensing data provide an excellent opportunity to evaluate land surface changes over time at different scales, from local to global assessments, and they have been widely used to evaluate fragmentation and degradation within protected areas (Nagendra et al., 2013). Leisher et al. (2013) analyzed land and forest degradation in 1,788 PAs in Latin America between 2004 and 2009, concluding that the rate of degradation increased from 0.04 to 0.10% per year, resulting in 1,097,618 ha degraded. They evaluated 166 Argentinean PAs, concluding that almost 20% of them have experienced land and forest degradation, despite having a funding level three times the average for the Latin American countries (US$ 8.60 versus US$ 2.50 per hectare).

Figure 8: Estimated mean annual precipitation for each level of protection based on 20 years of precipitation data (period 2000-2020). Statistical differences among the protection categories are indicated with *. prot0: unprotected area, prot1: National Reserve, prot2: National Park, prot3: Strict Nature Reserve. Horizontal lines on top indicate statistical differences between levels of protection (pairs).

In this study, we evaluated in detail the conservation status of the oldest and one of the largest of the Argentinean PAs, Nahuel Huapi National Park (NHNP). The second NHNP Management Plan, published in 2019 (Margutti & Arosteguy, 2019), highlighted the importance of unifying the criteria used to subdivide the PA following its formal and legal delimitation between different conservation categories (Strict Nature Reserve, Wilderness Natural Reserve, National Park, National Reserve; IUCN categories Ia, Ib, II and VI, respectively) rather than by zoning categories based on its uses, as the first management plan had proposed (Gil et al., 1986). Because the strict category was created in 1990, our study spanning 2000 to 2020 provides the best up-to-date broadscale evidence regarding the effectiveness of this high conservation category. NDVI values were consistently higher in the SNR than in the other protection categories and the unprotected area in an assessment based on the average NDVI per category per year.
This result would provide an optimistic assessment of the effectiveness of the strictest category. However, differences in area, location (affecting precipitation, temperature, and elevation), and dominant tree species among the protection categories might bias our understanding of differences in effectiveness among them. To address this problem, we re-evaluated NDVI changes based on the selection of 375 random points, with equal representation of the different dominant tree species among the four levels. Surprisingly, with this second analysis, the unprotected area showed the highest values of NDVI, which differ statistically from the values reported in the three categories inside the PA. A common approach is to compare vegetation inside and outside PAs. However, the border of a PA may coincide with a natural change in habitat type, leading to misinterpretation regarding the effectiveness of such a PA (Mas, 2005; Joppa & Pfaff, 2011; Ferraro et al., 2013). We purposely selected as the unprotected area the neighboring southern region, based on landscape, climatic conditions, and floral similarities with the PA, since the eastern area transitions into steppe and the western area is in a different nation, Chile. Finally, NDVI values within NHNP coincided with the levels of protection (NDVI values SNR > NP > NR).

The general NDVI change trend was positive for all four categories. We found similar results with both assessments (average NDVI per category and using the 375 pixel values). The observation of increasing NDVI values agrees with the greening phenomenon reported globally (Zhu et al., 2016). However, interpreting positive NDVI changes remains challenging because no rigorous method has yet been validated (Leisher et al., 2013). Greener does not necessarily mean better conserved. Seasonal or annual changes in NDVI could be associated with an increase in leaf size, number of leaves per plant, plant density, or crops grown per year, but they can also reflect replacement of natural ecosystems by agricultural lands, or non-native species colonization after disturbances (Piao et al., 2020). We selected the widespread NDVI as a comprehensive metric to assess overall vegetation trends between levels of protection. Established land change detection algorithms, such as LandTrendr (Kennedy et al., 2018) and CCDC (Arevalo et al., 2020), for example, could be used to study specific land change processes, such as forest degradation/regeneration (e.g., Piffer et al., 2022), in more detail. However, algorithms designed to detect abrupt changes may not identify gradual land change processes, such as the spread of invasive species or global greening.

Figure 9: Map of areas with NDVI increase (blue) and decrease (yellow) in the period 2000-2020.

In the case of NHNP, no forest has been replaced by agricultural land; rather, the more gradual land change processes of livestock grazing and the spread of non-native species were identified as the main threats to the PA, along with the more abrupt disturbances of logging and wildfire (Margutti & Arosteguy, 2019). While a compelling body of evidence depicts negative effects associated with non-native plant species in NHNP (Simberloff et al., 2002; Nunez, 2008; Svriz et al., 2013; Franzese & Ghermandi, 2014), where 25% of the plant species are non-native (Raffaele et al., 2014), further studies are needed to evaluate if this colonization is related to the increased NDVI
values reported here.

Figure 10: Percent of area with an increase in NDVI, per level of protection, over time. prot0: unprotected area, prot1: National Reserve, prot2: National Park, prot3: Strict Nature Reserve.

Figure 11: Percent of area with a decrease in NDVI, per level of protection, over time. prot0: unprotected area, prot1: National Reserve, prot2: National Park, prot3: Strict Nature Reserve.

Furthermore, introduced animals impact forest structure and regeneration in NHNP (Barrios-Garcia & Simberloff, 2013; Nunez et al., 2013; Rodriguez-Cabal et al., 2013; Martin-Albarracin et al., 2015). Land change processes can interact to produce complex outcomes. For example, the combination of cattle and wildfires negatively affected the regeneration of _Nothofagus dombeyi_-_Austrocedrus chilensis_ mixed forests in NHNP, facilitating a post-fire transition from forest to bamboo-dominated shrubland (Blackhall et al., 2008), which could increase NDVI values (Franco et al., 2020). Specific studies addressing this question are needed and could be aided by land cover classification in combination with established land change detection algorithms.

We found a positive relationship between NDVI and precipitation, supporting previous findings (Herrero et al., 2020). Furthermore, the variation in precipitation among the levels of protection resembles the differences found among their NDVI values (SNR > NP > Outside > NR; see **Figure 3** and **Figure 8**), suggesting that precipitation, rather than level of protection, might be the cause of these differences in NDVI. However, while mean annual precipitation fluctuated over time across this region, the overall trend manifests a decreasing pattern. Precipitation therefore cannot explain the increasing NDVI trend.

Table 2: Colocation analysis summary for the relation between wildfires and campsites (columns: ID, Year, Area (ha), Colocation Quotient, p-value, Colocation Label).

Figure 12: Total number of pixels where NDVI decreased over a 20-year period for each type of forest.

In addition to identifying the general greening pattern explained above, we further investigated this process by extracting the pixels where NDVI changed by more than 1 SD during the period studied. The unprotected area accounted for a larger increase compared to the NR, while this difference was not evident between the unprotected area and the NP, the unprotected area and the SNR, or within the three levels of protection inside NHNP. As stated above, interpretation of increasing NDVI is difficult; however, as a similar pattern was observed inside and outside the PA, the greening process could be a consequence of a global phenomenon such as climate change and increased CO2 (Ogunkoya et al., 2021), further exacerbated in the unprotected area, where extensive livestock raising and logging are common. A negative NDVI change could be more easily related to degradation and habitat fragmentation (Morton et al., 2005; Leisher et al., 2013). We found no evidence that degradation was more extensive in the unprotected area than in areas under any level of protection. However, during the year 2012 there was a peak in the percentage of area with a negative NDVI change inside the PA (NR ~30%, NP ~20%, SNR ~16%), while this value remained low in the unprotected area (~5%).
This result can be explained by the massive ash deposition caused by the Puyehue-Cordón Caulle volcanic eruption, which dispersed about 100 million metric tons of ash, covering 7.5 million ha in Patagonia (Wilson et al., 2013). While the central and northern areas of NHNP were severely affected by ash deposition, the southern portion of NHNP and the unprotected area evaluated in this study remained free of ash. The area most severely affected by ash deposition (Bignami et al., 2014) coincides with the yellow area observed in the NW section of **Figure 9**.

Fragmentation or degradation was more striking in areas dominated by _Nothofagus pumilio_ (both inside and outside the PA) than in areas dominated by mixed forests and shrublands. _Nothofagus pumilio_ forests are distributed at high elevations along approximately 3,000 km of the southern Andes mountain chain (Mathiasen and Premoli, 2010). A previous study found a positive correlation between _N. pumilio_ growth and precipitation, and a negative correlation with mean annual temperature (Lara et al., 2005). Severe droughts in the 20th century following relatively wet and cool years have been associated with a persistent decline in _N. pumilio_ growth, suggesting that current and future trends of lower precipitation and higher temperatures associated with climate change would further promote the decline of these forests (Rodriguez-Caton et al., 2016). Furthermore, wildfires have historically affected forests of _N. pumilio_ by direct burning (Veblen et al., 2003) and also by reducing root ectomycorrhizal colonization after fire (Longo et al., 2011).

Wildfires in the region are common between September and April, with record numbers in January and February, owing to the combination of low precipitation and high temperatures during the austral summer. The causes of most of these wildfires remain unknown owing to the lack of trained personnel and low budgets; however, human activities are thought to be closely related to their occurrence (Monjeau et al., 2005; Margutti and Arosteguy, 2019). Although tourists arrive in NHNP throughout the year, more than 23% of the total annual visitation occurs during the summer months (Área Técnica y Estadística, 2015). Campsites are distributed in the NP, the NR, and the unprotected area, and camping during summer is one of the favorite activities of both residents and tourists.

Figure 13: Fires and campsites within the study area.

Since the establishment of campsites within the PA is regulated by the APN (National Park Administration), we explored whether the locations of wildfires were associated with the authorized campsites. We used the available data (date, location, and area burnt) for 22 wildfires that occurred between 1999 and 2015 within NHNP. Although we found no relationship between these two variables, it is important to note that a total of 239 fires were recorded in this period, most of which (150) affected less than 0.5 ha (Margutti & Arosteguy, 2019); no location data were available for these, so we could not include them in our analysis. Furthermore, illegal campfires are common across the region, and the low number of personnel and the lack of appropriate vehicles make it difficult for authorities to prevent them (Rivarola et al., 2021a).
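The colocation analysis itself was run in ArcGIS Pro. For readers without that toolbox, the sketch below computes a simplified global colocation quotient in the spirit of Leslie & Kronenfeld (2011), using four nearest neighbors as in the study; the coordinates are random placeholders, not the actual fire and campsite locations, and the per-centroid quotients and permutation p-values of the ArcGIS implementation are not reproduced.

```python
# Simplified global colocation quotient (CLQ) of category A toward B:
# CLQ > 1 suggests A points tend to sit near B points.
import numpy as np
from scipy.spatial import cKDTree

def colocation_quotient(pts_a, pts_b, k=4):
    pts = np.vstack([pts_a, pts_b])
    labels_b = np.zeros(len(pts), dtype=bool)
    labels_b[len(pts_a):] = True
    tree = cKDTree(pts)
    # For each A point, the proportion of B points among its k nearest
    # neighbors (index 0 is the query point itself, so it is skipped).
    _, idx = tree.query(pts_a, k=k + 1)
    observed = labels_b[idx[:, 1:]].mean()
    expected = len(pts_b) / (len(pts) - 1)  # global share of B among others
    return observed / expected

rng = np.random.default_rng(1)
fires = rng.uniform(0, 10, size=(23, 2))      # 23 fire centroids (toy data)
campsites = rng.uniform(0, 10, size=(28, 2))  # 28 campsites (toy data)
print(colocation_quotient(fires, campsites))
```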
On the other hand, the combination of the current trend of warmer and drier summers following unusually dry springs (phenomena associated with La Niña events), uncommon electrical storms during the summer, and the massive accumulation of fuel material in the forests constitutes a permanent threat to these forests, across all levels of protection. The remote and isolated location of most SNR areas makes them particularly vulnerable to fires, since ground access is difficult, preventing a prompt response to many fires and resulting in thousands of hectares affected. On 7 December 2021, lightning ignited a wildfire in the southern area of NHNP, within the SNR. The initial, small fire could not be controlled because firefighters could not access the area. The wildfire is spreading and remains active at the time this manuscript is being written, 1.5 months after the initiation of the fire. It is estimated that more than 6,000 ha of pristine native forests within NHNP (both in the SNR and the NP) have been burned (ADN, 2022; Sala de Noticias, 2022).

Effectiveness assessments should be performed regularly, using multiple and complementary approaches; this would provide crucial information to update management plans as needed and, ultimately, would secure PA conservation goals. The inclusion within the WDPA (the global database on PAs) of PA protection categories, reports on effectiveness assessments implemented in PAs with multiple categories, and current and past management plans would allow consistent evaluation at the local and global scales. Effective management of PAs is essential to conserve natural ecosystems. Their role in ecosystem services and in preserving biodiversity goes beyond the limits of a PA, and they constitute a substantial fraction of a country's natural capital, supporting national sustainable development and human well-being (Bovarnick et al., 2010). The Argentinean PA system has a long history, and despite political and economic instability in the country, the general trend of PA establishment and management by the APN is promising (Rivarola et al., 2021a). Weaknesses and threats were well identified in the latest management plan for NHNP, and general and specific goals were established for both the short and the long term. This study provides new information that stakeholders in NHNP could take into account to better assess the conditions and changes occurring in this PA and act accordingly. NHNP is an emblematic PA at the national and international levels, and its successful management would benefit not only the natural ecosystems represented in the area but also the people who are directly or indirectly connected to this PA.

## Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

## Author Contributions

MR developed the idea, obtained local data, did the literature review, analyzed the data, created the graphs, and wrote the manuscript. JD obtained, cleaned, and organized the data, and made the maps. DS revised and edited the manuscript. HH provided ideas, and revised and edited the manuscript.

## References

* ADN Sur (2022). Río Negro: el incendio cerca del Lago Steffen ya afectó unas 5.438 hectáreas. ADN Sur Agencia de Noticias, Comodoro Rivadavia, Chubut, Argentina. Available at: https://www.adnusr.com/ar.
* Aizen, M. A., and Ezcurra, C.
(1998). High Incidence of Plant-Animal Mutualisms in the Woody Flora of the Temperate Forest of Southern South America: Biogeographical Origin and Present Ecological Significance. Ecol. Austral 8, 217-236.
* Arévalo, P., Bullock, E. L., Woodcock, C. E., and Olofsson, P. (2020). A Suite of Tools for Continuous Land Change Monitoring in Google Earth Engine. Front. Clim. 2, 576740. doi:10.3389/fclim.2020.576740
* Armenteras, D., Gast, F., and Villarreal, H. (2003). Andean Forest Fragmentation and the Representativeness of Protected Natural Areas in the Eastern Andes, Colombia. Biol. Conserv. 113, 245-256.
* Armesto, J. J., Rozzi, R., Smith-Ramirez, C., and Arroyo, M. T. K. (1998). Conservation Targets in South American Temperate Forests. Science 282, 1271-1272.
* Arroyo, M. T. K., Riveros, M., Peñaloza, A., Cavieres, L., and Faggi, A. M. (1996). "Phytogeographic Relationships and Regional Richness Patterns of the Cool Temperate Rainforest Flora of Southern South America," in High-Latitude Rainforests and Associated Ecosystems of the West Coast of the Americas: Climate, Hydrology, Ecology, and Conservation, ed. R. G. Lawford et al. (New York, NY, USA), 143-172.
* Axelrod, D. I., Arroyo, M. T. K., and Raven, P. H. (1991).
* Bovarnick, A., Fernandez Baca, J., Galindo, J., and Negret, H. (2010). Financial Sustainability of Protected Areas in Latin America and the Caribbean: Investment Policy Guidance. New York, NY and Arlington, VA: United Nations Development Programme (UNDP) and The Nature Conservancy (TNC).
* Bruner, A. G., Gullison, R. E., Rice, R. E., and da Fonseca, G. A. B. (2001). Effectiveness of Parks in Protecting Tropical Biodiversity. Science 291, 125-128. doi:10.1126/science.291.5501.125
* Buchanan, G. M., Butchart, S. H., Dutson, G., Pilgrim, J. D., Steininger, M. K., Bishop, K. D., et al. (2008). Using Remote Sensing to Inform Conservation Status Assessment: Estimates of Recent Deforestation Rates on New Britain and the Impacts upon Endemic Birds. Biol. Conserv. 141, 56-66. doi:10.1016/j.biocon.2007.08.023
* Burkart, R. (2005). "Las Áreas Protegidas de la Argentina," in La Situación Ambiental Argentina 2005, eds. A. Brown, U. Martínez Ortiz, M. Acerbi, and J. Corcuera (Buenos Aires, Argentina: Administración de Parques Nacionales), 399-431.
* Burkart, R., Bárbaro, N. O., Sánchez, R. O., and Gómez, D. A. (1999). Eco-Regiones de la Argentina. Buenos Aires, Argentina: Administración de Parques Nacionales, Presidencia de la Nación, 45.
* Cabrera, A. L. (1976). Regiones Fitogeográficas Argentinas. Buenos Aires: ACME.
* Caro, T., Gardner, T. A., Stoner, C., Fitzherbert, E., and Davenport, T. R. B. (2009). Assessing the Effectiveness of Protected Areas: Paradoxes Call for Pluralism in Evaluating Conservation Performance. Divers. Distrib. 15, 178-182. doi:10.1111/j.1472-4642.2008.00522.x
* CBD (2010). Decision Adopted by the COP to the CBD at its 10th Meeting (UNEP/CBD/COP/DEC/X/2). Montreal: Secretariat of the CBD.
* Chape, S., Harrison, J., Spalding, M., and Lysenko, I. (2005). Measuring the Extent and Effectiveness of Protected Areas as an Indicator for Meeting Global Biodiversity Targets. Phil. Trans. R. Soc. B 360, 443-455.
doi:10.1098/rstb.2004.1592
* Coad, L., Leverington, F., Knights, K., Geldmann, J., Eassom, A., Kapos, V., et al. (2015). Measuring Impact of Protected Area Management Interventions: Current and Future Use of the Global Database of Protected Area Management Effectiveness. Phil. Trans. R. Soc. B 370, 20140281. doi:10.1098/rstb.2014.0281
* Coad, L., Watson, J. E., Geldmann, J., Burgess, N. D., Leverington, F., Hockings, M., et al. (2019). Widespread Shortfalls in Protected Area Resourcing Undermine Efforts to Conserve Biodiversity. Front. Ecol. Environ. 17, 259-264. doi:10.1002/fee.2042
* Dudley, N. (2013). Guidelines for Applying Protected Area Management Categories Including IUCN WCPA Best Practice Guidance on Recognising Protected Areas and Assigning Management Categories and Governance Types. Gland: IUCN, 86.
* Área Técnica y Estadística (2015). Temporada Estival Enero y Febrero, Comparación Anual. San Carlos de Bariloche: Secretaría Municipal de Turismo, 16.
* Ferraro, P. J., Hanauer, M. M., Miteva, D. A., Canavire-Bacarreza, G. J., Pattanayak, S. K., and Sims, K. R. E. (2013). More Strictly Protected Areas Are Not Necessarily More Protective: Evidence from Bolivia, Costa Rica, Indonesia, and Thailand. Environ. Res. Lett. 8, 025011. doi:10.1088/1748-9326/8/2/025011
* Franco, M. G., Mundo, I. A., and Veblen, T. T. (2020). Field-Validated Burning Severity Mapping in North Patagonian Forests. Remote Sens. 12, 214. doi:10.3390/rs12020214
* Franzese, J., and Ghermandi, L. (2014). Early Competition between the Exotic Herb Rumex acetosella and Two Native Tussock Grasses with Different Palatability and Water Stress Tolerance. J. Arid Environ. 106, 58-62. doi:10.1016/j.jaridenv.2014.03.004
* Gandhi, G. M., et al. (2015). NDVI: Vegetation Change Detection Using Remote Sensing and GIS - A Case Study of Vellore District. Procedia Comput. Sci. 57, 1199-1210. doi:10.1016/j.procs.2015.07.415
* Geldmann, J., Coad, L., Barnes, M. D., Craigie, I. D., Woodley, S., Balmford, A., et al. (2018). A Global Analysis of Management Capacity and Ecological Outcomes in Terrestrial Protected Areas. Conserv. Lett. 11, e12434. doi:10.1111/conl.12434
* Geneletti, D., and van Duren, I. (2008). Protected Area Zoning for Conservation and Use: A Combination of Spatial Multicriteria and Multiobjective Evaluation. Landsc. Urban Plan. 85, 97-110. doi:10.1016/j.landurbplan.2007.10.004
* Gil et al. (1986). (Buenos Aires, Argentina: APN).
* Gorelick, N., Hancher, M., Dixon, M., Ilyushchenko, S., Thau, D., and Moore, R. (2017). Google Earth Engine: Planetary-Scale Geospatial Analysis for Everyone. Remote Sens. Environ. 202, 18-27. doi:10.1016/j.rse.2017.06.031
* Hansen, M. C., and Loveland, T. R. (2012). A Review of Large Area Monitoring of Land Cover Change Using Landsat Data. Remote Sens. Environ. 122, 66-74. doi:10.1016/j.rse.2011.08.024
* Herrero, H., Southworth, J., and Bunting, E. (2016). Utilizing Multiple Lines of Evidence to Determine Landscape Degradation within Protected Area Landscapes: A Case Study of Chobe National Park, Botswana, from 1982 to 2011. Remote Sens. 8, 623. doi:10.3390/rs8080623
* Herrero, H., Southworth, J., Muir, C., Khatami, R., Bunting, E., and Child, B. (2020). An Evaluation of Vegetation Health in and Around Southern African National Parks during the 21st Century (2000-2016). Appl. Sci. 10, 2366. doi:10.3390/app10072366
* Hockings, M., Stolton, S., Leverington, F., Dudley, N., Courrau, J., and Valentine, P. (2006). Evaluating Effectiveness: A Framework for Assessing Management Effectiveness of Protected Areas.
Gland, Switzerland and Cambridge, UK: IUCN, The World Conservation Union.
* Hockings, M. (2003). Systems for Assessing the Effectiveness of Management in Protected Areas. BioScience 53, 823-832.
* Huffman, G. J., Stocker, E. F., Bolvin, D. T., Nelkin, E. J., and Tan, J. (2019). GPM IMERG Final Precipitation L3 1 Month 0.1 Degree x 0.1 Degree V06. Greenbelt, MD: Goddard Earth Sciences Data and Information Services Center (GES DISC).
* Hull, V., Xu, W., Liu, W., Zhou, S., Vila, A., Zhang, J., et al. (2011). Evaluating the Efficacy of Zoning Designations for Protected Area Management. Biol. Conserv. 144, 3028-3037. doi:10.1016/j.biocon.2011.09.007
* Hunter, M. L., and Gibbs, J. P. (2007). Fundamentals of Conservation Biology. Malden, MA, USA: Blackwell Publishing.
* Joppa, L. N., and Pfaff, A. (2011). Global Protected Area Impacts. Proc. R. Soc. B 278, 1633-1638. doi:10.1098/rspb.2010.1713
* Kennedy, R., Yang, Z., Gorelick, N., Braaten, J., Cavalcante, L., Cohen, W., et al. (2018). Implementation of the LandTrendr Algorithm on Google Earth Engine. Remote Sens. 10, 691. doi:10.3390/rs10050691
* Lara, A., Villalba, R., Wolodarsky-Franke, A., Aravena, J. C., Luckman, B. H., and Cuq, E. (2005). Spatial and Temporal Variation in Nothofagus pumilio Growth at Tree Line along its Latitudinal Range (35°40′-55° S) in the Chilean Andes. J. Biogeogr. 32, 879-893. doi:10.1111/j.1365-2699.2005.01191.x
* Leisher, C., Touval, J., Hess, S., Boucher, T., and Reymondin, L. (2013). Land and Forest Degradation inside Protected Areas in Latin America. Diversity 5, 779-795. doi:10.3390/d5040779
* Leslie, T. F., and Kronenfeld, B. J. (2011). The Colocation Quotient: A New Measure of Spatial Association between Categorical Subsets of Points. Geogr. Anal. 43, 306-326. doi:10.1111/j.1538-4632.2011.00821.x
* Leverington, F., Costa, K. L., Pavese, H., Lisle, A., and Hockings, M. (2010a). A Global Analysis of Protected Area Management Effectiveness. Environ. Manag. 46, 685-698. doi:10.1007/s00267-010-9564-5
* Longo, M. S., Urcelay, C., and Nouhra, E. (2011). Long Term Effects of Fire on Ectomycorrhizas and Soil Properties in Nothofagus pumilio Forests in Patagonia. For. Ecol. Manag. 262, 348-354.
* Martin-Albarracin, V. L., Nuñez, M. A., and Amico, G. C. (2015). Replacement of Native by Non-Native Animal Communities Assisted by Human Introduction and Management on Isla Victoria, Nahuel Huapi National Park. PeerJ 3, e1328. doi:10.7717/peerj.1328
* Mas, J.-F. (2005). Assessing Protected Area Effectiveness Using Surrounding (Buffer) Areas Environmentally Similar to the Target Area. Environ. Monit. Assess. 105, 69-80. doi:10.1007/s10661-005-3156-5
* Mathiasen, P., and Premoli, A. C. (2010). Out in the Cold: Genetic Variation of Nothofagus pumilio (Nothofagaceae) Provides Evidence for Latitudinally Distinct Evolutionary Histories in Austral South America. Mol. Ecol. 19, 371-385. doi:10.1111/j.1365-294X.2009.04456.x
* Mermoz, M., Kitzberger, T., and Veblen, T. T. (2005). Landscape Influences on Occurrence and Spread of Wildfires in Patagonian Forests and Shrublands. Ecology 86, 2705-2715. doi:10.1890/04-1850
* Monjeau, A., Nazar Anchorena, S., Montoni, V., Marquez, J., Alcalde, D., D'Iorio, A., et al. (2005). Perfil del Área Protegida Argentina: Parque Nacional Nahuel Huapi. Available at: http://www.parkswatch.org/parkprofile.php?l=engkcountry=gr&p8kar-knup.
* Morton, D. C., DeFries, R. S., Shimabukuro, Y. E., Anderson, L. O., Del Bon Espírito-Santo, F., Hansen, M., et al. (2005).
Rapid Assessment of Annual Deforestation in the Brazilian Amazon Using MODIS Data. _Earth Interact._ 9, 1-22. doi:10.1175/e1139.1 * Myneni et al. (1995) Myneni, R. B., Hall, F. G., Sellers, P. J., and Marshak, A. L. (1995). The Interpretation of Spectral Vegetation Indexes. _IEEE Trans. Geosci. Remote Sens._ 33, 481-486. doi:10.1109/TGRS.1995.874602101.109/36.377948 * Nagendra (2008) Nagendra, H. (2008). Do ranks Work? Impact of Protected Areas on land Cover Clearing. _AMBIO A J. Hum. Environ._ 37, 330-337. doi:10.1579/06-1784.1 * Nagendra et al. (2013) Nagendra, H., Lucas, R., Honrado, J. P., Jongman, R. H. G., Tarantino, C., Adamo, M., et al. (2013). Remote Sensing for Conservation Monitoring: Assessing Protected Areas, Habitat Extent, Habitat Condition, Species Diversity, and Threats. _Ecol. Indit._ 33, 45-59. doi:10.1016/j.ecollind.2012.09.014 * Nagendra et al. (2004) Nagendra, H., Tucker, C., Carlson, L., Southworth, J., Karmarcharya, M., and Kama, B. (2004). Monitoring Parks through Remote Sensing Studies in Nepal and Homoduras. _Environ. Manag._ 34, 748-760. doi:10.1007/s000267-004-0028-7 * Nunez (2008) Nunez, M. A. (2008). Experiments on Multiple Factors Affecting Pinnacea Invasions on Isla Victoria, Nahued Haupt National Park, Argentina. _Ecology and Evolutionary Biology, Ph.D. Diss. Knoxville, TN: University of Tennessee, 102._ * Nunez et al. (2013) Nunez, M. A., Hayward, J., Horton, T. R., Amico, G. C., Dimaro, R. D., Barrios-Garcia, M. N., et al. (2013). Exotic Mammals Disperse Exotic Fungi that Promote Invasion by Exotic Trees. _PLoS One_ 8, e68632. doi:10.1371/journal.pone.0066832 * Nunez and Vejsbjberg (2010) Nunez, P., and Vejsbjberg, L. (2010). _Tourism between Economic Activity and Social Right_, 19. Nahavel Huapi National Park, 1934930-1955945._Estud. Perpetr. Tur._ * Ogunkova et al. (2021) Ogunkova, A., Kaplan, J., Whitlock, C., Nawawati, W., Roberts, D. W., and Poulter, B. (2021). Drives of Recent Forest Cover Change in Southern South America Arelinked to Climate and O2mole. _Ecol._ 36, 3591-3606. doi:10.1007/s10690-021-01330-7 * Pettorelli et al. (2014) Pettorelli, N., Laurance, W. F., O'Brien, T. G., Wegmann, M., Nagendra, H., Turner, W., et al. (2014). Satellite Remote Sensing for Applied Ecologists: Opportunities and Challenges. _J. Appl. Sci._ 51, 839-848. doi:10.1111/1365-2664.12261 * Pettorelli et al. (2005) Pettorelli, N., Vlx, J. O., Mysterud, A., Gaillard, J.-M., Tucker, C. J., and Stenseth, N. C. (2005). Using the Satellite-Derived NDVI to Assess Ecological Responses to Environmental Change. _Trends Ecol._ 20, 503-510. doi:10.1016/j.tree.2005.05.011 * Pao et al. (2020) Pao, S., Wang, S., Park, T., Chen, C., Lin, X., He, Y., et al. (2020). Characteristics, Drives and Feedbacks of Global Greening Nat. Rev. Earthviron._ 1, 14-27. doi:10.1038/s43017-019-0001-x * Piffer et al. (2022) Piffer, P. R., Rosa, M. R., Tamboli, I. R., Meterjer, J., and Utiarte, M. (2022). Tumor Rates of Regenerated Forests Challenge Restoration Errors in the Beailla Atlantic Forest. _Environ. Rev. Lett._ 17 (4), 045009. doi:10.1088/1748-9326/accl * R Core Team (2014) R Core Team (2014). _A Language and Environment for Statistical Computing_. Vienna, Austria: R Foundation for Statistical Computing * Maffalee et al. (2014) Maffalee, E., de Torres Curbh, M., Morales, C. L., and Kitzberger, T. (2014). _Ecologia e Historia Natural de la Patagonia Andina_. Cidad Autonoma de Buenos Aires, Argentina: Fundacion de Historia Natural Felix de Azara. * Rivarola et al. (2021a) Rivarola, M. 
D., Simberhoff, D., and Leppanen, C. (2021a). History of Protected Areas in Argentina: A Sevast of Shifting Priorities and Policies in a Developing Country. _Environ. Hist._ camb 27, 515-548. doi:10.3197/096734019X15740974883825 * Riwarokh et al. (2021) Riwarokh, M. D., Simberhoff, D., and Leppanen, C. (2021a). Nahued Haupt National Park, Argentina: Conservation Effectiveness Assessment through Monitoring Small Mammuli Communities. _PARKS 27,_ 15-26. doi:10.2305/fucm.ch.2021.purks-27-2md.en * Rodriguez-Caton et al. (2016) Rodriguez-Caton, M., Villada, R., Morales, M., and Surr, A. (2016). Influence of Troughs on _Nothofagus Pumilio_ Forest Decline across Northern Patagonia, Argentina. _Ecosphere_ 7, e01390. doi:10.1002/esa.1390. * Rodriguez-Cabal et al. (2013) Rodriguez-Cabal, M. A., Barrios-Garcia, M. N., Amico, G. C., Aizen, M. A., and Sanders, N. L. (2013). Node-by-node Dissensibly of a Mutualistic Interaction Web Driven by Species Introductions. _Proc. Natl. Acad. Sci. U.S.A._ 110, 16503-16507. doi:10.1073/pnas.1300131110 * Rodriguez-Cabal et al. (2008) Rodriguez-Cabal, M. A., Nunez, M. A., and Martinez, A. S. (2008). Quantity versus Quality: Endemism and Protected Areas in the Temperature Forest of South America. _Austrial Ecol._ 37, 370-376. doi:10.1111/1.1442-9993.2008.01841.x * Rusch (2002) Rusch, V. (2002). _Estado de Sitacion de las Areas Protogdos de la porcon Argentina de la Serogium Valdivana_. Argentina: Fundacion Vida Silvestre Argentina/WWF, 98. * Sala de Noticias (2022) Sala de Noticias (2022). Incendios en Barriloche: y hay mis de 6.000 hectares afectados v esperan la isyuda de la lluvia. El Noticiero Digital. Available at: [https://dnicetologist.com.ar](https://dnicetologist.com.ar). * Schlider (1994) Schlider, R. G. (1994). San Carlos de Barriloche: Costos y beneficios del ecoturismo. _Estud. Perspect. Tur._ 3, 149-196. * Scott et al. (1993) Scott, J. M., Davis, F., Cattul, B., Noss, R., Butterfield, B., Groves, C., et al. (1993). Gap Analysis: A Geographic Approach to Protection of Biological Diversity. _Wild. Monogr._ 3-41. * Simberhoff et al. (2002) Simberhoff, D., Relva, M. A., and Nunez, M. (2002). Gringos en el bosque: introduced tree invasion in native Nothofagus/Austroedras forest. _Biol. Immunos_ 4, 35-53. doi:10.1023/c0205760884 * Sotik (1987) Sotik, M. E. (1987). History of the Society for Conservation Biology: How and Why We Got Here. _Conserv. Biol._ 1, 4-5. doi:10.1111/j.1523-1739.1987.tb00001.x * Southworth et al. (2016) Southworth, J., Zhu, L., Bunting, E., Ryan, S. J., Herrero, H., Wajern, P. R., et al. (2016). Changes in Vegetation Presence across Global Savanna Landscapes, 1982-100. _J. Land Soc. Sci._ 11, 7-23. doi:10.1080/1747432X.2015.1701439 * Scriz et al. (2013) Scriz, M., Damasco, M. A., Zimmermann, H., and Hensen, L. (2013). The Exotic Shruba Rosa Rubigoons as a Nurse Plant. Implications for the Restoration of Disturbed Temperature Forests in Patagon, Argentina. _For. Ecol. Manag._ 289, 234-242Yengoh, G. T., Dent, D. L., Olsson, L., Tengberg, A. E., and Tucker, C., III (2016). _Future Trends, and Practical Considerations_. ChamNew York: Springer International Publishing AG.Use of the Normalized Difference Vegetation Index (NDVI) to Assess Land Degradation at Multiple Scales: Current Status * Zhu et al. (2016) Zhu, Z., Piao, S., Myneni, R. B., Huang, M., Zeng, Z., Canadell, J. G., et al. (2016). Greening of the Earth and its Drivers. _Nat. Clim. Change_ 6, 791-795. 
doi:10.1038/nclimate3004 **Conflict of Interest**: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. **Publisher's Note**: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. _Copyright & 2022 Riverola, Dein, Simberfield and Herrero. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forms is permitted, provided the original author(s) and the copyright owner(s) are called and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms._
Protected areas (PAs) remain the most important tool to prevent biodiversity loss and habitat degradation worldwide, but the formal creation of a PA constitutes only the first step. In recent decades, concerns about PA effectiveness have arisen, and several PAs have been evaluated using different methods. In particular, assessing broad-scale outcomes with remote sensing data allows change to be monitored over time at a large scale. In this study, we evaluated the effectiveness of Nahuel Huapi National Park (NHNP), with particular attention to its three protection categories: Strict Natural Reserve (SNR), National Park (NP), and National Reserve (NR) (IUCN categories Ia, II, and VI, respectively). We compared changes in the Normalized Difference Vegetation Index (NDVI) among sites in these categories and between them and neighboring unprotected areas over the period 2000-2020. Overall, habitat degradation was low, and we found no difference among the four categories evaluated. Nevertheless, a greening process has been conspicuous across the entire area, with higher values both in the SNR and in the unprotected area. We propose possible explanations, considering variables such as dominant tree species, precipitation, temperature, elevation, and wildfires. This study supports the importance of NHNP at the regional and national levels, particularly its SNR areas.
# Topo2vec: Topography Embedding Using the Fractal Effect

Jonathan Kavitzky, 1,2 Jonathan Zarecki, 1,3 Idan Brusilovsky, 1,4 Uriel Singer1,5

## 1 Introduction

Since ancient times, the topographic structure of the land has been a key factor in many decisions. Topography is provably correlated with many other tasks: land-use [22], soil mapping [15], soil salinity [20], landslides [3], water floods [16], avalanches [10] and high solar-energy locations [1]. The techniques for perceiving, collecting, and understanding topography have changed significantly in recent years, and today geographic information systems (GIS) are built on many classical and data-driven algorithms. As in many fields in recent years, deep learning has also begun to transform topography and GIS in general, with works ranging from automatic road-mapping [14, 15] to elevation map super-resolution [23].

Deep learning techniques have a particularly interesting property compared with classical ML models: in addition to superior predictive accuracy, they naturally build a latent embedding space in which every example can be mapped to a latent vector. These embedding spaces have received much attention in self-supervised contrastive learning methods [11, 12, 13], where an intrinsic property of the data is used to train the model instead of labels gathered by human annotators. Self-supervised methods have been used to achieve close-to-SOTA results on benchmarks such as ImageNet [24], iNaturalist [10], Birdsnap [3], and many more [25, 14]. These methods usually exploit specific invariances in the data, such as invariance to rotation, color jitter, cropping, and more.

Topography data exhibits an interesting property which we call the _fractal-effect_. This effect implies that the same location will show the same topographic patterns when viewed at different radii/scales: topographic objects such as peaks and saddles appear at every observed scale. In practice this also implies that any embedding space for topographic data should be highly scale-dependent, and that choosing a scale is highly relevant to how the data is represented; that is, the same location at different scales should have very different embeddings, since they represent different things.

In this work, we present a new self-supervised technique tailored for topographic images. We exploit the fact that topography images [12] are built as fractals and train a neural network capable of embedding the images in a useful latent embedding space. We use the fractal-effect in inference and show that our embedding space is effective at many scales. Our main contributions are as follows:

1. We introduce a self-supervised training technique for deep neural networks tailored for topography data, and use this technique to train a model capable of embedding a topographic image into a useful embedding space.
2. We empirically evaluate the model and technique on classification benchmarks we have built for topographic images, and present qualitative results expressing the fractal-effect in inference.
3. We build a baseline comparison for future work in the field of topography ML. This includes: data, architecture, tasks, and metrics.

## 2 Exploiting the Fractal-Effect for Self-Supervision

In this section, we first introduce the fractal-effect in topography images, then discuss the data used in this paper, and then define our self-supervision technique exploiting the fractal-effect.
### Fractal Effect

As discussed in the introduction, topography data exhibits many interesting properties. One of the main ones is scale semi-invariance: the data can be viewed as topographic at many resolutions and is somewhat similar across those resolutions [10]; we call this property the _fractal-effect_. For example, as seen in Figure 2, peaks appear at both very high and very low zoom levels of the topography. As a result of this effect, and in contrast to most computer-vision tasks, our objects of interest will certainly appear at multiple scales, and any model working in this domain must exhibit a high level of scale-dependence.

Now, in order to support multiple scales, one can change the image resolution so that the same area is observed at a different scale; see Figure 2 for a visual illustration. However, it is important to remember that the fractal-effect implies that each image in the data contains objects at multiple scales. Therefore, before performing any experiments, one should decide at which scale (or scales) the data is sampled.

### Data

**Digital terrain model (DTM).** In order to create the topography images, we used the DTM elevation data from [1]. Every topography image is a 2D array of pixels, each representing the ground-surface elevation above sea level. This database has a spatial resolution of 30 meters per pixel and an elevation accuracy of \(\pm\)10 meters. In our experiments we used Europe as the area of interest.

**OpenStreetMap (OSM).** For precise locations of objects of different classes, used later for evaluation, we used OpenStreetMap [1]. Each location is represented by a pair of geographic coordinates (longitude, latitude), which in turn is converted to a topography image. We used geo-locations of different topographic classes (peaks, rivers and saddles) and non-topographic classes (aerialway stations), all sampled evenly at random out of all the OSM known tags.

### Topo2Vec

Let \(loc\) be a location on earth, and let \(x_{r}^{s}\) be the image of the area with radius \(r\) around \(loc\) (as presented in Figure 1), with \(N\) the number of pixels from the image's center to its edge. The spatial resolution of \(x_{r}^{s}\), in meters per pixel, is defined as:

\[s = r/N \ \frac{\text{meter}}{\text{pixel}}\]

One can sample the image at different spatial resolutions \(x_{r}^{s_{1}}, x_{r}^{s_{2}}, \dots, x_{r}^{s_{j}}\) while covering the same area. However, due to the fractal-effect, different topographic objects in the observed area become more apparent at different resolutions, as seen in Figure 2.

Figure 1: DTM topography image. Each pixel's value represents its elevation above sea level, and all values are normalized per tile.

Figure 2: Two topographic images taken around two nearby points. At a small scale the two images look very different and are classified as a peak and a saddle, but at a larger scale both locations are peaks. Different scales imply different topography. Taken at \(46^{\circ}51'34.2''N\ 7^{\circ}31'31.8''E\).

Taking the fractal-effect into consideration, we define \(f\) as a function that encodes \(x_{r}^{s}\) into an embedding space. Our goal is that \(f(x_{r}^{s})\) will represent the topographic information present around \(loc\) with radius \(r\) and resolution \(s\).
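To make this sampling step concrete, the following is a minimal sketch of building a fixed-size patch \(x_{r}^{s}\) at different radii around a location; the synthetic DEM, the function name, and the sizes are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.ndimage import zoom

def sample_patch(dem, center, radius_px, out_size=17):
    """Crop a square patch of half-width radius_px around center (row, col)
    and resample it to out_size x out_size pixels, so the same area is
    represented at spatial resolution s = r / N."""
    r0, c0 = center
    patch = dem[r0 - radius_px:r0 + radius_px + 1,
                c0 - radius_px:c0 + radius_px + 1]
    return zoom(patch, out_size / patch.shape[0], order=1)  # bilinear resampling

# A smooth synthetic surface stands in for the 30 m/pixel DTM.
rng = np.random.default_rng(0)
dem = rng.normal(size=(512, 512)).cumsum(axis=0).cumsum(axis=1)

# The same location viewed at two radii: by the fractal-effect, the two
# resulting 17x17 images may contain different topographic objects.
fine = sample_patch(dem, (256, 256), radius_px=8)
coarse = sample_patch(dem, (256, 256), radius_px=64)
print(fine.shape, coarse.shape)  # (17, 17) (17, 17)
```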
By learning a second function, \(g\), that decodes \(f(x_{r}^{s})\) to an image with a different spatial resolution \(s'\), we are able to leverage the fractal-effect and learn a better topographic representation. This is achieved by maximizing:

\[\mathbb{P}(x_{r}^{s'} \mid g(f(x_{r}^{s})))\]

where \(s' = k \cdot s\), and \(k\) can be thought of as the _fractal-factor_. Maximizing the above expression is done by minimizing the \(L^{p}\) distance:

\[\min_{\theta_{f},\theta_{g}} L_{p} = \left\| x_{r}^{s'} - g(f(x_{r}^{s})) \right\|_{p}\]

Training with this loss, we obtain two functions \(f\) and \(g\). While \(g\) decodes a given embedding vector into a higher-resolution image, meaning it has learned the fractal-effect around \(loc\), \(f\) (from now on called _topo2vec_) is a self-supervised, fractal-effect-aware encoder that takes a topographic image as input and embeds it into an embedding space.

```
Input : Locations - list of different locations
        Scales    - list of relevant scales
        k         - desired fractal-factor
        lr1, lr2  - learning rates
Output: f - learned encoder
        g - learned decoder
        d - learned discriminator

while f, g, d not converged do
    for batch in Locations do
        for s in Scales do
            X = toImageDataset(batch, s);
            Y = toImageDataset(batch, k * s);
            Yhat = g(f(X));
            L_p = sum_i || y_i - yhat_i ||_p;
            L_G^GAN = BCE(d([X, Yhat]), 1);
            f <- f - lr1 * grad_f(lambda1 * L_p + lambda2 * L_G^GAN);
            g <- g - lr1 * grad_g(lambda1 * L_p + lambda2 * L_G^GAN);
            L_D^GAN = BCE(d([X, Y]), 1) + BCE(d([X, Yhat]), 0);
            d <- d - lr2 * grad_d(L_D^GAN);
return f, g, d;
```
**Algorithm 1:** Topo2vec training procedure

Intuitively, if \(loc\) expresses a fractal-effect in which one peak becomes many small peaks at higher resolution, then if \(g\) has managed to predict this fractal-pattern, it has learned the topographic information at \(loc\). A problem that arises from this training procedure is that while \(g\) attempts to predict the fractal-pattern, it is not guaranteed to predict the exact fractal-effect of \(loc\), but rather the distribution of patterns possible in that area. In order to address this issue, we propose an adversarial loss on top of the current loss, enabling the network to predict a sample from the fractal-pattern distribution rather than the distribution's average. The discriminator \(d\) takes as input fake pairs, \((x_{r}^{s}, g(f(x_{r}^{s})))\), and true pairs, \((x_{r}^{s}, x_{r}^{s'})\), and tries to distinguish between them, while \(g \circ f\) tries to fool it. Formally, the adversarial loss is:

\[\min_{\theta_{f},\theta_{g}} \max_{\theta_{d}} L_{G}^{GAN} = \mathbb{E}_{z \sim (x_{r}^{s},\, x_{r}^{s'})} \log(d(z)) + \mathbb{E}_{z \sim (x_{r}^{s},\, g(f(x_{r}^{s})))} \log(1 - d(z))\]

The final optimization loss of topo2vec is:

\[\min_{\theta_{f},\theta_{g}} \max_{\theta_{d}} \lambda_{1} L_{p} + \lambda_{2} L_{G}^{GAN}\]

where \(\lambda_{1} = 100\) and \(\lambda_{2} = 1\), and \(L_{G}^{GAN}\) is calculated using binary cross-entropy (\(BCE\)).
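A minimal PyTorch sketch of one generator update and one discriminator update from Algorithm 1 follows; the toy linear modules, and the choice to up-sample \(x\) before pairing it with the output for \(d\), are our assumptions rather than the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the encoder f, decoder g (fractal-factor k = 4, so a
# 17x17 input is decoded to 64x64) and discriminator d.
f = nn.Sequential(nn.Flatten(), nn.Linear(17 * 17, 128))
g = nn.Sequential(nn.Linear(128, 64 * 64), nn.Unflatten(1, (1, 64, 64)))
d = nn.Sequential(nn.Flatten(), nn.Linear(2 * 64 * 64, 1))

opt_fg = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=0.0002)
opt_d = torch.optim.Adam(d.parameters(), lr=0.0016)
bce = nn.BCEWithLogitsLoss()
lam1, lam2 = 100.0, 1.0

x = torch.randn(8, 1, 17, 17)   # batch of x_r^s
y = torch.randn(8, 1, 64, 64)   # the same areas at resolution s' = 4 * s
# Assumption: x is up-sampled so the (input, output) pair can be stacked
# into the 2-channel discriminator input.
x_up = F.interpolate(x, size=(64, 64), mode="bilinear", align_corners=False)
ones, zeros = torch.ones(8, 1), torch.zeros(8, 1)

# Generator step: L1 reconstruction plus adversarial term.
y_hat = g(f(x))
loss_fg = lam1 * F.l1_loss(y_hat, y) \
          + lam2 * bce(d(torch.cat([x_up, y_hat], 1)), ones)
opt_fg.zero_grad()
loss_fg.backward()
opt_fg.step()

# Discriminator step on true pairs (x, y) and fake pairs (x, g(f(x))).
loss_d = bce(d(torch.cat([x_up, y], 1)), ones) \
         + bce(d(torch.cat([x_up, y_hat.detach()], 1)), zeros)
opt_d.zero_grad()
loss_d.backward()
opt_d.step()
```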
Algorithm 1 presents the training procedure of the adversarial version of topo2vec; the non-adversarial version trains the same way but with \(\lambda_{1} = 1\) and \(\lambda_{2} = 0\).

### Implementation Details

The topo2vec architecture consists of down-sample layers followed by up-sample layers, similar to the U-net architecture [10] but without the skip connections. We take the representation between the down- and up-sample layers as the latent representation. We removed the skip connections in order to constrain the latent representation to hold all the information needed for decoding. To allow learning with \(s' > s\), we added additional up-sample layers (one for each factor of 2). We used the L1 distance as the loss with a learning rate of \(lr_{1} = 0.0002\) (for the adversarial version, \(g\) had a learning rate of \(lr_{2} = 0.0016\)). For the discriminator, we trained a 5-layer CNN with three fully-connected layers at the end.

**Encoder.** The full architecture of the encoder \(f\) is as follows (a code sketch of this encoder is given below):

1. The input is a 1x17x17 image representing the topography of a specific coordinate at a given scale.
2. The input image passes through two convolutional layers, each consisting of a convolution block (8 kernels of size 3x3), BatchNorm [11], and ReLU [12]. The image is resized to 16x16 (due to padding) with 8 channels, resulting in an 8x16x16 output.
3. Four down-sample layers, each consisting of a max-pooling layer (kernel size 2x2) followed by two convolutional layers as before, each time doubling the number of filters. After four down-sample layers we end up with a 128x1x1 image, which is flattened into a 128-dimensional vector representing the latent vector of the image.

**Decoder.** The full architecture of the decoder \(g\) is as follows:

1. The input is a 128x1x1 image representing the last output of \(f\).
2. Four up-sample layers, each consisting of an up-sampling layer (scale factor of 2 in bilinear mode) followed by two convolutional layers as before, each time halving the number of filters. After four up-sample layers we end up with an 8x16x16 image.
3. In topo2vec-1 we finish with a convolutional block that maps the 8x16x16 image to the final 1x16x16 output image.
4. In topo2vec-4 we add two additional up-sample layers that result in a 2x64x64 image, and finish with a convolutional block that maps the 2x64x64 image to the final 1x64x64 output image.

**Discriminator.** For the topo2vec-adv version, \(f\) and \(g\) together form the generator, while \(d\) is the discriminator. The full architecture of the discriminator \(d\) is as follows:

1. The input is a 2x64x64 image representing, in its two channels, a true or fake pair of images.
2. Five convolutional blocks (the first four with kernel size 4, stride 2, and padding 1; the fifth with kernel size 4, stride 1, and no padding), each with BatchNorm and ReLU. The first starts with 8 filters, doubling each time, ending with a 128x1x1 image.
3. The image is flattened to a 128-dimensional vector.
4. Three fully connected layers: \(128 \to 64\), \(64 \to 32\), \(32 \to 1\), with a ReLU activation after the first two.
5. A sigmoid activation yielding the final probability neuron.

## 3 Experiments

We test our framework on several topographic classes gathered from OpenStreetMap (OSM), the leading geodata repository, in both few-shot and many-shot settings. Finally, we present an experiment showing how the model generalizes beyond the scale it was initially trained on.
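As referenced above, here is a minimal PyTorch sketch of the encoder \(f\); the asymmetric zero-padding that takes the 17x17 input to a 16x16 feature map is our guess at the unspecified padding scheme, so this is a sketch under assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def conv_pair(c_in, c_out):
    """Two convolutional layers: 3x3 conv + BatchNorm + ReLU, twice."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
    )

class Topo2VecEncoder(nn.Module):
    """Encoder f: a 1x17x17 patch mapped to a 128-d latent vector."""
    def __init__(self):
        super().__init__()
        # Assumed padding scheme: pad 17x17 -> 18x18, then an unpadded 3x3
        # conv gives the 16x16 feature map described in the text.
        self.stem = nn.Sequential(
            nn.ZeroPad2d((1, 0, 1, 0)),
            nn.Conv2d(1, 8, 3), nn.BatchNorm2d(8), nn.ReLU(),
            nn.Conv2d(8, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
        )
        # Four down-sample stages: 16 -> 8 -> 4 -> 2 -> 1, doubling filters.
        self.down = nn.Sequential(*[
            nn.Sequential(nn.MaxPool2d(2), conv_pair(c, 2 * c))
            for c in (8, 16, 32, 64)
        ])

    def forward(self, x):
        return self.down(self.stem(x)).flatten(1)  # (B, 128)

z = Topo2VecEncoder()(torch.randn(4, 1, 17, 17))
print(z.shape)  # torch.Size([4, 128])
```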
### Experimental Methodology

**Datasets.** In order to guarantee a fair evaluation procedure, we geographically split the DTM into two different areas of \(25\ \mathrm{degree}^{2}\) each, one for training and one for testing (Figure 3). The polygon from which we sampled coordinates for the train set is:

\[POLYGON((10\ 50,\ 10\ 45,\ 15\ 45,\ 15\ 50,\ 10\ 50))\]

The polygon from which we sampled coordinates for the test set is:

\[POLYGON((5\ 45,\ 5\ 50,\ 10\ 50,\ 10\ 45,\ 5\ 45))\]

Each sample was built by extracting topographic data for an image of radius \(r\) around a geographic point (as presented in Figure 1) and resizing it to the appropriate size (17 pixels).

Figure 3: Geographic train/test split.

**Baselines.**

* **ID**: A simple flatten operation on the given image, resulting in a vector that is used as the feature representation.
* **CNN**: A supervised CNN classifier trained on the same classes we later examine in Exp. 1, as those are the only "basic" topographic classes with an appropriate amount of data in the OSM for deep-network training; this results in 20,000 training examples. We use the last hidden layer as the feature representation.
* **SauMoCo** [12]: An extension of the contrastive method MoCo [13]. This method uses an additional spatial augmentation technique of sampling a geographically close image. We use this method instead of classic MoCo because the other augmentations employed by contrastive methods are irrelevant for topography data, including color jitter, random conversion to greyscale, and more. This method replaces Tile2Vec [12], as it clearly outperforms it in spatial image representation.

We additionally compare several variations of our method:

* **Topo2vec-1**: Topo2vec with fractal-factor \(k = 1\) (i.e., no fractal learning). This variation helps us assess the effectiveness of exploiting the fractal-effect.
* **Topo2vec-4**: Topo2vec with fractal-factor \(k = 4\).
* **Topo2vec-adv**: The adversarial version of the topo2vec network, as explained in the last section, with fractal-factor \(k = 4\).

All self-supervised methods were trained with 100,000 images. All input images to all methods are of size \(17 \times 17\).2

Footnote 2: GitHub repository with all code, baselines, data, and experiments: https://github.com/urielsinger/topo2vec

### Exp. 1: Classification on Topographic Classes

In this experiment, we evaluate the ability of a model trained by our framework to serve as a pre-training model for topographic tasks with a sufficient amount of data. We test our model individually on 4 topographic classes (rivers3, peaks4, saddles5 and cliffs6).

Footnote 3: wiki.openstreetmap.org/wiki/Rivers
Footnote 4: wiki.openstreetmap.org/wiki/Peaks
Footnote 5: wiki.openstreetmap.org/wiki/Tag:natural=saddle
Footnote 6: wiki.openstreetmap.org/wiki/Tag:natural=cliff

**Finding the class' OSM scale.** A given topographic class is present at many scales, but in OSM not all scales were collected. We first devise an experiment to determine the scale at which the OSM community sampled: Let \(C = \{c \mid c \in \text{coordinates}\}\) be a coordinate dataset for a given class, constructed such that half of the coordinates are sampled at locations of the class (\(y = 1\)) and half are sampled at random (\(y = 0\)). Given a radius \(r\), we define \(X_{r}^{C} = \{x_{r}^{i} \mid x_{r}^{i} \in \mathbb{R}^{17 \times 17}\}\) as the image dataset built from \(C\), where each image is built around a coordinate with radius \(r\).
For each \(r\), the spatial resolution changes. To find the sampled resolution/scale \(s\) of set \(C\), we define the following experiment:

1. Split \(X_{r}^{C}\), of size 1,000, into train and validation sets (80%, 20%).
2. For each \(r \in R\), train a CNN on the available training data and calculate accuracy on the validation set. This CNN has an architecture similar to the CNN baseline.
3. Repeat this process for 10 different random seeds.
4. The sampled scale is the one with maximum average accuracy on the validation set.

We can expect the model to have maximum accuracy at the class' sampled scale; the experiment is repeated for all four classes. The results are presented in Figure 4. We observe that all classes peak around the resolution of 30 meters/pixel. All subsequent experiments train the models at this optimal scale.

**Methodology & Results.** We compared the topo2vec variants with the baselines trained at the optimal scale presented above. These models are pre-trained as discussed in Sec. 3.1. For each topographic class and model, we train an SVM built on the model's embeddings, with a training set of size 1000, and evaluate on a test set of size 200, both equally distributed between positive and negative examples. We repeat each experiment 10 times with random seeds and report average accuracies and standard deviations in Table 1.

We see that our model outperforms the other baselines on all classes, even when the baselines are trained at the optimal scale. It is important to notice that although CNN is a supervised baseline that had access to more labeled data, topo2vec outperformed it on all classes. Furthermore, topo2vec-4 outperforms topo2vec-1 on 2 classes and is comparable on the other two. This shows the power the fractal-effect has in the self-supervised learning procedure. Topo2vec-adv outperforms topo2vec-4 on two classes and shows the same results on the other two. This indicates that the adversarial loss affects the latent representation learning, although not significantly.

SauMoCo does not perform well in this experiment; we hypothesize this is because several of its base assumptions do not hold for topography data. It assumes that close locations should have the same representation (similar to Tile2Vec (Jean et al., 2019)), and in topography data, where the fractal-effect is present, even the same location can have multiple classes when viewed at different scales. While SauMoCo does not deal with this fact, it is the key consideration of our method.

### Exp. 2: Classification on Topography-correlated Classes

We evaluate our method on several topography-correlated classes, where we want to see whether the embedding space holds information useful even for classes with only a mild topographic signature. We test on four classes: aerialway station, alpine hut, waterfall and sinkhole. Similar to Exp. 1, we train an SVM built on the model's embeddings.
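A minimal sketch of this SVM-on-embeddings protocol follows; random arrays stand in for the frozen encoders' embeddings, and the set sizes follow Exp. 1's description (with random labels, accuracy is of course near chance).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder 128-d embeddings from a frozen encoder; in the paper these
# come from topo2vec or the baselines, balanced between positives/negatives.
X_train, y_train = rng.normal(size=(1000, 128)), rng.integers(0, 2, 1000)
X_test, y_test = rng.normal(size=(200, 128)), rng.integers(0, 2, 200)

clf = SVC().fit(X_train, y_train)   # SVM trained on top of the embeddings
print(accuracy_score(y_test, clf.predict(X_test)))
```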
We sample a training set of size 400 and a test set of size 100, both equally distributed between positive and negative examples. We repeat each experiment 10 times with random seeds. The average accuracies and standard deviations are summarised in Table 2.

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
 & peaks & rivers & cliffs & saddles \\
\hline
ID & \(92.4\pm 0.5\) & \(75.9\pm 1.8\) & \(74.4\pm 2.8\) & \(87.5\pm 1.2\) \\
CNN & \(92.3\pm 0.5\) & \(76.4\pm 0.7\) & \(72.2\pm 1.8\) & \(84.1\pm 1.3\) \\
SauMoCo & \(86.9\pm 2.3\) & \(81.7\pm 2.2\) & \(66.2\pm 2.5\) & \(67.3\pm 2.3\) \\
topo2vec-1 & \(\mathbf{93.5\pm 0.7}\) & \(81.3\pm 2.2\) & \(71.4\pm 1.7\) & \(80.5\pm 2.3\) \\
topo2vec-4 & \(93.1\pm 0.9\) & \(80.9\pm 1.7\) & \(75.8\pm 2.1\) & \(\mathbf{89.0\pm 1.3}\) \\
topo2vec-adv & \(92.8\pm 0.3\) & \(\mathbf{82.6\pm 1.2}\) & \(\mathbf{78.1\pm 2.8}\) & \(88.7\pm 2.2\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Experiment 1 results, accuracy \(\pm\) standard deviation.

Figure 4: Accuracy-per-scale of the topography classes.

We can see that topo2vec significantly outperforms the other baselines on this task. This result emphasizes that, whereas in the previous experiment all methods were able to learn the topographic classes, it is much harder to predict classes with a less emphasized topographic signature. topo2vec-4 outperforms topo2vec-1 on all classes except waterfall, and topo2vec-adv shows results comparable to topo2vec-4. The poor performance on the waterfall class can be explained by the fact that waterfalls do not exhibit fractal properties (there is no peak inside a waterfall). This experiment shows that our modifications for topographic data are useful for learning a more powerful representation, as intended. The exploitation of the fractal-effect improves the generality of the learned topography representation. Moreover, this suggests that topography data holds additional, uncorrelated information that can be used alongside standard GIS and tabular features to further improve ML models working in the GIS domain.

### Exp. 3: Fractal-Effect Visualization

Throughout this work, we have repeatedly presented the importance of the fractal-effect and how we might find entirely different objects when looking at different scales. In this section, we present several qualitative results of topo2vec-4 exhibiting this important property. For each point in the polygon, we build an image around it, which is then classified using our model. This process is repeated for different radii and scales. In Figure 5 we can see our method detecting peaks, saddles, rivers, and cliffs at multiple scales when images at different zoom levels are presented. This is a unique property of topography data, and we see our model performs well in this setting.

### Exp. 4: Topography Retrieval

We conducted another qualitative experiment to demonstrate the strength of our embedding space. We took \(n\) locations representing a topography pattern as input and averaged their topo2vec-4 embedding representations. We then used the KNN algorithm to find the "most similar" points in the embedding space to those input points. In Figure 6 we can see a representative example: the retrieved coordinates have topography patterns very similar to those sampled. This interactive experiment is also provided in the code.
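A minimal sketch of this retrieval procedure, with random arrays standing in for the topo2vec-4 embeddings of the candidate and query locations:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
bank = rng.normal(size=(10000, 128))   # embeddings of candidate locations
query = rng.normal(size=(5, 128))      # embeddings of the n input locations

centroid = query.mean(axis=0, keepdims=True)   # average the n representations
index = NearestNeighbors(n_neighbors=2).fit(bank)
dist, idx = index.kneighbors(centroid)         # two most similar locations
print(idx.ravel(), dist.ravel())
```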
## 4 Related Work

### Algorithmic Topography

There have been works dealing with topography data enhancement and generation: extracting information from raw elevation data (Jaedicke, Syre, and Sverdrup-Thygeson, 2014), DTM (digital terrain model) extraction from satellite imagery (Gevaert et al., 2018), and enhancing the resolution of topographic images (Yue et al., 2015). Many possible usages of DTMs have been proposed, from natural-disaster analysis and prediction (such as avalanche warning (Jaedicke, Syre, and Sverdrup-Thygeson, 2014; Choubin et al., 2019) and landslide susceptibility (Wu and Chen, 2009)) to localization of high solar-energy regions (Heo et al., 2020) and more (Bolibar et al., 2020). In addition, deep learning has been used for the automatic mapping of topographic objects from DTMs (Torres et al., 2018; Torres, Milani, and Fraternali, 2019) and satellite imagery (Li and Hsu, 2020).

### Fractal-Effect in Topography

The fact that the earth's topography is a fractal has inspired some previous works. (Chase, 1992) studied the evolution of mountains and regional topography, and the effects of tectonic movement and climate on the landscape, including its fractal geometry. (Pelletier, 1997) built a mathematical framework explaining the existence of fractals in topography. (Weissel et al., 1995) used the fractal-effect in research on the erosional development of the Ethiopian plateau of Northeast Africa, and (Liu et al., 2019) used it in a similar way for landslide susceptibility mapping.

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
 & aerialway stations & alpine huts & waterfall & sinkholes \\
\hline
ID & \(63.9\pm 5.5\) & \(59.2\pm 3.6\) & \(69.2\pm 2.6\) & \(62.5\pm 2.4\) \\
CNN & \(75.5\pm 2.1\) & \(73.3\pm 1.6\) & \(69.7\pm 2.4\) & \(55.3\pm 2.9\) \\
SauMoCo & \(69.7\pm 3.6\) & \(73.3\pm 2.8\) & \(55.3\pm 3.1\) & \(56.5\pm 4.3\) \\
topo2vec-1 & \(79.4\pm 1.5\) & \(70.9\pm 2.7\) & \(\mathbf{77.9\pm 2.2}\) & \(60.1\pm 2.1\) \\
topo2vec-4 & \(82.1\pm 2.0\) & \(\mathbf{75.7\pm 1.4}\) & \(73.3\pm 1.6\) & \(\mathbf{63.2\pm 2.5}\) \\
topo2vec-adv & \(\mathbf{82.8\pm 1.8}\) & \(75.5\pm 1.3\) & \(74.8\pm 1.6\) & \(62.9\pm 2.4\) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Experiment 2 results, accuracy \(\pm\) standard deviation.

Figure 5: Qualitative results of our model's predictions at different scales (blue: peaks, green: saddles, red: cliffs, yellow: rivers). This polygon is around \(32^{\circ}37'02.8''N\ 35^{\circ}07'27.4''E\).

### Unsupervised Learning for Visual and Geographic Data

The concept of embedding an input into a latent, lower-dimensional space is a basic and powerful one in machine learning. As mentioned in the introduction, neural networks tend to generate these spaces naturally during training. Unsupervised learning for visual tasks is a very active field of research, making it difficult to summarize. Recent methods that have pushed the state of the art include SimCLR (Chen et al., 2020), MoCo (He et al., 2020) and BYOL (Grill et al., 2020). All these methods follow the basic idea of making the representation of an image similar under small transformations, exploiting several invariances within their domain such as invariance to rotation and color jitter.
There have been some attempts in the geographic ML field to build specialized embedding spaces, most following Tobler's first law of geography (Tobler, 1970): "_everything is related to everything else, but near things are more related than distant things_". Noteworthy examples are Tile2Vec (Jean et al., 2019), which used the triplet loss for spatially distributed data, and the recent SauMoCo (Kang et al., 2020), which takes the same basic idea from Tobler's law to create a new invariance for _spatially close images_. We have seen in our experiments that this assumption does not hold for classification of topographic classes.

## 5 Conclusions

In this work, we presented a novel self-supervision method for training neural networks for topographic feature extraction. We exploited the fractal-effect in the data during training and inference to build a model capable of generalizing to any relevant scale. Our key consideration in topo2vec was leveraging the fractal-effect, which we achieved in an encoder-decoder based framework. We evaluated our method on several topographic classes and topography-correlated classes and found it superior to other self-supervised methods, even surpassing fully supervised methods that used an order of magnitude more data.

Our results motivate a few interesting directions for future work we intend to explore. First is pushing empirical results further by using more elaborate architectures such as graph autoencoders (Kipf and Welling, 2016) and GANs (Goodfellow et al., 2014). Second, we plan to use our method for the automation of topography feature mapping at multiple scales, greatly reducing the human labor necessary for such a task. We hope this work will further advance the field of geographic ML, specifically as additional features in tasks that exhibit correlation with the topography of their location (such as avalanche and wildfire prediction). We also hope our method will act as a strong baseline for future work in the field of topography embedding.

Figure 6: The top image on each side is the input location. The two bottom images are the two nearest neighbors of that input image in the latent space. This experiment was conducted in the orange rectangle, around \(46^{\circ}46'30.0''N\ 7^{\circ}51'00.0''E\).

## References

A. A. A. Agency. 2014. Advanced Land Observing Satellite DAICHI-2. DAICHI.
Berg, T.; Liu, J.; Lee, S.; Alexander, M. L.; Jacobs, D.; and Belhumeur, P. 2014. Birdsnap: Large-Scale Fine-Grained Visual Categorization of Birds. In _CVPR_, 2019-2026.
Bolibar, J.; Rabatel, A.; Gouttevin, I.; Galiez, C.; Condon, T.; and Eric, S. 2020. Deep learning applied to glacier evolution modelling. _The Cryosphere_, 14: 565-584.
Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. E. 2020. A Simple Framework for Contrastive Learning of Visual Representations. _ArXiv_, abs/2002.05709.
Choubin, B.; Borji, M.; Mosavi, A.; Sajedi-Hosseini, F.; Singh, V. P.; and Shamshirband, S. 2019. Snow avalanche hazard prediction using machine learning methods. _Journal of Hydrology_, 577: 123929.
Fei-Fei, L.; Fergus, R.; and Perona, P. 2004. Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories. In _CVPR Workshops_.
Gevaert, C.; Persello, C.; Nex, F.; and Vosselman, G. 2018. A deep learning approach to DTM extraction from imagery using rule-based training labels. _ISPRS Journal of Photogrammetry and Remote Sensing_, 142.
Goodfellow, I. J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A. C.; and Bengio, Y. 2014. Generative Adversarial Nets. In _NIPS_.
Grill, J.-B.; Strub, F.; Altche, F.; Tallec, C.; Richemond, P. H.; Buchatskaya, E.; Doersch, C.; Pires, B. A.; Guo, Z. D.; Azar, M. G.; Piot, B.; Kavukcuoglu, K.; Munos, R.; and Valko, M. 2020. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning. _ArXiv_, abs/2006.07733.
Haklay, M.; and Weber, P. 2008. Openstreetmap: User-generated street maps. _IEEE Pervasive Computing_, 7(4).
He, K.; Fan, H.; Wu, Y.; Xie, S.; and Girshick, R. B. 2020. Momentum Contrast for Unsupervised Visual Representation Learning. _CVPR_, 9726-9735.
Heo, J.; Jung, J.; Kim, B.; and Han, S. 2020. Digital elevation model-based convolutional neural network modeling for searching of high solar energy regions. _Applied Energy_, 262: 114588.
Horn, G. V.; Aodha, O. M.; Song, Y.; Cui, Y.; Sun, C.; Shepard, A.; Adam, H.; Perona, P.; and Belongie, S. J. 2018. The iNaturalist Species Classification and Detection Dataset. _CVPR_, 8769-8778.
Hosseiny, B.; Ghasemian, N.; and Amini, J. 2019. A convolutional neural network for flood mapping using sentinel-1 and SRTM DEM data: case study in Poldokhtar-Iran. volume XLII-4/W18, 527-533.
Ioffe, S.; and Szegedy, C. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. _arXiv preprint arXiv:1502.03167_.
Jaedicke, C.; Syre, E.; and Sverdrup-Thygeson, K. 2014. GIS-aided avalanche warning in Norway. _Computers & Geosciences_, 66: 31-39.
Jean, N.; Wang, S.; Samar, A.; Azzari, G.; Lobell, D.; and Ermon, S. 2019. Tile2Vec: Unsupervised representation learning for spatially distributed data. In _AAAI_.
Jin, Y.; Wu, Y.; Li, H.; Zhao, M.; and Pan, J. 2017. Definition of fractal topography to essential understanding of scale-invariance. _Scientific Reports_, 7: 46672.
Kang, J.; Fernandez-Beltran, R.; Duan, P.; Liu, S.; and Plaza, A. J. 2020. Deep Unsupervised Embedding for Remotely Sensed Images Based on Spatially Augmented Momentum Contrast. _IEEE Transactions on Geoscience and Remote Sensing_, 1-13.
Kipf, T. N.; and Welling, M. 2016. Variational Graph Auto-Encoders. _NIPS Workshop on Bayesian Deep Learning_.
Li, W.; and Hsu, C. 2020. Automated terrain feature identification from remote sensing imagery: a deep learning approach. _International Journal of Geographical Information Science_, 34: 637-660.
Liu, L.; Li, S.; Li, X.; Jiang, Y.; Wei, W.; Wang, Z.; and Bai, Y. 2019. An integrated approach for landslide susceptibility mapping by considering spatial correlation and fractal distribution of clustered landslide data. _Landslides_, 16: 715-728.
Mattyus, G.; Luo, W.; and Urtasun, R. 2017. DeepRoadMapper: Extracting Road Topology from Aerial Images. _ICCV_, 3458-3466.
Nair, V.; and Hinton, G. E. 2010. Rectified linear units improve restricted boltzmann machines. In _ICML_.
Nilsback, M.-E.; and Zisserman, A. 2008. Automated Flower Classification over a Large Number of Classes. _2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing_, 722-729.
Pelletier, J. D. 1997. Why is topography fractal?
Prakash, N.; Manconi, A.; and Loew, S. 2020. Mapping landslides on EO data: Performance of deep learning models vs. traditional machine learning models.
_Remote Sensing_, 12(3): 346.
Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-net: Convolutional networks for biomedical image segmentation. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_, 234-241. Springer.
Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; Berg, A.; and Fei-Fei, L. 2015. ImageNet Large Scale Visual Recognition Challenge. _International Journal of Computer Vision_, 115: 211-252.
Scull, P.; Franklin, J.; Chadwick, O.; and McArthur, D. 2003. Predictive soil mapping: a review. _Progress in Physical Geography_, 27(2): 171-197.
Sheikh, V.; Van Loon, E.; and Stroosnijder, L. 2014. Relationship between topography, land use and soil moisture in loess hills. _Environmental Resources Research_, 1(2): 141-165.
Tobler, W. R. 1970. A Computer Movie Simulating Urban Growth in the Detroit Region. _Economic Geography_, 46: 234-240.
Torres, R. N.; Fraternali, P.; Milani, F.; and Frajberg, D. 2018. A Deep Learning Model for Identifying Mountain Summits in Digital Elevation Model Data. _AIKE_, 212-217.
Torres, R. N.; Milani, F.; and Fraternali, P. 2019. Algorithms for Mountain Peaks Discovery: A Comparison. In _ACM/SIGAPP_, SAC '19, 667-674. New York, NY, USA: Association for Computing Machinery. ISBN 9781450359337.
Weissel, J. K.; Malinverno, A.; Harding, D. J.; and Karner, G. D. 1995. _Erosional Development of the Ethiopian Plateau of Northeast Africa from a Fractal Analysis of Topography_, 127-142. Boston, MA: Springer US. ISBN 978-1-4615-1815-0.
Wu, C.-H.; and Chen, S.-C. 2009. Determining landslide susceptibility in Central Taiwan from rainfall and six site factors using the analytical hierarchy process method. _Geomorphology_, 112(3): 190-204.
Yue, L.; Shen, H.; Yuan, Q.; and Zhang, L. 2015. Fusion of multi-scale DEMs using a regularized super-resolution method. _International Journal of Geographical Information Science_, 29: 2095-2120.
Zhang, Z.; Liu, Q.; and Wang, Y. 2018. Road Extraction by Deep Residual U-Net. _IEEE Geoscience and Remote Sensing Letters_, 15: 749-753.
Recent advances in deep learning have transformed many fields by introducing generic embedding spaces capable of achieving great predictive performance with minimal labeling effort. The geology field has not yet met such success. In this work, we introduce an extension of self-supervised learning techniques tailored for exploiting the fractal-effect in remote-sensing images. The fractal-effect assumes that the same structures (for example rivers, peaks and saddles) appear at all scales. We demonstrate our method's effectiveness on elevation data, and we also use the effect in inference. We perform an extensive analysis on several classification tasks and emphasize its effectiveness in detecting the same class at different scales. To the best of our knowledge, this is the first attempt to build a generic representation for topographic images.

1 Penta-AI 2 HUJI: The Hebrew University, Jerusalem, Israel 3 Bar-Ilan University, Ramat Gan, Israel 4 The Open University, Ra'anana, Israel 5 Technion--Israel Institute of Technology, Haifa, Israel
[email protected], [email protected], [email protected], [email protected]
# Statistical Downscaling of Model Projections with Multivariate Basis Graphical Lasso

Ayesha Ekanayaka ([email protected]) and Emily Kang, University of Cincinnati, Cincinnati, USA. Amy Braverman and Peter Kalmus, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, USA.

## 1 Introduction

Global climate models (GCMs) produce projections on relatively coarse spatial scales of up to \(\sim\)100 km, due to computational limitations, the need for global coverage, and the need for model runs out to at least 2100. However, many applications require projections at significantly higher resolution. This gulf in resolution can be bridged by downscaling, a means of obtaining fine-resolution climate projections from coarse-resolution GCMs. Downscaling comes in two main varieties: dynamical downscaling (DD), in which the coarse-resolution projections are used as inputs to drive regional models that produce fine-resolution results; and statistical downscaling (SD), in which statistical relationships are derived between the coarse-resolution GCM projections and fine-resolution observations. Here, we introduce a novel statistical downscaling method, developed in the application of downscaling sea surface temperatures (SSTs) for use in understanding projected severe bleaching of tropical coral reefs.

Coral reefs are critically threatened by rapidly increasing ocean warming (Hughes et al., 2003; Hoegh-Guldberg et al., 2007; Gattuso et al., 2015; Masson-Delmotte et al., 2018), in addition to local stresses such as destructive fishing practices and coastal development. Anomalously high SSTs contribute to severe coral bleaching, in which corals expel their photosynthetic algal partners, and can even cause more immediate thermal death (Carilli et al., 2009; Hughes et al., 2018). There is increasing interest in the use of SST projections from global climate models and Earth system models (GCMs and ESMs) to infer possible futures of coral reefs (Hoegh-Guldberg, 1999; Van Hooidonk et al., 2013, 2016). However, to be useful either for research or for local conservation management, these projections must capture spatial variation on scales much finer than current GCM and ESM resolutions of \(\sim\)100 km (Van Hooidonk et al., 2016). These coarse-scale projections may be downscaled via regional models (dynamical downscaling) or via high-resolution observational data sets (statistical downscaling). For the coral reef application, statistical downscaling of SST time series to the 4 km scale compares well with dynamical downscaling, which remains computationally prohibitive (Van Hooidonk et al., 2015, 2016).

Statistical downscaling is performed by establishing a strong link between coarse-resolution model projections and fine-resolution observations. In Earth science applications, methods referred to as statistical downscaling do not always utilize statistical models. For example, statistics from both model outputs and observations or reanalysis can be used to build the link, which can be deterministic and not based on a statistical model; examples include Model Output Statistics (MOS) and Perfect Prognosis (PP) (Schmidli et al., 2006; Soares et al., 2019).
Prior downscaling approaches typically do not handle spatial dependency structures; often, smoothing or interpolation methods are used (Timm et al., 2015; Ahmed et al., 2013). For example, a recent work demonstrated a downscaling approach based on bilinear interpolation that produced SSTs at 4 km resolution; however, the method did not account for spatial dependencies, nor did it provide associated uncertainty measures (Van Hooidonk et al., 2015). Popular probabilistic approaches that do provide uncertainty estimates include fitting regression-based models (Mishra et al., 2014) and generalized linear models relating the predictand to potential environmental covariates (Beecham et al., 2014). Yet there are downscaling strategies that are based on statistical models (usually regression models) but do not provide an uncertainty measure, or provide uncertainty measures only at the observation locations (e.g., monitoring stations) (Sachindra et al., 2014; Gaitan et al., 2019). Berrocal et al. (2010) proposed another linear regression model, directly relating fine-scale observations to numerical model outputs, as a fully model-based solution to the SD problem. The proposed spatio-temporal model in that work allows spatially and temporally varying regression coefficients but, for computational tractability, assumes independence across time. Further, its computation involves MCMC simulations, which can be challenging for global-scale SD. Another interesting two-stage SD method was proposed by Poggio and Gimona (2015), using Generalized Additive Models (GAMs) to first model the trend and then kriging to model the residuals. Their process is repeated iteratively, subject to a pre-specified stopping criterion, and hence can be computationally challenging.

In most Earth science applications of downscaling, significant spatio-temporal dependencies exist and can be utilized to improve downscaling performance. Here, we propose such a statistical downscaling technique. We compare the method with the standard downscaling method (Van Hooidonk et al., 2015) and a two-stage downscaling method (Poggio and Gimona, 2015) implemented using local approximate Gaussian processes (laGP) (Gramacy and Apley, 2015), via application to SST projections for coral reef studies. The basis graphical lasso (BGL) is a modeling framework developed for highly multivariate processes observed at a large number of spatial locations, with non-stationary data and inter-variable dependencies (Krock et al., 2021). With BGL, we propose a computationally efficient statistical downscaling technique that accounts for spatial dependence, provides associated uncertainty, and can be used in Earth science contexts with large datasets. Here, we demonstrate it by downscaling SST projections to the 1 km scale, a 16-fold resolution improvement over a prior interpolation-based SST downscaling method. We perform a representative case study and validation in the Great Barrier Reef (GBR) region.

According to Hashmi et al. (2009), uncertainties in downscaling results can arise from (1) the parent GCM; (2) climate change emission scenarios; (3) observed data; and (4) the method used for downscaling. A proper probabilistic assessment of uncertainty is therefore an essential capability in the context of statistical downscaling. Thus, the advantages of our novel downscaling method include significant skill improvement and uncertainty quantification. Section 2 describes the data and GCM models used in our study. Section 3 describes our downscaling methodology.
Section 4 validates our method using representative results from the GBR case study, quantifies skill improvement relative to the prior state of the art, and introduces the uncertainty quantification from our methodology. Section 5 provides discussion and conclusions.

## 2 Data and model output

We use monthly averaged NASA/JPL Multiscale Ultrahigh Resolution (MUR, JPL MUR MEaSUREs Project (2015)) satellite SST data at 1 km resolution from June 2002 to December 2020 (a total of 223 months). We use a 4 km resolution reef mask from the NOAA Coral Reef Watch thermal history product, v1.0 (Heron et al., 2016) to determine the locations of coral reefs in the global ocean. We use monthly SST output from 19 Coupled Model Intercomparison Project Phase 6 (CMIP6) GCMs under the Shared Socioeconomic Pathway (SSP) SSP126 (O'Neill et al., 2014). The model time series are re-gridded to a common \(1^{\circ}\) grid and run from June 2002 to December 2099 (1183 months). At each grid location, we take the mean of these model time series. The study area includes a total of 309,700 selected 1 km MUR pixels and 35 \(1^{\circ}\) coarse grid cells. Only pixels identified as coral reefs and their adjacent neighbours were used for the analysis.

## 3 Methodology

Let \(W_{it}(A_{k})\) be the averaged GCM output at coarse grid cell \(A_{k}\), and let \(w_{it}(s_{n})\) be the averaged GCM output **interpolated** to MUR pixel \(s_{n}\), where \(n = 1, \ldots, 309700\) indexes the MUR pixels, \(k = 1, \ldots, 35\) indexes the coarse grid cells in the spatial domain \(\mathcal{D}\), and \(t = 1, \ldots, T_{i}\), with \(T_{i}\) the total number of months from June 2002 to December 2099 for month \(i = 1, 2, \ldots, 12\). Let \(y_{it}(s_{n})\) be the monthly averaged observational SST at MUR location \(s_{n}\), where \(t = 1, \ldots, T_{o_{i}}\), with \(T_{o_{i}}\) the total number of observational months from June 2002 to December 2020 (MUR data are available only from 2002 to 2020). Our downscaling method is then performed in two stages, assuming an additive model for the fine-resolution observations; i.e., we assume

\[y_{it}(s_{n}) = \mu_{it}(s_{n}) + \epsilon_{it}(s_{n}) \tag{1}\]

In the first stage, we estimate large-scale spatio-temporal variations, often referred to as the trend or mean component (\(\mu\)), using a deterministic approach. We notice that this mean component is capable of capturing an extensive amount of large-scale variation. However, we hypothesized that significant fine-scale variations remain unexplained, and that accounting for these small-scale variations will help increase the accuracy of the downscaled SSTs. Therefore, in the second stage, we propose a further step to model the remaining small-scale variations.

### Stage 1: Estimating the trend component

The deterministic procedure for trend estimation consists of three steps. The first step is to subtract the model climatology \(\{\bar{W}_{i}(A_{k})\}\), defined as

\[\bar{W}_{i}(A_{k}) = \frac{\sum_{t=1}^{T_{o_{i}}} W_{it}(A_{k})}{T_{o_{i}}} \tag{2}\]

from the model data. Then the resulting time series are interpolated from model pixels to MUR pixels using bivariate interpolation.
Finally, the trend is estimated by adding the interpolated values to the MUR climatology \\(\\{\\bar{y}_{i}(s_{n})\\}\\), which is defined as \\[\\bar{y}_{i}(s_{n})=\\frac{\\sum_{t=1}^{T_{o_{i}}}y_{it}(s_{n})}{T_{o_{i}}} \\tag{3}\\] ### Stage 2: Model for small-scale variations In stage 2, we propose a further step to capture the remaining small-scale variations using BGL. We begin by assuming that the vector \\(\\underline{e}(s_{n})=\\left(e_{1}(s_{n}),e_{2}(s_{n})\\right)^{T}\\), where \\(e_{1}(s_{n})=w_{it}(s_{n})-\\frac{\\sum_{t=1}^{T_{o_{i}}}w_{it}(s_{n})}{T_{o_{i}}}\\) and \\(e_{2}(s_{n})=y_{it}(s_{n})-\\hat{\\mu}_{it}(s_{n})\\) for fixed \\(i,t\\) and any \\(s_{n}\\in\\mathcal{D}\\), follows a bivariate Gaussian process. We further assume that this vector \\(\\underline{e}(s_{n})\\) can be additively modelled using a spatially correlated stochastic process \\(\\underline{u}(s_{n})\\) and a white noise process \\(\\underline{\\delta}(s_{n})\\), i.e., \\[\\underline{e}(s_{n})=\\underline{u}(s_{n})+\\underline{\\delta}(s_{n}) \\tag{4}\\] where \\(\\underline{u}(s_{n})=\\left(u_{1}(s_{n}),u_{2}(s_{n})\\right)^{T}\\) and \\(\\underline{\\delta}(s_{n})=\\left(\\delta_{1}(s_{n}),\\delta_{2}(s_{n})\\right)^{T}\\) is a mean-zero white noise process with \\(Var\\big{(}\\underline{\\delta}(s_{n})\\big{)}=diag(\\tau_{1}^{2},\\tau_{2}^{2})\\). The BGL idea in Krock et al. (2021) then relies on a basis expansion of the components of the spatially correlated stochastic process. We use empirical orthogonal functions (EOFs) as the basis functions. In the current study, we only have a limited number of observational months. Therefore we combine months into seasons and fit four different BGL models, one for each season. The total number of available EOFs is \\(T=T_{o_{1}}+T_{o_{2}}+T_{o_{3}}\\). Then the spatially correlated stochastic process \\(u_{j}(s_{n})\\) is further expressed as \\[u_{j}(s_{n})=\\sum_{l=1}^{L}\\omega_{jl}\\phi_{l}(s_{n})+\\sum_{l=1}^{T-L}\\nu_{jl}\\psi_{l}(s_{n}) \\tag{5}\\] where \\(\\phi_{l}(s_{n})\\) and \\(\\psi_{l}(s_{n})\\) are basis functions and \\(\\omega_{jl}\\) and \\(\\nu_{jl}\\) are the respective coefficients. Here we assume that the \\(\\sum_{l=1}^{L}\\omega_{jl}\\phi_{l}(s_{n})\\) term is stochastic and the \\(\\sum_{l=1}^{T-L}\\nu_{jl}\\psi_{l}(s_{n})\\) term is deterministic. We follow the methodology presented in Huang and Cressie (2000) to separate the basis functions into deterministic and stochastic terms. We use ordinary least squares estimates for the deterministic coefficients, and for the stochastic coefficients we further assume \\[\\underline{\\omega}_{l}=\\Big{(}\\omega_{1l},\\omega_{2l}\\Big{)}^{T}\\sim N\\Big{(}0, \\mathbf{Q}_{l}^{-1}\\Big{)} \\tag{6}\\] and wish to obtain a sparse non-parametric estimate from BGL for the precision matrix \\(\\mathbf{Q}\\) of the vector of all the stochastic coefficients \\(\\underline{\\omega}=(\\underline{\\omega}_{1},..,\\underline{\\omega}_{L})^{T}\\), assuming \\(\\mathbf{Q}=diag(\\mathbf{Q}_{1},..,\\mathbf{Q}_{L})\\). This is achieved by first defining a new vector \\(\\boldsymbol{\\xi}=(\\underline{e}(s_{1})^{T},..,\\underline{e}(s_{n})^{T})^{T}\\) for fixed \\(i,t\\), which lists all the \\(\\underline{e}(s_{n})\\) vectors over the spatial domain \\(\\mathcal{D}\\). 
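As an implementation note for the basis construction above, the EOFs can be obtained from a singular value decomposition of the available residual fields. The following is a minimal sketch: the split between stochastic and deterministic terms follows the deterministic/stochastic separation of Huang and Cressie (2000) in spirit, but our assignment of the \\(L\\) leading modes to the stochastic term, and all names, are simplifying assumptions of this sketch.

```python
import numpy as np

def eof_basis(E, L):
    """Split an EOF basis into stochastic and deterministic parts.

    E : (T, n) matrix of residual fields (one row per available month)
    Returns phi (n, L), the modes used for the stochastic term in Eq. (5),
    and psi (n, T-L), the remaining modes used for the deterministic term.
    """
    Ec = E - E.mean(axis=0)                        # centre over time
    _, _, Vt = np.linalg.svd(Ec, full_matrices=False)
    eofs = Vt.T                                    # (n, T) spatial modes
    return eofs[:, :L], eofs[:, L:]

def ols_deterministic(psi, e):
    """Ordinary least squares estimate of the deterministic coefficients nu."""
    nu, *_ = np.linalg.lstsq(psi, e, rcond=None)
    return nu
```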
Here, let \\(\\Sigma=Var(\\boldsymbol{\\xi})\\); then, assuming each \\(\\boldsymbol{\\xi}_{it}\\) is a different realization of \\(\\boldsymbol{\\xi}\\), we see that the joint negative log-likelihood is proportional to \\[log(det(\\Sigma))+\\frac{\\sum_{i=1}^{3}\\sum_{t=1}^{T_{o_{i}}}\\boldsymbol{\\xi}_ {it}^{T}\\Sigma^{-1}\\boldsymbol{\\xi}_{it}}{T_{o_{1}}+T_{o_{2}}+T_{o_{3}}}=log( det(\\Sigma))+tr(\\mathbf{S}\\Sigma^{-1}) \\tag{7}\\] where \\(\\mathbf{S}=\\frac{\\sum_{i=1}^{3}\\sum_{t=1}^{T_{o_{i}}}\\boldsymbol{\\xi}_{it}\\boldsymbol{\\xi}_{it}^{T}}{T_{o_{1}}+T_{o_{2}}+T_{o_{3}}}\\). Note that here \\(\\Sigma=\\boldsymbol{\\phi}\\mathbf{Q}^{-1}\\boldsymbol{\\phi}^{T}+\\mathbf{D}\\), where \\(\\boldsymbol{\\phi}\\) is the basis matrix and \\(\\mathbf{D}=diag(\\tau_{1}^{2},\\tau_{2}^{2})\\otimes I_{n}\\). Then BGL solves the \\(l_{1}\\)-penalized maximum likelihood problem \\[\\hat{\\mathbf{Q}}\\in\\underset{\\mathbf{Q}\\succeq 0}{\\arg\\min}\\,log(det( \\boldsymbol{\\phi}\\mathbf{Q}^{-1}\\boldsymbol{\\phi}^{T}+\\mathbf{D}))+tr(\\mathbf{ S}(\\boldsymbol{\\phi}\\mathbf{Q}^{-1}\\boldsymbol{\\phi}^{T}+\\mathbf{D})^{-1})+P( \\mathbf{Q}) \\tag{8}\\] where \\[P(\\mathbf{Q})=P(\\mathbf{Q}_{1},..,\\mathbf{Q}_{L})=\\lambda\\sum_{l=1}^{L}\\sum_{ i\\neq j}|(\\mathbf{Q}_{l})_{ij}|+\\rho\\sum_{l=1}^{L-1}\\sum_{i\\neq j}|(\\mathbf{Q}_{l}) _{ij}-(\\mathbf{Q}_{l+1})_{ij}|\\] to obtain an estimate of the precision matrix \\(\\mathbf{Q}\\). Here, \\(\\lambda\\) is a sparsity-inducing penalty and \\(\\rho\\) is a fusion penalty which penalizes the \\(\\mathbf{Q}_{l}\\) matrices at adjacent levels if their off-diagonals are not similar. Notice that evaluation of this likelihood requires an expensive Cholesky decomposition (\\(\\Theta(p^{3}n^{3})\\)). Using the Sherman-Morrison-Woodbury formula, the likelihood in expression 8 can be re-written, up to terms independent of \\(\\mathbf{Q}\\), reducing the likelihood evaluation to \\(\\Theta(p^{3}L^{3})\\): \\[log(det(\\mathbf{Q}+\\boldsymbol{\\phi}^{T}\\mathbf{D}^{-1}\\boldsymbol {\\phi}))-log(det(\\mathbf{Q}))-\\] \\[tr(\\boldsymbol{\\phi}^{T}\\mathbf{D}^{-1}\\mathbf{S}\\mathbf{D}^{-1} \\boldsymbol{\\phi}(\\mathbf{Q}+\\boldsymbol{\\phi}^{T}\\mathbf{D}^{-1}\\boldsymbol{ \\phi})^{-1})+P(\\mathbf{Q}) \\tag{9}\\] The block-diagonal structure of \\(\\mathbf{Q}\\) further reduces matrix computations of size \\(pL\\times pL\\) to \\(L\\) computations with \\(p\\times p\\) matrices. However, the likelihood in expression 9 is still non-smooth and non-convex with respect to \\(\\mathbf{Q}\\). Thus, the authors use a difference-of-convex (DC) algorithm, where the next guess of \\(\\mathbf{Q}\\) is obtained by solving a convex optimization problem with the concave part linearized at the previous guess \\(\\mathbf{Q}^{(j)}\\) (Krock et al., 2021). Now recall that \\(e_{1}(s_{n})=w_{it}(s_{n})-\\frac{\\sum_{i=1}^{3}\\sum_{t=1}^{T_{o_{i}}}w_{it}(s_{n} )}{T_{o_{1}}+T_{o_{2}}+T_{o_{3}}}\\) is available for \\(t=1,\\ldots,T_{i}(>T_{o_{i}})\\), but \\(e_{2}(s_{n})=y_{it}(s_{n})-\\hat{\\mu}_{it}(s_{n})\\) is only available for \\(t=1,\\ldots,T_{o_{i}}\\). With a simple re-arrangement of the vector of residuals, we can re-write the vector \\(\\boldsymbol{\\xi}\\) as follows. 
\\[\\boldsymbol{\\xi}=\\begin{bmatrix}\\boldsymbol{\\xi}_{1}\\\\ \\boldsymbol{\\xi}_{2}\\end{bmatrix}=\\begin{bmatrix}\\phi\\boldsymbol{\\omega}_{1} \\\\ \\phi\\boldsymbol{\\omega}_{2}\\end{bmatrix}+\\begin{bmatrix}\\delta_{1}\\\\ \\delta_{2}\\end{bmatrix}=\\begin{bmatrix}\\phi A_{1}\\boldsymbol{\\omega}\\\\ \\phi A_{2}\\boldsymbol{\\omega}\\end{bmatrix}+\\begin{bmatrix}\\delta_{1}\\\\ \\delta_{2}\\end{bmatrix}\\] where \\(\\boldsymbol{\\xi}_{1}=\\left(e_{1}(s_{1}),\\ldots,e_{1}(s_{n})\\right)^{T}\\), \\(\\boldsymbol{\\xi}_{2}=\\left(e_{2}(s_{1}),\\ldots,e_{2}(s_{n})\\right)^{T}\\), \\(\\boldsymbol{\\omega}_{1}=A_{1}\\boldsymbol{\\omega}=\\left(\\omega_{11},\\ldots,\\omega_{1L} \\right)^{T}\\), \\(\\boldsymbol{\\omega}_{2}=A_{2}\\boldsymbol{\\omega}=\\left(\\omega_{21},\\ldots, \\omega_{2L}\\right)^{T}\\), \\(\\delta_{1}=\\left(\\delta_{1}(s_{1}),\\ldots,\\delta_{1}(s_{n})\\right)^{T}\\) and \\(\\delta_{2}=\\left(\\delta_{2}(s_{1}),\\ldots,\\delta_{2}(s_{n})\\right)^{T}\\). Then, for a future month \\(t>T_{o_{i}}\\), our goal is to obtain \\(Exp\\big{[}\\boldsymbol{\\omega}_{2}|\\boldsymbol{\\xi}_{1}\\big{]}\\) as the predicted residuals. Given \\(\\boldsymbol{\\xi}_{1}\\), we first obtain generalized least squares estimates for \\(\\boldsymbol{\\omega}_{1}\\), \\[\\hat{\\boldsymbol{\\omega}}_{1_{GLS}}=\\left(\\phi^{T}\\Sigma_{1}^{-1}\\phi\\right)^ {-1}\\phi^{T}\\Sigma_{1}^{-1}\\boldsymbol{\\xi}_{1} \\tag{10}\\] where \\(\\Sigma_{1}=Var(\\boldsymbol{\\xi}_{1})=\\phi A_{1}\\mathbf{Q}^{-1}A_{1}^{T}\\phi^ {T}+\\mathbf{D}_{1}\\) and \\(\\mathbf{D}_{1}=\\tau_{1}^{2}I_{n}\\). Recall from equation 6 that we assume a bivariate Normal distribution for the vector of stochastic coefficients \\(\\left(\\omega_{1l},\\omega_{2l}\\right)^{T}\\) at each level \\(l\\). Thus, using the law of total expectation, we can obtain the predicted vector of stochastic coefficients \\(\\hat{\\boldsymbol{\\omega}}_{2}\\) as \\[\\hat{\\boldsymbol{\\omega}}_{2}=Exp\\Big{[}\\boldsymbol{\\omega}_{2}|\\boldsymbol{ \\xi}_{1}\\Big{]}=Exp\\Big{[}Exp\\Big{[}\\boldsymbol{\\omega}_{2}|\\boldsymbol{\\xi} _{1},\\boldsymbol{\\omega}_{1}\\Big{]}|\\boldsymbol{\\xi}_{1}\\Big{]} \\tag{11}\\] with conditional variance \\[Var\\Big{[}\\boldsymbol{\\omega}_{2}|\\boldsymbol{\\xi}_{1}\\Big{]}=Exp\\Big{[}Var \\Big{[}\\boldsymbol{\\omega}_{2}|\\boldsymbol{\\xi}_{1},\\boldsymbol{\\omega}_{1} \\Big{]}|\\boldsymbol{\\xi}_{1}\\Big{]}+Var\\Big{[}Exp\\Big{[}\\boldsymbol{\\omega}_{2 }|\\boldsymbol{\\xi}_{1},\\boldsymbol{\\omega}_{1}\\Big{]}|\\boldsymbol{\\xi}_{1} \\Big{]} \\tag{12}\\] Finally, we obtain the vector of downscaled SSTs for a future month \\(t>T_{o_{i}}\\) as \\[\\hat{\\underline{y}}=\\hat{\\underline{\\mu}}+\\hat{\\boldsymbol{\\xi}}_{2}\\] where \\(\\hat{\\underline{y}}=\\left(\\hat{y}(s_{1}),\\ldots,\\hat{y}(s_{n})\\right)^{T}\\), \\(\\hat{\\underline{\\mu}}=\\left(\\hat{\\mu}(s_{1}),\\ldots,\\hat{\\mu}(s_{n})\\right)^{T}\\) and \\(\\hat{\\boldsymbol{\\xi}}_{2}=\\phi\\hat{\\boldsymbol{\\omega}}_{2}\\). ## 4 Results We use the mean squared error (MSE) to assess prediction accuracy over time as well as over space. We also use the Structural Similarity Index Measure (SSIM) to measure the similarity between observational MUR SST maps and the downscaled SST maps (Wang et al., 2004). 
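Before turning to the comparisons, the prediction step can be sketched compactly. Under the joint-Gaussian assumptions of the model, the two-step route in equations (10)-(12) targets the conditional mean and variance of \\(\\boldsymbol{\\omega}_{2}\\) given \\(\\boldsymbol{\\xi}_{1}\\), which the following sketch computes by direct Gaussian conditioning; the dense linear algebra is written out for clarity on a tractable subset of pixels, and all function and variable names are our own.

```python
import numpy as np

def predict_omega2(xi1, phi, Q_blocks, tau1_sq):
    """Conditional mean and covariance of omega_2 given xi_1.

    xi1      : (n,) residual field e_1 for the target month
    phi      : (n, L) stochastic basis (EOFs)
    Q_blocks : list of L 2x2 precision matrices Q_l estimated by BGL
    tau1_sq  : nugget variance tau_1^2 of the e_1 component
    Dense algebra is shown for clarity; for the full 309,700-pixel problem
    one would exploit the Woodbury identity as in the text.
    """
    S = np.array([np.linalg.inv(Q) for Q in Q_blocks])     # (L, 2, 2) covariances
    s11, s12, s22 = S[:, 0, 0], S[:, 0, 1], S[:, 1, 1]

    # Var(xi_1) = phi diag(s11) phi^T + tau_1^2 I
    Sigma1 = (phi * s11) @ phi.T + tau1_sq * np.eye(len(xi1))
    alpha = np.linalg.solve(Sigma1, xi1)

    mean = s12 * (phi.T @ alpha)                           # E[omega_2 | xi_1]
    PtSP = phi.T @ np.linalg.solve(Sigma1, phi)            # (L, L)
    cov = np.diag(s22) - (s12[:, None] * PtSP) * s12[None, :]
    return mean, cov

# Downscaled SST for the future month: y_hat = mu_hat + phi @ mean,
# with a pointwise predictive variance read off diag(phi @ cov @ phi.T).
```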
We compare the performance of our method with two previous SD methods: the interpolation-based standard statistical downscaling method used by, e.g., Van Hooidonk et al. (2015), and the two-stage method of Poggio and Gimona (2015), but replacing the GAM-estimated trend with our trend from Stage 1 and replacing the kriging step with a local approximate Gaussian Process (laGP) (Gramacy, 2016). The three input variables used were longitude, latitude and the difference between the interpolated GCMs and the trend, i.e., \\(\\mathbf{w}-\\hat{\\boldsymbol{\\mu}}\\), where \\(\\mathbf{w}=\\Big{(}\\mathbf{w}_{11},\\ldots,\\mathbf{w}_{1T_{o_{1}}},\\mathbf{w}_ {21},\\ldots,\\mathbf{w}_{2T_{o_{2}}},\\mathbf{w}_{31},\\ldots,\\mathbf{w}_{3T_{o_{ 3}}}\\Big{)}\\) and \\(\\mathbf{w}_{it}=\\Big{(}w_{it}(s_{1}),\\ldots,w_{it}(s_{n})\\Big{)}^{T}\\). ### Validation with MUR data We left out the years 2018 to 2020 and performed downscaling using MUR data from June 2002 to December 2017. Table 1 compares averaged MSE values and averaged SSIM values. From the table we see that, overall, the BGL method has the lowest MSE, and the percentage reduction is significant when it is compared to the standard downscaling method and the laGP method. From Figure 1, notice that there is a drastic improvement in MSE, especially along the coastline. Figure 2 further confirms that BGL has reduced the MSE across the region. We list averaged SSIM values calculated by zooming into the regular grid shown in Figure 3. As our study region has an irregular shape, often with an empty background, we zoomed into a regular grid to calculate SSIM in a meaningful manner. We use SSIM to measure the structural similarity between observational MUR SST maps and the downscaled SST maps. A number close to 1 indicates a greater similarity. From the table, notice that our BGL method has the highest averaged SSIM values. Notice from the figure that laGP introduces an unusual instability to the SST process, possibly due to its local structure. \\begin{table} \\begin{tabular}{|l|l|l|l|l|} \\hline \\multicolumn{5}{|c|}{MSE} \\\\ \\hline Season & GCM & Standard & laGP & BGL \\\\ \\hline Summer & 0.356 & 0.310 & 0.317 & 0.277 \\\\ Autumn & 0.472 & 0.105 & 0.108 & 0.093 \\\\ Winter & 0.396 & 0.305 & 0.307 & 0.288 \\\\ Spring & 0.380 & 0.212 & 0.221 & 0.200 \\\\ \\hline **Overall** & 0.401 & 0.233 & 0.238 & 0.214 \\\\ \\hline \\end{tabular} \\begin{tabular}{|l|l|l|l|l|} \\hline \\multicolumn{5}{|c|}{SSIM} \\\\ \\hline Season & GCM & Standard & laGP & BGL \\\\ \\hline Summer & 0.680 & 0.765 & 0.739 & 0.747 \\\\ Autumn & 0.551 & 0.842 & 0.825 & 0.872 \\\\ Winter & 0.626 & 0.888 & 0.905 & 0.943 \\\\ Spring & 0.646 & 0.767 & 0.743 & 0.875 \\\\ \\hline **Overall** & 0.626 & 0.815 & 0.803 & 0.859 \\\\ \\hline \\end{tabular} \\end{table} Table 1: MSE and SSIM averaged over the years 2018 to 2020, separated by seasons. Figure 1: Comparison of MSE maps under SSP126. Notice the improvement in the BGL method along the coastline. ### Future projections In Figure 4 we present a comparison of downscaled SSTs for January 2023 with January 2099, along with the estimated uncertainty from our proposed BGL method. In standard downscaling, the usual practice to obtain an uncertainty measure is to estimate the standard error using the MSE, which is constant over time. In contrast, our proposed method provides a probabilistic uncertainty which is different for each month. Figure 2: MSE ratio maps. In general, BGL has the lowest MSE across the region. 
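For completeness, the two validation metrics reported in Table 1 are straightforward to compute. The following is a minimal sketch, assuming the downscaled and observational maps are stored as NaN-free 2-D arrays on the zoomed regular grid; the function name and the use of `skimage.metrics.structural_similarity` for SSIM (Wang et al., 2004) are our own choices.

```python
import numpy as np
from skimage.metrics import structural_similarity

def validation_metrics(y_hat, y_obs):
    """MSE over all pixels/months and mean SSIM over the monthly maps.

    y_hat, y_obs : (T, H, W) downscaled and observational MUR SST maps
                   on the zoomed regular grid.
    """
    mse = float(np.mean((y_hat - y_obs) ** 2))
    drange = float(y_obs.max() - y_obs.min())
    ssim = float(np.mean([
        structural_similarity(y_obs[t], y_hat[t], data_range=drange)
        for t in range(y_obs.shape[0])]))
    return mse, ssim
```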
Fig. 3: SST maps for January 2020 zoomed into a regular grid in the study region. Compare the structural similarity between the downscaled SST maps and the observational MUR SST map. Figure 4: Downscaled SSTs with uncertainty boundaries for January 2023 and January 2099. ## 5 Discussion and conclusion We have presented a novel statistical downscaling method that uses BGL for residual estimation. We have demonstrated our BGL downscaling method in a case study of coral reefs under warming SSTs in the Great Barrier Reef region. The novel BGL downscaling method is computationally tractable for large data sets, provides meaningful uncertainty estimates, and reduced the overall MSE significantly in this case study. Therefore, it is suitable for a wide range of applications in Earth science and other fields, e.g. for accomplishing the statistical downscaling of coarse-scale global climate model projections using fine-scale observational data. A hybrid dynamical-statistical downscaling framework could also be developed that includes global climate model output, regional climate model output, and observations in the downscaling (Walton et al., 2015). However, this would require a more complicated statistical model to jointly model the three data resources and could be the subject of future work. A possible extension is to generalize the current model to the framework of autoregressive co-kriging for multi-fidelity model output and then consider the observations, regional climate model output, and global climate model output as the high-, medium-, and low-fidelity data, respectively. ## 6 Acknowledgments Research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). Financial and in-kind support for this project was provided by the NASA ROSES Sustaining Living Systems in a Time of Climate Variability and Change program, grant number 281945.02.03.09.34, and the University of Cincinnati. We acknowledge the World Climate Research Program's Working Group on Coupled Modelling, which is responsible for CMIP, and we thank the climate modeling groups for producing and making available their model output. The contents in this manuscript are solely the opinions of the authors and do not constitute a statement of policy, decision or position on behalf of NASA, the Jet Propulsion Laboratory, or the US Government. The authors thank Alex Goodman for developing the Big Climate Data Project (BCDP), and Robert Gramacy for helpful discussion. ## References * Ahmed et al. (2013) Ahmed, K. F., Wang, G., Silander, J., Wilson, A. M., Allen, J. M., Horton, R., and Anyah, R. (2013). Statistical downscaling and bias correction of climate model outputs for climate change impact assessment in the U.S. Northeast. _Global and Planetary Change_, 100:320-332. * Beecham et al. (2014) Beecham, S., Rashid, M., and Chowdhury, R. K. (2014). Statistical downscaling of multi-site daily rainfall in a South Australian catchment using a generalized linear model. _International Journal of Climatology_, 34(14):3654-3670. * Berrocal et al. (2010) Berrocal, V. J., Gelfand, A. E., and Holland, D. M. (2010). A spatio-temporal downscaler for output from numerical models. _Journal of Agricultural, Biological, and Environmental Statistics_, 15(2):176-197. * Carilli et al. (2009) Carilli, J. E., Norris, R. D., Black, B. A., Walsh, S. M., and McField, M. (2009). Local stressors reduce coral resilience to bleaching. _PLOS ONE_, 4(7):1-5. 
* Gaitan et al. (2019) Gaitan, E., Monjo, R., Portoles, J., and Pino-Otin, M. R. (2019). Projection of temperatures and heat and cold waves for Aragon (Spain) using a two-step statistical downscaling of CMIP5 model outputs. _Science of The Total Environment_, 650:2778-2795. * Gattuso et al. (2015) Gattuso, J.-P., Magnan, A., Bille, R., Cheung, W. W., Howes, E. L., Joos, F., Allemand, D., Bopp, L., Cooley, S. R., Eakin, C. M., et al. (2015). Contrasting futures for ocean and society from different anthropogenic CO2 emissions scenarios. _Science_, 349(6243). * Gramacy (2016) Gramacy, R. (2016). laGP: Large-scale spatial modeling via local approximate Gaussian processes in R. _Journal of Statistical Software_, 72(1):1-46. * Gramacy and Apley (2015) Gramacy, R. B. and Apley, D. W. (2015). Local Gaussian process approximation for large computer experiments. _Journal of Computational and Graphical Statistics_, 24(2):561-578. * Hashmi et al. (2009) Hashmi, M. Z., Shamseldin, A. Y., and Melville, B. W. (2009). Statistical downscaling of precipitation: state-of-the-art and application of a Bayesian multi-model approach for uncertainty assessment. _Hydrology and Earth System Sciences Discussions_, 6:6535-6579. * Heron et al. (2016) Heron, S. F., Maynard, J. A., Van Hooidonk, R., and Eakin, C. M. (2016). Warming trends and bleaching stress of the world's coral reefs 1985-2012. _Scientific Reports_, 6:38402. * Hoegh-Guldberg (1999) Hoegh-Guldberg, O. (1999). Climate change, coral bleaching and the future of the world's coral reefs. _Marine and Freshwater Research_, 50(8):839-866. * Hoegh-Guldberg et al. (2007) Hoegh-Guldberg, O., Mumby, P. J., Hooten, A. J., Steneck, R. S., Greenfield, P., Gomez, E., Harvell, C. D., Sale, P. F., Edwards, A. J., Caldeira, K., Knowlton, N., Eakin, C. M., Iglesias-Prieto, R., Muthiga, N., Bradbury, R. H., Dubi, A., and Hatziolos, M. E. (2007). Coral reefs under rapid climate change and ocean acidification. _Science_, 318(5857):1737-1742. * Huang and Cressie (2000) Huang, H.-C. and Cressie, N. (2000). Deterministic/stochastic wavelet decomposition for recovery of signal from noisy data. _Technometrics_, 42(3):262-276. * Hughes et al. (2003) Hughes, T. P., Baird, A. H., Bellwood, D. R., Card, M., Connolly, S. R., Folke, C., Grosberg, R., Hoegh-Guldberg, O., Jackson, J. B. C., Kleypas, J., Lough, J. M., Marshall, P., Nystrom, M., Palumbi, S. R., Pandolfi, J. M., Rosen, B., and Roughgarden, J. (2003). Climate change, human impacts, and the resilience of coral reefs. _Science_, 301(5635):929-933. * Hughes et al. (2018) Hughes, T. P., Kerry, J. T., Baird, A. H., Connolly, S. R., Dietzel, A., Eakin, C. M., Heron, S. F., Hoey, A. S., Hoogenboom, M. O., Liu, G., et al. (2018). Global warming transforms coral reef assemblages. _Nature_, 556(7702):492. * JPL MUR MEaSUREs Project (2015) JPL MUR MEaSUREs Project (2015). GHRSST Level 4 MUR global foundation sea surface temperature analysis (v4.1). * Krock et al. (2021) Krock, M., Kleiber, W., Hammerling, D., and Becker, S. (2021). Modeling massive multivariate spatial data with the basis graphical lasso. * Masson-Delmotte et al. (2018) Masson-Delmotte, V. et al. (2018). Summary for policymakers. In _Global Warming of 1.5°C. 
An IPCC Special Report on the impacts of global warming of 1.5°C above preindustrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty_. World Meteorological Organization, Geneva, Switzerland. * Mishra et al. (2014) Mishra, P., Khare, D., Mondal, A., and Kundu, S. (2014). Multiple linear regression based statistical downscaling of daily precipitation in a canal command. * O'Neill et al. (2014) O'Neill, B. C., Kriegler, E., Riahi, K., Ebi, K. L., Hallegatte, S., Carter, T. R., Mathur, R., and van Vuuren, D. P. (2014). A new scenario framework for climate change research: the concept of shared socioeconomic pathways. _Climatic Change_, 122(3):387-400. * Poggio and Gimona (2015) Poggio, L. and Gimona, A. (2015). Downscaling and correction of regional climate models outputs with a hybrid geostatistical approach. _Spatial Statistics_, 14:4-21. * Sachindra et al. (2014) Sachindra, D. A., Huang, F., Barton, A., and Perera, B. J. C. (2014). Statistical downscaling of general circulation model outputs to precipitation - part 1: calibration and validation. _International Journal of Climatology_, 34(11):3264-3281. * Schmidli et al. (2006) Schmidli, J., Frei, C., and Vidale, P. L. (2006). Downscaling from GCM precipitation: a benchmark for dynamical and statistical downscaling methods. _International Journal of Climatology_, 26(5):679-689. * Soares et al. (2019) Soares, P. M. M., Maraun, D., Brands, S., Jury, M. W., Gutierrez, J. M., San-Martin, D., Hertig, E., Huth, R., Belusic Vozila, A., Cardoso, R. M., Kotlarski, S., Drobinski, P., and Obermann-Hellhund, A. (2019). Process-based evaluation of the VALUE perfect predictor experiment of statistical downscaling methods. _International Journal of Climatology_, 39(9):3868-3893. * Timm et al. (2015) Timm, O. E., Giambelluca, T. W., and Diaz, H. F. (2015). Statistical downscaling of rainfall changes in Hawai'i based on the CMIP5 global model projections. _Journal of Geophysical Research: Atmospheres_, 120(1):92-112. * Van Hooidonk et al. (2013) Van Hooidonk, R., Maynard, J., and Planes, S. (2013). Temporary refugia for coral reefs in a warming world. _Nature Climate Change_, 3(5):508-511. * Van Hooidonk et al. (2015) Van Hooidonk, R., Maynard, J. A., Liu, Y., and Lee, S.-K. (2015). Downscaled projections of Caribbean coral bleaching that can inform conservation planning. _Global Change Biology_, 21(9):3389-3401. * Van Hooidonk et al. (2016) Van Hooidonk, R., Maynard, J., Tamelander, J., Gove, J., Ahmadia, G., Raymundo, L., Williams, G., Heron, S. F., and Planes, S. (2016). Local-scale projections of coral reef futures and implications of the Paris Agreement. _Scientific Reports_, 6(1):1-8. * Walton et al. (2015) Walton, D. B., Sun, F., Hall, A., and Capps, S. (2015). A hybrid dynamical-statistical downscaling technique. Part I: Development and validation of the technique. _Journal of Climate_, 28(12):4597-4617. * Wang et al. (2004) Wang, Z., Bovik, A., Sheikh, H., and Simoncelli, E. (2004). Image quality assessment: from error visibility to structural similarity. _IEEE Transactions on Image Processing_, 13(4):600-612.
We describe an improved statistical downscaling method for Earth science applications using multivariate Basis Graphical Lasso (BGL). We demonstrate our method using a case study of sea surface temperature (SST) projections from CMIP6 Earth system models, which has direct applications for studies of multi-decadal projections of coral reef bleaching. We find that the BGL downscaling method is computationally tractable for large data sets, and that mean squared predictive error is roughly \\(8\\%\\) lower than the current state-of-the-art interpolation-based statistical downscaling method. Finally, unlike most of the currently available methods, BGL downscaling produces uncertainty estimates. Our novel method can be applied to any model output variable for which corresponding higher-resolution observational data is available.
# Preliminary EoS for core-collapse supernova simulations with the QMC model Guilherme Grams Alexandre M. Santos Debora P. Menezes Universidade Federal de Santa Catarina, Brazil ## I Introduction Although the theory related to supernovae (SNe) has made remarkable progress in the past decade, many questions remain to be answered. The catastrophic infall of the core of a massive star, reversed to trigger the powerful ejection of the stellar mantle and envelope in a supernova explosion, was identified as playing a crucial role in the synthesis of heavy elements. But when, why and how it happens is a fundamental problem of stellar astrophysics that remains to be explained. The implosion of stellar cores was also proposed as part of the scenario of stellar death [1; 2]. There is no doubt that supernova explosions are a unique phenomenon in nature and an excellent laboratory to test physics under extreme conditions. Unfortunately, SNe tend to happen only about once per century per galaxy, which makes the study of these _great labs_ very difficult. Due to this observational difficulty, simulations of core-collapse supernovae have played an important role in the study of supernova explosions and their possible remnants. The nuclear equation of state is a fundamental ingredient in the simulation of SN explosions. Simulations of core-collapse supernovae have to be fed with a wide range of thermodynamic conditions. Moreover, extremely high densities and temperatures may be achieved when black holes are formed by failed supernovae. To date, the temperature is believed to vary from zero to more than 100 MeV, densities from \\(10^{5}\\) to more than \\(10^{15}\\) g.cm\\({}^{-3}\\), and the proton fraction up to about 0.6. Fulfilling these conditions makes the construction of a complete EoS a very hard task, mainly at low densities, where a variety of sub-structures and light clusterization are possible. For these reasons, there are only a few complete EoS available in the literature. The most commonly used EoS are those of Lattimer and Swesty [3], Shen [4; 5] and Hempel [6]. The Lattimer [3] EoS is based on a compressible liquid drop model with a Skyrme force for the nucleon interactions. The Shen EoS from 1998 [4] was the first equation of state for supernova simulations using a relativistic nuclear model. The upgrade to Shen's work was published in 2011 [5], with more points in the table and the inclusion of the \\(\\Lambda\\) hyperons. Both works developed by Shen were constructed with the relativistic mean field Walecka model [7] using the TM1 [8] parameterization. Hempel's EoS [6] is based on the TM1, TMA [9] and FSUgold [10] parameterizations of the Walecka model and uses the nuclear statistical equilibrium model of Hempel and Schaffner-Bielich [11], which includes excluded volume effects. An interesting analysis of the different parameterizations of relativistic-mean-field (RMF) models was made by Dutra et al. [12], where the authors analyzed 263 different RMF models under several constraints related to symmetric nuclear matter (SNM) and pure neutron matter (PNM). The TM1 parameterization used by Shen failed under six of these constraints, the TMA used by Hempel failed in five, and the FSUgold, also used by Hempel, failed in one constraint. More details of this analysis can be found in [12]. The works of Hempel, Lattimer and Shen were successful and very useful in many calculations in the last decade [13; 14; 15; 16; 17; 18], but there are some simulations of SN in which the supernova does not explode [19]. 
It is believed that these failures are due to some problems with the nuclear EoS. In this work we present our preliminary results for the construction of an EoS grid for core-collapse supernova simulations with the quark-meson coupling (QMC) model [20]. In the QMC model, nuclear matter is described as a system of non-overlapping MIT bags [21] that interact through the exchange of scalar and vector meson fields. Many applications and extensions of the model have been made in the last years [22; 23; 24; 25; 26]. It is found that the EoS for infinite nuclear matter at zero temperature derived from the QMC model is softer than the one obtained with the Walecka model. This might be a problem if one wants to describe very massive compact objects [27; 28], but as far as SN simulations are concerned, it is worth testing because, beyond numerical agreement, starting from quark degrees of freedom can be regarded as an advantage for the underlying physical picture. Moreover, the effective nucleon mass obtained with the QMC model lies in the range 0.7-0.8 of the free nucleon mass, which agrees with the results derived from the non-relativistic analysis of the scattering of neutrons from lead nuclei [29] and is larger in comparison with the effective mass obtained with some of the different parameterizations of the Walecka model. A low effective mass at saturation can be a problem when hyperons are included in the calculation, as discussed in the next section. All the few works for the purpose of SN simulations that are based on relativistic models use different parameterizations of the Walecka model, in which the nucleons interact with each other through the exchange of mesons. We believe that, with the quark degrees of freedom present in the QMC model, more fundamental physics lacking in the models already in use can be tested and can possibly help the SN simulations to explode. Another possible use of this preliminary EoS obtained with the QMC model is the study of the cooling of compact stars, which serves as an important window on the properties of super-dense matter and neutron star structure, and is very sensitive to the nuclear equation of state [30; 31; 32; 33]. This paper is structured as follows: in the second section we present a review of the quark-meson coupling model. In the third section we present our results for the EoS. The last section is reserved for the final remarks and conclusions. ## II The quark-meson coupling model In the QMC model, the nucleon in the nuclear medium is assumed to be a static spherical MIT bag in which the quarks interact with the scalar (\\(\\sigma\\)) and vector (\\(\\omega\\), \\(\\rho\\)) fields, and those are treated as classical fields in the mean field approximation (MFA) [20]. The quark field, \\(\\psi_{q_{N}}\\), inside the bag then satisfies the equation of motion: \\[\\left[i\\,\\partial\\!\\!\\!/ - (m_{q}^{0}-g_{\\sigma}^{q}\\,\\sigma)-g_{\\omega}^{q}\\,\\omega\\,\\gamma^{0} + \\frac{1}{2}g_{\\rho}^{q}\\tau_{z}\\rho_{03}\\gamma^{0}\\right]\\psi_{q_{N}}(x)=0\\,\\quad q=u,d \\tag{1}\\] where \\(m_{q}^{0}\\) is the current quark mass, and \\(g_{\\sigma}^{q}\\), \\(g_{\\omega}^{q}\\) and \\(g_{\\rho}^{q}\\) denote the quark-meson coupling constants. 
The normalized ground state for a quark in the bag is given by \\[\\psi_{q_{N}}({\\bf r},t) = {\\cal N}_{q_{N}}\\exp\\left(-i\\epsilon_{q_{N}}t/R_{N}\\right) \\tag{2}\\] \\[\\times \\left(\\begin{array}{c}j_{0_{N}}\\left(x_{q_{N}}r/R_{N}\\right)\\\\ i\\beta_{q_{N}}\\vec{\\sigma}\\cdot\\hat{r}j_{1_{N}}\\left(x_{q_{N}}r/R_{N}\\right) \\end{array}\\right)\\frac{\\chi_{q}}{\\sqrt{4\\pi}}\\,\\] where \\[\\epsilon_{q_{N}}=\\Omega_{q_{N}}+R_{N}\\left(g_{\\omega}^{q}\\,\\omega+\\frac{1}{2} g_{\\rho}^{q}\\tau_{z}\\rho_{03}\\right), \\tag{3}\\] and, \\[\\beta_{q_{N}}=\\sqrt{\\frac{\\Omega_{q_{N}}-R_{N}\\,m_{q}^{*}}{\\Omega_{q_{N}}\\,+ R_{N}\\,m_{q}^{*}}}\\, \\tag{4}\\] with the normalization factor given by \\[{\\cal N}_{q_{N}}^{-2}=2R_{N}^{3}j_{0}^{2}(x_{q})\\left[\\Omega_{q}(\\Omega_{q}-1 )+R_{N}m_{q}^{*}/2\\right]\\Big{/}x_{q}^{2}\\, \\tag{5}\\] where \\(\\Omega_{q_{N}}\\equiv\\sqrt{x_{q_{N}}^{2}+(R_{N}\\,m_{q}^{*})^{2}}\\), \\(m_{q}^{*}=m_{q}^{0}-g_{\\sigma}^{q}\\,\\sigma\\), \\(R_{N}\\) is the bag radius of nucleon \\(N\\) and \\(\\chi_{q}\\) is the quark spinor. The bag eigenvalue for nucleon \\(N\\), \\(x_{q_{N}}\\), is determined by the boundary condition at the bag surface \\[j_{0_{N}}(x_{q_{N}})=\\beta_{q_{N}}\\,j_{1_{N}}(x_{q_{N}}). \\tag{6}\\] The energy of a static bag describing nucleon \\(N\\) consisting of three quarks in ground state is expressed as \\[E_{N}^{\\rm bag}=\\sum_{q}n_{q}\\,\\frac{\\Omega_{q_{N}}}{R_{N}}-\\frac{Z_{N}}{R_{N }}+\\frac{4}{3}\\,\\pi\\,R_{N}^{3}\\,B_{N}\\, \\tag{7}\\] where \\(Z_{N}\\) is a parameter which accounts for zero-point motion of nucleon \\(N\\) and \\(B_{N}\\) is the bag constant. The set of parameters used in the present work is determined by enforcing stability of the nucleon (here, the \"bag\"), much like in [34], so there is a single value for proton and \\begin{table} \\begin{tabular}{l c c c c c c c c c c} \\hline Model & M & \\(m_{q}\\) & \\(m_{\\sigma}\\) & \\(m_{\\omega}\\) & \\(m_{\\rho}\\) & \\(g_{\\rho}\\) & \\(g_{\\omega}\\) & \\(g_{\\sigma}\\) & \\(B_{N}^{1/4}\\) & NLT & DDP \\\\ & (MeV) & (MeV) & (MeV) & (MeV) & (MeV) & & & & (MeV) & & \\\\ \\hline QMC & 939.0 & 5.5 & 550 & 783 & 770 & 8.6510 & 8.9817 & 5.9810\\({}^{*}\\) & 210.85 & no & no \\\\ \\hline NL3 & 939.0 & – & 508.194 & 782.501 & 763 & 8.9480 & 12.868 & 10.217 & – & yes & no \\\\ GM1 & 939.0 & – & 550 & 783 & 770 & 8.1945 & 10.608 & 9.5684 & – & yes & no \\\\ TM1 & 938.0 & – & 511.197 & 783 & 770 & 4.6321 & 12.613 & 10.028 & – & yes & no \\\\ FSUgold & 939.0 & – & 491.5 & 782.5 & 763 & 11.7673 & 14.301 & 10.592 & – & yes & no \\\\ TMA\\({}^{\\dagger}\\) & 938.9 & – & 519.151 & 781.95 & 768.1 & 3.800 & 12.842 & 10.055 & – & yes & no \\\\ TW & 939.0 & – & 550 & 783 & 763 & 7.32196\\({}^{**}\\) & 13.2901\\({}^{**}\\) & 10.728\\({}^{**}\\) & – & no & yes \\\\ \\hline \\end{tabular} \\end{table} Table 1: Parameters used in the QMC model and different Walecka model parameterizations. In the first line we present the parameters used in the present work with the QMC model. The NL3, GM1 and TW parameterizations of the Walecka model are used here for a comparison of bulk matter properties. TM1 is the parameterization used in Shen’s work. The FSUgold and TMA were used in Hempel’s work. \\({}^{*}g_{\\sigma}^{q}\\) is the quark-meson coupling in the QMC model. \\({}^{**}\\) values taken at saturation for the TW parameterization. 
TMA\\({}^{\\dagger}\\): the coupling parameters \\(g_{i}\\) of the set TMA are chosen to be mass-number dependent such that \\(g_{i}=a_{i}+b_{i}/A^{0.4}\\), with \\(a_{i}\\) and \\(b_{i}\\) being constants [9]; for infinite matter, as in stellar matter, one has an infinite nucleus, and then the limit A \\(\\mapsto\\) infinity is taken so that \\(g_{i}=a_{i}\\). NLT = nonlinear terms. DDP = density dependent parameters.) neutron masses. The effective mass of a nucleon bag at rest is taken to be \\(M_{N}^{*}=E_{N}^{\\rm bag}\\). The equilibrium condition for the bag is obtained by minimizing the effective mass \\(M_{N}^{*}\\) with respect to the bag radius, \\[\\frac{d\\,M_{N}^{*}}{d\\,R_{N}}=0,\\ \\ \\ \\ N=p,n. \\tag{8}\\] By fixing the bag radius \\(R_{N}=0.6\\) fm and the bare nucleon mass \\(M=939\\) MeV, the unknowns \\(Z_{N}=4.0050668\\) and \\(B_{N}^{1/4}=210.85\\) MeV are then obtained. Furthermore, the desired values of \\(B/A\\equiv\\epsilon/\\rho-M=-15.7\\) MeV at saturation \\(n=n_{0}=0.15\\) fm\\({}^{-3}\\) are achieved by setting \\(g_{\\sigma}^{q}=5.9810\\) and \\(g_{\\omega}=8.9817\\), where \\(g_{\\omega}=3g_{\\omega}^{q}\\) and \\(g_{\\rho}=3g_{\\rho}^{q}\\) (a short numerical sketch of this bag self-consistency is given below). All the parameters used in this work are shown in Table 1. The total energy density of the nuclear matter reads \\[\\varepsilon=\\frac{1}{2}m_{\\sigma}^{2}\\sigma^{2}+\\frac{1}{2}m_{\\omega}^{2}\\omega_{0}^{2}+\\frac{1}{2}m_{\\rho}^{2}\\rho_{03}^{2}+\\sum_{N}\\frac{1}{\\pi^{2}}\\int_{0}^{k_{N}}k^{2}dk\\,[k^{2}+M_{N}^{*2}]^{1/2}, \\tag{9}\\] and the pressure is \\[p=-\\frac{1}{2}m_{\\sigma}^{2}\\sigma^{2}+\\frac{1}{2}m_{\\omega}^{2}\\omega_{0}^{2}+\\frac{1}{2}m_{\\rho}^{2}\\rho_{03}^{2}+\\frac{1}{3}\\sum_{N}\\frac{1}{\\pi^{2}}\\int_{0}^{k_{N}}k^{4}dk/[k^{2}+M_{N}^{*2}]^{1/2}. \\tag{10}\\] The vector mean fields \\(\\omega_{0}\\) and \\(\\rho_{03}\\) are determined through \\[\\omega_{0}=\\frac{g_{\\omega}(n_{p}+n_{n})}{m_{\\omega}^{2}},\\ \\rho_{03}=\\frac{g_{\\rho}(n_{p}-n_{n})}{m_{\\rho}^{2}}, \\tag{11}\\] where \\[n_{B}=\\sum_{N}\\frac{k_{N}^{3}}{3\\pi^{2}},\\ \\ \\ N=p,n. \\tag{12}\\] is the baryon density. Finally, the mean field \\(\\sigma\\) is fixed by imposing that \\[\\frac{\\partial\\varepsilon}{\\partial\\sigma}=0. \\tag{13}\\] It is always important to check the behavior of the models for symmetric nuclear matter at saturation density and zero temperature, i.e., the bulk nuclear matter properties. A comprehensive work in this direction is [12], but the QMC model was not analyzed there. Therefore, we compare the QMC model with two parameterizations of the well-known Walecka-type models, namely GM1 [35] and NL3 [36], and the density-dependent parameter model TW [37]. 
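Returning briefly to the bag-model inputs above, the self-consistency of equations (6)-(8) can be summarized numerically. The following is a minimal illustration, not the production code of this work: the parameter values are taken from the text, while the unit handling, root-bracketing interval, and function names are our own assumptions.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

HBARC = 197.327  # MeV fm

def j0(x):
    return np.sin(x) / x

def j1(x):
    return np.sin(x) / x**2 - np.cos(x) / x

def bag_eigenvalue(R, m_star):
    """Lowest bag eigenvalue x_q from the boundary condition j0(x) = beta j1(x), Eq. (6)."""
    Rm = R * m_star / HBARC                       # dimensionless R * m_q^*
    def f(x):
        omega = np.sqrt(x**2 + Rm**2)
        beta = np.sqrt((omega - Rm) / (omega + Rm))
        return j0(x) - beta * j1(x)
    return brentq(f, 1e-3, np.pi)                 # bracket around x ~ 2.04

def bag_energy(R, m_star, Z=4.0050668, B14=210.85):
    """Bag energy (MeV) of three quarks with in-medium mass m_star (MeV), Eq. (7)."""
    x = bag_eigenvalue(R, m_star)
    omega = np.sqrt(x**2 + (R * m_star / HBARC)**2)
    B = (B14 / HBARC)**3 * B14                    # bag constant in MeV/fm^3
    return 3 * omega * HBARC / R - Z * HBARC / R + 4.0 / 3.0 * np.pi * R**3 * B

def effective_mass(m_star):
    """Equilibrium radius and M_N^* = min_R E_bag(R), Eq. (8)."""
    res = minimize_scalar(bag_energy, bounds=(0.3, 1.2), args=(m_star,),
                          method='bounded')
    return res.x, res.fun

# Free nucleon (m_q^* = m_q^0 = 5.5 MeV): should reproduce R ~ 0.6 fm and
# M^* ~ 939 MeV, up to small numerical differences.
R0, M0 = effective_mass(5.5)
```

For in-medium matter, \\(m_{q}^{*}=m_{q}^{0}-g_{\\sigma}^{q}\\,\\sigma\\) would be passed in place of the bare quark mass, with \\(\\sigma\\) fixed self-consistently by equation (13).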
In this work we have chosen GM1 for being a parameterization which gives a good value for the effective mass, NL3 because it is a very common standard parameterization, and TW because it is a very good \\begin{table} \\begin{tabular}{l c c c c c} \\hline Model & \\(B/A\\) & \\(n_{0}\\) & \\(M^{*}/M\\) & \\(\\mathcal{E}_{sym}\\) & K \\\\ & (MeV) & (fm\\({}^{-3}\\)) & & (MeV) & (MeV) \\\\ \\hline QMC & -15.7 & 0.150 & 0.77 & 34.5 & 295 \\\\ \\hline NL3 & -16.2 & 0.148 & 0.60 & 37.4 & 272 \\\\ GM1 & -16.3 & 0.153 & 0.70 & 32.5 & 300 \\\\ TM1 & -16.3 & 0.145 & 0.63 & 36.8 & 281 \\\\ FSUgold & -16.3 & 0.148 & 0.62 & 32.6 & 230 \\\\ TMA & -16.0 & 0.147 & 0.63 & 30.7 & 318 \\\\ TW & -16.2 & 0.153 & 0.56 & 32.6 & 240 \\\\ \\hline \\end{tabular} \\end{table} Table 2: Nuclear matter bulk properties obtained with the QMC model, the two Walecka-model parameterizations and the density-dependent model used in this paper, and the parameterizations used in the works of Hempel and Shen. All quantities are taken at saturation. Figure 2: Equation of state of the QMC model and three Walecka model parameterizations. Figure 1: The effective mass of the QMC model and three Walecka parameterizations. density-dependent parameterization according to [12]. In [12], it was found that GM1 failed under six constraints related to symmetric nuclear matter (SNM) and pure neutron matter (PNM), NL3 failed under nine, and TW failed to satisfy only one constraint. In Table 1 we can see the different parameters analyzed here and the ones used in the works of Shen [4; 5] and Hempel [6]. In Table 2 we show the properties of nuclear matter at saturation density and zero temperature. The first column shows the binding energy per nucleon, the second the baryonic density of the nucleons at the saturation point, the third the effective mass relative to the free nucleon mass, and the last two the symmetry energy and the compression modulus of the nucleons. In Fig. 1 we show the effective mass as a function of the baryonic density, where we can see that the effective mass for the QMC model always has a larger value than in the other models, which is an important characteristic if hyperons are to be included. As can be seen in [38], the effective mass of some of the Walecka-type models tends to zero at relatively low densities when all the baryons of the octet are included. For this reason, parameterizations with larger effective masses were proposed by Glendenning [35] (GM1, GL, etc.) with the specific purpose of applications to stellar matter. Note that in both the figure and the table we can see that \\(M^{*}/M=0.77\\) at the saturation density, which agrees with the results derived from the non-relativistic analysis of the scattering of neutrons from lead nuclei [29], and that this result is larger in comparison with the effective mass from many of the Walecka-type models. In Fig. 2 we show the pressure versus energy density relation for infinite nuclear matter at \\(T=0\\), where we see that the curve for QMC is similar to the one for TW, and softer than those for GM1 and NL3. As mentioned in the Introduction, this can be a deficiency for neutron star studies, but the corresponding effects on supernovae have not yet been analyzed. ## III Results and Discussions In this work we construct an EoS table covering a wide range of proton fractions \\(Y_{p}\\) and baryon densities \\(n_{B}\\). We show in the paper some results concerning the properties of matter with the QMC model and indicate the website from where the full table can be downloaded. We show in Fig. 
3 the binding energy of homogeneous nuclear matter at zero temperature as a function of the baryon density for different proton fractions. For pure neutron matter and low proton fractions, there are no bound states, as expected. This result is in good agreement with Walecka [7] and Shen [4]. In Fig. 4 we show the compression modulus versus the baryonic density, where the expression for \\(K\\) is \\[K=9\\ n_{B}^{2}\\frac{d^{2}}{dn_{B}^{2}}\\left(\\frac{\\varepsilon}{n_{B}}\\right), \\tag{14}\\] \\(n_{B}\\) is the baryon number density and \\(\\varepsilon\\) the total energy density of the nucleons. Figure 3: Binding energy of the nucleons as a function of the baryon density with the proton fractions \\(Y_{p}=\\)0, 0.1, 0.3, 0.5 and 0.6. Figure 4: Compression modulus as a function of the baryon density with the proton fractions \\(Y_{p}=\\)0.1, 0.3, 0.5 and 0.6. Figure 5: The relation between pressure and \\(\\rho_{B}\\) for the proton fractions \\(Y_{p}=\\)0.01, 0.1, 0.3, 0.5 and 0.6. Here we see that near the saturation density the compression modulus has the same value independent of the proton fraction and that, as the density increases, the compression modulus becomes different for each \\(Y_{p}\\). In Fig. 5 the pressure \\(p\\) as a function of \\(\\rho_{B}\\) is displayed. The baryon number density is related to the baryon mass density as \\(\\rho_{B}=m_{u}n_{B}\\), where \\(m_{u}=931.49432\\) MeV is the atomic mass unit. We note that the pressure varies more with \\(Y_{p}\\) for low values of \\(p\\) at lower densities; this result is also in good agreement with Shen's work [4]. In Fig. 6 we show the proton and neutron chemical potentials, \\(\\mu_{p}\\) and \\(\\mu_{n}\\), as functions of the baryon density for the proton fractions \\(Y_{p}=0.1\\), \\(0.3\\), \\(0.5\\) and \\(0.6\\). In these curves we see that for \\(Y_{p}=0.1\\) the chemical potential of the neutron is larger than that of the proton. As the proton fraction grows, the curves approach each other, until they coincide at \\(Y_{p}=0.5\\); for \\(Y_{p}=0.6\\) the chemical potential of the proton is larger than that of the neutron. This is an obvious result, but as the chemical potentials are very important quantities in the EoS tables, they are also presented graphically. Once the bulk nuclear matter properties are shown to behave as expected and to present some important differences as compared with the other works, we proceed toward building a preliminary EoS table with the QMC model, for homogeneous matter at zero temperature, which is available on the Web at http://debora.fsc.ufsc.br/eos_qmc.t0. In Table 3 we show the thermodynamic quantities, described as in [4; 5]: 1. Temperature: \\(T\\) [MeV]. 2. Logarithm of the baryon mass density: \\(\\log_{10}(\\rho_{B})\\) [g.cm\\({}^{-3}\\)]. 3. Baryon number density: \\(n_{B}\\) [fm\\({}^{-3}\\)]. 4. Proton fraction: Y\\({}_{p}\\). The proton fraction Y\\({}_{p}\\) of uniform matter made of protons and neutrons is defined by \\[Y_{p}=\\frac{n_{p}}{n_{n}+n_{p}}\\] where \\(n_{p}\\) and \\(n_{n}\\) are the number densities of protons and neutrons, respectively. 5. Free energy per baryon: \\(F\\) [MeV]. The free energy per baryon reads \\[f=\\varepsilon-Ts.\\] Figure 6: Chemical potentials as functions of the baryonic density for some fixed values of the proton fraction. The continuous line represents \\(\\mu_{p}\\) and the dashed line represents \\(\\mu_{n}\\). This work is for zero temperature only, hence \\(f=\\varepsilon\\). 
The free energy per baryon is defined relative to the nucleon mass as \\[F=\\frac{\\varepsilon}{n_{B}}-M=B/A.\\] 6. Internal energy per baryon: \\(E_{int}\\) [MeV]. The internal energy per baryon is defined relative to the atomic mass unit \\(m_{u}=931.49432\\) MeV as \\[E_{int}=\\frac{\\varepsilon}{n_{B}}-m_{u}.\\] 7. Entropy per baryon: \\(S\\) [\\(k_{B}\\)]. The case we work with here is for zero temperature, therefore \\(S=0\\). 8. Effective nucleon mass: \\(M_{N}^{*}\\) [MeV]. The effective nucleon mass is obtained in the QMC model for uniform matter through the relation \\(M_{N}^{*}=E_{N}^{bag}\\), where \\(N=p\\), \\(n\\), and the bag energy is obtained through Eq. 7. 9. Free neutron fraction: \\(X_{n}\\). 10. Free proton fraction: \\(X_{p}\\). 11. Pressure: \\(p\\) [MeV.fm\\({}^{-3}\\)]. The pressure is calculated from Eq. 10. 12. Chemical potential of the neutron: \\(\\mu_{n}\\) [MeV]. For the zero temperature case, the chemical potential of the neutron relative to the free nucleon mass \\(M\\) is calculated through the following equation: \\[\\mu_{n}=[k_{n}^{2}+M^{*2}]^{1/2}+g_{\\omega}\\omega_{0}-\\frac{g_{\\rho}}{2}\\rho_{ 03}-M.\\] 13. Chemical potential of the proton: \\(\\mu_{p}\\) [MeV]. For the zero temperature case, the chemical potential of the proton relative to the free nucleon mass \\(M\\) is calculated through the following equation: \\[\\mu_{p}=[k_{p}^{2}+M^{*2}]^{1/2}+g_{\\omega}\\omega_{0}+\\frac{g_{\\rho}}{2}\\rho_{ 03}-M.\\] ## IV Conclusion and Future Works In this work we have used the QMC model for the first time for the construction of a preliminary EoS that in the future can be useful for studies involving the cooling of neutron stars and supernova simulations. We believe that, with the quark degrees of freedom present in the QMC model, the EoS can provide part of the physics that is lacking for SN simulations to explode. The next step of the present work, already under development, is the computation of the EoS grid at finite temperature, which is essential for supernova simulations. Thereafter we will study the very low density regions, where nuclear matter is no longer uniform. This will be done with the _pasta phase_ approach [39; 40]. We believe that the use of the pasta phase for the description of the non-uniform part of the matter that composes the EoS table will certainly affect SN and cooling simulations. ###### Acknowledgements. The authors would like to thank CNPq and FAPESC, under the project 2716/2012, TR 2012000344, for the financial support. ## References * (1) H.-T. Janka, Ann. Rev. Nucl. Part. Sci. **62**, 407 (2012). * (2) F. Hoyle and W. A. Fowler, Astrophys. J. **132**, 565 (1960). * (3) J. M. Lattimer and F. D. Swesty, Nucl. Phys. A **535**, 331 (1991). * (4) H. Shen et al., Nucl. Phys. A **637**, 435 (1998). * (5) H. Shen et al., Astrophys. J. Suppl. **197**, 20 (2011). * (6) M. Hempel et al., Astrophys. J. **748**, 70 (2012). * (7) B. D. Serot and J. D. Walecka, Adv. Nucl. Phys. **16**, 1 (1986). * (8) Y. Sugahara and H. Toki, Nucl. Phys. A **579**, 557 (1994). * (9) H. Toki et al., Nucl. Phys. A **588**, 357c (1995). 
\\begin{table} \\begin{tabular}{l c c c c c c c c c c c c} \\hline T & log\\({}_{10}\\)(\\(\\rho_{B}\\)) & \\(n_{B}\\) & Y\\({}_{p}\\) & F & \\(E_{int}\\) & S & M*\\({}_{N}\\) & X\\({}_{n}\\) & X\\({}_{p}\\) & p & \\(\\mu_{n}\\) & \\(\\mu_{p}\\) \\\\ (MeV) & (g.cm\\({}^{-3}\\)) & (fm\\({}^{-3}\\)) & & (MeV) & (MeV) & (k\\({}_{B}\\)) & (MeV) & & & (MeV fm\\({}^{-3}\\)) & (MeV) & (MeV) \\\\ \\hline 0 & 14.0 & 0.0602 & 0 & 4.890 & 12.40 & 0 & 838.9 & 1 & 0 & 0.2371 & 23.43 & -68.43 \\\\ 0 & 14.1 & 0.0758 & 0 & 6.090 & 13.60 & 0 & 816.9 & 1 & 0 & 0.5103 & 31.20 & -82.21 \\\\ 0 & 14.2 & 0.0954 & 0 & 8.133 & 15.64 & 0 & 791.2 & 1 & 0 & 1.0890 & 42.68 & -97.63 \\\\ 0 & 14.3 & 0.1201 & 0 & 11.56 & 19.07 & 0 & 761.5 & 1 & 0 & 2.2780 & 59.66 & -114.3 \\\\ 0 & 14.4 & 0.1512 & 0 & 17.19 & 24.70 & 0 & 728.8 & 1 & 0 & 4.6550 & 84.64 & -131.4 \\\\ \\hline \\end{tabular} \\end{table} Table 3: EoS table at \\(T=0\\). It covers the proton fraction range \\(Y_{p}=0-0.65\\) with the linear grid spacing \\(\\Delta Y_{p}=0.01\\) (66 points), and the density range \\(\\rho_{B}=10^{14}-10^{16}\\) g cm\\({}^{-3}\\) with the logarithmic grid spacing \\(\\Delta\\)log\\({}_{10}\\) (\\(\\rho_{B}\\) /[g cm\\({}^{-3}\\)]) \\(=0.1\\) (21 points). This table is available on the website. An excerpt of it is shown here for guidance. * (10) B. G. Todd-Rutel and J. Piekarewicz, Phys. Rev. Lett. **95**, 122501 (2005). * (11) M. Hempel and J. Schaffner-Bielich, Nucl. Phys. A **837**, 210 (2010). * (12) M. Dutra et al., Phys. Rev. C **90**, 055203 (2014). * (13) Y. Sekiguchi et al., Phys. Rev. D **91**, 064059 (2015). * (14) S. Nasu et al., Astrophys. J. **801**, 78 (2015). * (15) E. Abdikamalov et al., Phys. Rev. D **90**, 044001 (2014). * (16) C. L. Fryer and A. Heger, Astrophys. J. **541**, 1033 (2000). * (17) T. A. Thompson, A. Burrows, and P. A. Pinto, Astrophys. J. **592**, 434 (2003). * (18) O. Pejcha and T. A. Thompson, Astrophys. J. **801**, 90 (2015). * (19) R. Buras et al., Phys. Rev. Lett. **90**, 241101 (2003). * (20) P. A. M. Guichon, Phys. Lett. B **200**, 235 (1988). * (21) A. Chodos et al., Phys. Rev. D **9**, 3471 (1974). * (22) S. Fleck et al., Nucl. Phys. A **510**, 731 (1990). * (23) K. Saito and A. W. Thomas, Phys. Lett. B **327**, 9 (1994). * (24) K. Saito and A. W. Thomas, Phys. Rev. C **52**, 2789 (1995). * (25) P. A. M. Guichon et al., Nucl. Phys. A **601**, 349 (1996). * (26) P. K. Panda, D. P. Menezes, and C. Providencia, Phys. Rev. C **69**, 025207 (2004). * (27) P. Demorest et al., Nature **467**, 1081 (2010). * (28) J. Antoniadis et al., Science **26**, 6131 (2013). * (29) C. H. Johnson, D. J. Horen, and C. Mahaux, Phys. Rev. C **36**, 2252 (1987). * (30) D. Page, U. Geppert, and F. Weber, Nucl. Phys. A **777**, 497 (2006). * (31) R. Negreiros, S. Schramm, and F. Weber, arXiv:1307.7692v1 **astro-ph**, 1 (2013). * (32) M. Fortin et al., Phys. Rev. C **82**, 065804 (2010). * (33) S. M. de Carvalho et al., arXiv:1411.5316v1 **astro-ph**, 1 (2014). * (34) A. M. Santos and C. Providencia, Phys. Rev. C **79**, 045805 (2009). * (35) N. K. Glendenning, _Compact Stars_ (Springer-Verlag, New York, 2000). * (36) G. A. Lalazissis, J. Konig, and P. Ring, Phys. Rev. C **55**, 540 (1997). * (37) S. Typel and H. H. Wolter, Nucl. Phys. A **656**, 331 (1999). * (38) A. M. S. Santos and D. P. Menezes, Phys. Rev. C **69**, 045803 (2004). * (39) T. Maruyama et al., Phys. Rev. C **72**, 015802 (2005). * (40) C. Providencia et al., Eur. Phys. J. A **50**, 44 (2014).
In this work we present the preliminary results of a complete equation of state (EoS) for core-collapse supernova simulations. We treat uniform matter made of nucleons using the quark-meson coupling (QMC) model. We show a table with a variety of thermodynamic quantities, which covers the proton fraction range \\(Y_{p}=0-0.65\\) with the linear grid spacing \\(\\Delta Y_{p}=0.01\\) (66 points) and the density range \\(\\rho_{B}=10^{14}-10^{16}\\)g.cm\\({}^{-3}\\) with the logarithmic grid spacing \\(\\Delta\\)log\\({}_{10}(\\rho_{B}/[\\)g.cm\\({}^{-3}])=0.1\\) (21 points). This preliminary study is performed at zero temperature and our results are compared with the widely used EoS already available in the literature.
# A microcomb-empowered Fourier domain mode-locked LiDAR Zhaoyu Cai State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, Beijing 100084, China. Zihao Wang State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, Beijing 100084, China. Ziqi Wei State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, Beijing 100084, China. Baoqi Shi International Quantum Academy, Shenzhen 518048, China. Wei Sun International Quantum Academy, Shenzhen 518048, China. Changxi Yang State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, Beijing 100084, China. Junqi Liu International Quantum Academy, Shenzhen 518048, China. Chengying Bao Photonic integrated circuits (PICs) are playing an increasingly important role in LiDAR technology [1; 2; 3; 4]. Besides beam steering, integrated microresonator frequency combs (microcombs) [5; 6; 7] can be invaluable light sources for LiDARs. Indeed, they have been used for dual-comb ranging [8; 9], chaotic ranging [10; 11] and parallel frequency-modulated continuous wave (FMCW) ranging [12]. In these reports [8; 9; 10; 11; 12], microcombs are usually sent to targets for ranging. Since the power of a microcomb is relatively low, high power amplification is frequently used [9; 10; 11; 12]. In this work, we use an integrated silicon nitride (Si\\({}_{3}\\)N\\({}_{4}\\)) microcomb as a frequency ruler at the local site, as opposed to sending it to targets, to calibrate a frequency sweep laser for FMCW ranging [12; 13; 14; 15; 16; 17; 18; 19]. This not only avoids the power issue, but also leverages the large line spacing of microcombs to calibrate frequency sweeping with ultrahigh chirp rates for ranging. This is because the highest chirp rate that can be calibrated by a comb is limited to \\(f_{r}^{2}/4\\) (\\(f_{r}\\) is the comb repetition rate or line spacing) [20]. Calibrating frequency sweeping lasers by optical frequency combs has been widely used for ranging and spectroscopy [20; 21; 22; 23; 24; 25; 26; 27]. Compared to interferometer or cavity based frequency calibration methods, combs provide higher accuracy and better immunity to environmental fluctuations [22; 23]. Typically, femtosecond laser combs with repetition rates of hundreds of MHz were used for calibration, which limits the chirp rate to a few PHz/s [20]. Soliton microcombs have been used for laser frequency calibration, but the measured chirp rate was below 12.5 THz/s [26; 27]. Recently, frequency sweep lasers with chirp rates above hundreds of PHz/s have been demonstrated using Fourier domain mode-locked (FDML) lasers [28; 29; 30] and micro-electro-mechanically tuned vertical-cavity surface emitting lasers (MEMS-VCSELs) [31; 32]. These lasers are compelling because their ultrahigh chirp rates and broad bandwidths can boost the FMCW ranging speed, resolution and precision. However, these ultrahigh chirp rate lasers generally sweep in a nonlinear way. Their frequency calibration by microcombs can greatly enhance the functionality in LiDAR, optical coherence tomography (OCT) [28], sensing [33] and spectroscopy [34]. 
Here, a Si\\({}_{3}\\)N\\({}_{4}\\) microcomb [35] calibrates the frequency of a broadband FDML laser to reach a normalized ranging precision of 0.27 nm \\(\\cdot\\sqrt{\\text{s}}\\), an order of magnitude better than previously reported FMCW LiDARs [23]. The highest chirp rate calibrated is 320 PHz/s, which is two orders of magnitude higher than in previous reports [20; 23] and highlights the unique advantage of large line spacing microcombs. Additionally, there have been assertions that nonlinear frequency sweeps can obscure velocity measurements, even after calibration [36]. We show that large chirp rates can mitigate the impact of frequency sweep nonlinearity, and our system achieves velocity measurements with an uncertainty below 0.4 mm/s. Our work also reveals the frequency sweeping dynamics of an FDML laser with an unprecedented bandwidth and resolution. **LiDAR system and 3D imaging.** Our LiDAR system is illustrated in Fig. 1a. The FDML laser works by matching the modulation frequency (\\(f_{m}\\)) of a fibre Fabry-Perot tunable filter (FFP-TF) with the free-spectral-range (FSR) of the cavity [28] (see Methods). In such a way, all the lasing frequencies can be stored in the cavity, which avoids the lasing buildup process and enables ultrahigh sweep rates. By carefully managing the dispersion, a lasing bandwidth of 33 nm at a rate of 24.6 kHz can be attained (Fig. 1b). Since the tunable filter is driven by a sine-wave, the FDML frequency sweeps in a nonlinear way. To facilitate FMCW ranging, which works by homodyne beat between a frequency sweep laser and its delayed replica from the target (Fig. 1a), we use a Si\\({}_{3}\\)N\\({}_{4}\\) soliton microcomb with \\(f_{r}\\)=50.08 GHz, whose spectrum is shown in Fig. 1b, to calibrate the FDML frequency in full time (see Methods). To showcase the capability of the microcomb-empowered FDML LiDAR, a 3D image of the Tsinghua Gate, made of diffusive aluminum, is presented in Fig. 1c. We scanned the target mechanically for imaging, and the received power loss from the target can be as high as 70 dB. The beam steering can be implemented by an optical phased array [3] in the future. We deliberately tilted the target to examine the ability of our LiDAR to measure small height changes. Taking the dashed horizontal slice as an example, the measured height along the slice is shown in Fig. 1d, showing good linearity. By subtracting the solid linear fit in Fig. 1d from the measured height, we obtain the height distribution shown in Fig. 1e. The standard deviation (\\(\\sigma\\)) is about 9 \\(\\mu\\)m, which mainly results from the surface roughness of the target. An independent ranging linearity measurement shows a residual error with \\(\\sigma\\)=0.2 \\(\\mu\\)m (Extended Data Fig. 1). **Frequency calibration in FMCW ranging.** Frequency calibration is essential in the above measurement. We heterodyne beat the FDML laser with the microcomb on a balanced photodetector (BPD) for this calibration. The output was measured by an oscilloscope with a bandwidth of 33 GHz (Tektronix MSO 73304DX). To avoid recording the beat signal with two adjacent lines simultaneously, we set the bandwidth of the oscilloscope to 24 GHz. The beat signal can be frequency-divided to relax the requirement on the digitizer bandwidth. An example of the heterodyne beat signal with over 70 microcomb lines is shown in Fig. 2a, where the shaded region corresponds to the filtered pump and adjacent lines (Methods). 
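As background to the calibration that follows, the FMCW relation used throughout maps round-trip delay to a beat frequency: for a chirp rate \\(\\kappa\\), a target at distance \\(d\\) produces a homodyne beat at \\(f_{beat}=2\\kappa d/c\\). A minimal numerical sketch of this relation, with all parameter values chosen purely for illustration:

```python
import numpy as np

c = 3e8                     # speed of light, m/s
kappa = 320e15              # chirp rate, Hz/s (illustrative, cf. 320 PHz/s)
d_true = 10.2e-3            # target distance, m
fs = 200e6                  # digitizer sample rate, Hz
t = np.arange(int(20e-6 * fs)) / fs   # one ~20 us sweep segment

tau = 2 * d_true / c        # round-trip delay
# Homodyne beat of a linear chirp with its delayed copy: the phase
# difference grows as 2*pi*kappa*tau*t (constant terms dropped).
beat = np.cos(2 * np.pi * kappa * tau * t)

spec = np.abs(np.fft.rfft(beat * np.hanning(len(t))))
f_beat = np.fft.rfftfreq(len(t), 1 / fs)[np.argmax(spec)]
d_est = c * f_beat / (2 * kappa)
print(f"estimated distance: {d_est*1e3:.2f} mm")   # ~10.2 mm
```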
**Fig. 1: Architecture of FDML LiDAR and its application in 3D imaging.** **a,** Experimental setup of the FDML laser, which is calibrated by an integrated Si\\({}_{3}\\)N\\({}_{4}\\) soliton microcomb for FMCW ranging. **b,** Optical spectra of the FDML laser and the 50 GHz soliton microcomb. **c,** 3D imaging of a diffusive target representing the Tsinghua Gate, which was slightly tilted in the measurement. **d,** Measured height change of the target and its linear fit along the dashed line in panel **c**. **e,** Distribution of the height after subtracting the solid linear fit.

Heterodyne beating with a CW laser has been used to study the phase stability of an FDML laser [29]. Our microcomb enables real-time phase measurements with a much larger bandwidth than the CW laser scheme. By retrieving the phase of the recorded signal via the Hilbert transform, the instantaneous frequency of the FDML laser can be measured (Methods and Extended Data Fig. 2). We show portions of the instantaneous frequency change measured by 3 different comb lines in Fig. 2b (\\(n\\) is the comb line number with respect to the pump). For two modulation frequencies \\(f_{m1}\\)=24.6411 kHz and \\(f_{m2}\\)=24.6403 kHz, the instantaneous frequencies change distinctively. For \\(f_{m1}\\), there are abrupt frequency jumps at an occurrence frequency of about 0.8 GHz, while the frequency spurs occur almost irregularly for \\(f_{m2}\\) (Extended Data Fig. 3). By smoothing the measured phase in a 2 ns time window, the spurs can be averaged out, and the smoothed frequency sweeps are plotted as the dark blue and pink curves in Fig. 2b. The smoothed frequency sweeps over the full FDML lasing span are plotted in Fig. 2c. Note that FDML lasing exists in both the upward and the downward sweep cycles for \\(f_{m2}\\), while it only exists in the downward cycle for \\(f_{m1}\\). Similar lasing asymmetry was reported in ref. [37]. The chirp rates were also calculated and plotted in Fig. 2c. In general, both the laser frequency and the chirp rate track the sine-waves set at \\(f_{m1,2}\\). When normalizing the calibrated chirp rates by the assumed sine-waves, we can observe that the chirp rate actually deviates from the set sweep, and the deviation is stronger at the edges of the frequency sweep span.

**Fig. 2: Microcomb-based frequency calibration for FMCW ranging.** **a,** Heterodyne beat signal between the microcomb and the FDML laser. The shaded region corresponds to the filtered pump and adjacent lines. **b,** Portion of the calibrated frequency sweep using three different lines under two drive frequencies \\(f_{m1}\\) and \\(f_{m2}\\). The light curves are calculated from the directly retrieved phase change, while the dark curves are calculated from the phase change smoothed in a 2 ns window. **c,** Top: calibrated frequency change after phase smoothing within the whole frequency sweep span. The green curve is a 9th-order polynomial fit of the frequency sweep signal. Middle: calibrated chirp rate of the FDML laser. Bottom: ratio between the calibrated chirp rates and chirp rates derived from an assumed sinusoidal frequency sweep. **d,** Measured FMCW ranging signal in the time domain and its instantaneous spectra within a 0.15 \\(\\mu\\)s window (dashed lines). **e,** Resampled FMCW signal and the instantaneous spectra. **f,** FMCW ranging output when using the directly measured signal and the resampled data. The inset shows the linear ranging outputs using the microcomb frequency calibration and an assumed sine-wave for resampling.
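For concreteness, the phase-retrieval step just described can be sketched in a few lines of Python. This is a simplified illustration on synthetic single-tone data, with assumed sample rate and window length; it omits the segmentation, bandpass filtering and \\(f_{r}\\)-unwrapping detailed in Methods and Extended Data Fig. 2.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(beat: np.ndarray, fs: float,
                            smooth_s: float = 2e-9) -> np.ndarray:
    """Instantaneous frequency of a (pre-filtered) comb-FDML beat note.

    beat     : real-valued heterodyne signal, already bandpass filtered
    fs       : sample rate, Hz
    smooth_s : phase-smoothing window (2 ns in the text)
    """
    phase = np.unwrap(np.angle(hilbert(beat)))       # Hilbert-transform phase
    win = max(int(smooth_s * fs), 1)
    phase = np.convolve(phase, np.ones(win) / win, mode="same")  # smoothing
    return np.gradient(phase, 1.0 / fs) / (2 * np.pi)            # dphi/dt / 2pi

# Synthetic check: a 5 GHz beat tone sampled at 50 GS/s.
fs = 50e9
t = np.arange(0, 200e-9, 1 / fs)
f_inst = instantaneous_frequency(np.cos(2 * np.pi * 5e9 * t), fs)
print(f"retrieved: {f_inst[200:-200].mean() / 1e9:.2f} GHz")  # ~5.00 GHz
```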
The frequency sweep behaviour is also observed to differ between modulation cycles. Microcombs can thus be used as a tool to better understand the ultrafast sweeping FDML or MEMS-VCSEL lasing dynamics in future work. Since the instantaneous frequency change is steadier for \\(f_{m1}\\), we used this frequency for the ranging experiments. The temporal ranging signal recorded by BPD2 (see Fig. 1a) is shown in Fig. 2d. As the output of BPD2 was low-pass filtered by a 100 MHz filter, the frequency spurs in Fig. 2b do not impact this signal. When gating the signal by a 0.15 \\(\\mu\\)s window (dashed lines in Fig. 2d) to look into the instantaneous spectra, a time-varying frequency is observed. Since the smoothed frequency sweep in Fig. 2b is still influenced by the frequency spurs, we further used a 9th-order polynomial fit for the frequency sweep (green curves in Fig. 2c). Then, the fitted frequencies were used to resample the FMCW ranging signal in the frequency domain (Methods) [23]. The resampled instantaneous oscillation 'frequency' becomes uniform (Fig. 2e). By Fourier transforming the resampled signal, we obtained the ranging signal in Fig. 2f. A sharp peak centered at 10.2 mm with a resolution of 55 \\(\\mu\\)m can be observed. When zooming in on the peak on a linear scale, a transform-limited signal can be observed. In contrast, if we Fourier transform the time-domain signal directly, the ranging spectrum is broad and meaningless. As the FDML laser was driven by a sine-wave, we also assumed the FDML lasing frequency changes in a sinusoidal way, with the frequency set at \\(f_{m1}\\) and the starting/ending frequencies read from the optical spectrum. When resampling the time-domain signal by this assumed sine-wave, the Fourier transformed signal is still broad (inset of Fig. 2f). Therefore, it is critical to have the FDML laser precisely calibrated for ranging, and we have an ultrahigh chirp rate of 320 PHz/s calibrated for FMCW ranging.

**High precision FMCW ranging.** Precise frequency calibration and the broad bandwidth of the FDML laser enable sub-10 nm precision. We show the continuously measured distance in Fig. 3a. Since the 33 GHz oscilloscope has a limited memory depth, we also used an 8 GHz oscilloscope (Keysight DSOSS804A) to record the FMCW ranging signal over a longer time. The corresponding Allan deviation is plotted in Fig. 3b.

**Fig. 3: Precision of the FDML LiDAR.** **a,** Distance in multiple measurements when the calibration signal was recorded by a 33 GHz oscilloscope and an 8 GHz oscilloscope. The 8 GHz oscilloscope has a larger memory depth and recorded data for a longer time. **b,** Allan deviation of the measured distance, which scales as \\(t^{-1/2}\\) with the averaging time \\(t\\). Sub-10 nm precision is achieved in less than 10 ms. The inset shows that the normalized precision scales as \\(1/B_{\\text{RF}}\\), with \\(B_{\\text{RF}}\\) being the digitizer bandwidth. **c,** Normalized precision scales linearly with the ranging resolution. **d,** Time domain FMCW ranging signal with a return power of 4 pW. **e,** FMCW ranging output using the direct time domain signal and the microcomb-calibrated signal with a 4 pW return power. **f,** Measured normalized ranging precision with different return powers (local oscillator power was 75 \\(\\mu\\)W). The normalized precision scales as \\(P^{-1/2}\\) for return powers \\(P<\\)10 nW. The inset shows the SNR under different return powers.
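The resampling step used above can likewise be sketched, under the assumption of a monotonic calibrated sweep \\(\\nu_{s}(t)\\) and with the ninefold zero-padding mentioned in Methods; the synthetic data and parameter values are illustrative only, not the authors' code.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def resample_and_range(v_r: np.ndarray, nu_s: np.ndarray, pad: int = 10):
    """Resample a beat signal onto a uniform optical-frequency grid and
    Fourier transform it into a delay/range spectrum.

    v_r  : beat signal samples (uniform in time)
    nu_s : calibrated swept frequency nu(t) - nu(0), Hz (monotonic)
    """
    nu_u = np.linspace(nu_s[0], nu_s[-1], len(nu_s))
    v_u = np.interp(nu_u, nu_s, v_r)              # resample V_r(t) -> V_r(nu_s)
    n = pad * len(v_u)                            # zero-pad (9x extra length)
    spec = np.abs(np.fft.rfft(v_u, n=n))
    delays = np.fft.rfftfreq(n, d=nu_u[1] - nu_u[0])  # conjugate of Hz -> s
    return 0.5 * C * delays, spec                 # R = c * delta_t / 2

# Synthetic check: a 2 THz sweep and a 68 ps round-trip delay (~10.2 mm).
nu_s = np.linspace(0.0, 2e12, 200_000)
ranges, spec = resample_and_range(np.cos(2 * np.pi * 68e-12 * nu_s), nu_s)
print(f"peak at {ranges[np.argmax(spec[1:]) + 1] * 1e3:.2f} mm")
```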
The precision scales as 0.27 nm\\(\\cdot\\sqrt{\\text{s}}/t^{1/2}\\) and 0.79 nm\\(\\cdot\\sqrt{\\text{s}}/t^{1/2}\\) (\\(t\\) is the measurement time), respectively. Since the 8 GHz oscilloscope only calibrates the FDML laser over about a third of the frequency span, its precision is worse than that obtained with the 33 GHz oscilloscope. Despite the limited bandwidth, the results obtained by the 8 GHz oscilloscope still reach sub-10 nm precision in 5 ms. Furthermore, the Allan deviation keeps converging for at least 8 ms. Hence, we believe that our FMCW LiDAR can reach a precision of 3.0 nm at 8 ms, once sufficiently long data are recorded. We further low-pass filtered the data recorded by the 33 GHz oscilloscope with different bandwidths \\(B_{\\text{RF}}\\) to analyze the ranging precision. The normalized precision scales as \\(1/B_{\\text{RF}}\\), and the 8 GHz oscilloscope data also follow this scaling (inset of Fig. 3b). Therefore, only the calibrated frequencies contribute to the improvement of the ranging precision. This means full-time frequency calibration is more advantageous than only using combs for tick-like calibration [21; 27]. As noted in ref. [38], ranging precision is proportional to \\(\\Delta R/\\sqrt{\\text{SNR}}\\), where \\(\\Delta R\\) is the resolution and SNR is the signal-to-noise ratio in the power spectrum of the ranging signal. By selecting different frequency spans of the calibrated signal in Fig. 2e, we derive the normalized precision under different resolutions (Fig. 3c). As expected, a linear relationship with ranging resolution \\(\\Delta R\\) is observed. To further examine the dependence on SNR, we attenuated the received power to measure the ranging precision (see Fig. 1a). When reducing the power to 4 pW (that is, about 600 photons in the used downward sweep cycle and 72 dB loss in the signal arm), the recorded signal in the time domain shows almost no interferometric features. In the 'ranging' domain, a peak with an SNR of 13 dB can still be observed. No such peak can be observed when the frequency sweep is not calibrated. The normalized precision in this case is 13 nm\\(\\cdot\\sqrt{\\text{s}}\\). When increasing the received power, the normalized Allan deviation decreases in an inverse square-root way, as the SNR is determined by the product of the local and received powers (Fig. 3f and its inset). Ranging precision and SNR saturate for received powers above 10 nW, which may be attributed to spurious reflections in the fibres [23]. Once the parasitic reflections dominate the noise floor, increasing the received power no longer improves the SNR and the precision is clamped.

**Fig. 4: Velocity measurements using the FDML LiDAR.** **a,** Continuous ranging spectra featuring a decrease in mean frequency and frequency difference, indicating a deceleration motion of the target. **b,** Ranging spectra at 8 ms of panel **a** in a linear scale (using a 1.3 THz lasing bandwidth). **c, d,** FMCW-measured velocity and velocity calculated from the measured distance. The bottom panels show the residual error between the FMCW velocity and its polynomial fit. When using the 2.7 THz lasing bandwidth for measurement, a correction factor should be included, due to the strong frequency sweep nonlinearity. Data were measured by the 8 GHz Keysight oscilloscope.

**Velocity measurements.** The Doppler frequency shift from a moving target causes a frequency split in the FMCW ranging spectra for the upward and downward sweep cycles.
With frequency calibration and data resampling, the ranging spectrum has a time-axis (Figs. 2e, f) rather than a frequency-axis. The time-axis should be converted back to a frequency-axis to locate the two split frequency peaks \\(f_{u}\\) and \\(f_{d}\\) for velocity measurements. Artifacts can arise in this conversion with nonlinear frequency sweeps. Large chirp rates help mitigate the artifacts (Methods). In this experiment, we set the drive frequency at \\(f_{m2}\\) to have FDML lasing in both sweep cycles. The continuous ranging spectra measured from a moving target are shown in Fig. 4a (recorded by the 8 GHz oscilloscope). The inset clearly shows the frequency split and the deceleration of the target. The ranging spectra at 8 ms are plotted in Fig. 4b, indicating that both the upward and the downward cycles are calibrated to have transform-limited ranging signals. We use a relatively small sweep bandwidth of 1.3 THz for this measurement so as to lower the frequency sweep nonlinearity, and the time-axis is converted to the frequency-axis by simply multiplying by the overall chirp rate (Methods). We show the distance and velocity measured in real-time in Figs. 4c, d, corresponding to a deceleration and an acceleration scenario, respectively. The FMCW-measured velocity agrees well with the velocity calculated from the measured distance. The residual error between the FMCW-measured velocities and their polynomial fit has a standard deviation as small as 0.79 mm/s. This velocity uncertainty can be further reduced to 0.40 mm/s by using a 2.7 THz bandwidth for the measurement. With this broader bandwidth, the frequency sweep nonlinearity becomes larger and a correction factor should be included (Figs. 4c, d); see Methods and Extended Data Fig. 4 for details. The velocity uncertainty is fairly low considering the ultrahigh chirp rate, which extends the range of measurable velocities but adds difficulty in measuring velocity precisely (Methods and Extended Data Table 1).

**Discussions.** Our work establishes a technique to tame ultrafast frequency sweep lasers (theoretically up to chirp rates of hundreds of EHz/s) for FMCW LiDAR with low return power. A PIC-based microcomb calibrates a broadband FDML laser with ultrahigh chirp rates to reach the best normalized precision for FMCW LiDARs at an update rate of 24.6 kHz (Fig. 5 and Extended Data Table 1). FDML lasers with lasing bandwidths exceeding 20 THz and MHz update rates have been demonstrated [39]. Pairing such a laser with a dispersion-engineered, broadband microcomb may further improve the ranging precision and measurement speed. Although the absolute distance in FMCW LiDARs can be adjusted by tuning the length of the reference arm, the measurement range is limited by the linewidth of the frequency sweeping laser. Our FFP-TF has a relatively broad bandwidth, which limits this range to about ten centimetres. Hence, the current system may be more suitable for fast 4D imaging (including velocity) applications. By optimizing the dispersion management and controlling \\(f_{m}\\) precisely, the coherence length of an FDML laser can reach metre-scale [40]. This requires meticulous control of the laser configuration. Frequency sweep dynamics revealed by the microcomb can in turn be used to optimize the laser condition. MEMS-VCSEL lasers with ultrahigh chirp rates and narrow linewidths can pair with microcombs similarly to extend the measurement range to hundreds of metres [31]. Finally, our work can guide precise FMCW velocity measurements in the presence of frequency sweep nonlinearity.
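Numerically, the velocity extraction reduces to a few lines once the peak delays of the two sweep cycles are fitted. The sketch below uses the conventions of the Methods (\\(f_{\\rm D}=v/\\lambda_{c}\\), peak split \\(\\pm f_{\\rm D}/\\nu_{1}\\) on the resampled delay axis, correction factor \\(\\alpha\\) from Extended Data Fig. 4); the wavelength, chirp rate and peak delays are assumed example values, not measured ones.

```python
# Velocity from the up/down-sweep peak split on the resampled delay axis.
lambda_c = 1.55e-6   # carrier wavelength, m (assumed C-band value)
nu_1 = 6.4e16        # overall chirp rate, Hz/s (assumed: ~1.3 THz swept in
                     # half of a 24.6 kHz modulation cycle)
alpha = 1.0          # nonlinearity correction; ~1.068 for the 2.7 THz case

t_up, t_down = 67.9e-12, 68.1e-12   # hypothetical fitted peak delays, s
f_D = alpha * nu_1 * (t_down - t_up) / 2.0   # Doppler shift, Hz
v = f_D * lambda_c                           # f_D = v / lambda_c (Methods)
print(f"f_D = {f_D / 1e3:.1f} kHz, v = {v * 1e3:.2f} mm/s")
# -> f_D = 6.4 kHz, v ≈ 9.92 mm/s for this synthetic peak split.
```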
**Methods**

**Fourier domain mode-locked laser.** The FDML laser consists of an FFP-TF (MOI-FFP-TF2-1520-1570-7.5G2000) with a finesse of 2000 and an FSR of 135 nm, a semiconductor optical amplifier (Thorlabs SOA117S), and a fibre delay of 8.1 km consisting of a 0.9 km dispersion compensation fibre (DCF) and a 7.2 km single mode fibre (SMF). Two optical isolators were used to ensure unidirectional operation and a polarization controller (PC) was used to optimize the lasing polarization. A pulse shaper (Finisar WS 1000B) was inserted for finer dispersion compensation. The filter was driven by a sinusoidal signal around 24.6 kHz, and the output was derived from the 30%-port of a 70/30 coupler. The output power of the laser was about 1.3 mW, which is mainly limited by the damage threshold of the filter.

**Silicon nitride soliton microcomb.** The foundry-manufactured Si\\({}_{3}\\)N\\({}_{4}\\) microresonator has a waveguide cross-section of 0.81\\(\\times\\)2.2 \\(\\mu\\)m\\({}^{2}\\) [35]. The intrinsic and loaded Q-factors of the sample are 9.8 million and 6.4 million, respectively. An external cavity diode laser (ECDL, Toptica CTL1550) was amplified to pump the microresonator with an on-chip power of 200 mW. To mitigate the thermal instability, we used a single-sideband modulator (SSBM) based fast laser sweep technique to initiate the soliton generation [41; 42]. To avoid the influence of the residual sidebands from the SSBM, we suppressed the frequencies near the pump by a notch filter with a bandwidth of 200 GHz (4 lines). The heterodyne beat between the soliton microcomb and the FDML laser was registered by a BPD (Finisar BPDV2150R) with a bandwidth of 43 GHz, whose output was amplified by a low noise amplifier.

**Fig. 5: FDML LiDAR performance in comparison with other reports.** Our system has an excellent combination of normalized precision and chirp rate.

**Phase retrieval and frequency calibration.** The phase retrieval process is also described in Extended Data Fig. 2. We first separated the measured heterodyne beat signal into different segments (each lasting 4.6 ns). Then, we Fourier transformed the segmented data and filtered out the strong peaks with a bandwidth of 3.5 GHz (Extended Data Fig. 2b). The filtered spectrum was inverse Fourier transformed back to the time domain. The phase of the filtered signal was retrieved by the Hilbert transform (Extended Data Fig. 2c). The phase was further smoothed in a 2 ns window to yield the instantaneous frequency (Extended Data Figs. 2d, e). Then the heterodyne frequency results are unwrapped by the comb spacing \\(f_{r}\\)=50.08 GHz to yield the FDML laser instantaneous frequency over the full sweep span (Extended Data Fig. 2f).

**Resampling of the ranging signal.** For a distance \\(R\\), the round trip time delay is \\(\\Delta t=2R/c\\), where \\(c\\) is the speed of light in air. We can write the local and signal frequency sweep fields as \\(E_{\\rm LO}(t)=E_{1}(t){\\rm exp}\\left[i\\varphi(t)\\right]\\) and \\(E_{\\rm sig}=E_{2}(t+\\Delta t){\\rm exp}\\left[i\\varphi(t+\\Delta t)\\right]\\), where \\(E_{1(2)}\\) is the laser amplitude and \\(\\varphi(t)\\) includes the phase change due to the sweeping frequency. Since resampling mainly deals with the phase of the measured time domain signal, we first omit the possible amplitude variation in the laser field. Then, the normalized output of BPD2 in Fig. 1a can be written as,

\\[V_{r}(t)=\\cos\\left[\\varphi(t+\\Delta t)-\\varphi(t)\\right]. \\tag{1}\\]
Since the FDML laser sweeps in a nonlinear way, the instantaneous frequency of \\(V_{r}(t)\\) (from a static target) is time varying and can be written as,

\\[f_{b}(t)=\\frac{1}{2\\pi}\\frac{d\\left[\\varphi(t+\\Delta t)-\\varphi(t)\\right]}{dt}=\\nu(t+\\Delta t)-\\nu(t)\\approx\\Delta t\\cdot{\\rm d}\\nu(t)/{\\rm d}t, \\tag{2}\\]

where \\(\\nu(t)\\) is the instantaneous frequency of the FDML laser. Eq. 2 holds when \\(\\Delta t\\) is relatively small. Thus, \\(\\varphi(t+\\Delta t)-\\varphi(t)\\) can be calculated as,

\\[\\varphi(t+\\Delta t)-\\varphi(t)=2\\pi\\int_{0}^{t}f_{b}(\\tau){\\rm d}\\tau\\approx 2\\pi\\Delta t\\left[\\nu(t)-\\nu(0)\\right]. \\tag{3}\\]

The ranging signal can be written as,

\\[V_{r}(t)=\\cos\\left(2\\pi\\Delta t\\left[\\nu(t)-\\nu(0)\\right]\\right)=\\cos\\left(2\\pi\\Delta t\\,\\nu_{s}(t)\\right), \\tag{4}\\]

where we define \\(\\nu_{s}(t)\\equiv\\nu(t)-\\nu(0)\\). By replacing the time-axis with the frequency-axis, the signal \\(V_{r}(t)\\) is resampled as \\(V_{r}(\\nu_{s})\\). Then, \\(\\Delta t\\) can be derived via a Fourier transform versus \\(\\nu_{s}\\). In the Fourier transform, we added zeros, whose length is nine times the recorded data length, to the resampled data so as to locate the peak and \\(\\Delta t\\) more accurately.

**Velocity measurement.** When the target is moving, the beating frequency \\(f_{b}(t)\\) becomes,

\\[f_{b}(t)\\approx\\Delta t\\cdot{\\rm d}\\nu(t)/{\\rm d}t+f_{\\rm D}, \\tag{5}\\]

where \\(f_{\\rm D}=v/\\lambda_{c}\\) and \\(v\\) is the velocity being measured. Thus, the phase difference \\(\\varphi(t+\\Delta t)-\\varphi(t)\\) becomes,

\\[\\varphi(t+\\Delta t)-\\varphi(t)\\approx 2\\pi\\Delta t\\,\\nu_{s}(t)+2\\pi f_{\\rm D}t. \\tag{6}\\]

\\(\\nu_{s}(t)\\) is positive (negative) in the upward (downward) cycle. We focus on the positive frequencies in the following analysis and write the complex form of \\(V_{r}(t)\\) as,

\\[V_{r}(t)=\\underbrace{\\exp\\left(i2\\pi\\Delta t|\\nu_{s}(t)|\\right)}_{V_{r1}}\\underbrace{\\exp\\left(\\pm i2\\pi f_{\\rm D}t\\right)}_{V_{r2}}. \\tag{7}\\]

The ranging spectrum is \\(\\widetilde{V}_{r1}\\otimes\\widetilde{V}_{r2}\\), where \\(\\widetilde{V}_{r1(r2)}\\) is the spectrum of \\(V_{r1(r2)}\\) and \\(\\otimes\\) is the symbol for convolution. In a linear sweep case, \\(\\nu(t)=\\nu(0)+\\nu_{1}t\\), where \\(\\nu_{1}\\) is the chirp rate. Resampling is not needed, and the ranging spectrum is,

\\[\\widetilde{V}_{r}(f)=\\delta(|\\nu_{1}|\\Delta t)\\otimes\\delta(\\pm f_{\\rm D})=\\delta(|\\nu_{1}|\\Delta t\\pm f_{\\rm D}). \\tag{8}\\]

Therefore, the ranging peaks are shifted by \\(\\pm f_{\\rm D}\\) (i.e., spaced by \\(2f_{\\rm D}\\)). In a nonlinear sweep case, we define the relationship \\(\\nu_{s}(t)\\equiv\\nu_{1}t+\\nu_{e}(t)\\), where \\(\\nu_{1}=(\\nu(T)-\\nu(0))/T\\) (\\(T\\) is the ending time of the frequency sweep) is the overall chirp rate and \\(\\nu_{e}\\) is an error frequency. Then, \\(V_{r}(t)\\) is resampled as,

\\[V_{r}(\\nu_{s})=\\underbrace{\\exp\\left(i2\\pi\\Delta t|\\nu_{s}|\\right)}_{V_{r1}}\\underbrace{\\exp\\left(\\pm i2\\pi f_{\\rm D}\\nu_{s}/\\nu_{1}\\right)}_{V_{r2}}\\underbrace{\\exp\\left(\\pm i2\\pi f_{\\rm D}\\nu_{e}/\\nu_{1}\\right)}_{V_{r3}}. \\tag{9}\\]

A Fourier transform versus \\(\\nu_{s}\\) yields the ranging spectrum as,

\\[\\widetilde{V}_{r}(t)=\\delta(\\Delta t)\\otimes\\delta(\\pm f_{\\rm D}/\\nu_{1})\\otimes\\widetilde{V}_{r3}(t)=\\delta(\\Delta t\\pm f_{\\rm D}/\\nu_{1})\\otimes\\widetilde{V}_{r3}(t). \\tag{10}\\]
When \\(f_{\\rm D}\\nu_{e}/\\nu_{1}\\) is small, \\(V_{r3}(\\nu_{s})\\) can be approximated as a constant (\\(\\widetilde{V}_{r3}(t)=\\delta(0)\\)). Then, \\(\\widetilde{V}_{r}(t)\\) still recovers \\(f_{\\rm D}\\) and the velocity (1.3 THz case in Fig. 4). Therefore, velocity measurement works better when the velocity is low (small \\(f_{\\rm D}\\)) and a relatively small optical bandwidth is selected (small \\(\\nu_{e}\\)). Moreover, the ultrahigh chirp rate of FDML lasers (large \\(\\nu_{1}\\)) is beneficial for velocity measurements with frequency sweep nonlinearity and data resampling. When \\(f_{\\rm D}\\nu_{e}/\\nu_{1}\\) is large, \\(\\widetilde{V}_{r3}(t)\\) may distort the velocity measurement. Since \\(\\nu_{s}(t)\\) has been measured accurately by the microcomb, \\(\\widetilde{V}_{r3}(t)\\) can be calculated numerically with some prior knowledge of \\(f_{\\rm D}\\), which can be attained by using a small frequency sweep bandwidth first. Then its influence can be corrected (Extended Data Fig. 4). Eq. 10 shows that the uncertainty in \\(\\Delta t\\) is multiplied by \\(\\nu_{1}\\) when determining \\(f_{\\rm D}\\), which means measuring velocity precisely is challenging for large chirp rates. The largest \\(f_{\\rm D}\\) that can be measured is \\(\\nu_{1}\\Delta t\\), and the measurable velocity is thus limited to \\(v_{\\rm max}=\\lambda_{c}\\nu_{1}\\Delta t\\).

**Data Availability** The data that support the plots within this paper and other findings are available.

**Code Availability** The code that supports the findings of this study is available from the corresponding author upon request.

**Acknowledgements** We thank Prof. Qiang Liu and Prof. Yidong Tan at Tsinghua University for discussions and equipment loan. The silicon nitride chip used in this work was fabricated by Qaleido Photonics. This work is supported by the National Key R&D Program of China (2021YFB2801200), by the National Natural Science Foundation of China (62250071, 62175127), by the Tsinghua-Toyota Joint Research Fund, and by the Tsinghua University Initiative Scientific Research Program (20221080069). J.L. acknowledges support from the National Natural Science Foundation of China (12261131503), the Innovation Program for Quantum Science and Technology (2023ZD0301500), the Shenzhen-Hong Kong Cooperation Zone for Technology and Innovation (HZQB-KCZYB2020050), and the Shenzhen Science and Technology Program (RCJC20231211090042078).

**Author Contributions** Z.C. led the experiments with assistance from Z. Wang, Z. Wei, and C.Y.; B.S., W.S. and J.L. prepared and characterized the Si\\({}_{3}\\)N\\({}_{4}\\) chip. The project was supervised by C.B.

**Competing Interests** The authors declare no competing interests.

## References

* [1] Kim, I. _et al._ Nanophotonics for light detection and ranging technology. _Nature Nanotechnology_**16**, 508-524 (2021). * [2] Sun, J., Timurdogan, E., Yaacobi, A., Hosseini, E. S. & Watts, M. R. Large-scale nanophotonic phased array. _Nature_**493**, 195-199 (2013). * [3] Poulton, C. V. _et al._ Coherent solid-state LIDAR with silicon photonic optical phased arrays. _Opt. Lett._**42**, 4091-4094 (2017). * [4] Zhang, X., Kwon, K., Henriksson, J., Luo, J. & Wu, M. C. A large-scale microelectromechanical-systems-based silicon photonics LiDAR. _Nature_**603**, 253-258 (2022). * [5] Herr, T. _et al._ Temporal solitons in optical microresonators. _Nature Photonics_**8**, 145 (2014). * [6] Kippenberg, T. J., Gaeta, A. L., Lipson, M. & Gorodetsky, M. L. Dissipative Kerr solitons in optical microresonators.
_Science_**361**, eaan8083 (2018). * [7] Chang, L., Liu, S. & Bowers, J. E. Integrated optical frequency comb technologies. _Nature Photonics_**16**, 95-108 (2022). * [8] Suh, M.-G. & Vahala, K. J. Soliton microcomb range measurement. _Science_**359**, 884-887 (2018). * [9] Trocha, P. _et al._ Ultrafast optical ranging using microresonator soliton frequency combs. _Science_**359**, 887-891 (2018). * [10] Chen, R. _et al._ Breaking the temporal and frequency congestion of LiDAR by parallel chaos. _Nature Photonics_**17**, 306-314 (2023). * [11] Lukashchuk, A., Riemensberger, J., Tusnin, A., Liu, J. & Kippenberg, T. J. Chaotic microcomb-based parallel ranging. _Nature Photonics_**17**, 814-821 (2023). * [12] Riemensberger, J. _et al._ Massively parallel coherent laser ranging using a soliton microcomb. _Nature_**581**, 164-170 (2020). * [13] Behroozpour, B., Sandborn, P. A., Wu, M. C. & Boser, B. E. Lidar system architectures and circuits. _IEEE Communications Magazine_**55**, 135-142 (2017). * [14] Lihachev, G. _et al._ Low-noise frequency-agile photonic integrated lasers for coherent ranging. _Nature Communications_**13**, 3522 (2022). * [15] Snigirev, V. _et al._ Ultrafast tunable lasers using lithium niobate integrated photonics. _Nature_**615**, 411-417 (2023). * [16] Rogers, C. _et al._ A universal 3D imaging sensor on a silicon photonics platform. _Nature_**590**, 256-261 (2021). * [17] Qian, R. _et al._ Video-rate high-precision time-frequency multiplexed 3D coherent ranging. _Nature Communications_**13**, 1476 (2022). * [18] Wang, S. _et al._ High-performance integrated laser based on thin-film lithium niobate photonics for coherent ranging. _Laser & Photonics Reviews_**2400224 (2024). * [19] Behroozpour, B. _et al._ Electronic-photonic integrated circuit for 3D microimaging. _IEEE Journal of Solid-State Circuits_**52**, 161-172 (2016). * [20] Coddington, I., Giorgetta, F. R., Baumann, E., Swann, W. C. & Newbury, N. R. Characterizing fast arbitrary CW waveforms with 1500 THz/s instantaneous chirps. _IEEE Journal of Selected Topics in Quantum Electronics_**18**, 228-238 (2011). * [21] Del'Haye, P., Arcizet, O., Gorodetsky, M. L., Holzwarth, R. & Kippenberg, T. J. Frequency comb assisted diode laser spectroscopy for measurement of microcavity dispersion. _Nature Photonics_**3**, 529-533 (2009). * [22] Giorgetta, F., Coddington, I., Baumann, E., Swann, W. & Newbury, N. Fast high-resolution spectroscopy of dynamic continuous-wave laser sources. _Nature Photonics_**4**, 853-857 (2010). * [23] Baumann, E. _et al._ Comb-calibrated frequency-modulated continuous-wave ladar for absolute distance measurements. _Opt. Lett._**38**, 2026-2028 (2013). * [24] Twayana, K. _et al._ Frequency-comb-calibrated swept-wavelength interferometry. _Opt. Express_**29**, 24363-24372 (2021). * [25] Shi, B. _et al._ Frequency-comb-linearized, widely tunable lasers for coherent ranging. _Photonics Research_**12**, 663-681 (2024). * [26] Yang, Q.-F. _et al._ Vernier spectrometer using counterpropagating soliton microcombs. _Science_**363**, 965-968 (2019). * [27] Jia, L. _et al._ Nonlinear calibration of frequency modulated continuous wave LIDAR based on a microresonator soliton comb. _Opt. Lett._**46**, 1025-1028 (2021). * [28] Huber, R., Wojtkowski, M. & Fujimoto, J. Fourier domain mode locking (FDML): A new laser operating regime and applications for optical coherence tomography. _Opt. Express_**14**, 3225-3237 (2006). * [29] Grill, C. 
_et al._ Towards phase-stabilized Fourier domain mode-locked frequency combs. _Communications Physics_**5**, 212 (2022). * [30] Jiang, Y., Karpf, S. & Jalali, B. Time-stretch LiDAR as a spectrally scanned time-of-flight ranging camera. _Nature Photonics_**14**, 14-18 (2020). * [31] John, D. D. _et al._ Wideband electrically pumped 1050-nm MEMS-tunable VCSEL for ophthalmic imaging. _J. Lightw. Technol._**33**, 3461-3468 (2015). * [32] Chen, S. _et al._ High speed, long range, deep penetration swept source OCT for structural and angiographic imaging of the anterior eye. _Sci. Rep._**12**, 992 (2022). * [33] Jung, E. J. _et al._ Characterization of FBG sensor interrogation based on an FDML wavelength swept laser. _Opt. Express_**16**, 16552-16560 (2008). * [34] Kranendonk, L. A. _et al._ High speed engine gas thermometry by Fourier-domain mode-locked laser absorption spectroscopy. _Opt. Express_**15**, 15115-15128 (2007). * [35] Ye, Z. _et al._ Foundry manufacturing of tight-confinement, dispersion-engineered, ultralow-loss silicon nitride photonic integrated circuits. _Photonics Research_**11**, 558-568 (2023). * [36] Zhang, X., Pouls, J. & Wu, M. C. Laser frequency sweep linearization by iterative learning pre-distortion for FMCW LiDAR. _Opt. Express_**27**, 9965-9974 (2019). * [37] Jeon, M. Y., Zhang, J. & Chen, Z. Characterization of Fourier domain mode-locked wavelength swept laser for optical coherence tomography imaging. _Opt. Express_**16**, 3727-3737 (2008). * [38] Caldwell, E. D., Sinclair, L. C., Newbury, N. R. & Deschenes, J.-D. The time-programmable frequency comb and its use in quantum-limited ranging. _Nature_**610**, 667-673 (2022). * [39] Kolb, J. P., Pfeiffer, T., Eibl, M., Hakert, H. & Huber, R. High-resolution retinal swept source optical coherence tomography with an ultra-wideband Fourier-domain mode-locked laser at MHz A-scan rates. _Biomedical Optics Express_**9**, 120-130 (2018). * [40] Pfeiffer, T. _et al._ Analysis of FDML lasers with meter range coherence. In _Optical Coherence Tomography and Coherence Domain Optical Methods in Biomedicine XXI_, vol. 10053, 125-130 (SPIE, 2017). * [41] Stone, J. R. _et al._ Thermal and nonlinear dissipative-soliton dynamics in Kerr-microresonator frequency combs. _Phys. Rev. Lett._**121**, 063902 (2018). * [42] Liu, K. _et al._ Mitigating fast thermal instability by engineered laser sweep in AlN soliton microcomb generation. _Photonics Research_**11**, A10-A18 (2023). * [43] Huang, X. _et al._ Non-line-of-sight imaging and vibrometry using a comb-calibrated coherent sensor. _Phys. Rev. Lett._**132**, 233802 (2024). * [44] Jia, L. _et al._ Nonlinear calibration of frequency modulated continuous wave LIDAR based on a microresonator soliton comb. _Opt. Lett._**46**, 1025-1028 (2021). * [45] Zheng, J. _et al._ High-precision silicon-integrated frequency-modulated continuous wave LiDAR calibrated using a microresonator. _ACS Photonics_**9**, 2783-2791 (2022). * [46] Huang, X., Hong, Y., Li, Z.-P. & Xu, F. Frequency-modulated continuous-wave 3D imaging with high photon efficiency. _Opt. Lett._**47**, 3568-3571 (2022). * [47] Liu, C. _et al._ Highly-linear and wavelength-tunable frequency-modulated continuous-wave hybrid-integrated laser. _Laser & Photonics Reviews_**2300882** (2024). * [48] Martin, A. _et al._ Photonic integrated circuit-based FMCW coherent LiDAR. _J. Lightw. Technol._**36**, 4640-4645 (2018). * [49] DiLazaro, T. & Nehmetallah, G. Large-volume, low-cost, high-precision FMCW tomography using stitched DFBs. _Opt. Express_**26**, 2891-2904 (2018).
* [50] Tang, L., Li, L., Li, J. & Chen, M. Hybrid integrated ultralow-linewidth and fast-chirped laser for FMCW LiDAR. _Opt. Express_**30**, 30420-30429 (2022). * [51] Wang, Y. _et al._ Laser feedback frequency-modulated continuous-wave LiDAR and 3-D imaging. _IEEE Trans. Instrum. Meas._**72**, 7002309 (2023). * [52] He, B. _et al._ Massively parallel FMCW lidar with cm range resolution using an electro-optic frequency comb. _Opt. Lett._**48**, 3621-3624 (2023). * [53] Mrokon, A., Oehler, J. & Breunig, I. Continuous adiabatic frequency conversion for FMCW-LiDAR. _Sci. Rep._**14**, 4990 (2024). * [54] Lukashchuk, A., Riemensberger, J., Karpov, M., Liu, J. & Kippenberg, T. J. Dual chirped microcomb based parallel ranging at megapixel-line rates. _Nature Communications_**13**, 3280 (2022). * [55] Dong, Y., Zhu, Z., Tian, X., Qiu, L. & Ba, D. Frequency-modulated continuous-wave LIDAR and 3D imaging by using linear frequency modulation based on injection locking. _J. Lightw. Technol._**39**, 2275-2280 (2021). * [56] Okano, M. & Chong, C. Swept Source Lidar: simultaneous FMCW ranging and nonmechanical beam steering with a wideband swept source. _Opt. Express_**28**, 23898-23915 (2020). * [57] Hariyama, T., Sandborn, P. A., Watanabe, M. & Wu, M. C. High-accuracy range-sensing system based on FMCW using low-cost VCSEL. _Opt. Express_**26**, 9285-9297 (2018). * [58] Han, Y. _et al._ High-speed two-dimensional spectral-scanning coherent LiDAR system based on tunable VCSEL. _J. Lightw. Technol._**41**, 412-419 (2022). * [59] Zhang, X., Pouls, J. & Wu, M. C. Laser frequency sweep linearization by iterative learning pre-distortion for FMCW LiDAR. _Opt. Express_**27**, 9965-9974 (2019). * [60] Ula, R. K., Noguchi, Y. & Iiyama, K. Three-dimensional object profiling using highly accurate FMCW optical ranging system. _J. Lightw. Technol._**37**, 3826-3833 (2019).

**Extended Data Fig. 1: Linearity measurement for the FDML LiDAR.** **a,** Experimental setup for the ranging linearity measurement. By mounting the mirrors on a linear translation stage, we characterized the linearity measurement accuracy of the FDML LiDAR. The movement of the target was calibrated by a laser interferometer. To cancel the fibre length fluctuations, we added a reference mirror in this setup too. Circ.: circulator, Col.: collimator, BS: beam splitter, RM: reference mirror, MMs: measurement mirrors, BPD: balanced photodetector. **b,** Top: FDML LiDAR measured distance change versus the laser interferometer calibrated distance change. The measurement data were recorded by the 33 GHz Tektronix oscilloscope (bandwidth set at 24 GHz). Bottom: the residual error between the FDML LiDAR and the interferometer measured results. The standard deviation is 0.2 \\(\\mu\\)m. **c,** The ranging linearity measurements recorded by the 8 GHz Keysight oscilloscope. The standard deviation of the residual error is 1.0 \\(\\mu\\)m.

**Extended Data Fig. 2: Frequency calibration process.** **a,** We first selected a segment of the recorded calibration signal by a 4.6 \\(\\mu\\)s window. **b,** The segmented data were Fourier transformed and bandpass filtered (shaded boxes represent the passband). **c,** The bandpass filtered spectrum was inverse Fourier transformed back to the time domain. The phase of the signal was retrieved by the Hilbert transform (blue curve). The retrieved phase was further smoothed in a 2 ns window (red curve). **d,** The instantaneous frequency was derived as the derivative of the phase versus time.
The blue and red curves show the frequency without and with phase smoothing, respectively. The inset is a zoom-in of the dashed box region. **e,** Retrieved frequency in the full sweep span. The frequencies were calibrated by 74 microcomb lines and were wrapped by \\(f_{r}\\). The shaded box corresponds to the filtered pump and adjacent lines. **f,** The retrieved frequencies were further unwrapped by the 50.08 GHz comb line spacing (dots). The unwrapped frequencies were fitted by a 9th-order polynomial fit. This fit was used to resample the time domain signal for ranging.

**Extended Data Fig. 3: Occurring frequency of the frequency spurs in the FDML laser.** **a,** Measured occurring frequency of the frequency spurs (abrupt frequency changes shown in Fig. 2b) under different modulation frequencies. The frequency is about 1 GHz and is quite sensitive to the modulation frequency. The middle of the figure corresponds to a regime where there is no obvious occurring frequency. The error bar corresponds to the standard deviation of the occurring frequency in multiple measurements. **b,** Examples of the power spectra of the measured instantaneous frequency after removing the smoothed frequency change (e.g., the frequency in the light curve minus the dark curve in Fig. 2b). At a modulation frequency marked by 1 or 3 in panel **a**, the frequency spurs occur at a well-defined frequency. At a modulation frequency marked by 2, the power spectrum is broad, and there is no well-defined frequency.

**Extended Data Fig. 4: Correction factor for velocity measurement with nonlinear frequency sweep.** **a,** The green curve is the 9th-order polynomial fit of the calibrated frequency sweep \\(\\nu_{s}(t)\\). We used linear fits \\(\\nu_{1}t\\) to approximate the nonlinear frequency sweep, where \\(\\nu_{1}\\) is the overall chirp rate, calculated as the frequency difference between the ending moment (\\(t\\)=\\(T\\)) and the starting moment (\\(t\\)=0) divided by the sweep time \\(T\\) (see Methods). Since the frequency sweeps in a nonlinear way, there is an error frequency \\(\\nu_{e}\\) between \\(\\nu_{s}\\) and the linear fits \\(\\nu_{1}t\\), which is plotted in the bottom panels. The error frequency \\(\\nu_{e}\\) becomes much larger when the 2.7 THz bandwidth is used. **b,** In a linear sweep case, the ranging spectrum has a peak centered at the set \\(f_{\\rm D}\\)=150 kHz, obtained by Fourier transforming the time domain data directly (\\(\\widetilde{V}_{r2}(f)\\) and dark blue curve). In a nonlinear sweep case, data resampling is needed, and \\(\\widetilde{V}_{r3}(t)\\) in Eq. 10 should be corrected for to locate \\(f_{\\rm D}\\). By using the calibrated \\(\\nu_{e}(t)\\) for the 2.7 THz bandwidth shown in panel **a**, we can calculate \\(\\widetilde{V}_{r3}(t)\\) (thus, \\(\\widetilde{V}_{r3}(\\nu_{1}t)\\)). The calculated \\(\\widetilde{V}_{r3}(\\nu_{1}t)\\) corresponds to a peak at \\(-9.6\\) kHz (light pink curve). Due to this shift, the center of the ranging spectrum calculated by \\(\\widetilde{V}_{r2}(\\nu_{1}t)\\otimes\\widetilde{V}_{r3}(\\nu_{1}t)\\) is located at 140.4 kHz (brick red curve), as opposed to the set 150 kHz. Thus, a correction factor \\(\\alpha\\)=1.068 should be multiplied to the measured frequency shift to locate \\(f_{\\rm D}\\). **c,** Calculated \\(f_{\\rm D}\\) under different set velocities and \\(f_{\\rm D}\\) when using the \\(\\nu_{e}(t)\\) with 2.7 THz bandwidth in panel **a**. **d,** Correction factor \\(\\alpha\\) between the calculated and set \\(f_{\\rm D}\\) shown in panel **c**.
**e,** Correction factor when selecting different optical bandwidths in \\(\\nu_{s}(t)\\) in panel **a** for FMCW measurements, assuming a prior knowledge of \\(f_{\\rm D}\\)=100 kHz.

**Extended Data Table 1: Detailed comparison with other FMCW LiDARs**

| No. | Type | Resolution | Normalized precision | Chirp rate | Update rate | Velocity uncertainty | Vel. uncert./chirp rate^a | Ref. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | ECDL | 75 μm | 2.5 μm·√s | 1.0×10^14 Hz/s | 100 Hz | N/A | N/A | [43] |
| 2 | ECDL | 60 μm | 4.9 μm·√s | 1.25×10^13 Hz/s | 5 Hz | N/A | N/A | [44] |
| 3 | ECDL | 60 μm | 0.21 μm·√s | 1.0×10^13 Hz/s | 200 Hz | N/A | N/A | [45] |
| 4 | ECDL | 12 mm | N/A | 6.25×10^12 Hz/s | 500 Hz | N/A | N/A | [46] |
| 5 | ECDL | 214 mm | 1.3 mm·√s | 1.4×10^12 Hz/s | 1 kHz | N/A | N/A | [47] |
| 6 | DFB | 280 mm | N/A | 1.6×10^13 Hz/s | 1.9 kHz | N/A | N/A | [48] |
| 7 | DFB | 17 mm | N/A | 1.1×10^14 Hz/s | 6.25 kHz | N/A | N/A | [4] |
| 8 | DFB array | **27 μm**^b | 0.25 μm·√s | 1.8×10^15 Hz/s | 333 Hz | N/A | N/A | [49] |
| 9 | DBR | 1.2 mm | 17.9 nm·√s | 2.2×10^16 Hz/s | 180 kHz | N/A | N/A | [19] |
| 10 | DBR | 2.8 mm | 0.3 μm·√s | 7.5×10^14 Hz/s | 16 kHz | N/A | N/A | [17] |
| 11 | PZT mod. | 125 mm | N/A | 1.7×10^15 Hz/s | 800 kHz | N/A | N/A | [14] |
| 12 | PZT mod. | 35 mm | 0.27 mm·√s | 8.4×10^14 Hz/s | 10 kHz | N/A | N/A | [50] |
| 13 | PZT mod. | 5.2 mm | 1.1 μm·√s | 1.1×10^13 Hz/s | 80 Hz | **60 μm/s**^c | 5.4 μm/THz | [51] |
| 14 | EO mod. | 59 mm | 31.6 μm·√s | 4.4×10^16 Hz/s | 10 MHz | N/A | N/A | [12] |
| 15 | EO mod. | 87 mm | 4.9 μm·√s | 3.44×10^16 Hz/s | 5 MHz | 54 mm/s | 1.6 μm/THz | [18] |
| 16 | EO mod. | 125 mm | N/A | 1.2×10^16 Hz/s | 10 MHz | N/A | N/A | [15] |
| 17 | EO mod. | 15.4 mm | 5.6 μm·√s | 1.17×10^16 Hz/s | 390 kHz | N/A | N/A | [52] |
| 18 | EO mod. | 220 mm | N/A | 1.0×10^16 Hz/s | **20 MHz**^d | N/A | N/A | [53] |
| 19 | EO mod. | 80 mm | N/A | 3.6×10^14 Hz/s | 100 kHz | 0.1 m/s | 277 μm/THz | [54] |
| 20 | EO mod. | 25 mm | N/A | 6.0×10^13 Hz/s | 5 kHz | 7.8 mm/s | 130 μm/THz | [55] |
| 21 | EO mod. | 37.5 mm | 74 μm·√s | 4.7×10^12 Hz/s | 588 Hz | 1 mm/s | 213 μm/THz | [16] |
| 22 | MEMS | 420 μm | N/A | 1.1×10^17 Hz/s | 10 kHz | N/A | N/A | [56] |
| 23 | MEMS | 330 μm | 61 nm·√s | 1.7×10^16 Hz/s | 10 kHz | N/A | N/A | [57] |
| 24 | MEMS | 3.5 mm | 22.4 μm·√s | 8.1×10^15 Hz/s | 8 kHz | N/A | N/A | [58] |
| 25 | MEMS | 130 μm | 2.5 nm·√s | 3.15×10^15 Hz/s | 1 kHz | N/A | N/A | [23] |
| 26 | MEMS | 970 μm | N/A | 1.55×10^15 Hz/s | 5 kHz | 4 mm/s | 2.6 μm/THz | [59] |
| 27 | MEMS | 460 μm | 85.5 nm·√s | 1.4×10^15 Hz/s | 1 kHz | N/A | N/A | [60] |
| 28 | FDML | 55 μm | **0.27 nm·√s** | **3.2×10^17 Hz/s** | 24.6 kHz | 0.4 mm/s | **1.8 nm/THz** | This work |

* Abbreviations: ECDL, external cavity diode laser; DFB, distributed feedback laser; DBR, distributed Bragg reflector laser; PZT mod., frequency tuning achieved via piezoelectric tuning; EO mod., frequency tuning achieved via electro-optic modulation; MEMS, the laser is micro-electro-mechanically tuned.
* ^a Since the velocity uncertainty is impacted by the overall chirp rate \\(\\nu_{1}\\) (Methods), this metric is also evaluated.
* ^b Fine resolution achieved by stitching an array of 12 DFB lasers; a single DFB laser achieves a resolution of 333 μm.
* ^c Low velocity measurement uncertainty achieved with low chirp rate.
* ^d Frequency sweep range is 0.5 GHz.
**Light detection and ranging (LiDAR) has emerged as an indispensable tool in autonomous technology. Among its various techniques, frequency modulated continuous wave (FMCW) LiDAR stands out due to its capability to operate with ultralow return power, immunity to unwanted light, and simultaneous acquisition of distance and velocity. However, achieving a rapid update rate with sub-micron precision remains a challenge for FMCW LiDARs. Here, we present such a LiDAR with a sub-10 nm precision and a 24.6 kHz update rate by combining a broadband Fourier domain mode-locked (FDML) laser with a silicon nitride soliton microcomb. An ultrahigh frequency chirp rate up to 320 PHz/s is linearized by a 50 GHz microcomb to reach this performance. Our theoretical analysis also contributes to resolving the challenge of FMCW velocity measurements with nonlinear frequency sweeps and enables us to realize velocity measurement with an uncertainty below 0.4 mm/s. Our work shows how nanophotonic microcombs can unlock the potential of ultrafast frequency sweeping lasers for applications including LiDAR, optical coherence tomography and sensing.**
_Reference: van Haren, H., 2019. Turbulent convection and high-frequency internal wave details in 1-m shallow waters. Limnol. Oceanogr., 64, 1323-1332._

**Turbulent convection and high-frequency internal wave details in 1-m shallow waters**

**by Hans van Haren**

Royal Netherlands Institute for Sea Research (NIOZ) and Utrecht University, P.O. Box 59, 1790 AB Den Burg, the Netherlands. e-mail: [email protected]

Day-time solar heating from above stores large amounts of potential energy into the ocean, providing a stable density stratification. In principle, stable stratification reduces mechanical vertical turbulent exchange, although seldom down to the level of molecular diffusion. Two contrasting mechanisms are thought to enhance turbulence in a stratified environment. The stratification supports internal waves 'IW' that may deform it by straining and vertical current shear. During night-time cooling, convective overturning may be found near the surface (Brainerd and Gregg, 1993). Both mechanisms pump heat down and nutrients up. The amounts of turbulence thus generated vary upon the efficiency of the mixing; convection, or 'Rayleigh-Taylor instability RTi', being generally more efficient than shear, or 'Kelvin-Helmholtz instability KHi', see, e.g., (Yabe et al., 1991; Dalziel, 1993). Generally in stratified waters, turbulent overturns are well separated from internal waves, as the former last shorter and the latter longer than the buoyancy time-scale. The amounts of turbulence thus generated may be contrasted with that by the breaking of surface wind waves 'SW'. In shallow sea areas less than a few meters deep, like atolls, estuaries or near beaches, turbulence may also be generated by friction of sheared flow over the bottom (Ekman, 1905; Lamb, 1945), or by the convective heating from corals and the sediment (Jimenez et al., 2008; de la Fuente, 2014). It remains to be investigated whether these processes generate IW. Heating from the atmosphere does not directly force IW, as less dense warm water initially spreads like a pancake over colder water, forming a stable stratification. To generate IW this stratification has to be set into motion, either by frictional flow over the surface sucking up colder water (Ozen et al., 2006), or by (wave) flow over small bottom topography (Bell, 1975). Near a beach for example, IW are expected to be forced by flow over sand-banks (van Haren et al., 2012). IW-amplitudes are generally much larger than SW's and their periods O(50-1000 s) are much longer than the 5-10 s of SW, because of the weaker restoring force of reduced gravity. In the present paper we are interested in quantifying turbulence processes using high-resolution instrumentation in shallow waters near beaches, under calm atmospheric and daytime-heated stratified sea conditions, to question: What are the dominant turbulent mixing processes? What are their dominant scales and appearances? The results may be portable to other near-surface ocean areas to scale up, including atoll lagoons, and to laboratory turbulence studies to scale down. After all, the ocean, including the 1-m scales near beaches, is still a high bulk Reynolds number (\\(>\\)10\\({}^{4}\\)) environment.

### Materials and methods

During the early summers of 2010 and 2011, instruments for IW-turbulence studies were on stand-by to be taken out to the Dutch island of Texel beaches on short notice whenever conditions were right.
These conditions implied SW (wind including swell) heights \\(<\\)0.2 m top-crest, sufficient daytime solar heating, and Low Water 'LW' tide in early morning and late afternoon for positioning and recovering instrumentation. The primary instrumentation consisted of a wooden pole (Fig. 1) holding several tens of 'NIOZ4' self-contained high resolution temperature (T) sensors at 0.042 m vertical intervals that sampled at a rate of 2 Hz with a noise level of \\(<\\)0.0001\\({}^{\\circ}\\)C and a precision of \\(<\\)0.0005\\({}^{\\circ}\\)C (van Haren et al., 2009 for its predecessor NIOZ3 with similar characteristics). Around the time of LW, the pole was fixed in about H = 0.5 m water depth to the sandy bottom using two concrete slabs, weighing 40 kg each (in air). Distance to shore, the high-water 'HW' mark, was approximately 100 m. The tidal range varied between 1.5 and 2 m. Meteorological data were available from airport Den Helder, about 15 km southward from both sites investigated. SW-amplitudes and periods were estimated from the air-water surface passing the T-sensors. In 2010, 36 T-sensors were mounted with the lowest at 0.13 m above the bottom ('mab'). Two sensors failed, including the lowest. The pole was moored at 53\\({}^{\\circ}\\) 02.944\\({}^{\\prime}\\)N, 04\\({}^{\\circ}\\) 42.800\\({}^{\\prime}\\)E, near Texel North Sea 'NS' beach-pole 13 (Fig. 1). In 2011, 52 T-sensors were mounted with the lowest at 0.048 mab. None failed. In addition, a single Sea-Bird Electronics SBE37 self-contained CTD was attached around 0.75 mab. The pole was moored at 53\\({}^{\\circ}\\) 01.744\\({}^{\\prime}\\)N, 04\\({}^{\\circ}\\) 49.208\\({}^{\\prime}\\)E, near Texel Wadden Sea 'WS' beach 'Ceres'. The NS is connected to the open ocean, with relatively large SW-action. The WS is an inland tidal flat sea with closer connections to fresh water outflow. The moored T-sensor data are used as a conservative estimate for the dynamically more important density (\\(\\rho\\)) variations, using the standard relation \\(\\delta\\rho\\) = -\\(\\alpha\\delta\\)T, where \\(\\alpha\\) = 0.23 kg m\\({}^{-3}\\)\\({}^{\\circ}\\)C\\({}^{-1}\\) denotes the apparent thermal expansion coefficient under local conditions. Compressibility effects are negligible in the present data. Lacking salinity (S) data in 2010, it is assumed that only temperature contributes to \\(\\delta\\rho\\) for the NS-beach observations. This assumption is justified as differential horizontal advection reinforces solar heating stratification, because in summer the fresher WS is warmer than the saltier NS (van Aken, 2008; van Haren, 2010). Thus, stable density stratification may occur with relatively cold above warm water, but not with relatively salty above fresh water. Hence, the above density-temperature relationship is a conservative one. This is confirmed in 2011, as the CTD gave \\(\\alpha\\) = 0.51 kg m\\({}^{-3}\\)\\({}^{\\circ}\\)C\\({}^{-1}\\) during the WS deployment. With temperature as a tracer for density variations, turbulence can be quantified using the moored T-sensor data. The vertical turbulent kinetic energy dissipation rate \\(\\varepsilon\\), proportional to the turbulent diapycnal flux, and the eddy diffusivity \\(\\mathrm{K_{z}}\\) are estimated by calculating overturning scales. These scales are obtained after reordering, at every time-step, the potential density (temperature) profile \\(\\rho\\mathrm{(z)}\\), which may contain inversions, into a stable monotonic profile \\(\\rho\\mathrm{(z_{s})}\\) without inversions (Thorpe, 1977).
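A minimal numerical sketch of this reordering procedure, anticipating Eqs. (1) and (2) below, may clarify the bookkeeping. The input names and single-profile scope are assumptions for illustration; the coefficients 0.64 and 0.128 are those derived in the text, and, as stressed there, the outputs are only meaningful after averaging over at least the overturn scales.

```python
import numpy as np

def thorpe_estimates(temp, z, alpha=0.23, rho0=1025.0, g=9.81):
    """Thorpe-scale turbulence estimates from one vertical T profile.

    temp  : temperatures (degC) at heights z (m, increasing upward)
    alpha : apparent thermal expansion coefficient, kg m^-3 degC^-1
    Returns displacements d (m), epsilon (m^2 s^-3), K_z (m^2 s^-1).
    """
    rho = rho0 - alpha * temp                 # delta-rho = -alpha * delta-T
    order = np.argsort(-rho, kind="stable")   # stable profile: dense at bottom
    z_s = z[np.argsort(order)]                # height each sample moves to
    d = z - z_s                               # Thorpe displacements
    rho_s = rho[order]                        # reordered, inversion-free profile
    N2 = np.maximum(-(g / rho0) * np.gradient(rho_s, z), 0.0)
    eps = 0.64 * d**2 * N2**1.5               # Eq. (1): 0.64 d^2 N^3
    K_z = 0.128 * d**2 * np.sqrt(N2)          # Eq. (2): 0.128 d^2 N
    return d, eps, K_z
```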
After comparing raw and reordered profiles, the displacements \\(\\mathrm{d=min(|z-z_{s}|)\\cdot sgn(z-z_{s})}\\) necessary for generating the stable profile are calculated. Then,

\\[\\varepsilon=0.64\\mathrm{d}^{2}\\mathrm{N}^{3}, \\tag{1}\\]

where \\(\\mathrm{N}\\) denotes the buoyancy frequency computed from the reordered profile and the constant follows from empirically relating the overturning scale with the Ozmidov scale \\(\\mathrm{L_{O}=0.8d}\\) (Dillon, 1982), a mean coefficient value from many realizations. Using \\(\\mathrm{K_{z}=\\Gamma\\varepsilon N^{-2}}\\) and a mean mixing efficiency coefficient for the conversion of kinetic into potential energy of \\(\\Gamma=0.2\\) for ocean observations (Osborn, 1980; Oakey, 1982; Gregg et al., 2018), we find,

\\[\\mathrm{K_{z}}=0.128\\mathrm{d}^{2}\\mathrm{N}. \\tag{2}\\]

According to Thorpe (1977), results from (1) and (2) are only useful after averaging over the size of an overturn. In the following, 'sufficient' averaging is applied over at least the vertical scales of the largest overturns and over at least buoyancy time scales, to warrant a concise mixture of convective- and shear-induced turbulence and to justify the use of the above mean coefficient values. Due to the small precision of the T-sensors, thresholds limit mean turbulence parameter values to \\(<\\varepsilon>_{\\mathrm{thres}}\\) = O(10\\({}^{-12}\\)) m\\({}^{2}\\)s\\({}^{-3}\\) and to \\(<\\mathrm{K_{z}}>_{\\mathrm{thres}}\\) = O(10\\({}^{-6}\\)) m\\({}^{2}\\)s\\({}^{-1}\\) in weakly stratified waters (van Haren et al., 2015). In comparison with multiple shipborne shear- and temperature-variance microstructure profilings, the present method yielded similar results to within a factor of two.

### Observations

We focus on morning observations, when heating is up so that the air temperature \\(\\rm T_{a}\\) exceeds the water temperature \\(\\rm T_{w}\\) and near-surface waters are potentially stable against direct convective overturning. (Night-time convection indeed gave fully homogeneous, relatively cool waters due to convective overturning.) On 30 June 2010, near the NS-beach, the pole was moored around sunrise, one hour after LW. Cloud-cover was relatively high (60-100%) the entire day, with some sunshine between days 180.3 and 180.4 UTC. A daily evaporation sum of 0.7 mm was calculated for these conditions following Lin (2007), implying a daily salting of 0.02 g kg\\({}^{-1}\\) m (van Haren et al., 2012). To balance this density contribution one requires about 0.1\\({}^{\\circ}\\)C of warming; the observed T-differences of about 1\\({}^{\\circ}\\)C were larger (Fig. 2a). Potentially stratification-counteracting, free-convection-generating ground water leakage is found to be of little importance. Easterly winds created only small SW near the pole, as the NS-beach is exposed to the West. The SW-height was about 0.1 m (Fig. 2a) with 3-10 s periods. Trains of well-stratified waters passed the T-sensors with vertical amplitudes varying between 0.2 and \\(>\\)1.0 m, all \\(>\\) SW-heights. T-variations have 'periods' of half an hour, and shorter, with changes every 60-90 s. The local stratification has a mean buoyancy period of \\(\\rm T_{N}\\approx 200\\) s and a smallest buoyancy period of \\(\\rm T_{Nmax}\\approx 60\\) s when calculated across thin layers of \\(<\\)0.1 m thickness. The largest turbulent overturns occur from the surface and push the stratification down (Fig. 2b).
Turbulent overturns are \\(|\\)d\\(|>0.5\\) m and last up to \\(\\rm T_{Nmax}\\), with the group of overturns around day 180.305 lasting as long as \\(\\rm T_{N}\\) (Fig. 2c). This timescale of turbulent overturning well exceeds the classic initial RTi growth timescale (2\\(\\pi\\)Ag/L)\\({}^{-1/2}\\approx 1\\) s (Chandrasekhar, 1981), as, with the acceleration of gravity g, for the present data the Atwood number A = 2\\(\\rm\\Delta\\rho/\\rho\\approx 0.005\\) and the length-scale L \\(\\approx 0.3\\) m, estimated using a phase speed of 0.03 m s\\({}^{-1}\\) (van Haren et al., 2012). During this bursting, the resulting 1.5-m-vertical- and \\(\\rm T_{N}\\)-mean turbulence estimates are [\\(<\\varepsilon>\\)] = 4\\(\\pm\\)2\\(\\times\\)10\\({}^{-7}\\) m\\({}^{2}\\) s\\({}^{-3}\\) and [\\(<\\rm K_{z}>\\)] = 7\\(\\pm\\)3\\(\\times\\)10\\({}^{-5}\\) m\\({}^{2}\\) s\\({}^{-1}\\). Due to the small d-scales and high N, the former (\\(\\sim\\)flux) value is relatively high, and the latter value relatively low, compared with deep-ocean turbulence exchange parameter values. It demonstrates that near-beach waters can be turbulent internally, also on calm days without SW-breaking. The period after day 180.315 shows a strong interface deepening by about 1 m in 400-500 s, the interface still being about 0.2 m thick. As easterly winds were weak, the deepening is thought to be due to the warming and differential advection. The weaker vertical- and \\(\\rm T_{N}\\)-mean turbulence values, [\\(<\\varepsilon>\\)] = 2\\(\\pm\\)1\\(\\times\\)10\\({}^{-8}\\) m\\({}^{2}\\) s\\({}^{-3}\\) and [\\(<\\rm K_{z}>\\)] = 3\\(\\pm\\)1\\(\\times\\)10\\({}^{-6}\\) m\\({}^{2}\\) s\\({}^{-1}\\), generally occur in very short, small-scale bursts of about 10 s and 0.3 m (see Details below). The spikes in Fig. 2d are thus not random noise. This small-scale turbulence is spectrally characterized as follows (Fig. 3). Temperature variance is up to 100 times larger near the surface than near the bottom, depending on the frequency band. Near the surface, its peaks follow a \\(\\sigma^{-5/3}\\) ('-5/3') slope with frequency \\(\\sigma\\) for the range \\(N<\\sigma<\\sigma_{SW}\\). This suggests a shear-dominated turbulence inertial subrange (Tennekes and Lumley, 1972) or a passive scalar (Cimatoribus and van Haren, 2015). This -5/3-range is distributed over a larger frequency range than commonly found in the ocean and is linked to the SW-band here. The spectra's bases outside the peaks seem to slope with -1, mainly in the range 0.5N \\(<\\sigma<\\) 2N\\({}_{max}\\). This points at some, weaker, influence of convective turbulent overturning or an active scalar, and, potentially only for 0.5N \\(<\\sigma<\\) N, linear internal waves as observed in the open ocean (van Haren and Gostiaux, 2009). The latter comparison is dubious for the present observations, given the irregular motions in Fig. 2 and the upcoming details. With reference to the data of the T-sensor at 0.65 mab, the coherence (Fig. 3c) is only significant between directly neighbouring sensors, and further away for \\(\\sigma<\\) N\\({}_{max}\\) (2 neighbouring sensors) and \\(\\sigma<\\) N (4 neighbours). At higher frequencies, no significant coherence is found between directly neighbouring sensors.

## Details

An apparent wind-shear sucking up cold water penetrating the warmer near-surface layer is seen to consist of two (sets of) overturns (Fig. 4). The turbulence lasts about 40-50 s \\(<\\rm T_{Nmax}\\) and is found to be a complex of smaller overturn(s) within larger one(s).
The more or less singular event does not immediately correspond with a series of cusps as in Ozen et al. (2006). It is noted that wind speeds are low, \(<\)4 m s\(^{-1}\). Before and after the singular event, numerous small-scale overturns are observed, of \(|\)d\(|\) = 0.1-0.15 m and lasting 5-20 s. These are observed throughout the water column. Such small-scale overturning is also observed in other periods, although less numerously when more intense turbulence occurs above (e.g., Fig. 5). Here also an apparent 1 m upward cusp is seen, squeezed between two large turbulent downdraughts. Around 0.5 m the stratification is stronger than in Fig. 4, hence potentially the fewer small overturns, being forced down by the large overturning above. Mean values over the 1.5-m and 250-s ranges are [\(<\varepsilon>\)] \(>\) 10\(^{-7}\) m\(^{2}\) s\(^{-3}\) and [\(<\)K\(_{z}>\)] \(>\) 10\(^{-4}\) m\(^{2}\) s\(^{-1}\), with the largest values in the relatively cooler updraughts, suggestive of shear-dominated turbulence. However, relatively strong turbulent overturning is also observed in warm downdraughts, suggesting convection-dominated turbulence (Fig. 6), under conditions when principally T\(_{\rm a}\) \(>\) T\(_{\rm w}\) on the large scale. As before, stratification is pushed down and small turbulent overturning underneath is dampened. Small turbulent overturning reappears when the main stratification moves up and weakens in thicker layering, around day 180.303. While the larger overturns last about 15-20 s, around day 180.3025, the small turbulent overturns last 4-5 s.

An example of large-scale overturning in an 'interior', relatively thick, near-homogeneous layer is given in Fig. 7. It lasts about T\(_{\rm Nmax}\), while the local minimum buoyancy period T\(_{\rm Nmin}\) \(\approx\) 1000 s within the particular layer. It is thus not likely associated with intruding, partially salinity-compensated waters, which can last \(>\)T\(_{\rm Nmin}\). The association with the thin-layer stratification motions is not directly obvious. These IW-motions vary at very high frequency close to \(\sigma_{\rm SW}\), but they are highly irregular, more nonlinear. The latter may be due to complex wave-interactions or, more likely here, to interior turbulence (higher up) interacting with stratification.

### Inland-sea beach

The connection between nonlinear IW and turbulence is also observed in 2011, near a WS-beach (Fig. 8). In this image, comparable to Fig. 2 with similar mean turbulence levels outside the periods with large near-surface levels, progressively larger overturns approach the size of the mean buoyancy period and reach about \(|\mathrm{d}|=0.3\) m, \(>>\) SW. This shear-induced KHi, best visible in the S-shapes in Fig. 8b, comes with small secondary KHi along its edges, as observed in estuaries by Geyer et al. (2010). Near the bottom, turbulent overturning is seen to affect IW-motions near the interface around day 185.4, the time of HW flow reversal. This is attributed to the gradual warming of the water column, in addition to, and possibly driving, the shear flow. It is seen that for episodic periods of 20 s to the local \(\mathrm{T_{N}}\), the smallest IW-timescales, the bottom apparently warms the water from below, hence initiating convective overturning and thereby generating some high-frequency interfacial IW.
It compares with modelling results for a very shallow, \(<\)0.1 m deep, salt-water lake (de la Fuente, 2014), where solar-heated sediment warms the water from below. While the present total duration of overturning lasts just under the local \(\mathrm{T_{Nmin}}=\) 1000 s, the near-bottom turbulence is not expected to be driven by bottom frictional shear flow. Such a flow would either generate turbulence during the general cooling phase only (Lorke et al., 2005; van Haren, 2010) or drive warmer waters over cooler near-bottom waters in rotational Ekman (1905) dynamics off a slope. Neither is observed here during the gradual warming, with the largest episodic warming at the lowest sensor. In addition, the present observations show a reduction with time of the near-homogeneous near-bottom layer, instead of a thickening. This results in a spectral overview that has a clearer \(\sigma^{-5/3}\) slope of shear-induced turbulence for \(\mathrm{N<\sigma<\sigma_{SW}}\) and a \(\sigma^{-1}\) slope of convective turbulence for \(\mathrm{N<\sigma<2N_{max}}\) (Fig. 9), in comparison with the NS-data in Fig. 3. With respect to the T-sensor at 0.5 mab, coherence over more than one neighbouring sensor away is only found for \(\mathrm{\sigma<N}\) and, barely significant, in the SW-band (Fig. 9c). As the -5/3 inertial subrange is observed up to the SW-range, small-scale energy-containing turbulence has timescales as short as 3-5 s at least. (Turbulence dissipation scales are O(1 mm) and O(0.01 s) in the ocean.) However, the small-scale overturns occur distributed through the interior of the depth-time range and have no vertical coherence (over scales down to 0.042 m), so that they are not directly associated with SW. This indirectly confirms the non-isotropy of stratified turbulence by a factor of at least two, providing aspect ratios of \(<\)0.5. The observations were never made during times and zones of breaking SW, which are thus assumed to propagate more or less linearly. While the inertial-subrange IW-induced-turbulence and SW bands directly associate in frequency, the transfer of energy between the two is not established.

## Discussion

The present observations demonstrate that IW can occur near shallow beaches in open and enclosed seas, and presumably also in lakes. Any swimmer or paddler should be able to feel them on a calm day, provided T\(_{\rm w}\)\(>\) 15\(^{\circ}\)C to avoid pain blocking perception (Kuhtz-Buschbeck et al., 2010). Although they are only observed during calm days outside SW-breakers, their turbulent exchange is not negligible. With respect to open-ocean IW-turbulence, the spatial scales near the beach are O(1 m), 100 times smaller. In 2-m deep waters, the 0.1-1 m large IW-amplitudes occur in conjunction with turbulence overturns of similar sizes. While the dominant turbulence-generation process seems to be shear-induced KHi, judging from the spectra, convective RTi occur frequently in the background. The largest RTi are observed near the surface, hypothesized to be driven by differential advection deforming small-scale IW, but also in the interior after IW-straining creates near-homogeneous layers, and near the bottom by episodic heating from below. The RTi associate with IW-generation at the interfacial layers above or below. The precise mechanisms of IW-turbulence interaction, including the association with SW, require further theoretical modelling, extending works such as that
by Thorpe (2010) for continuous open-ocean stratification. The resulting mean turbulence dissipation rates, which are proportional to turbulent fluxes, are estimated to be within one order of magnitude of those of the largest deep-ocean IW-breaking above seamounts: \(\varepsilon=\text{O}(10^{-7})\text{ m}^{2}\text{ s}^{-3}\). This \(\varepsilon\)-value is 10-100 times larger than that of open-ocean 'linear' IW-breaking (e.g., Gregg, 1989). In contrast, the mean turbulent diffusivities of \(\text{O}(10^{-5})\text{ m}^{2}\text{ s}^{-1}\) found near beaches are comparable to those from the open ocean and are thus two to three orders of magnitude smaller than those observed above seamounts (e.g., van Haren et al., 2015). This is due to the small length-scale in the shallow waters near beaches, in association with the strong stratification. The turbulence dissipation rate level of night-time convection is of the same order of magnitude as that of modest SW-breaking at a beach (Brainerd and Gregg, 1993): both are about 100 times larger than the mean dissipation rates estimated in the present data. However, short-term peaking of IW-breaking may develop turbulent flux levels up to those of weak-to-modest SW-breaking and near-surface night-time convection. As a result, under calm atmospheric and day-time-heated, stratified sea conditions, IW-turbulence may be important for vertical exchange in shallow waters, not only near a beach, but also in estuaries, atolls, and near the ocean surface, although in the latter case heating from below will not contribute.

## Acknowledgments

I thank M. Laan and L. Gostiaux for their collaboration in the design and construction of the NIOZ T-sensors. The sensors have been financed in part by NWO, the Netherlands Organization for Scientific Research.

## References

* Bell (1975) Bell, T. H. 1975. Topographically generated internal waves in the open ocean. J. Geophys. Res. **80**: 320-327.
* Brainerd and Gregg (1993) Brainerd, K. E., and M. C. Gregg. 1993. Diurnal restratification and turbulence in the oceanic surface mixed layer. 1. Observations. J. Geophys. Res. **98**: 22,645-22,656.
* Chandrasekhar (1981) Chandrasekhar, S. 1981. Hydrodynamic and hydromagnetic stability. Dover Publications.
* Cimatoribus and van Haren (2015) Cimatoribus, A. A., and H. van Haren. 2015. Temperature statistics above a deep-ocean sloping boundary. J. Fluid Mech. **775**: 415-435.
* Dalziel (1993) Dalziel, S. B. 1993. Rayleigh-Taylor instability: experiments with image analysis. Dyn. Atmos. Oc. **20**: 127-153.
* de la Fuente (2014) de la Fuente, A. 2014. Heat and dissolved oxygen exchanges between the sediment and water column in a shallow salty lagoon. J. Geophys. Res. **119**: 596-613.
* Dillon (1982) Dillon, T. M. 1982. Vertical overturns: a comparison of Thorpe and Ozmidov length scales. J. Geophys. Res. **87**: 9601-9613.
* Ekman (1905) Ekman, V. W. 1905. On the influence of the Earth's rotation on ocean-currents. Ark. Math. Astron. Fys. **2(11)**: 1-52.
* Geyer et al. (2010) Geyer, W. R., A. C. Lavery, M. E. Scully, and J. H. Trowbridge. 2010. Mixing by shear instability at high Reynolds number. Geophys. Res. Lett. **37**: L22607, doi:10.1029/2010GL045272.
* Gregg (1989) Gregg, M. C. 1989. Scaling turbulent dissipation in the thermocline. J. Geophys. Res. **94**: 9686-9698.
* Gregg et al. (2018) Gregg, M. C., E. A. D'Asaro, J. J. Riley, and E. Kunze. 2018. Mixing efficiency in the ocean. Ann. Rev. Mar. Sci. **10**: 443-473.
* Jimenez et al. (2008) Jimenez, I. M., M. Kuhl, A. W. D. Larkum, and P. J.
Ralph. 2008. Heat budget and thermal microenvironment of shallow-water corals: Do massive corals get warmer than branching corals? Limnol. Oceanogr. **53**: 1548-1561.
* Kuhtz-Buschbeck et al. (2010) Kuhtz-Buschbeck, J. P., W. Andresen, S. Gobel, R. Gilster, and C. Stick. 2010. Thermoreception and nociception of the skin: a classic paper of Bessou and Perl and analyses of thermal sensitivity during a student laboratory exercise. Adv. Physiol. Educ. **34**: 25-34.
* Lamb (1945) Lamb, H. 1945. Hydrodynamics, 6th edition. Dover Publications.
* Lin (2007) Lin, S. D. 2007. Water and wastewater calculations manual, 2nd ed. McGraw-Hill.
* Lorke et al. (2005) Lorke, A., F. Peeters, and A. Wuest. 2005. Shear-induced convective mixing in bottom boundary layers on slopes. Limnol. Oceanogr. **50**: 1612-1619.
* Oakey (1982) Oakey, N. S. 1982. Determination of the rate of dissipation of turbulent energy from simultaneous temperature and velocity shear microstructure measurements. J. Phys. Oceanogr. **12**: 256-271.
* Osborn (1980) Osborn, T. R. 1980. Estimates of the local rate of vertical diffusion from dissipation measurements. J. Phys. Oceanogr. **10**: 83-89.
* Ozen et al. (2006) Ozen, B., S. A. Thorpe, U. Lemmin, and T. R. Osborn. 2006. Cold-water events and dissipation in the mixed layer of a lake. J. Phys. Oceanogr. **36**: 1928-1939.
* Tennekes and Lumley (1972) Tennekes, H., and J. L. Lumley. 1972. A first course in turbulence. MIT Press.
* Thorpe (1977) Thorpe, S. A. 1977. Turbulence and mixing in a Scottish loch. Phil. Trans. Roy. Soc. Lond. A **286**: 125-181.
* Thorpe (2010) Thorpe, S. A. 2010. Breaking internal waves and turbulent dissipation. J. Mar. Res. **68**: 851-880.
* van Aken (2008) van Aken, H. M. 2008. Variability of the water temperature in the western Wadden Sea on tidal to centennial time scales. J. Sea Res. **60**: 227-234.
* van Haren (2010) van Haren, H. 2010. Very near-bottom tidal straining in a sea strait. Geophys. Res. Lett. **37**: L16603, doi:10.1029/2010GL044186.
* van Haren et al. (2009) van Haren, H., M. Laan, D.-J. Buijsman, L. Gostiaux, M. G. Smit, and E. Keijzer. 2009. NIOZ3: independent temperature sensors sampling yearlong data at a rate of 1 Hz. IEEE J. Ocean Eng. **34**: 315-322.
* van Haren et al. (2012) van Haren, H., L. Gostiaux, M. Laan, M. van Haren, E. van Haren, and L. J. A. Gerringa. 2012. Internal wave turbulence near a Texel beach. PLoS ONE **7(3)**: e32535, doi:10.1371/journal.pone.0032535.
* van Haren et al. (2015) van Haren, H., A. A. Cimatoribus, and L. Gostiaux. 2015. Where large deep-ocean waves break. Geophys. Res. Lett. **42**: 2351-2357, doi:10.1002/2015GL063329.
* Yabe et al. (1991) Yabe, T., H. Hoshino, and T. Tsuchiya. 1991. Two- and three-dimensional behavior of Rayleigh-Taylor and Kelvin-Helmholtz instabilities. Phys. Rev. A **44**: 2756-2758.

Figure 1: Instrument pole with upper sensor at 1.60 m above the bottom 'mab' near the Texel North Sea 'NS' beach, around Low Water 'LW' on day 180.638 UTC, 2010. The inset shows the two mooring locations, including the 2011 Wadden Sea 'WS' beach.

Figure 2: About 45 min overview of internal wave turbulence observations near the Texel NS-beach on 30 June 2010, on average one hour (0.04 day) before HW (on day 180.3542) and one hour after the morning equality of air and water temperature (on day 180.261). (a) Depth-time series of moored T-observations. The vertical scale is 1.75 m with reference to the bottom. The horizontal red bars indicate magnifications in forthcoming Figures (letter F + number).
The purple and blue horizontal bars indicate the minimum and mean buoyancy period for the panel, respectively. The yellow vertical bar on the left indicates the spread of surface wave 'SW' height, estimated from the water surface passing the sensors. (b) Logarithm of buoyancy frequency after reordering a. to stable profiles every time step and using \(\delta\rho\) = -0.238\(\delta\)T kg m\(^{-3}\) \(^{\circ}\)C\(^{-1}\) (see text). (c) Overturning displacements following comparison of a. with its reordered data. (d) Time series of logarithms of vertically averaged eddy diffusivity (black, scale to the left) and turbulence dissipation rate (red, scale to the right).

Figure 3: Spectral view of the data in Figure 2. (a) Weakly smoothed (3 dof, degrees of freedom) temperature variance for the upper- (blue) and lowermost (purple) levels. (b) Weakly smoothed (3 dof) phase for temperature correlation between the lower two sensors. (c) Depth-frequency series of moderately smoothed (20 dof) coherence between the sensor at the dark-red depth level and all other sensors. The 95% confidence threshold-level is at approximately 0.2 coherence.

Figure 4: As Figure 2, but for a 4 min detail of double Kelvin-Helmholtz instability 'KHi' overturning (peak at day 180.3096) and numerous small-scale, weakly turbulent overturns causing a broken and broad interfacial layer. In a., the colour-scale is slightly different from that in Fig. 2a and black contours are drawn every 0.1\(^{\circ}\)C.

Figure 5: As Figure 4, but for a 4 min detail of convective overturning above an interface around 0.5 mab oscillating with the smallest internal wave period (purple bar) possible.

Figure 6: As Figure 4, but for a 3 min (one-mean-buoyancy-period) detail of convective overturning above an initially 0.25 m thick interface that slowly moves upward (by internal wave motion), and subsequently broadens and breaks up by multiple KHi, as on day 180.303. In a., the dashed black contours indicate 0.05\(^{\circ}\)C intervals.

Figure 7: As Figure 6, but for a one-mean-buoyancy-period detail of 'internal convective overturning' after day 180.3254, around 0.9 mab above a 0.15 m thick interface close to the bottom, which shows a sequence of rapid KHi around day 180.325.

Figure 8: As Figure 2, but for 45 min of data on 05 July 2011, off 'Ceres', a WS-beach with small wave action and larger salinity stratification. Due to the stable salt stratification, \(\delta\rho\) = -0.51\(\delta\)T kg m\(^{-3}\) \(^{\circ}\)C\(^{-1}\), as established from a single SBE37 CTD at 0.75 mab. The large S-shaped forms in b. are indicative of single KHi-overturning. HW was at day 185.4104 UTC; waves/wind/sunshine increase at day 185.41.

Figure 9: As Figure 3, but for the data in Figure 8, with data in a. and b. moderately smoothed (20 dof) due to averaging over data from 7 sensors each.
Vertically 0.042-m-spaced moored high-resolution temperature sensors are used for detailed internal wave-turbulence monitoring near Texel North Sea and Wadden Sea beaches on calm summer days. In these at most 2 m deep waters, irregular internal waves are observed, supported by the density stratification during daytime warming in early summer, outside the breaking zone of \(<\)0.2 m surface wind waves. Internal-wave-induced convective overturning near the surface and shear-driven turbulence near the bottom are observed, in addition to near-bottom convective overturning due to heating from below. Small turbulent overturns have durations of 5-20 s, close to the surface wave period and about one-third to one-tenth of the shortest internal wave period. The largest turbulence dissipation rates are estimated to be of the same order of magnitude as those found above deep-ocean seamounts, while the overturning scales are 100 times smaller. The turbulence inertial subrange is observed to link the internal and surface wave spectral bands.
_Sustainability_ **2014**, _6_, 5252-5264; doi:10.3390/su6085252; ISSN 2071-1050; www.mdpi.com/journal/sustainability

_Article_

## Cellulose Nanocrystals Obtained from _Cynara Cardunculus_ and Their Application in the Paper Industry

**Valentina Coccia *, Franco Cotana †, Gianluca Cavalaglio †, Mattia Gelosia † and Alessandro Petrozzi**

CIRIAF, University of Perugia, via G. Duranti, 67, 06125 Perugia, Italy; E-Mails: [email protected] (F.C.); [email protected] (G.C.); [email protected] (M.G.); [email protected] (A.P.)

† These authors contributed equally to this work.

* Author to whom correspondence should be addressed; E-Mail: [email protected]; Tel.: +39-075-585-3615; Fax: +39-075-515-3321.

_Received: 28 May 2014; in revised form: 29 July 2014 / Accepted: 6 August 2014 / Published: 13 August 2014_

## 1 Introduction

Lignocellulosic biomass is a potential source of saccharides that can be converted into alternative fuels such as bioethanol. Concerning this aspect, a relevant ongoing experimental program at the CRB labs is dedicated to producing bioethanol from residues [1; 2; 3; 4], but also to other energy-from-biomass production technologies [5; 6]. Bioethanol conversion is typically accomplished through the production of hexose and pentose sugars from cellulose and hemicelluloses [7]. In this context, an important aspect to be addressed as a starting point is the initial characterization of the biomass in terms of cellulose, hemicelluloses and lignin content. Several studies in the literature show the processing of a lignocellulosic biomass and the calculation of the energy-material yields obtainable from energy treatments. In higher plants, cellulose plays an essential role as a reinforcing element in the cell wall, generally together with lignin and hemicelluloses. These three polymers are closely associated, making up lignocellulosic biomass, and the relative content of cellulose and lignin varies among species [8]. The presence of lignin in the plant cell wall, together with the partially crystalline nature of cellulose fibers, poses formidable challenges to deconstructing the lignocellulosic matrix and depolymerising its cellulosic content [9; 10]. Microfibrils are cellulose chains, around 20 nm wide and several micrometers long, that consist of alternating crystalline and amorphous domains. Due to extensive inter- and intra-molecular hydrogen bonds formed by glucosidic hydroxyl groups, the crystalline domain is packed closely and results in an area of high crystallinity. Cellulose microfibrils are susceptible to chemical, enzymatic and mechanical attacks that hydrolyze the amorphous regions into glucose and reduce the crystalline regions into nanocrystalline cellulose (NCC). NCC consists of rigid, rod-shaped monocrystalline cellulose domains (whiskers), which are 5 to 100 nm in diameter and 10 to 800 nm in length [11; 12]. NCC is being tested for diversified applications such as: (i) additives for coatings, paints, lacquers, and adhesives; (ii) switchable optical devices; (iii) pharmaceuticals and drug delivery; (iv) bone replacement and tooth repair; (v) improved paper, packaging and building products; (vi) additives for foods and cosmetics; and (vii) aerogels as super insulators. NCC can be chemically modified with other functional groups and conjugated with molecules or nanoparticles, giving improved and novel properties.
Recently, some emerging bioapplications of functionalized and modified NCC, such as drug delivery, enzyme immobilization, nanocatalysis, _etc._, have been reviewed [13]. Owing to its high mechanical strength, high aspect ratio, large surface area (150-250 m\(^{2}\)/g), and other intriguing electrical and optical properties [14], NCC has been promoted for the preparation of industrial composites. The incorporation of a small amount of NCC into plastic and paper could therefore enhance their strength by several orders of magnitude. NCC-reinforced plastics have mechanical advantages over conventional automotive plastics, being 30% lighter and 3-4 times stronger than the currently used materials. They are also less susceptible to damage from heat, chemicals, and spilled gasoline, so they can be employed in car parts such as dashboards, bumpers, and side panels. Due to its low toxicity, NCC's environmentally benign nature is a key advantage driving the development of innovative, sustainable, and recyclable materials [15]. This article aims to explore the application of NCC in the manufacturing of reinforced paper. Nanocrystalline cellulose was obtained from biomass pretreated by steam explosion, through an experimental procedure derived from the literature. This kind of pretreatment is still being studied and optimized, as it seems very promising. The effects of this process are: cleavage of some accessible glycosidic links, cleavage of \(\beta\)-ether linkages of lignin, cleavage of lignin-carbohydrate complex bonds, and minor chemical modification of lignin and carbohydrates. The nanocellulose recovery yield obtained by this technique was found to be high compared to other methods [16]. The various colloidal solutions of NCC have been employed to reinforce paper sheets, which have been tested for tensile strength, durability and barrier properties. The possibility of using NCC to improve several properties of materials has also been widely explored in the literature. Concerning the mechanical behavior of both synthetic (e.g., PVOH) and natural (e.g., starch) polymers, a significant increase in the Young's modulus and tensile strength has been demonstrated after treatment with NCC [17]. The improvement of the barrier properties of membranes has also been demonstrated in several cases [18, 19], addressing the food packaging issue. In addition, using the crystalline content of the cellulose matrix appears to contribute to the development of new biobased prostheses for human wellbeing [20]. Drug delivery and optical iridescence properties are additional fields of constructive applications for NCC [21, 22]. A parallel emerging sector is the nanopaper world. There are a limited number of facilities, mainly located in northern Europe, operating at industrial scale [23] for the combined production of NCC and its incorporation into common paper pulp, in order to increase the resistance and durability of the paper sheets. Such an opportunity should correspond to a reduction of paper waste volumes, contributing to the sustainability of the production and disposal processes. In this context, the purpose of the presented research is to demonstrate that the enhancement of the properties of paper can also be obtained using NCC derived from residues (e.g., residual biomasses) and not only from dedicated pure cellulosic materials (e.g., cotton, paper, wood, _etc._).
## 2 Experimental Section

### Materials and Methods for NCC Processing

The following section illustrates the experimental phases of the work, including the description of the used materials and laboratory procedures.

#### 2.1.1 Raw Materials

The experimentation was carried out using the following initial raw materials:

1. _Cynara Cardunculus_ untreated samples
2. MCC commercial powder

The following paragraph provides a description of the main features of the used materials. The _Cynara Cardunculus_ samples were first characterized using all the equipment available at the CRB laboratories, following the related quality measurement protocols. In particular, the cellulose, hemicelluloses, acetyl groups, extractives, ashes and lignin components were measured, and the results are reported in Table 1. The MCC powder used is a commercial product, namely MCC CAS 9004-34-6 Alfa Aesar\(^{\copyright}\).

#### 2.1.2 NCC Produced Samples

The experimental procedure was applied to produce three different NCC-enriched suspensions, obtained from the steam-exploded _Cynara cardunculus_ and from the microcrystalline cellulose (MCC), varying the duration of the ultrasound treatment (further described in Section 2.2). Table 2 summarizes the characteristics of the samples. The relative amount of NCC in the obtained liquid suspension was 1.5% w/w.

### The Extraction Methodology of NCC from Pretreated Biomass

The extraction methodology consists of a five-step protocol allowing the separation of the nanocrystalline content of cellulose. This procedure is taken from the literature [24], with the exception of one step that has been optimized by the CRB Lab; it can be applied for the production of NCC from residual biomass and from microcrystalline cellulose (MCC). In the second case, the procedure was applied starting from Section 2.2.4. Figure 1 shows a picture of the NCC dedicated experimental setup in the CRB Labs, and Figure 2 is a flow diagram of the experimental procedure.

\begin{table} \begin{tabular}{c c} \hline \hline **Cynara Cardunculus component** & **Measured percentage in weight (\% dry basis)** \\ \hline Cellulose & \(34.96\pm 0.29\) \\ Hemicelluloses & \(16.17\pm 0.29\) \\ Acetyl groups & \(4.54\pm 0.17\) \\ Extractives & \(11.22\pm 1.08\) \\ Ashes & \(7.72\pm 0.21\) \\ Lignin & \(16.89\pm 0.45\) \\ Other & \(4.82\pm 0.47\) \\ \hline \hline \end{tabular} \end{table} Table 1: Main components of the _Cynara Cardunculus_ samples.

Figure 1: NCC dedicated setup in the Biomass Research Centre (CRB) Labs: (**a**) hydrolysis reactor and sonication device; (**b**) dialysis device; (**c**) centrifuge.

#### 2.2.1 Extractives Removal from the Residual Biomass Using the Soxhlet Apparatus [25]

The extractives removal is carried out with a mixture of toluene and ethanol in a 2:1 ratio for 6 h. The thimble containing the residual biomass is cooled at ambient conditions for 6 h and dried under suction from a vacuum pump to remove the solvent excess. The extractives removal is then further carried out with pure ethanol, to remove traces of toluene under the same conditions as the previous phase. The sample is dried under suction of the vacuum pump for 24 h in order to remove residual solvents.

#### 2.2.2 Lignin Separation

In order to separate lignin from the cellulose component, a basic hydrolysis is carried out applying the conditions described below.
The extractive-free biomass is treated with an alkaline solution of sodium hydroxide at 4% w/w with a solvent/residue ratio of 8/1 (v/w). The operation is conducted at 95 \(^{\circ}\)C for 2 h. The whole resulting sample is filtered, and the liquid component includes the dissolved lignin. The solid part remaining in the filter (containing the cellulose component) is washed with deionized water, recovered and dried in air for two days, or under suction with a vacuum pump at ambient temperature to reduce the time.

#### 2.2.3 Energy Bleaching

The third step involves a processing stage of energy bleaching [1] with sodium chlorite at controlled pH. The sample is rehydrated with 700 mL of deionized water and placed in a 1 L flask. It is then preheated to 70 \(^{\circ}\)C, and 1.5 mL of acetic acid and 6.7 g of sodium chlorite are added. The sample is maintained under these conditions for 12 h, during which 4 additions of the initial reagents are required (1.5 mL of acetic acid and 6.7 g of sodium chlorite) at regular intervals of about 2.5 h. After the last addition, the sample is kept at 70 \(^{\circ}\)C for 12 h. At the end of the process, a large excess of deionized water is added, the sample is left to decant, and the supernatant is removed. The precipitate is recovered, centrifuged and re-washed with deionized water in order to eliminate any trace of the reagents used during the bleaching process.

Figure 2: Flow diagram of the experimental procedure carried out by CRB.

#### 2.2.4 Acid Hydrolysis

After the bleaching process, the sample appears in the form of paper pieces. The fourth step allows the deconstruction of the cellulose into its two components, crystalline and amorphous; moreover, it degrades the latter in order to obtain the nanocrystalline cellulose in acid solution. The obtained sample is inserted into a one-liter flask and 64% w/w sulfuric acid is added so as to obtain a sample/solvent ratio of 1/18 (w/v). The whole solution is brought to a temperature of 45 \(^{\circ}\)C and left for 3 h. The reaction is quickly interrupted with cooled deionized water and ice and left to decant overnight in a large bowl. The precipitated matter is then centrifuged and washed with deionized water to regain a neutral solution. The traces of sulfuric acid are removed by a dialysis membrane with deionized water in the dedicated apparatus (Figure 1).

#### 2.2.5 Ultrasound Treatment

The sample was treated with ultrasound equipment in order to break down NCC agglomerates. In particular, the ultrasound device used is the LABSONIC M-BBI 8535027. This device allowed continuous treatment at a fixed 200 W power by means of a deep probe, type 30-750mL/BBI-8535671. The duration of the treatment was varied as described in Table 2. Some aliquots were filtered with 0.22 \(\upmu\)m Nylon filters, dried at 45 \(^{\circ}\)C for 3 h and weighed on an analytical balance for NCC quantification. The yield of the process was estimated by comparing the obtained amount of NCC with the cellulose percentage present in the initial residual biomass. Concerning the NCC steps in Sections 2.2.4 and 2.2.5, some experimental procedures using an integrated approach of acid hydrolysis and ultrasonic treatment are available in the literature [26, 27].

### SEM Analysis Methodology

The experimental procedure used for SEM analysis [28] consisted of the application of the standard test protocol used by the laboratory of the Department of Physics of the University of Perugia [29].
In particular, this methodology is articulated in two phases:

1. sample preparation (_i.e._, mineralization and arrangement on glass supports);
2. SEM protocol execution.

Some of the pictures obtained by SEM analysis are presented in the Results and Discussion section.

### Paper Reinforcement Tests

The reinforcement properties of the prepared liquid samples were tested according to the relevant ISO standard regulations. The same references were used for the calculation of the significance of the results. The lab tests were carried out by the laboratory of the Cartiere Fabriano Company. The three liquid NCC-enriched samples were added to a paper sheet (called linter) by immersion [30]. These paper sheets were obtained from the conventional paper-making procedure used by the Cartiere Fabriano Company and consist of organized \(\alpha\)-cellulose fibers. The characteristics of the untreated material are reported in Tables 4-8. Three different NCC-linter paper sheets were produced, obtained from the three original NCC suspended solutions. The tests performed on the paper sheets are reported in Table 3.

## 3 Results and Discussion

### SEM Analysis

The three obtained NCC samples were characterized by SEM analysis in order to verify the existence of the nano-crystalline structure. Some SEM images showing the obtained nanostructure are reported in Figures 3 and 4. Figures 3 and 4 show a bed of NCC whiskers in the background. The round highlighted elements are residual agglomerates of cellulose. Although the latter structures are not properly hydrolyzed, they are at the nanometric scale. The definition and quality of the nanometric structure is a function of many parameters, such as the procedure used to obtain NCC and the control of the regulatory parameters. In addition, the structure of the initial matrix strongly influences the nanometric organization of the NCC particles. Some SEM images of NCC produced from wood pine powder [31], from rice straw and from potato tuber [32] are available; in these images, the NCCs appear more defined and clear.

### Mechanical Testing

The following section shows the results obtained from mechanical tests performed on four samples of linter paper sheets (one untreated linter paper sheet and three NCC-linter paper sheets). After immersion [30] of the linter paper sheet into the three different NCC liquid suspensions, the absorption rate was estimated at 0.5 g/m\(^{2}\) (considering the efficiency of the used film deposition technique), and the size of the cellulose nanocrystals applied as reinforcing filler varies between 10 and 100 nm, as visible from the SEM characterization measurements (Figures 3 and 4).

#### 3.2.1 Paper Weight Tests

The weight tests were performed in order to assess the parameter variation after treatment. The results are summarized in Table 4.

\begin{table} \begin{tabular}{c c c c c} \hline **Type of sample** & **Linter paper sheet without NCC** & **Linter + NCC from _Cynara cardunculus_** & **Linter + NCC from MCC (10 h)** & **Linter + NCC from MCC (5 h)** \\ \hline Weight measured value (g/m\(^{2}\)) & 75.00 (\(\pm\)0.01) & 75.60 (\(\pm\)0.01) & 75.30 (\(\pm\)0.01) & 75.00 (\(\pm\)0.01) \\ \hline \end{tabular} \end{table} Table 4: Results of the weight tests before and after the NCC enrichment of the linter paper sheet.
Figure 4: SEM image of NCC obtained from MCC (10 h).

The results show a slight increase in the measured weight value after the NCC enrichment treatment. Concerning the significance of the results, the sensitivity of the analytical balance used for the weight measurements introduces an error in the range of (\(\pm\)0.01) on the presented numbers.

#### 3.2.2 Gurley Porosity Tests

The barrier properties of the samples were determined by performing several Gurley porosity tests. A known air flux is imposed through the surface of each specimen, and the time needed for this mass of air to cross the surface is measured. The main results are reported in Table 5. Only one test for each sample was carried out. It was found that the NCC enrichment treatment produced an appreciable increase in the Gurley porosity value. The porosity improvement was between 10.5% and 18.4%, and the best result was obtained for the linter paper sheet + NCC from _Cynara cardunculus_. This result is in agreement with the literature [33] and is justified by the produced increase in crystallinity, _i.e._, the degree of structural order in a solid. In this context, there is also increasing interest in the barrier properties, due to the increased tortuosity provided by nano-particles. Possible interesting applications of NCC relate to barrier membranes. For instance, barrier membranes of poly(vinyl alcohol), PVOH, were studied by Paralikar _et al._ [34], who have shown how NCC enhances both the mechanical and barrier properties of these membranes.

\begin{table} \begin{tabular}{c c c c c} \hline **Type of sample** & **Linter paper sheet without NCC** & **Linter + NCC from _Cynara cardunculus_** & **Linter + NCC from MCC (10 h)** & **Linter + NCC from MCC (5 h)** \\ \hline Gurley porosity measured value (s) & 38 & 45 & 43 & 42 \\ \hline \end{tabular} \end{table} Table 5: Gurley porosity tests before and after the NCC enrichment of the linter paper sheet.

#### 3.2.3 Breaking Load Tests

The mechanical properties of the prepared samples were measured by performing breaking load tests, repeating the tests four times for each sample. The test is carried out by applying a load of variable intensity, able to produce an elongation of the specimen equal to 15 mm. The isotropic nature of the linter paper sheets allowed us to choose a random direction for the application of the load. Results of the mechanical tests are shown in Table 6; they were averaged over 4 tests per sample. The statistical analysis of the results is also reported in Table 7.

\begin{table} \begin{tabular}{c c c c c} \hline \hline **Type of sample** & **Linter paper sheet without NCC** & **Linter + NCC from _Cynara cardunculus_** & **Linter + NCC from MCC (10 h)** & **Linter + NCC from MCC (5 h)** \\ \hline Tensile strength (kg/15 mm) & 3.69 & 3.98 & & \\ \hline \hline \end{tabular} \end{table} Table 6: Breaking load tests before and after the NCC enrichment of the linter paper sheet.

After statistical evaluations, the obtained results appeared acceptable and significant.
In particular, the results demonstrated an appreciable increase of 7.9% in the measured tensile strength for the sample linter paper sheet + NCC from _Cynara cardunculus_ and of 0.8% for the sample linter paper sheet + NCC from MCC (10 h), while no significant improvement in this parameter was measured for the linter paper sheet + NCC from MCC (5 h) sample. There has been significant research on the use of NCC to reinforce natural polymers, e.g., starch, and produce all-bio-based nanocomposites, fully biodegradable and renewable: it has been demonstrated, for instance, that starch-based polymers can be reinforced by the addition of a percentage of NCC as filler, as observed by Cao _et al._ [35].

#### 3.2.4 Bending Tests

The durability properties of the prepared samples were measured by performing one bending test for each sample, since the test is destructive. The maximum number of folds before rupture was recorded. The results of the bending tests are summarized in Table 8. The results demonstrated an appreciable increase in the observed bending behavior for the samples linter paper sheet + NCC from _Cynara cardunculus_ (plus 48% in the measured bending value) and linter paper sheet + NCC from MCC (10 h) (plus 6.3% in the measured bending value), while no significant improvement in this parameter was measured for the linter paper sheet + NCC from MCC (5 h) sample. This positive result is a consequence of the measured improvement of the mechanical properties of the reinforced linter paper sheets. The results presented in Tables 4-8 can probably be explained by the hydrogen bonding and gel swelling properties of NCC used as a filler of a natural polymer in the presence of water, as already observed in the literature [36]; correspondingly, the water uptake capacity is expected to decrease with respect to the untreated material.

\begin{table} \begin{tabular}{l c c c c} \hline **Type of sample** & **Linter paper sheet without NCC** & **Linter + NCC from _Cynara cardunculus_** & **Linter + NCC from MCC (10 h)** & **Linter + NCC from MCC (5 h)** \\ \hline Number of measures & 4 & 4 & 4 & 4 \\ Variance (\(\sigma^{2}\)) & 0.0137 & 0.0019 & 0.0139 & 0.0134 \\ Standard deviation (\(\sigma\)) & 0.1170 & 0.0436 & 0.1179 & 0.1158 \\ \hline \end{tabular} \end{table} Table 7: Statistical analysis of the breaking load tests before and after the NCC enrichment of the linter paper sheet.

\begin{table} \begin{tabular}{c c c c c} \hline **Type of sample** & **Linter paper sheet without NCC** & **Linter + NCC from _Cynara cardunculus_** & **Linter + NCC from MCC (10 h)** & **Linter + NCC from MCC (5 h)** \\ \hline Bending measured value (number) & 95 & 141 & 101 & 95 \\ \hline \end{tabular} \end{table} Table 8: Results of the bending tests before and after the NCC enrichment of the linter paper sheet.

## 4 Conclusions

The possibility to enhance the mechanical, barrier and durability properties of a paper sheet using NCC was presented. An experimental procedure was applied to produce NCC from _Cynara cardunculus_ and from commercial MCC. The results showed a significant increase in these properties for the investigated samples.
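As a cross-check, the percentage improvements quoted in Sections 3.2.2-3.2.4 follow directly from the reported raw values (the tensile values are those given in the abstract for the untreated and _Cynara_ samples); a trivial sketch:

```python
def pct(before, after):
    """Percentage change relative to the untreated value."""
    return 100.0 * (after - before) / before

# Gurley porosity (s): 38 -> 45, 43, 42
print([round(pct(38, v), 1) for v in (45, 43, 42)])    # [18.4, 13.2, 10.5]
# Tensile strength (kg/15 mm), Cynara sample: 3.69 -> 3.98
print(round(pct(3.69, 3.98), 1))                       # 7.9
# Bending (folds): 95 -> 141 (Cynara) and 95 -> 101 (MCC 10 h)
print(round(pct(95, 141), 1), round(pct(95, 101), 1))  # 48.4 6.3
```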
Some opportunities for scaling up the process to a pre-commercial stage will be considered as future developments of the research, since the paper industry is moving towards the nano-paper world in order to save resources and increase the durability and sustainability of its processes. In this context, a major contribution could come from the further enhancement of biorefinery, obtaining high-quality products, e.g., NCC from bio-residues. Moreover, a future experimental campaign will be dedicated to the development of a new green extraction procedure, using ionic liquids instead of acids as medium agents.

## Acknowledgments

The authors would like to acknowledge the Cartiere Fabriano laboratories for their availability to jointly develop the research.

## Author Contributions

Franco Cotana took part in the work as scientific coordinator of the CIRIAF/CRB research group; he provided the guidelines and the research goals to be disclosed. Gianluca Cavalaglio, as the person responsible for the laboratory equipment, supervised the whole work. Valentina Coccia and Alessandro Petrozzi wrote part of the background context and analyzed the results. Mattia Gelosia carried out the lab procedures and wrote some contributory paragraphs.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

* Marques et al. 2010 Marques, G.; del Rio, J.C.; Gutierrez, A. Lipophilic extractives from several nonwoody lignocellulosic crops (flax, hemp, sisal, abaca) and their fate during alkaline pulping and TCF/ECF bleaching. _Bioresource Technol._ **2010**, _101_, 260-267.
* Oksman et al. 2011 Oksman, K.; Etang, J.A.; Mathew, A.P.; Jonoobi, M. Cellulose nanowhiskers separated from a bio-residue from wood bioethanol production. _Biomass Bioenergy_ **2011**, _35_, 146-152.
* Pirani and Hashaikeh 2013 Pirani, S.; Hashaikeh, R. Nanocrystalline cellulose extraction process and utilization of the byproduct for biofuels production. _Carbohyd. Polym._ **2013**, _93_, 357-363.
* Cotana et al. 2014 Cotana, F.; Cavalaglio, G.; Gelosia, M.; Nicolini, A.; Coccia, V.; Petrozzi, A. Production of bioethanol in a second generation prototype from pine wood chips. _Energy Procedia_ **2014**, _45_, 42-51.
* Messineo et al. 2012 Messineo, A.; Volpe, R.; Asdrubali, F. Evaluation of Net Energy Obtainable from Combustion of Stabilised Olive Mill By-Products. _Energies_ **2012**, _5_, 1384-1397.
* Beatrice et al. 2013 Beatrice, C.; Di Blasio, G.; Lazzaro, M.; Cannilla, C.; Bonura, G.; Frusteri, F.; Asdrubali, F.; Baldinelli, G.; Presciutti, A.; Fantozzi, F.; _et al_. Technologies for energetic exploitation of biodiesel chain derived glycerol: Oxy-fuels production by catalytic conversion. _Appl. Energ._ **2013**, _102_, 63-71.
* Blanch and Wilke 1982 Blanch, H.W.; Wilke, C.R. Sugars and chemicals from cellulose. _Rev. Chem. Eng._ **1982**, _1_, 71-119.
* Gani and Naruse 2007 Gani, A.; Naruse, I. Effect of cellulose and lignin content on pyrolysis and combustion characteristics for several types of biomass. _Renewable Energ._ **2007**, _32_, 649-661.
* Da Costa et al. 2009 Da Costa, L.; Chundawat, S.; Balan, V.; Dale, B. "Cradle to Grave" assessment of existing lignocellulosic pretreatment technologies. _Curr. Opin. Biotechnol._ **2009**, _20_, 339-347.
* Hendriks and Zeeman 2009 Hendriks, A.; Zeeman, G. Pretreatments to enhance the digestibility of lignocellulosic biomass. _Bioresource Technol._ **2009**, _100_, 10-18.
* Ruiz et al. 2000 Ruiz, M.M.; Cavaille, J.Y.; Dufresne, A.; Gerard, J.F.; Graillat, C.
Processing and characterization of new thermoset nanocomposites based on cellulose whiskers. _Compos. Interfaces_ **2000**, _7_, 117-131.
* De Souza Lima and Borsali 2004 De Souza Lima, M.M.; Borsali, R. Rodlike cellulose microcrystals: structure, properties, and applications. _Macromol. Rapid Commun._ **2004**, _25_, 771-787.
* Lam et al. 2012 Lam, E.; Male, K.B.; Chong, J.H.; Leung, A.C.W.; Luong, J.H.T. Applications of functionalized and nanoparticle-modified nanocrystalline cellulose. _Trends Biotechnol._ **2012**, _30_, 283-290.
* Revol et al. 1998 Revol, J.-F.; Godbout, L.; Gray, D.G. Solid films of cellulose with chiral nematic order and optically variable properties. _J. Pulp. Pap. Sci._ **1998**, _24_, 146-149.
* Leung et al. 2013 Leung, A.C.W.; Lam, E.; Chong, J.; Hrapovic, S.; Luong, J.H.T. Reinforced plastics and aerogels by nanocrystalline cellulose. _J. Nanopart Res._ **2013**, _15_, 1636.
* Abdul Khalil et al. 2011 Abdul Khalil, H.P.S.; Bhat, A.H.; Irean Yusra, A.F. Green composites from sustainable cellulose nanofibrils: A review. _Carbohyd. Polym._ **2011**, _87_, 963-979.
* Bismarck et al. 2005 Bismarck, A.; Mishra, S.; Lampke, T. Plant fibers as reinforcement for green composites. In _Natural Fibers, Biopolymers and Biocomposites_; Mohanty, A.K., Misra, M., Drzal, L.T., Eds.; CRC Press: Boca Raton, FL, USA, 2005; pp. 37-108.
* Liu et al. 2011 Liu, A.; Walther, A.; Ikkala, O.; Belova, L.; Berglund, L.A. Clay nanopaper with tough cellulose nanofiber matrix for fire retardancy and gas barrier functions. _Biomacromolecules_ **2011**, _12_, 633-641.
* Liu and Berglund 2012 Liu, A.; Berglund, L.A. Clay nanopaper composites of nacre-like structure based on montmorillonite and cellulose nanofibers: Improvements due to chitosan addition. _Carbohyd. Polym._ **2012**, _87_, 53-60.
* Klemm et al. 2011 Klemm, D.; Kramer, F.; Moritz, S.; Lindstrom, T.; Ankerfors, M.; Gray, D.; Dorris, A. Nanocelluloses: A new family of nature-based materials. _Angew. Chem. Int. Ed._ **2011**, _50_, 5438-5466.
* Jackson et al. 2011 Jackson, J.K.; Letchford, K.; Wasserman, B.Z.; Ye, L.; Hamad, W.Y.; Burt, H.M. The use of nanocrystalline cellulose for the binding and controlled release of drugs. _Int. J. Nanomedicine_ **2011**, _6_, 321-330.
* Pisello et al. 2014 Pisello, A.L.; Cotana, F.; Nicolini, A.; Buratti, C. Effect of dynamic characteristics of building envelope on thermal-energy performance in winter conditions: In field experiment. _Energ. Buildings_ **2014**, _80_, 218-230.
* Pilot plant for nanocellulose. Available online: [http://www.innventia.com/en/Our-Ways-of-Working/Demonstration-and-pilot/Pilot-plant-for-nanocellulose/](http://www.innventia.com/en/Our-Ways-of-Working/Demonstration-and-pilot/Pilot-plant-for-nanocellulose/) (accessed on 28 July 2014).
* Habibi et al. 2010 Habibi, Y.; Lucia, L.A.; Rojas, O.J. Cellulose Nanocrystals: Chemistry, Self-Assembly, and Applications. _Chem. Rev._ **2010**, _110_, 3479-3500.
* Fantozzi et al. 2008 Fantozzi, F.; Barbanera, M.; Bartocci, P.; Massoli, S.; Buratti, C. Caratterizzazione delle biomasse: Il laboratorio del CRB. _La Termotecnica_ **2008**, _6_, 56-60.
* Chen et al. 2011 Chen, W.; Yu, H.; Liu, Y.; Hai, Y.; Zhang, M.; Chen, P. Isolation and characterization of cellulose nanofibers from four plant cellulose fibers using a chemical-ultrasonic process. _Cellulose_ **2011**, _18_, 433-442.
* Tang et al. 2014 Tang, Y.; Yang, S.; Zhang, N.; Zhang, J.
Preparation and characterization of nanocrystalline cellulose via low-intensity ultrasonic-assisted sulfuric acid hydrolysis. _Cellulose_ **2014**, _21_, 335-346.
* McMullan 1995 McMullan, D. Scanning electron microscopy 1928-1965. _Scanning_ **1995**, _17_, 175-185.
* SEM protocol UNIPG. Available online: [http://www.fisica.unipg.it/dip/servizi-informatici?q=howtoer&howto=sem#](http://www.fisica.unipg.it/dip/servizi-informatici?q=howtoer&howto=sem#) (accessed on 28 July 2014).
* Puetz and Aegerter 2004 Puetz, J.; Aegerter, M.A. Dip coating technique. _Sol-Gel Technol. Glass Prod. Users_ **2004**, _2_, 37-48.
* Abe et al. 2007 Abe, K.; Iwamoto, S.; Yano, H. Obtaining cellulose nanofibers with a uniform width of 15 nm from wood. _Biomacromolecules_ **2007**, _8_, 3276-3278.
* Abe and Yano 2009 Abe, K.; Yano, H. Comparison of the characteristics of cellulose microfibril aggregates of wood, rice straw and potato tuber. _Cellulose_ **2009**, _16_, 1017-1023.
* Zhou et al. 2014 Zhou, D.; Tang, Y.; Zhang, N.; Zhang, J.; Liu, D. Effect of various cellulose derivatives on the properties of pigment coatings: A comparative study. _Dig. J. Nanomater. Bios._ **2014**, _9_, 305-315.
* Paralikar et al. 2008 Paralikar, S.A.; Simonsen, J.; Lombard, J. Poly (vinyl alcohol)/cellulose nanocrystals barrier membranes. _J. Membr. Sci._ **2008**, _320_, 248-258.
* Cao et al. 2008 Cao, X.; Chen, Y.; Chang, P.R.; Stumborg, M.; Huneault, M.A. Green composites reinforced with hemp nanocrystals in plasticized starch. _J. Appl. Polym. Sci._ **2008**, _109_, 3804-3810.
* Huq et al. 2012 Huq, T.; Salmieri, S.; Khan, A.; Khan, R.A.; Le Tien, C.; Riedl, B.; Fraschini, C.; Bouchard, J.; Uribe-Calderon, J.; Kamal, M.R.; _et al_. Nanocrystalline cellulose (NCC) reinforced alginate based biodegradable nanocomposite film. _Carbohyd. Polym._ **2012**, _90_, 1757-1763.
Biorefinery aims at designing new virtuous and high-efficiency energy chains, achieving the combined production of biofuels (e.g., bioethanol) and biobased products. This emerging philosophy can represent an important opportunity for the industrial world, exploiting a new kind of nano-smart biomaterials in its production chains. This paper presents the lab experience carried out by the Biomass Research Centre (CRB) in extracting cellulose nanocrystals (NCC) from a pretreated (via Steam Explosion) fraction of _Cynara cardunculus_, a very common and invasive arboreal variety in central Italy. The NCC extraction methodology allows the separation of the crystalline content of cellulose. Such a procedure has been considered in the literature, with the exception of one step in which the conditions have been optimized by the CRB Lab. This procedure has been applied for the production of NCC from both _Cynara cardunculus_ and microcrystalline cellulose (MCC). The paper discusses some of the results achieved using the obtained nanocrystals as a reinforcing filler in a paper sheet: it was found that the tensile strength increased from 3.69 kg/15 mm to 3.98 kg/15 mm, the durability behavior (measured by bending number) increased from 95 to 141, and the barrier properties (measured by Gurley porosity) were improved, increasing from 38 s to 45 s. **Keywords:** biobased product; biorefinery; cellulose nanocrystals; residual biomass; steam explosion; paper industry
# Complex ecological communities and the emergence of island species area relationships

Ankit Vikrant and Martin Nilsson Jacobi

Department of Space, Earth and Environment, Chalmers University of Technology, Maskingrand 2, 412 58 Gothenburg, Sweden. Email: [email protected]

Received: date / Accepted: date

## 1 Introduction

The species-area relationship (SAR) is arguably the most widely studied scaling law in ecology, having received empirical support from numerous studies spanning different geographical regions and taxa (Drakare et al. (2006); Lomolino and Weiser (2001)). The predominant power law form of the SAR was first described by O. Arrhenius in 1921 (Arrhenius (1921)). It relates the number of species \(S\) to the area of a habitat \(A\) as \(S\sim A^{z}\), where the exponent \(z\) varies widely between 0 and 1 (Drakare et al. (2006)). A quantitative meta-analysis of a large number of SAR studies estimated its average value as 0.27 (Drakare et al. (2006)). The power law was contested by a semi-log relationship in 1922, which advocated the form \(S\sim z\log(A)\) (Gleason (1922)). While the power law relationship is more widely reported, the semi-log SAR has also found support in numerous studies (Drakare et al. (2006); Lomolino and Weiser (2001)). There have been attempts to explain the power law form based on species distributions (Coleman (1981); Leitner and Rosenzweig (1997); Picard et al. (2004); Sizling and Storch (2004)), abundance distributions (Preston (1948)) or population dynamics through constraints on immigration (Bastolla et al. (2001); Durrett and Levin (1996)). The prevalence of SARs has also been attributed to the combined effects of widely observed abundance distributions and the fact that individuals from the same species cluster together (Martin and Goldenfeld (2006)). The semi-log relationship can be recovered from the power law SAR in some limit using species-incidence functions that depend on colonization and extinction rates (Ovaskainen and Hanski (2003)). However, there is no unified framework to explain the emergence of these competing SARs. These scaling relationships are emergent in that they can be described by coarse-grained dynamics of large communities at the species level, without reference to finer details and properties of individual organisms. Understanding the assembly of large communities could therefore underpin the mechanisms that shape these scaling laws.

The analysis of large systems has benefitted from many emerging approaches in recent decades. In 1972, P.W. Anderson influenced the philosophy of science by suggesting that 'more is different' (Anderson (1972)), based on accumulating evidence from various disciplines. This means that the properties of a collective composed of many parts can be drastically different from the parts themselves. In the same year, R. May used random matrix theory to show that large ecosystems become unstable when their complexity increases beyond a threshold (May (1972)), which contradicted the prevailing notion that diversity increases stability. May's analytical results showed that one cannot have indefinite stability in large and complex ecosystems with many interactions.
There is a limit beyond which an ecosystem is not resilient to small perturbations and can exhibit large fluctuations in the population abundances of the constituent species. He defined complexity in terms of the connectance and interaction strength of the random matrix that encodes species interactions. The complex dynamics of such random interaction networks can be modelled using the Generalized Lotka-Volterra (GLV) equations. This model has been employed to uncover theoretical results ranging from the identification of structural properties that affect coexistence (Servan et al. (2018)) to the study of generic assembly patterns that are consistent across network structures (Barbier et al. (2018); Bunin (2017)). Some recent studies have investigated the distribution of the number of coexisting species that results from GLV dynamics of much larger species pools (Servan et al. (2018)). Others have even explored the progression and boundaries of extinction in large ecosystems (Pettersson et al. (2020)). These studies depart from identifying constraints on parameters that result in the complete co-existence of all species. When the interaction strength is increased beyond the regime where all species co-exist, the system wades through a phase characterized by single-species extinctions (Pettersson et al. (2020)). May's stability limit marks the end of this phase, beyond which no stable equilibria exist.

We hypothesize that a modified GLV model accounting for spatial scaling could exhibit SARs through the assembly of random communities of different sizes. Our analysis relies on introducing an area parameter into the GLV equations to test these questions. We explore a large part of this area parameter space to recover different numbers of surviving species beyond the regime of complete co-existence. By further allowing demographic immigration in the modified GLV model, we demonstrate that the two widely reported forms of the SAR stem from differences in immigration rates and the skewness towards weak interactions. We discuss the implications of our results in the context of island systems. The differences in the two SAR forms are more significant for smaller islands, especially on distant archipelagoes, which we describe in Section 4.1 and Supplementary appendix S2 using data from empirical studies (Diamond (1972); Diamond and Mayr (1976); Gooriah et al. (2020); Whittaker et al. (2014)).

## 2 Methods

### Generalized Lotka-Volterra with spatial scaling

In its usual form, the GLV model describes the dynamics of species with densities \(y_{i}\) through the following equations: \[\frac{dy_{i}}{dt}=r_{i}y_{i}\left(1-\frac{y_{i}}{K_{i}}\right)+\sigma y_{i}\sum_{j\neq i}B_{ij}y_{j} \tag{1}\] where \(K_{i}\) and \(r_{i}\) denote the carrying capacity and growth rate of the \(i^{th}\) species. \(B_{ij}\) expresses the pairwise inter-specific interaction strength between species \(i\) and any other species \(j\). The full matrix \(B\) contains information about all possible pairwise interaction strengths between species. Equation 1 implies that in the absence of interactions, each species grows to its carrying capacity \(K_{i}\). \(\sigma\) is the interaction strength parameter that scales all pairwise interaction strengths and consequently the variance of the interaction matrix \(B\). For a given value of \(\sigma\) below May's limit, the system eventually relaxes to a stable equilibrium that represents an assembled community in which species densities are resilient to small perturbations (Fig. 1).
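A minimal numerical sketch of this relaxation is given below. The parameter distributions follow the caption of Fig. 1, while the random seed, initial abundances, integration horizon and survival threshold are our assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
S = 100
B = rng.normal(-1.0, 0.2, (S, S))   # competitive interactions, mean -1
np.fill_diagonal(B, 0.0)            # Eq. (1) sums over j != i only
r = rng.normal(1.0, 0.2, S)         # growth rates
K = rng.normal(1.0, 0.2, S)         # carrying capacities

def glv(t, y, sigma):
    """Right-hand side of Equation (1)."""
    return r * y * (1.0 - y / K) + sigma * y * (B @ y)

def n_survivors(sigma, t_end=500.0, tol=1e-6):
    """Integrate to (near) equilibrium and count species above tol."""
    sol = solve_ivp(glv, (0.0, t_end), np.full(S, 0.5), args=(sigma,),
                    method="LSODA", rtol=1e-8, atol=1e-10)
    return int(np.sum(sol.y[:, -1] > tol))

for sigma in (0.01, 0.05, 0.1):
    print(sigma, n_survivors(sigma))   # fewer survivors as sigma grows
```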
The equilibrium densities of species at this stable fixed point could either be zero or positive. Fixed points without any extinctions are called feasible solutions, but these are of little concern to us since we are interested in communities with different numbers of surviving species assembled from a species pool. To address a spatial scaling perspective, we first argue for a physical interpretation of the parameter \\(\\sigma\\). First note that \\(\\sigma\\) operates by scaling all the interaction strengths within the ecosystem by the same magnitude. This implies that an increase in this parameter reflects increased encounter rates between various species. We posit that \\(\\sigma\\) behaves like the inverse of area, such that larger values of this parameter correspond to smaller areas.

Figure 1: Equilibrium abundances of a competitive community of 100 species for increasing values of the interaction strength parameter (\\(\\sigma\\)). Each set of vertical dots represents an assembled community corresponding to a given value of \\(\\sigma\\). The bold black line in the inset traces the corresponding number of surviving species. Note that higher values of \\(\\sigma\\) correspond to more extinctions. The interaction strengths, growth rates and carrying capacities are chosen from normal distributions with means -1, +1 and +1 respectively. The standard deviation is set to 0.2 for each of these.

More specifically, we replace densities in the GLV model by absolute abundances \\(x_{i}\\) and area. Without the interaction strength parameter, the equations become:

\\[\\frac{dx_{i}}{dt}=r_{i}x_{i}\\left(1-\\frac{x_{i}}{K_{i}}\\right)+\\frac{x_{i}A_{0}}{A}\\sum_{j\\neq i}B_{ij}x_{j} \\tag{2}\\]

where \\(A_{0}\\) parameterises this model for a given ecosystem. We set this parameter equal to 1 from here on. The carrying capacity \\(K_{i}\\) is now an absolute quantity instead of a density, which explains why an area factor does not appear in that term. To account for the effect of decreasing areas on carrying capacities, we rewrite Equation 2 as:

\\[\\frac{dx_{i}}{dt}=r_{i}x_{i}\\left(1-\\frac{x_{i}}{K_{i}(A/A_{init})^{\\gamma}}\\right)+\\frac{x_{i}}{A}\\sum_{j\\neq i}B_{ij}x_{j} \\tag{3}\\]

where \\(A_{init}\\) is the area corresponding to the first extinction. We fix \\(\\gamma=0.25\\) for the analysis described in this paper (in general, \\(\\gamma<0.5\\) is consistent with the results that we report) such that the carrying capacities scale weakly compared to the scaling of interspecies interaction strengths. Equivalently, the self-interactions scale weakly with changes in area. This assumption is consistent with the fact that individuals of the same species cluster, which has also been widely echoed in various works explaining power-law SARs (Martin and Goldenfeld (2006); Plotkin et al. (2000)). The self-interactions already exist at higher levels, and therefore change less drastically than interactions with other species. This set of equations is similar to the usual GLV equations, but the interaction strength parameter \\(\\sigma\\) is now replaced by the inverse of area. When working with abundances, it is more natural to argue that an increase in the interaction strength parameter is analogous to a decrease in area, which increases the encounter rates between species. We aim to describe a scenario where a regional pool of species is available to colonise different islands in a region (Kessler and Shnerb (2015)).
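In code, moving from Equation 1 to Equation 3 only changes the right-hand side. A minimal sketch, reusing \\(r\\), \\(K\\) and \\(B\\) from the snippet above and taking \\(A_{init}\\) as in the Fig. 2 caption, is:

```python
def glv_area(t, x, A, A_init=50_000.0, gamma=0.25):
    # Eq. 3: interactions scale as 1/A; carrying capacities as (A / A_init)**gamma
    K_eff = K * (A / A_init) ** gamma
    return r * x * (1.0 - x / K_eff) + (x / A) * (B @ x)

# Sweeping successively smaller areas (a factor of 0.85 per step, as in
# Supplementary S1) and recording the survivor count traces out a simulated SAR.
```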
For an island defined by its area, the dynamics resulting from our model culminate in a final community in which some species from the regional pool might not be feasible. Islands of different sizes consequently yield communities with different compositions. We use our model to simulate ecosystem dynamics as follows:

1. We pick entries of the interaction matrix \\(B_{ij}\\) from a normal distribution that is symmetric around a negative mean (we fix mean = -1 and standard deviation = 0.2).
2. The growth rates \\(r_{i}\\) are drawn from a normal distribution with mean = 1 and standard deviation = 0.2. The constraints on interactions and growth rates describe a community of competitive species.
3. The carrying capacities \\(K_{i}\\) are likewise drawn from a normal distribution with mean = 1 and standard deviation = 0.2 (cf. Fig. 1).
4. Starting from an initial area, the number of surviving species is plotted against successively smaller island areas \\(A\\).

We are primarily interested in investigating the properties and processes of community assembly that could possibly influence SARs using our spatially-implicit model. In all cases that we describe, we only show comparisons between the power-law and semi-log relationship forms. We perform a non-linear least-squares (NLSQ) analysis to fit and compare these forms using the least_squares function in the 'scipy.optimize' package, which implements the Trust Region Reflective algorithm described in Branch et al. (1999); this comparison is sketched in code below. We also plot the linear regression of the corresponding better form for each of the cases. If there are considerable differences between the parameter estimates from the linear regression and the NLSQ analysis (this is the case only for the power-law estimates from an empirical dataset with few islands (Whittaker et al. (2014))), then we perform model averaging using the R package 'sars' (Matthews et al. (2019)) to discern the better fit.

Analogous to the scenario of increasing \\(\\sigma\\), the system relaxes to a unique stable fixed point when the area parameter is above a certain threshold. We obtain the number of surviving species from the fixed point for each value of the area parameter (Fig. 1). We hypothesize that the different numbers of surviving species obtained by varying the area parameter result in the widely reported SARs. These relationships are usually studied for one type of species, or for species that occupy the same trophic level. This is congenial to our choice of a competitive interaction matrix. A competitive system could represent functional groups such as pollinators that compete for some common resources. A competitive GLV model with demographic noise has been shown to reproduce the neutral island theories of Wilson-MacArthur and Hubbell (Kessler and Shnerb (2015)). The power-law SAR has also been recovered from a spatially-explicit extension of the Lotka-Volterra competition model that allowed migration between patches (O'Sullivan et al. (2019)).

### Spatial scaling patterns with immigration

Immigration slows down the decline in the number of surviving species through the introduction of new species (MacArthur and Wilson (1963)) or by delaying extinctions through incoming individuals of existing species (demographic immigration) (Brown and Kodric-Brown (1977)). What effects do different levels of immigration have on spatial scaling patterns in our model ecosystem?
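Before turning to the immigration term, the NLSQ comparison described in Section 2.1 can be sketched as follows (a sketch continuing the earlier snippets; the initial guesses are illustrative, while the 'trf' method is the Trust Region Reflective algorithm named above):

```python
from scipy.optimize import least_squares

def compare_sar_fits(A, S_obs):
    """Fit S = c * A**z (power law) and S = c + z*log(A) (semi-log) by NLSQ."""
    residuals = {
        "power":   lambda p: p[0] * A ** p[1] - S_obs,
        "semilog": lambda p: p[0] + p[1] * np.log(A) - S_obs,
    }
    out = {}
    for name, res in residuals.items():
        fit = least_squares(res, x0=[1.0, 0.3], method="trf")  # Trust Region Reflective
        out[name] = {"params": fit.x, "sse": float((fit.fun ** 2).sum())}
    return out
```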
To address the question of immigration, we redefine our GLV model with an additional term for demographic immigration:

\\[\\frac{dx_{i}}{dt}=r_{i}x_{i}\\left(1-\\frac{x_{i}}{K_{i}(A/A_{init})^{\\gamma}}\\right)+\\frac{x_{i}}{A}\\sum_{j\\neq i}B_{ij}x_{j}+\\lambda e^{-\\beta/\\sqrt{A}} \\tag{4}\\]

The last term represents the immigration rate. This term has a negligible contribution for smaller values of area, where a species may go extinct without support from the growth and interaction terms. A species is considered extinct in our simulations if its abundance falls below \\(10^{-5}\\). As the area of an ecosystem shrinks, its distance from other patches also increases. This reduces the possibility of immigrants entering the ecosystem. The immigration term in the above equation has an exponential function that represents varying levels of demographic rescue (Brown and Kodric-Brown (1977)) as a function of area. For large values of area, \\(\\lambda\\) is the effective immigration rate. \\(\\beta\\) is a constant in the exponential function, analogous to the characteristic length scale in the spatially extended GLV model described in O'Sullivan et al. (2019). We fixed \\(\\beta=1000\\) and compared the results for different values of \\(\\lambda\\).

We also consider interaction networks with more realistic connectances and distributions of interactions between species. Real food-webs are characterized by many weak and few strong interactions, which also endow stability (McCann et al. (1998)). We study how the preponderance of weak interactions influences SARs. We use interactions drawn from exponential distributions that represent communities with varying skewness towards weak interactions. The rate parameter of the exponential distribution serves as a measure of this skew.

In Section 4.1, we discuss our results in the light of two related empirical studies (Diamond (1972); Diamond and Mayr (1976)) that exemplify the dependence of SARs on immigration rates. Both studies investigated bird diversity in the Southwest Pacific but differ in terms of their remoteness from the 'source island' of New Guinea. Our findings for low immigration rates (equivalently, a remote archipelago) are also consistent with the data from the Andaman and Azores Islands (Gooriah et al. (2020a,b); Whittaker et al. (2014); see Supplementary Appendix S2). The dataset used in (Diamond (1972); Diamond and Mayr (1976)) has islands with areas spanning over six orders of magnitude, conclusively differentiating between competing forms of the SAR. These studies exclude 'isolated' islands (those far from large islands within the archipelago) from their analysis. Speciation might influence the assembled communities, especially on islands with fewer species. Islands whose avifaunas have not reached equilibrium are not included either. These are recolonized volcanic islands and islands that have undergone overall size contraction or modification of connecting land-bridges in the past c. 10,000 years.

## 3 Results

Fig. 2 corresponds to the simplest case of an ecosystem with full connectance and no immigration. Starting with 100 species, we plot the number of surviving species for island areas where at least one species goes extinct. In our model, the SAR is best represented by a semi-log function. The curve saturates at an upper asymptote for very high values of the area parameter (Fig. 2). The slope of the semi-log SAR varies with changes in the means of interactions and growth rates.
It is also worth noting that for an intermediate range of areas, even the log-log plot could show a misleadingly good fit for a power law SAR (Fig. 2).

Figure 2: Species-area plots generated through 50 realizations of the interaction matrix with mean = -1 and variance = 0.2. \\(A_{init}\\) = 50000. (A) The semi-log form shows a better fit. (B) The corresponding linear regression on a semi-log plot, which shows an obvious upper asymptote.

### What determines a log-log or a semi-log SAR?

Our model - in its simplest form - supports the semi-log relationship that is also widely reported in the literature (Drakare et al. (2006)). Our analysis suggests that varying levels of immigration lead to different functional forms of the SAR. We start with a very low value of \\(\\lambda\\) and progressively increase it to check the resulting SAR. For very low immigration rates, the semi-log relationship is supported (see Fig. S1 in Supplementary Material), as in the scenario without immigration (Fig. 2). However, there exists an intermediate regime best characterised by a power law (Fig. 3). This form of the SAR also lacks the upper asymptote that we observed in the semi-log fit (Fig. 2). Interestingly, using area in the immigration term instead of its square root does not change the above results (Fig. S2 in Supplementary Material).

The level of skew towards weak interactions strongly influences SAR shape. Given the same immigration level, a higher skew towards weak interactions favours a semi-log relationship (Fig. 4, Fig. S3 in Supplementary Material). This result does not change for fat-tailed distributions such as the Pareto distribution in the regime where stable solutions exist (Fig. S3 in Supplementary Material). When studied for communities with low connectances, the SARs thus obtained have lower slopes, as in most natural communities (Fig. 4).

## 4 Discussion

Much attention has been devoted to explaining the power law form of the SAR. Our results show the emergence of the two most widely observed forms of the SAR through differences in immigration rates and skewness towards weak interactions (Fig. 5). We use a simple spatially-implicit model that demonstrates properties of SARs in island-like systems where inter-island immigration is very low. Our analysis supports the view that community dynamics could result in the emergence of such spatial patterns. In general, since we relate area to the interaction strength parameter, the SARs seem to stem from the scaling of interactions for species within the same trophic level.

In addition to immigration rates and skewness towards weak interactions, connectance also influences SAR slopes. If all other parameters are kept the same, then communities with lower connectance result in lower SAR slopes (Fig. 4, Fig. S3 in Supplementary Information). We also find that higher immigration rates correspond to higher SAR slopes, such that the number of surviving species falls off much more sharply with area for large areas. The SAR slopes we obtain are much more reasonable for choices based on realistic interaction networks (Fig. 4). We expect that some network structures could result in even lower slopes, but without much effect on the SAR form.

Figure 3: Species-area plots demonstrating the better fit of the power law SAR for intermediate values of immigration rates. Panels A and C show the fits for \\(\\lambda=0.1\\) and \\(\\lambda=0.01\\) respectively for 50 instances of the interaction matrix. Panels B and D correspond to the respective linear regressions on log-log plots.
The interaction strength mean and variance are -1 and 0.2 respectively. \\(A_{init}\\) = 15000.

### Immigration shapes SARs: The case of remote archipelagoes

Our results have important implications for island systems, which we illustrate using two extensive empirical studies from the Southwest Pacific (Diamond (1972); Diamond and Mayr (1976)). The Solomon archipelago in (Diamond and Mayr (1976)) is more than 600 km away from the 'source island' of New Guinea. The authors assume that intra-archipelago immigration rates are much higher than the immigration rates from the 'source' island of New Guinea. They further plot the SAR for three groups of islands within the Solomon Archipelago, which supports a semi-log form. The slope of the SAR is nearly the same across these three groups of islands (Fig. 6). We surmise that the immigration of birds into an island is also balanced by emigration to other islands within the same group. In other words, the system is at a steady state of zero or very low immigration within the archipelago. Thus, any effective immigration should emanate from the source island or from islands in other distant archipelagoes. The large distance from these other sources implies that the net immigration rates to the Solomon Islands are very low.

Figure 4: SAR plots for exponentially distributed interactions with two different rate parameters. All plots correspond to \\(\\lambda\\) = 0.01 and connectance = 0.1, where the entries of the interaction matrix are chosen randomly as an Erdős-Rényi graph. The semi-log form is better supported for rate parameter = 0.5, as demonstrated by the estimates in A (\\(A_{init}\\) = 15000). Panel C shows the fits for rate parameter = 0.25, where the power law performs better (\\(A_{init}\\) = 20000). (B) Linear regression on a semi-log plot using the same simulated data as in panel A. (D) Log-log plot showing the corresponding linear regression for data in C.

In fact, the authors state that with increasing isolation of an archipelago, the SAR may shift in form from a power function to an exponential (semi-log). The species richness data from the Azores (Whittaker et al. (2014)) as well as the Andaman Islands (Gooriah et al. (2020b)) also concur with this claim (see Supplementary Appendix S2). The Azores archipelago includes only nine islands but has been extensively studied over many decades. Both relationship forms show good fits to the data, primarily because of the small number of islands, but the semi-log form is more predictive for smaller islands. All of these studies are consistent with our theoretical finding that low immigration rates lead to semi-log SARs.

The dataset in (Diamond and Mayr (1976)) also has many islands smaller than a few square kilometres, which are usually absent in many SAR studies (Lomolino and Weiser (2001)). Both forms of the SAR could show a very good (and similar) fit to data for larger island sizes (Fig. 6). As the authors point out, really small islands should be included in SAR analyses to conclusively identify the correct form of the relationship (Diamond and Mayr (1976)).

Figure 5: Immigration rates and skewness towards weak interactions determine SAR forms. The semi-log relationship dominates in the absence of immigration. Higher immigration rates from a source pool result in power law relationships, but these could shift to semi-log SARs if the relative proportion of weak interactions is increased. \\(S\\), \\(A\\) and \\(z\\) represent the number of species, area and the scaling law exponent respectively.
Another study, from islands that lie 5 to 300 miles from New Guinea, found a power-law SAR (Diamond (1972)). Considering that these islands lie closer to the 'source' island of New Guinea, the immigration rates are likely to be higher than those for the Solomon Archipelago. This lends support to our theoretical results on the incidence of power-law SARs for higher immigration levels.

Figure 6: SAR plots for three groups of non-isolated islands within the Solomon Archipelago. These groups differ in how the islands within them were connected during the Pleistocene period. The islands in Group 3 did not have any history of connections. The semi-log relationship shows a good fit to data (A). The \\(R^{2}\\) values for the regression lines are 0.978, 0.982 and 0.955 for Groups 1, 2 and 3 respectively. The slopes for the different groups are very similar. Panel B shows a clear departure from a power-law relationship for smaller areas. The linear regression lines indicate a good fit for islands larger than one square mile. In particular, the \\(R^{2}\\) value for such islands in Group 1 is 0.976 from the power-law SAR. This demonstrates that a naive inference could support a power law, in spite of the islands spanning over four orders of magnitude in area (\\(>\\) 1 square mile).

## 5 Conclusion

Given a random configuration of competitive species, we recover many known features of SARs while also identifying factors that might best explain the variation in these relationships. The two SAR forms might show similar fits to data over a large span of areas, but their differences could be stark for smaller islands, especially when immigration rates from a source pool are low. Our results imply semi-log relationships for low immigration rates, which are possible through factors such as the remoteness of an archipelago as in (Diamond and Mayr (1976)). Assuming a power law SAR in such situations could mislead extinction scenarios, since it would overestimate the species richness for smaller areas. It is extremely important to investigate the effects of habitat loss, especially on small islands in distant archipelagoes, given that islands have witnessed a disproportionately large number of extinctions (Loehle and Eschenbach (2012); Spatz et al. (2017)). We hope that our study prompts empirical studies to systematically evaluate the effects of immigration and community structure on species-area relationships.

###### Acknowledgements.

We would like to thank Susanne Pettersson for useful discussions. AV is especially grateful to Akshay Surendra for detailed feedback on the manuscript.

## References

* Anderson (1972) Anderson PW (1972) More is different. Science 177(4047):393-396
* Arrhenius (1921) Arrhenius O (1921) Species and area. Journal of Ecology 9(1):95-99
* Barbier et al. (2018) Barbier M, Arnoldi JF, Bunin G, Loreau M (2018) Generic assembly patterns in complex ecological communities. Proceedings of the National Academy of Sciences 115(9):2156-2161
* Bastolla et al. (2001) Bastolla U, Lassig M, Manrubia S, Valleriani A (2001) Diversity patterns from ecological models at dynamical equilibrium. Journal of Theoretical Biology 212(1):11-34
* Branch et al. (1999) Branch MA, Coleman TF, Li Y (1999) A subspace, interior, and conjugate gradient method for large-scale bound-constrained minimization problems. SIAM Journal on Scientific Computing 21(1):1-23
* Brown and Kodric-Brown (1977) Brown JH, Kodric-Brown A (1977) Turnover rates in insular biogeography: effect of immigration on extinction.
Ecology 58(2):445-449
* Bunin (2017) Bunin G (2017) Ecological communities with lotka-volterra dynamics. Physical Review E 95(4):042414
* Coleman (1981) Coleman BD (1981) On random placement and species-area relations. Mathematical Biosciences 54(3-4):191-215
* Diamond (1972) Diamond JM (1972) Biogeographic kinetics: estimation of relaxation times for avifaunas of southwest pacific islands. Proceedings of the National Academy of Sciences 69(11):3199-3203
* Diamond and Mayr (1976) Diamond JM, Mayr E (1976) Species-area relation for birds of the solomon archipelago. Proceedings of the National Academy of Sciences 73(1):262-266
* Drakare et al. (2006) Drakare S, Lennon JJ, Hillebrand H (2006) The imprint of the geographical, evolutionary and ecological context on species-area relationships. Ecology Letters 9(2):215-227
* Durrett and Levin (1996) Durrett R, Levin S (1996) Spatial models for species-area curves. Journal of Theoretical Biology 179(2):119-127
* Gleason (1922) Gleason HA (1922) On the relation between species and area. Ecology 3(2):158-162
* Gooriah et al (2020a) Gooriah LD, Davidar P, Chase JM (2020a) Data from: Species-area relationships in the andaman and nicobar islands emerge because rarer species are disproportionately favored on larger islands. Dryad, Dataset, DOI [https://doi.org/10.5061/dryad.7m0cfxpr5](https://doi.org/10.5061/dryad.7m0cfxpr5)
* Gooriah et al (2020b) Gooriah LD, Davidar P, Chase JM (2020b) Species-area relationships in the andaman and nicobar islands emerge because rarer species are disproportionately favored on larger islands. Ecology and Evolution 10(14):7551-7559
* Kessler and Shnerb (2015) Kessler DA, Shnerb NM (2015) Generalized model of island biodiversity. Physical Review E 91(4):042705
* Leitner and Rosenzweig (1997) Leitner WA, Rosenzweig ML (1997) Nested species-area curves and stochastic sampling: a new theory. Oikos 79(3):503-512
* Loehle and Eschenbach (2012) Loehle C, Eschenbach W (2012) Historical bird and terrestrial mammal extinction rates and causes. Diversity and Distributions 18(1):84-91
* Lomolino and Weiser (2001) Lomolino M, Weiser M (2001) Towards a more general species-area relationship: diversity on all islands, great and small. Journal of Biogeography 28(4):431-445
* MacArthur and Wilson (1963) MacArthur RH, Wilson EO (1963) An equilibrium theory of insular zoogeography. Evolution 17:373-387
* Martin and Goldenfeld (2006) Martin HG, Goldenfeld N (2006) On the origin and robustness of power-law species-area relationships in ecology. Proceedings of the National Academy of Sciences 103(27):10310-10315
* Matthews et al (2019) Matthews TJ, Triantis KA, Whittaker RJ, Guilhaumon F (2019) sars: an r package for fitting, evaluating and comparing species-area relationship models. Ecography 42(8):1446-1455
* May (1972) May RM (1972) Will a large complex system be stable? Nature 238(5364):413-414
* McCann et al (1998) McCann K, Hastings A, Huxel GR (1998) Weak trophic interactions and the balance of nature. Nature 395(6704):794-798
* O'Sullivan et al (2019) O'Sullivan JD, Knell RJ, Rossberg AG (2019) Metacommunity-scale biodiversity regulation and the self-organised emergence of macroecological patterns. Ecology Letters 22(9):1428-1438
* Ovaskainen and Hanski (2003) Ovaskainen O, Hanski I (2003) The species-area relationship derived from species-specific incidence functions.
Ecology Letters 6(10):903-909
* Pettersson et al (2020) Pettersson S, Savage VM, Nilsson Jacobi M (2020) Predicting collapse of complex ecological systems: quantifying the stability-complexity continuum. Journal of the Royal Society Interface 17(166):20190391
* Picard et al (2004) Picard N, Karembe M, Birnbaum P (2004) Species-area curve and spatial pattern. Ecoscience 11(1):45-54
* Plotkin et al (2000) Plotkin JB, Potts MD, Leslie N, Manokaran N, LaFrankie J, Ashton PS (2000) Species-area curves, spatial aggregation, and habitat specialization in tropical forests. Journal of Theoretical Biology 207(1):81-99
* Preston (1948) Preston FW (1948) The commonness, and rarity, of species. Ecology 29(3):254-283
* Servan et al (2018) Servan CA, Capitan JA, Grilli J, Morrison KE, Allesina S (2018) Coexistence of many species in random ecosystems. Nature Ecology & Evolution 2(8):1237-1242
* Sizling and Storch (2004) Sizling AL, Storch D (2004) Power-law species-area relationships and self-similar species distributions within finite areas. Ecology Letters 7(1):60-68
* Spatz et al (2017) Spatz DR, Zilliacus KM, Holmes ND, Butchart SH, Genovesi P, Ceballos G, Tershy BR, Croll DA (2017) Globally threatened vertebrates on islands with invasive species. Science Advances 3(10):e1603080
* Whittaker et al (2014) Whittaker RJ, Rigal F, Borges PA, Cardoso P, Terzopoulou S, Casanoves F, Pla L, Guilhaumon F, Ladle RJ, Triantis KA (2014) Functional biogeography of oceanic islands and the scaling of functional diversity in the azores. Proceedings of the National Academy of Sciences 111(38):13709-13714

## S1 Additional Methods

To set up a system describing the dynamics of many species through our model, we choose normally distributed per capita growth rates, carrying capacities, and entries of the interaction matrix (unless explicitly specified). The growth rates are distributed around a positive mean such that there is a very low probability of having species with negative growth rates. Similarly, the random interactions are distributed around a negative mean such that almost all interactions are negative. Choosing carrying capacities from uniform or log-normal distributions is also consistent with the results that we report in this paper.

The model is numerically solved for different areas for which stable equilibria exist. For a given area, this translates to numerically simulating the GLV system for many time steps until it converges to the stable fixed point of Equation 2 in the main text. With regard to the parameter choices, we tried to maximize the possible area values for which each of the numerical experiments could be run while ensuring that all solutions converge to stable equilibria. With regard to immigration, a species is considered extinct if its abundance falls below \\(10^{-5}\\) for a given area. Initial abundances are drawn from a normal distribution (mean = 10, standard deviation = 0.1), but the final community does not depend on these abundances, especially when stable equilibria exist. We simulate over successively smaller areas such that consecutive areas are related by a factor of 0.85. The simulations in figure 2 are an exception (we fix this factor as 0.75), partly because stable solutions exist for a larger number of iterations here.

## S2 SAR for bird data from the Andaman and Azores Islands

The Andaman Islands are an isolated archipelago comprising many islands in the Bay of Bengal. Here we show that the bird dataset from this archipelago (Gooriah et al.
(2020a)) shows a better fit to the semi-log SAR (Fig. S5), which is particularly evident at smaller areas. The Azores archipelago is a well-studied region that is considerably distant from the mainland, but it comprises only nine islands. We used the spider and beetle datasets reported in Whittaker et al. (2014). The parameter estimation from the linear regression and the non-linear least squares differs for the power law - in part because of the low number of islands. Therefore, we also perform model averaging using the R package 'sars' (Matthews et al. (2019)). The corrected Akaike Information Criterion (AICc) weights support the semi-log form, in agreement with the non-linear least squares method. These results also hold when only the indigenous (non-exotic) species are considered.

## S3 Supplementary Figures

**Fig. S2** SAR plots when the immigration term is \\(\\lambda e^{-\\beta/A}\\) where \\(\\beta=30000\\) and \\(\\lambda=5\\times 10^{-5}\\). We obtain the power-law SAR for the specified \\(\\lambda\\) when the immigration decreases exponentially with area, as shown in A. The semi-log relationship dominates if \\(\\lambda\\) is any lower. (B) Log-log plot with the linear regression using the same data as in A. \\(A_{init}=15000\\)

**Fig. S3** SAR plots for \\(\\lambda=0.01\\), connectance \\(=1\\) where the interactions are drawn from an exponential distribution. Panel A corresponds to rate parameter \\(=2\\), which results in a better fit for the semi-log form. Panel C implies a better fit for the power law, where rate parameter \\(=0.5\\). Panels B and D show the linear regressions corresponding to data in A and C respectively. \\(A_{init}=20000\\) for all cases here.

**Fig. S4** SAR plots for \\(\\lambda=0.01\\), connectance = 1 where the interactions are drawn from a Pareto distribution. Panels A and B correspond to shape parameter = 3. A semi-log function fits the simulated data in A better - panel B shows the corresponding linear regression. The least-squares error in panel C implies a better fit for the power law, where shape parameter = 2. Panel D is a log-log plot showing the corresponding linear regression. \\(A_{init}=20000\\) for all cases here.

**Fig. S5** (A) The semi-log form shows a better fit to the bird data from the Andaman Islands, shown using non-linear least squares errors. (B) and (C) Plots showing linear regression fits on a log-log and semi-log plot respectively. Notice the larger deviation at lower areas in the log-log plot.

**Fig. S6** Spider diversity data from the Azores Islands. (B) We perform model averaging over the two functional forms to compare the model fits using AICc weights. The semi-log form performs slightly better, but the differences are hard to compare using other methods (A, C and D).

**Fig. S7** Indigenous spider diversity data from the Azores Islands. (B) We perform model averaging over the two functional forms to compare the model fits using AICc weights. The semi-log form performs slightly better, but the differences are not hard to assess using other methods (A, C and D).

**Fig. S8** Beetle diversity data from the Azores Islands. (B) We perform model averaging over the two functional forms to compare the model fits using AICc weights. The semi-log form performs slightly better, but the differences are hard to compare using other methods, especially because the differences in parameter estimates for the power-law fit are considerable (A, C and D).

**Fig. S9** Diversity data for indigenous beetles from the Azores Islands.
(B) We perform model averaging over the two functional forms to compare the model fits using AICc weights. The semi-log form performs slightly better but the differences are not hard to assess using other methods, especially because the differences in parameter estimates for the power-law fit are considerable. (A, C and D).
It has been a century since the species-area relationship (SAR) was first proposed as a power law to explain how species richness scales with area. There have been many attempts to explain the origin of this predominant form. Apart from the power law, numerous empirical studies also report a semi-log form of the SAR, but very few have addressed its incidence. In this work, we test whether these relationships could emerge from the assembly of large random communities on island-like systems. We reformulate the generalized Lotka-Volterra model by introducing an area parameter that determines the species richness of the assembled communities. Our analysis demonstrates that the two most widely reported relationship forms can emerge due to differences in immigration rates and skewness towards weak interactions. We particularly highlight the incidence of the semi-log SAR for low immigration rates from a source pool, which is consistent with several previous empirical studies. The two SAR forms might show good fits to data over a large span of areas, but a power law overestimates species richness on smaller islands in remote archipelagoes.

Keywords: Community assembly · macroecology · species-area relationship · complex ecological communities
# Formatting the Landscape: Spatial conditional GAN for varying population in satellite imagery

Tomas Langer, Intuition Machines Inc (contact author: [email protected]) &Natalia Fedorova, Max Planck Institute for Evolutionary Anthropology &Ron Hagensieker, osir.io

## 1 Introduction

Human beings are not actionless pawns in the face of climate change; they adapt to both direct and indirect pressures brought about by an increasingly unstable climate. People can either choose to leave problematic areas, migrating both locally and internationally to adapt to their circumstances, or they can stay and change their lifeways. In either case, and on top of expected demographic change, human adaptation to climate change thus reshuffles the settlement landscape [9]. As local populations ebb and flow, land use and land cover change in response. Due to this high mobility of populations, particularly given recent work on climate-induced migration [32], one of the major challenges for planning for climate change scenarios is thinking about where people will be, and how this will change the landscape.

State-of-the-art work on gridded population forecasts shows us the value of a greater geographic resolution and a border-oblivious approach [32; 20]. However, such forecasts still require analytic processing to evaluate their consequences for local landscapes. In this paper, we explore the potential for generating satellite imagery conditional on population change as an in-between step for analysis, and as a means of directly visualizing forecasts in a realistic way.

To do so, we employ the latest generative models from the field of unsupervised machine learning, namely generative adversarial networks [13]. GANs are a state-of-the-art technique for high resolution image generation, as they can generate images from random noise, also called the latent space. We refer the reader to a review article [39] for additional details. Generative models have been successfully applied to various high resolution datasets, mainly faces (e.g. CelebAHQ [21], FFHQ [22]), objects (e.g. Imagenet [8]), and scenes (e.g. LSUN [40], Cityscapes [7]). However, generative models in the high resolution earth observation domain are relatively under-explored [33].

On top of image generation, one might wish to edit a real image via a trained generative model. For this, a mapping from image space to latent space is required. This is a natural feature of autoencoders [25] or flow-based generative models [24]. However, one can learn the mapping for a trained GAN model as well, which is called projection [1; 23]. In order to facilitate fast and accurate mapping, we decided to train a hybrid model based on the Adversarial Latent AutoEncoder (ALAE) [31], which combines a GAN and an AE by training a generator and an encoder end-to-end. Furthermore, to explicitly control the generated population, we add a conditional input to the generator (more information in appendix 5.1), where the input label is a pixel-level population map.

Our main contributions are:

* Adding a spatial conditioning module to the ALAE architecture
* Training the model on satellite + population data
* Visualizing population in generated images
* Evaluating the generative model's performance in terms of quality and the population change effect

## 2 Methods

### Data

Two data sets are utilized over a study site of continental Central America including Mexico.
We focus on Central America here as it has been identified as a region already experiencing high levels of internal and international migration due to climate change, and has been the focus of prior modelling efforts [20]. However, this approach could of course be applied to any geographic region. Image data is derived from surface reflectance data from ESA's Sentinel-2 mission [26]. Sentinel-2 is a constellation of two satellites, which collect images at a 5-day revisit. The second data set is the Global Human Settlement population data (GHS-POP) for the years \\(2000\\) and \\(2015\\) from the European Commission's Joint Research Centre [34; 11]. Both datasets are publicly available, and the details of how we sample the data are available in appendix 5.3.

### Model architecture

We use ALAE [31] as the basis of our model, which is in turn based on the StyleGAN [22] architecture. We refer the reader to these papers for an in-depth description of the architecture, and focus here instead on our modifications. We adapt the ALAE codebase [30] for our training and evaluation, and our additional code and trained models can be accessed here 2.

Figure 1: **SCALAE model training** where white circles represent original ALAE inputs, yellow circles are our additional inputs, green boxes are original ALAE modules, blue boxes are our modified modules, and red boxes are the losses.

Our training model differs from ALAE in 2 ways: the added _Spatial Conditional Style_ (SCS) module, and the modified encoder input. Hence, equation 2 from the ALAE paper can be modified to our equation 1. All training modifications are also shown in figure 1.

\\[\\text{G}=G\\circ SCS\\circ F\\text{, and }\\text{D}=D\\circ E \\tag{1}\\]

The SCS module is used to feed conditional information (the population map) to the generator. The style input \\(w\\) and the appropriately resized population map _pop_ are combined by learned functions that map both to the same number of channels; the results are then summed together and fed into the adaptive instance normalization layers of the generator. This process can be seen in figure 5 in appendix 5.5.

The discriminator gets the conditional information (population map) as an input so that it provides a learning signal based on how well the population map and the generated image fit together. This is done via a simple concatenation of the RGB and population channels. Because the ALAE discriminator is partly an encoder, the population map is concatenated with the RGB channels of either the real or fake image and fed into the encoder to predict the style \\(w\\), which is then fed into the discriminator head.

### Reconstruction

After model training, we reconstruct a real satellite image by mapping it to the latent space of the model. This is trivial thanks to the autoencoder architecture, visualized in figure 6 in appendix 5.6. Since the population map serves as an input to the generator, we can feed in a custom population map and control the population in the reconstructed output. This process can be used to visualize how real-world places would look with an alternative population distribution, for example, a climate change induced population change.

## 3 Results

### Generation model quality

To evaluate the quality of generations we calculate the Frechet Inception Distance (FID) [16]; more information in appendix 5.8. The overall model FID is \\(45.13\\) (lower is better) for random generations.
Our work would benefit from general benchmarking in the field, as at the moment, FID scores are only useful as a comparison across different models evaluated on the same data set. We thus report the FID score with the hope of stimulating further comparison in the future. In the absence of a direct comparison, we visually check the images, and as portrayed by figure 8 in appendix 5.10.1, we confirm that generations strongly resemble realistic satellite imagery with sufficient diversity.

### Reconstruction quality

To further evaluate model quality we focus on reconstructing real satellite imagery. In this case, we give the model a real reference image as an additional input, which, as expected, improves the FID to \\(35.97\\). Examples of reconstructions can be seen in figure 2 and in figure 9 in the appendix.

By generating reconstructions of real images, we create matching pairs that can be used for evaluation. We calculate the difference between the pairs of each real vs reconstructed image. The difference measure can be computed on the pixel level, and on the semantic level using pretrained Inception features (the same features as used in the FID score calculation). We use a standard l2 distance to compute the difference measures. For the pixel level, the mean pixel l2 distance between pairs is \\(277.40\\), but a frequency plot of the distribution of pixel distances for \\(9436\\) image comparisons has a long tail, so much worse images in terms of pixel reconstruction are also possible, as shown in figure 7(a) in appendix 5.9. The mean value of the semantic distance is \\(15.15\\), and here a frequency plot of the distances is much more normally distributed, meaning better and worse images are equally likely (see figure 7(b) in appendix 5.9). The extreme tails of both of the above measures are visualized in figure 12 in appendix 5.10.5.

### Population effect

To evaluate the effect of population on reconstructed images, and thus show the efficacy of our model (i.e. how well the population conditioning works), we produce generations for varying population inputs and visualize the pixel difference between them. In figure 3, we calculate the pixel difference averaged over 20 samples of style vectors \\(w\\), showing the high consistency of the model reconstruction and thus highlighting that the population conditioning is spatially consistent. An example of this process is visualized in figure 10 in appendix 5.10.3 and in figure 11 in appendix 5.10.4.

## 4 Discussion and Concluding remarks

We have created a model architecture that makes it possible to spatially condition style-based generative methods and thus to explicitly disentangle the latent space from a spatial label. We show that the population in the generated images can be manually controlled in a fine-grained manner, giving the user the ability to change the population in specific parts of an image. Moreover, the encoder of our network can be used to map real images to the latent space, making it possible to edit not only fake, but also real, imagery.

We believe this model could be useful for visualizing climate change related population forecasts such as those modelled in [20], as it allows practitioners and researchers to generate imagery flexibly, concretely, and with a means to characterize uncertainty. Furthermore, the ability to map real images to the latent space opens up several image editing possibilities. We can continuously perform latent space arithmetic to create meaningful changes in the generated images, following previous GAN examples (e.g.
[22; 23; 18]). Moreover, combining latent space arithmetic with explicit population conditioning delivers more control over exactly what is generated and where. Importantly, this can be done continuously, not just to generate a static outcome, but also to interpolate between or visualize a distribution of possible outcomes. Likewise, it is difficult to evaluate the climate change effect on population on real imagery without reference longitudinal data. This will become more possible as longitudinal satellite data collections with matching population grids become available for longer time spans. Finally, the imagery we generate can be fed directly into existing frameworks for land use and land cover analysis, without further retraining or adaptation.

Figure 3: _left_: population input, _right_: pixel difference averaged over 20 styles

Figure 2: _left_: original input image, _center_: input population map, _right_: generated image

## Acknowledgments and Disclosure of Funding

First of all, we would like to thank Lucas Kruitwagen, our mentor from the NeurIPS 2020 "Tackling Climate Change with Machine Learning" workshop mentorship program, for feedback on structuring the project and positioning within the wider literature. Next, we are thankful to Intuition Machines Inc for providing the necessary compute resources for this project, and Tom Bishop from Intuition Machines for general feedback on the paper. Last but not least, we would like to thank Bjorn Lutjens, Esther Wolf, and Aruna Sankaranarayanan for fruitful discussion on the topic of generating satellite imagery and its relation to climate change.

## References

* Abdal et al. [2019] R. Abdal, Y. Qin, and P. Wonka. Image2stylegan: How to embed images into the stylegan latent space?, 2019.
* Abdal et al. [2020] R. Abdal, P. Zhu, N. Mitra, and P. Wonka. Styleflow: Attribute-conditioned exploration of stylegan-generated images using conditional continuous normalizing flows, 2020.
* Brock et al. [2019] A. Brock, J. Donahue, and K. Simonyan. Large scale gan training for high fidelity natural image synthesis, 2019.
* Chen et al. [2019] T. Chen, X. Zhai, M. Ritter, M. Lucic, and N. Houlsby. Self-supervised gans via auxiliary rotation loss, 2019.
* Chen et al. [2016] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets, 2016.
* Cherti [2018] M. Cherti. Deep generative neural networks for novelty generation: a foundational framework, metrics and experiments, 01 2018.
* Cordts et al. [2016] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding, 2016.
* Deng et al. [2009] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database, 2009.
* Devitt [2015] C. Devitt. Climate Change and Population Displacement. pages 27-32, 2015.
* Dieng et al. [2019] A. B. Dieng, F. J. R. Ruiz, D. M. Blei, and M. K. Titsias. Prescribed generative adversarial networks, 2019.
* Technical report by the Joint Research Centre (JRC), European Union, 2019. URL [https://ghsl.jrc.ec.europa.eu/documents/GHSL_Data_Package_2019.pdf?t=1478q532234372](https://ghsl.jrc.ec.europa.eu/documents/GHSL_Data_Package_2019.pdf?t=1478q532234372).
* Geirhos et al. [2019] R. Geirhos, P. Rubisch, C. Michaelis, M. Bethge, F. A. Wichmann, and W. Brendel.
Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness, 2019. * Goodfellow et al. [2014] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. _Advances in Neural Information Processing Systems_, 3:2672-2680, 2014. ISSN 10495258. * Gorelick et al. [2017] N. Gorelick, M. Hancher, M. Dixon, S. Ilyushchenko, D. Thau, and R. Moore. Google earth engine: Planetary-scale geospatial analysis for everyone. _Remote sensing of Environment_, 202:18-27, 2017. * Gu et al. [2020] S. Gu, J. Bao, D. Chen, and F. Wen. Giqa: Generated image quality assessment, 2020. * Heusel et al. [2018] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium, 2018. * Huang and Belongie [2017] X. Huang and S. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization, 2017. * Harkonen et al. [2020] E. Harkonen, A. Hertzmann, J. Lehtinen, and S. Paris. Ganspace: Discovering interpretable gan controls, 2020. * Isola et al. [2018] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks, 2018. * Jones [2020] B. Jones. Modeling Climate Change-Induced Migration in Central America & Mexico Methodological Report, 2020. * Karras et al. [2018] T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of gans for improved quality, stability, and variation, 2018. * Karras et al. [2019] T. Karras, S. Laine, and T. Aila. A style-based generator architecture for generative adversarial networks, 2019. * Karras et al. [2020] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila. Analyzing and improving the image quality of stylegan, 2020. * Kingma and Dhariwal [2018] D. P. Kingma and P. Dhariwal. Glow: Generative flow with invertible 1x1 convolutions, 2018. * Conference Track Proceedings_, pages 1-14, 2014. * Louis et al. [2016] J. Louis, V. Debaecker, B. Pflug, M. Main-Knorn, J. Bieniarz, U. Mueller-Wilm, E. Cadau, and F. Gascon. Sentinel-2 sen2cor: L2a processor for users, 2016. * Mirza and Osindero [2014] M. Mirza and S. Osindero. Conditional Generative Adversarial Nets. pages 1-7, 2014. URL [http://arxiv.org/abs/1411.1784](http://arxiv.org/abs/1411.1784). * 18th IEEE International Conference on Machine Learning and Applications, ICMLA 2019_, pages 462-468, 2019. doi: 10.1109/ICMLA.2019.00086. * Park et al. [2019] T. Park, M. Y. Liu, T. C. Wang, and J. Y. Zhu. Semantic image synthesis with spatially-adaptive normalization. _Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition_, 2019-June:2332-2341, 2019. ISSN 10636919. doi: 10.1109/CVPR.2019.00244. * Pidhorskyi et al. [2020] S. Pidhorskyi, D. Adjeroh, and G. Doretto. Adversarial latent autoencoders, 2020. URL [https://github.com/podgorskiy/ALAE](https://github.com/podgorskiy/ALAE). * Pidhorskyi et al. [2020] S. Pidhorskyi, D. A. Adjeroh, and G. Doretto. Adversarial Latent Autoencoders. pages 14092-14101, 2020. doi: 10.1109/cvpr42600.2020.01411. * Rigaud et al. [2018] K. K. Rigaud, A. de Sherbinin, B. Jones, J. Bergmann, V. Clement, K. Ober, J. Schewe, S. Adamo, B. McCusker, S. Heuser, and A. Midgley. Groundswell: Preparing for internal climate migration. 2018. URL [https://openknowledge.worldbank.org/handle/10986/29461](https://openknowledge.worldbank.org/handle/10986/29461). * Rolnick et al. [2019] D. Rolnick, P. 
L. Donti, L. H. Kaack, K. Kochanski, A. Lacoste, K. Sankaran, A. S. Ross, N. Milojevic-Dupont, N. Jaques, A. Waldman-Brown, A. Luccioni, T. Maharaj, E. D. Sherwin, S. K. Mukkavilli, K. P. Kording, C. Gomes, A. Y. Ng, D. Hassabis, J. C. Platt, F. Creutzig, J. Chayes, and Y. Bengio. Tackling Climate Change with Machine Learning. 2019. URL [http://arxiv.org/abs/1906.05433](http://arxiv.org/abs/1906.05433).
* Schiavina et al. [2019] M. Schiavina, S. Freire, and K. MacManus. Ghs population grid multitemporal (1975, 1990, 2000, 2015) r2019a. _Eur. Comm. JRC_, 2019.
* Shen et al. [2020] Y. Shen, C. Yang, X. Tang, and B. Zhou. Interfacegan: Interpreting the disentangled face representation learned by gans, 2020.
* Srivastava et al. [2017] A. Srivastava, L. Valkov, C. Russell, M. U. Gutmann, and C. Sutton. Veegan: Reducing mode collapse in gans using implicit variational learning, 2017.
* [37] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions, 2014.
* [38] T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans, 2018.
* [39] Z. Wang, Q. She, and T. E. Ward. Generative adversarial networks in computer vision: A survey and taxonomy, 2020.
* [40] F. Yu, Y. Zhang, S. Song, A. Seff, and J. Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. _arXiv preprint arXiv:1506.03365_, 2015.
* [41] N. Yu, K. Li, P. Zhou, J. Malik, L. Davis, and M. Fritz. Inclusive gan: Improving data and minority coverage in generative models, 2020.
* [42] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks, 2020.
* [43] P. Zhu, R. Abdal, Y. Qin, and P. Wonka. SEAN: Image Synthesis With Semantic Region-Adaptive Normalization. pages 5103-5112, 2020. doi: 10.1109/cvpr42600.2020.00515.

## Broader Impact

We envision that our research here could be of benefit for both local and international organizations who are committed to integrating AI methodologies into their planning and policy workflows. However, given the complexity of the models and the infrastructure and financing required for training, there is an obvious gap in who is able to actually use these models, leading to concerns about centralization.

In terms of biases, we identified several sources of bias in our method that could lead to undesired outcomes and should be explicitly taken into account before any direct application of this method. First of all, there is the model bias of GANs themselves. GANs are known to suffer from mode dropping, which could result in uncommon features of the dataset being ignored by the generative model, even if they are present in the data. This bias can be approximately quantified by measuring the diversity of generations using recent methods [15] and visualized, which means it can be evaluated in relation to particular use cases. Moreover, recent methods have made substantial improvements to mitigate this bias [36; 10; 41]. Note that our method SCALAE is a hybrid that includes an autoencoder in the latent space, which partly reduces the mode dropping problem; however, proper evaluation of this phenomenon is ongoing research and is left for future work.

Secondly, there are several sources of bias in the data itself. On one hand, it is the data collection bias.
For example, because of cloud cover, satellite data collection usually focuses on the dry season only. On the other hand, there is some concern about leveraging path dependency biases, coming from the fact that the generated images can only reflect patterns that were observed before. They thus cannot capture new developmental trajectories. Nonetheless, there is interesting new research being done in the direction of novelty generation [6]. Finally, we do not anticipate, or recommend, that a generative approach be used in isolation for policy and planning - it is a tool to aid field professionals and ideally should be linked directly with theoretically and locally informed behavioral models; in fact, we postulate that the conditioning approach we develop here makes this eminently possible.

## 5 Appendices

### Additional background on conditional GANs

In addition to image generation, GANs and other methods have shown promising results on controlling the generation process. This can be done when the latent space is sufficiently disentangled, in which case adjusting a section of the latent space results in a meaningful semantic change in the generated image. A well disentangled latent space can be learned in a completely unsupervised fashion [5]; however, providing an explicit signal to the model, if available, results in more precise control over the generated image [4]. Therefore, our method focuses on so-called conditional image generation. Conditioning, even though it requires additional labels, helps disentanglement and allows explicit content generation. The difference between unconditional and conditional image generation is that the former generates realistic images from random noise, whereas the latter generates realistic images from a given input label. This label can come in many forms, usually image-level [27; 3; 28] or pixel-level [19; 29; 43] attributes, or even images from a different domain [38; 42]. In our case, the input label is a pixel-level population map.

### Related works

A popular class of generative models relevant to our problem is the domain transfer models [19; 38; 42]. These methods produce a function that translates images between 2 or more domains (e.g. horse to zebra, day to night, etc.). In comparison with our method, they do not allow for sampling random images. Thus, our method has the advantage of being usable outside of its current use case without retraining, an important point when we consider the financial and environmental costs. However, most domain transfer methods contain direct skip connections between the reference real image and the generated output, which is known to produce better pixel-wise reconstructions. Another line of work focuses on discovering meaningful latent space directions after the model has been fully trained [18; 35; 2]. This is complementary to our model, and the exploration of these is left for future work.

### Data: Sampling information

Sampling sites for Sentinel-2 imagery were determined based on the largest population increase in this time period, and a set of \\(9436\\) tiles with extents of \\(1024\\) by \\(1024\\) pixels was extracted. The selected sites are shown in appendix 5.4. Surface reflectance data of the dry season (Jan-March) was cloud-masked and averaged utilizing Google Earth Engine [14]. Finally, GHS-POP data was reprojected into the corresponding UTM-zones of the Sentinel-2 tiles.
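As an illustration of the compositing step, a minimal Earth Engine (Python API) sketch might look as follows; the collection ID, year, region and cloud-mask rule are our assumptions for illustration, not the exact pipeline used in this paper:

```python
import ee
ee.Initialize()

def mask_clouds(img):
    # QA60 bits 10 and 11 flag opaque clouds and cirrus, respectively
    qa = img.select('QA60')
    clear = qa.bitwiseAnd(1 << 10).eq(0).And(qa.bitwiseAnd(1 << 11).eq(0))
    return img.updateMask(clear)

roi = ee.Geometry.Rectangle([-92.5, 14.5, -92.0, 15.0])  # hypothetical tile extent

composite = (ee.ImageCollection('COPERNICUS/S2_SR')
             .filterDate('2019-01-01', '2019-03-31')     # Jan-March dry season (year illustrative)
             .filterBounds(roi)
             .map(mask_clouds)
             .select(['B4', 'B3', 'B2'])                 # RGB bands
             .mean())                                    # per-pixel average of clear observations
```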
Given the illustrative purpose of this study, we have not included the non-visible bands of Sentinel-2 and focus on the RGB channels only, as those are easily interpretable to the human eye. However, our method is invariant to the number of channels used and could be trivially retrained with the full multi-spectral depth of Sentinel-2.

### Sites for image acquisition

Figure 4: **Sites for image acquisition** Image acquisition based on population growth between 2000 and 2015; brighter color indicates higher population growth.

### SCS module

Figure 5: **SCS module**

To the best of our knowledge, adding spatial conditioning to a StyleGAN-like architecture has not been explored before. [28] successfully add non-spatial class conditioning to a StyleGAN by feeding a class label \\(c\\) together with noise \\(z\\) into the F network and subsequently into the G network. This approach, however, does not take spatial dimensions into account. We take inspiration from [29; 43], as they use the same AdaIN [17] layers as StyleGAN, and introduce the SCS module.

### SCALAE reconstruction method

Figure 6: **SCALAE reconstruction method** where white circles represent original ALAE inputs, yellow circles are our additional inputs, green boxes are original ALAE modules, and blue boxes are our modified modules.

### Training details

We train the SCALAE model end-to-end on the paired Sentinel-2 and GHS-POP data sets on 4xV100 GPUs, following the default training parameters from the ALAE codebase [30]. The Sentinel-2 RGB channels are scaled between -1 and 1. The population map is log-transformed and likewise scaled between -1 and 1. We train with progressive growing for 200 epochs, switching to a higher resolution every 16 epochs. Our base learning rate is 0.002 and batch size 512, both adjusted accordingly with the default progressive growing schedule. The training losses remain unchanged. Our code and all training parameters will be publicly released upon publication.

### FID details

The FID score uses features from an Inception network [37] trained on the ImageNet [8] data set, which consists of natural photos of various objects. However, it does not include any images from the satellite or other earth observation domains. We note that the domain shift in this case is not problematic because ImageNet-trained networks mostly focus on textures [12], which are the main features we are trying to quantify. A minimal sketch of the FID computation is given at the end of this appendix.

### Reconstruction: distance histograms

Figure 7: **Histograms of distances**

### Additional visualisations

#### 5.10.1 Random generations

Figure 8: **Random generated samples**

#### 5.10.2 Reconstructions

Figure 9: **Targeted reconstructions** Additional examples

#### 5.10.3 Population pixel difference

Figure 10: **Population pixel difference** Panel D contains the pixel difference between the generated images (panels A and C) and clearly reproduces the input population (panel B). Still, speckles of small pixel differences appear throughout the image. This shows that changing the population has some impact globally, suggesting some level of entanglement between the population map and the latent style \\(w\\).

#### 5.10.4 Population manipulation

Figure 11: **Custom population input** We fix the latent style input \\(w\\) and change the population input. The population is accurately generated, even with the unrealistic population distribution in the last row.

#### 5.10.5 Best and worst reconstructions

Figure 12: **Best and worst generations** From top to bottom: semantic worst, pixel worst, pixel best, semantic best
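For reference, here is a minimal re-implementation of the standard FID formula referenced in section 5.8, i.e. the Fréchet distance between Gaussians fitted to Inception features. This is our own sketch, not the evaluation code used for the paper; the 64-dimensional features in the demo are random placeholders (real FID uses 2048-dimensional InceptionV3 pool features).

```python
# Minimal FID sketch: Frechet distance between two feature Gaussians.
import numpy as np
from scipy import linalg

def fid(feats_a, feats_b):
    mu1, mu2 = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov1 = np.cov(feats_a, rowvar=False)
    cov2 = np.cov(feats_b, rowvar=False)
    covmean, _ = linalg.sqrtm(cov1.dot(cov2), disp=False)
    if np.iscomplexobj(covmean):      # drop tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu1 - mu2
    return diff.dot(diff) + np.trace(cov1 + cov2 - 2.0 * covmean)

# Demo with random placeholder features instead of real Inception activations:
rng = np.random.default_rng(0)
real = rng.normal(size=(500, 64))
fake = rng.normal(loc=0.3, size=(500, 64))
print(fid(real, fake))   # > 0; identical distributions would give ~0
```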
Climate change is expected to reshuffle the settlement landscape, forcing people in affected areas to migrate, changing lifeways, and driving demographic change throughout the world. Changes to the geographic distribution of population will have dramatic impacts on land use and land cover and thus constitute one of the major challenges of planning for climate change scenarios. In this paper, we explore a generative model framework for generating satellite imagery conditional on gridded population distributions. We make additions to the existing ALAE [31] architecture, creating a spatially conditional version: SCALAE. This method allows us to explicitly disentangle population from the model's latent space and thus input custom population forecasts into the generated imagery. We postulate that such imagery could then be directly used for land cover and land use change estimation using existing frameworks, as well as for realistic visualisation of expected local change. We evaluate the model by comparing pixel and semantic reconstructions, as well as by calculating the standard FID metric. The results suggest the model captures population distributions accurately and delivers a controllable method for generating realistic satellite imagery.
**Remote-Sensing Quantum Hyperspace by Entangled Photon Interferometry** _By Gergely A. Nagy, Rev. 1.5, 2011_ Submitted 19.01.2011. App. 'A' w/MDHW Theory pre-published by Idokep.hu Ltd., R&D, Science columns, article no. 984. _Article Status_ - Manuscript. Awaiting approval/review for intl. publication (as of 19.01.2011) _Experimental status_ - Quantum-optics lab w/observatory needed for experimental testing **Sc. Field** Quantum Physics / Quantum optics **Topic** Hyperspace interactions

# Remote-Sensing Quantum Hyperspace by Entangled Photon Interferometry

By _Gergely A. Nagy_, Rev 1.5, 2011

###### We show that such properties can be removed, and quantum-level information from certain hypersurfaces of past, present or future spacetime may be collected in real time, without resulting in any paradox or violation of causality. We examine the possible side effects of the 'Multi-Dimensional Hyperwaves Theory' (also presented as an appendix to this paper) on all the above implementations. 1

Footnote 1: Another possibility is that only the sum / average of \\(V_{s}\\) and \\(V_{i}\\) equals \\(c\\), and either particle's propagation happens at a speed \\(>c\\) (in the local observer's frame of reference), since accurate detection of the position of either particle may result in extreme uncertainty in its twin's speed (the photon having no mass, or momentum), based on an extension of Heisenberg's uncertainty principle. Both alternatives are discussed in detail later in this article.

## Original experiment interpretation

Yoon-Ho Kim, R. Yu, S. P. Kulik and Y. H. Shih, following the design of Marlan O. Scully and Drühl [1], had shown that it is possible to delay both erasing and marking which-way path information using entangled photons created by SPDC separation in any such entanglement-combined double-slit experiment(s). The delay, or distance, was not limited (in either time or space). The experiment setup is shown in Fig. 1. The experiment seems to have proven that the earlier detection of the (signal) photon was always - or at least statistically - correlated with the future choice of its idler twin(s). Therefore it also appears to have proven that entanglement between the particles and their twins was not only independent of space-like separation, but also independent of time _[in the local observer's frame of reference]_.

## A relativistic interpretation

We propose an alternative theory to explain the phenomenon. If we presume that both the signal and the idler photons propagate at the speed of light (\\(c\\))1 in their frame of reference, the theory of Relativity [2] implies that no time is needed, for them, to reach either (D\\({}_{0}\\)-D\\({}_{n}\\)) screen, as they are equally close to the point of SPDC (zero distance). So the difference between the lengths of the optical paths is irrelevant (for the photon). Both the signal and the idler reach their destinations (their detection screens) in no time, having travelled through all the mirrors and beamsplitters they will meet in the local observer's frame of reference, _even if_ the experiment setup is intentionally changed between the D\\({}_{0}\\) 'signal' and D\\({}_{n}\\) 'idler' detections (which is obviously possible in the local observer's frame of reference).

Footnote 2: Note that this implies an already existing future; but does not imply that there can be only one future.
It would be very much consistent with the 'Many-Worlds Theory' [3] that the detection at D\\({}_{0}\\) collapses the wavefunction only for the local observer's universe, and a new universe would be spawned in which the detection at D\\({}_{0}\\) could be different, implicating a different future (and not resulting in determinism). So, what we may measure as 'time of detection', or delay between the detection of the signal and the idler, only exists for the local observer. The photons reach their targets at the same time (in their frame of reference), so they indeed can, and _do_, 'know' instantly how they will be detected (which-way path marked or erased)2.

However, extracting the instantly available, future-related information on the delayed choice of the signal photon's entangled twin, using data only from the signal (D\\({}_{0}\\)) screen, was theorized to be impossible for the local observer without involving a 'coincidence counter' device, which could remotely match and filter the entangled idlers' wavefunction collapses on the remote screens (D\\({}_{1}\\)-D\\({}_{2}\\), D\\({}_{3}\\)-D\\({}_{4}\\) detectors, respectively). And, as that 'coincidence counter' could only be accessed at the speed of light, early access to the results would be prohibited3.

Footnote 3: Note that the coincidence counter is only needed for the local observer to prove the existence of the correlation.

If it were possible for the local observer to interpret the signal photon's detection at D\\({}_{0}\\) in relation to its idler's later choice, quantum-level information on the future could be obtained. Of course, it is impossible in the original setup; we now examine the constraints. The restriction manifested itself in the D\\({}_{1}\\)-D\\({}_{2}\\) detection screens' interference fringes being phase-shifted by exactly 180\\({}^{\\circ}\\), or \\(\\pi\\), thus canceling each other out to a collapsed waveform, making it indistinguishable from the D\\({}_{3}\\)-D\\({}_{4}\\) detections. Therefore, from a detection at D\\({}_{0}\\) it could never be pre-assumed whether it would contribute to a QM erasing (both-ways D\\({}_{1}\\), D\\({}_{2}\\)) or to a marking (which-way D\\({}_{3}\\), D\\({}_{4}\\)) joint detection. With such constraints, it seemed to have been proven impossible, at least with the topology used, to obtain the information before the future detection of the signal's idler twin as well.

### Phase shift development

The 180\\({}^{\\circ}\\) (\\(\\pi\\)) phase shift of the D\\({}_{1}\\) and D\\({}_{2}\\) complementary interference patterns can either be explained by QM mathematics (as Kim et al. had shown in their paper), or simply by the redundant topological symmetry of the detectors in the idler part of the experiment (i.e. trying to extract the both-ways or no-path information with 2 independent detectors, mirroring them symmetrically, leaving the chance for them to cancel each other out). (A toy numerical illustration of this complementary-fringe cancellation is given below.) Note that the original paper mentions, but neither explains nor correlates, a shift observed in the D\\({}_{3}\\)-D\\({}_{4}\\) detectors' collapsed waveforms' peaks (and detector D\\({}_{4}\\) is not even featured on the schematics in the original paper, as seen in Fig. 1). The D\\({}_{3}\\)-D\\({}_{4}\\) peak shift may be much more important than it seems.
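The cancellation can be made concrete with a toy calculation of our own (not taken from [1]): two fringe patterns of equal visibility, shifted by \\(\\pi\\), sum to a flat distribution, so the unconditioned D\\({}_{0}\\) record shows no fringes.

```python
# Toy illustration: pi-shifted complementary fringes cancel to a flat pattern.
import numpy as np

x = np.linspace(0.0, 4.0 * np.pi, 1000)    # detector position (arb. units)
v = 0.9                                    # assumed fringe visibility

r1 = 1.0 + v * np.cos(x)                   # D0 counts joint with D1 (eraser)
r2 = 1.0 + v * np.cos(x + np.pi)           # D0 counts joint with D2: pi-shifted

total = r1 + r2                            # all D0 counts, no coincidence filter
print(np.max(np.abs(total - 2.0)))         # ~1e-16: no fringe survives the sum
```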
This redundancy still indicates that some of the observed key phenomena (i.e. the apparent 180\\({}^{\\circ}\\) (or \\(\\pi\\)) phase shift for the D\\({}_{1}\\)-D\\({}_{2}\\) detectors) are a consequence of the original topology's redundant 'eraser-path' symmetry, independent of _principle_. Furthermore, if shifted, the D\\({}_{3}\\)-D\\({}_{4}\\) joint-detection distribution curve must feature two statistical maximums, which indeed could make it partially distinguishable from the D\\({}_{1}\\)-D\\({}_{2}\\) joint detections, thus carrying an estimated \\(\\sim\\)10\\({}^{1}\\) - 10\\({}^{3}\\) (non-zero) bits/signal detection of information on the idler's later choice, in _advance_ of the idler's registration, and _without_ the coincidence counter. However, if the original topology is to scale, there may also be a much stronger principle at work - QM may not allow paradoxes. And to avoid a possible violation of causality, it must hide information on the future from the local observer, ensuring that the observer cannot intervene to change the already observed future.

### Explaining topological loopbacks

If we carefully analyze the original setup of the DCQE experiment by Kim et al., we must realize that the idler photons' detections (_at least_ D\\({}_{1}\\) and D\\({}_{2}\\)) lie _inside_ the future light cone of the signal's (D\\({}_{0}\\)) detection (and its local observer's, if any). This is all because mirrors are used to alter the course of the idlers instead of letting them propagate along the light cone's edge. They are redirected close enough to D\\({}_{0}\\). Therefore, if the local observer at D\\({}_{0}\\) gains knowledge of the idler twin's future choice, he (or she) can still (both theoretically and technically) intervene to intentionally change the experiment's setup, and thereby violate causality and realize a paradox (by changing an already observed future).

Figure 2: Illustrates the questionable shifts. Partial advance-in-time information (shown in the right middle graph) may be available because D\\({}_{3}\\)-D\\({}_{4}\\) seems to lie outside, while D\\({}_{1}\\)-D\\({}_{2}\\) lies inside, D\\({}_{0}\\)'s future light cone.

If causality stands, QM should not let this happen, so it must find a way to hide the already existing information on the twin's delayed choice from the local observer at D\\({}_{0}\\). In the original setup, as we have already shown, this manifests itself in the 180\\({}^{\\circ}\\) (complementary) phase shift of the D\\({}_{1}\\)-D\\({}_{2}\\) eraser detectors4.

Footnote 4: If it should turn out that modifications of the topology are possible without placing the distant (D\\({}_{1}\\)-D\\({}_{n}\\)) detectors outside the local observer's light cone, it would open the way to a violation of causality, i.e. the future retrocausally changing the past. What we are trying to show is that this can be avoided.

### Avoiding the paradox

If the observer can gain knowledge of the future, but cannot, even in principle, do anything to change it, causality is _not_ violated. One way of ensuring this would be to place the target of observation on the hypersurface of, or outside, the future light cone (relative to the D\\({}_{0}\\) detection and its observer). The easiest way to achieve this is to let the idler twins propagate in a straight line - preferably in outer space - without any mirrors or beamsplitters altering their path or course5 (before detection).
Footnote 5: If we need to introduce mirrors in the idler's part, we can still place the D\\({}_{0}\\) detector, along with its local observer, outside the light cone by introducing one more mirror for the signals, which reflects the photons in the opposite direction (of the idler's propagation). This way, we should still be able to obtain information without violating causality or invoking paradoxes.

A photon propagating freely in space at the speed of light (\\(c\\)) will always be found on (or fluctuating around)6 the hypersurface of the light cone. Therefore, it will interact with whatever erases (or marks) the which-way path information, and if the local observer gains knowledge of that by observing the local (D\\({}_{0}\\)) detection, he (or she) can do nothing - even in principle - to change it. The interaction's space-like distance would be exactly as far away as achievable by light; this way causality could not be violated, and no paradoxes should occur.

Footnote 6: Please see the 'Multi-dimensional hyperwaves theory'.

Therefore, gaining information from that special hypersurface of spacetime should be possible for the local observer. It should be emphasized that the local observer (at D\\({}_{0}\\)) would _not_ need to wait (i.e. 2 million years) for the idlers to reach a distant target (i.e. in the Andromeda Galaxy). Information on the idler's future fate could be _immediately_ available from the local signal (D\\({}_{0}\\)) detection.

### Testing the theory

If we are to exploit this feature, we propose to simply remove all mirrors, beamsplitters and coincidence counters from the 'idler' part of the experiment. Then, for the 1\\({}^{\\rm st}\\) test, copy the 'signal' setup symmetrically to where the idler part was. Set up the D\\({}_{1}\\) detector exactly as D\\({}_{0}\\). We predict that the outcome should be an interference pattern on both screens (whether or not a 180\\({}^{\\circ}\\) or \\(\\pi\\) phase shift occurs, although likely, is now obviously irrelevant). Now, for step 2, take the 'idler' part and the D\\({}_{1}\\) screen very far away from the D\\({}_{0}\\) screen and the local observer. We predict that - even though the interference patterns on both screens may be rotating symmetrically - the type of the patterns should not change. For step 3, we introduce a remote triggering mechanism at the distant (D\\({}_{1}\\)) screen that can change the setup very fast (i.e. by opto-electronics) to detect or erase the which-way path. The remote triggering mechanism would be activated by a normal (i.e. radio) signal that travels at \\(c\\). Fig. 3 shows the experimental setup schematics needed for testing all three (future, present and past) hypersurface quantum signaling modes. Note that focusing probability waves with adaptive optics / mirrors to scan larger distances would be technically very challenging, but theoretically possible (even for cosmological distances).

Figure 3: Time-frequency of the idler's light cone scanned by a modified, quantum-entanglement DCQE device

If our theory is correct, the local wavefunction should start to collapse immediately when we send out the signal from D\\({}_{0}\\) - that is, without having to wait for the triggering signal to reach its distant target (the D\\({}_{1}\\) screen). Why? Because the signal photons at D\\({}_{0}\\) are entangled with the future of their twins (in our frame of reference), and the which-path marking will start exactly at that point in the future when the triggering signal reaches the distant setup.
While on the other side, the idler photons at D\\({}_{1}\\) are entangled with the past of their signals (in our frame of reference). If we send another signal to restore the original setup (erase the which-path information), the interference fringes should start to reappear immediately at D\\({}_{0}\\). From all this - if it works - we can conclude that an observer who chooses to gain knowledge of the which-way path can only see the past (this would be consistent with our optical observations of the universe). An observer who chooses to erase the which-way path (thus preserving the wavefunction) can only see the future (in his or her local frame of reference). But since the particles are entangled, if either side detects the which-way path, the collapse happens on both ends. If they both measure, there is no information available for either of them, elegantly avoiding another paradox. If the experiment does not comply with the predictions, we must assume that either the signal and/or the idler photons are, indeed, _not_ all propagating at the speed of light; and the Theory of Relativity may be challenged7.

Footnote 7: The original paper by Yoon-Ho Kim, R. Yu, S. P. Kulik, Y. H. Shih, designed by Marlan O. Scully & Drühl, fails to provide actual details on the measured time difference between the signal detection(s) at the D\\({}_{0}\\) detector and the detection(s) of the idler(s) at the D\\({}_{1}\\)-D\\({}_{n}\\) detectors. The paper seems to presume that both signal and idler photons will definitely propagate at the speed of light (\\(c\\)), so it only introduces a simple calculation, stating that there should be a constant 8 ns delay between the D\\({}_{0}\\) and D\\({}_{n}\\) detectors, as they are approx. 2.5 meters apart (optical path). Note that in itself this can clearly not be exact, even if the photons are indeed propagating at \\(c\\), since the D\\({}_{0}\\)-D\\({}_{3}\\) and D\\({}_{0}\\)-D\\({}_{4}\\) optical paths are significantly shorter than the D\\({}_{0}\\)-D\\({}_{1}\\) and D\\({}_{0}\\)-D\\({}_{2}\\) paths. The missing data may be the deciding factor.

For such a case, we propose two other solutions, both of them implying that gaining information from outside the light cone (hyperspace) may still be possible.

**Interacting with the 'present' hypersurface**

According to Heisenberg's Uncertainty Principle [4], an infinitely accurate marking of position implies infinite uncertainty in momentum. Photons do not have mass, so we can only apply the Principle to speed. Detecting a photon's exact position would lead to very high uncertainty in its speed (i.e. it could reach many times \\(c\\)), possibly infinite speed (in the case of infinitely accurate position detection). Of course, in the case of a 'normal' photon this is meaningless to discuss, since accurate detection of a photon's position can only be carried out by destroying the photon at the same time. But with entangled photons, something very different may happen. It may be possible that both the signal and the idler photon's speeds will be 'infinite', or very high (many times that of \\(c\\)), if the position of at least one of them is very accurately measured before the detection of both of them (in the local observer's frame of reference). In this case, the wavefunction of the signal photons would not be dependent on the future of the idlers. It would, instead, be dependent on the _present_ (or very close to the present) hypersurface interaction of the idlers in spacetime.
Testing the theory would be easy. With the same remote setup of the D\\({}_{1}\\) screen, and with intentionally controllable marking or erasing of the which-way path, the local observer at D\\({}_{0}\\) could send out a normal triggering signal to start marking the which-way path. The local interference pattern should start collapsing when the normal triggering signal reaches its destination (travelling at \\(c\\)). In this case, we would be interacting with the entangled particle in present hyperspace. Yet, there is one more alternative to discuss.

### Past-hyperspace interaction

Uncertainty in speed need not necessarily result in speeds higher than \\(c\\) for both particles. There is one more way of ensuring that detection happens at the same time even in the observer's frame of reference. For this, the speed of the 'signal' particle may be forced to be lower than \\(c\\), while the idler particle would need to move faster than \\(c\\). The exact (\\(v_{s}\\), \\(v_{i}\\)) speeds would be easily expressible by the ratio, or difference, of the optical lengths of the paths (between the SPDC source and the D\\({}_{0}\\), D\\({}_{1}\\) screens, respectively); a short worked example is given at the end of this section. In this case, the wavefunction of the D\\({}_{0}\\) photon would be dependent on the past-hyperspace interaction of the idlers, where the hypersurface's angle (between the past light cone and the present) would also be defined by the ratio of the optical paths, via \\(v_{s}\\), \\(v_{i}\\).

### Possible practical uses

Each of the theories above offers the obvious ability to realize superluminal communication, as well as remote sensing (mapping) of the quantum properties of unknown regions of spacetime8.

Footnote 8: When using the modified DCQE for measuring remote quantum properties of spacetime, the pattern on the D\\({}_{0}\\) screen will be dependent on whether the interaction (of unknown depth, or distance) is such that it 'erases' or locally exploits ('marks') the which-way path. When scanning natural or artificial objects - such as gas, liquids, metal, rocks or plasma - one could not hope to receive either a totally intact interference pattern (fringes) or a completely collapsed one. The local observer would likely be receiving very fine fluctuations of the pattern, somewhere between half-collapsed and half-intact fringes.

Fig. 5 shows the concept of detecting a solar burst in advance. Also note that such 'remote sensing' could reveal information on cosmological events which have not yet entered the normally observable part of the universe, of the light cone (i.e. which are happening 'real-time', simultaneously with the distant observation). Fig. 4 illustrates the concept of such sensing. We could, for example, remote-sense a solar flare burst of the Sun in real time (or even in advance), not having to wait for the light of the event to reach us.

Fig. 6 shows a superluminal signaling setup. Please note that the above realization is a special, symmetrical subcase of the DCQE which does not even require entanglement over the dimension of time at all. It also does not violate Relativity, since if information travels to its past on one side, it travels to its future on the other. Thus, it arrives in the 'real-time' present for the distant receiver.

Fig. 7 shows a visualized concept of such mapping of hyperspace. Superluminal communication and remote sensing would also be very useful once we enter the interplanetary or interstellar arena, where normal communication can take minutes, days or even years.
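As a short worked example (our own arithmetic, based on the assumption in footnote 1 that the average of \\(v_{s}\\) and \\(v_{i}\\) equals \\(c\\)): simultaneous detection in the lab frame requires \\(L_{s}/v_{s}=L_{i}/v_{i}\\), where \\(L_{s}\\) and \\(L_{i}\\) are the optical path lengths from the SPDC source to D\\({}_{0}\\) and D\\({}_{1}\\), respectively. Together with \\(v_{s}+v_{i}=2c\\) this gives

\\[v_{s}=\\frac{2c\\,L_{s}}{L_{s}+L_{i}},\\qquad v_{i}=\\frac{2c\\,L_{i}}{L_{s}+L_{i}}.\\]

For \\(L_{i}=3L_{s}\\), for instance, \\(v_{s}=c/2\\) and \\(v_{i}=3c/2\\) - a subluminal signal paired with a superluminal idler, as described above.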
**Conclusion**

We theorized that obtaining quantum-level information from either the past, present or future hypersurfaces does not necessarily violate causality, and therefore should be considered possible. We based our conclusion on the results of a classic DCQE experiment, where we showed that the reason for not being able to extract meaningful information before detecting both the idler and the signal photons may simply be a flaw of the local loopback topology, with the detection screens located in each other's light cones, capable of violating causality and causing a paradox. Also, the symmetrically mirrored D\\({}_{1}\\)-D\\({}_{2}\\) detectors, which leave the chance for the interference fringes to cancel out, can simply be avoided. Changing the topology and removing or countering all optical loopbacks should also remove such limitations in principle. Testing the theory is possible with today's technology, already available in well-equipped quantum-optical laboratories; yet if experiments indicate that the predictions could be correct, the real use of such remote-sensing equipment would be in space. For humanly observable results, a distance of at least 0.1-1 light-seconds between the local observer (sender) and the scanned objects (or receivers) would be desirable. A quantum-property map of hyperspace could be scanned just like the microwave background radiation, showing the optically non-observable regions of our universe.

_Note_ **Appendix A** contains a short introduction to the Multi-dimensional Hyperwaves theory and its implications in relation to the hyperspace remote-sensing devices theorized in our article. Contact information for the author(s) is also available in App. A.

## References

* [1] 'A Delayed Choice Quantum Eraser' by Yoon-Ho Kim, R. Yu, S. P. Kulik, Y. H. Shih, Marlan O. Scully & Drühl, PACS Numbers: 03.65.Bz, 42.50.Dv. Online avail. [http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903047v1.pdf](http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903047v1.pdf)
* [2] Einstein A. (1916 (translation 1920)), Relativity: The Special and General Theory, New York: H. Holt and Company
* [3] Bryce Seligman DeWitt, R. Neill Graham, eds, The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press (1973), ISBN 0-691-08131-X. Contains Everett's thesis: The Theory of the Universal Wavefunction, pp 3-140.
* [4] Heisenberg, W. (1927), "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", Zeitschrift für Physik 43: 172-198

Figure 6: Superluminal signaling setup. Figure 7: Visualized concept of mapping hyperspace.

## Appendix A

The 'Multi-dimensional Hyperwaves' Theory9, with its implications on possible hyperspace, or future-hypersurface, remote-sensing devices

Footnote 9: (Original 'MDHW' theory presented in a paper by Idokep.hu Ltd. science columns, article id. 984.)

By Gergely A. Nagy, 2010, Hungary10

Footnote 10: Experimenters interested in Quantum-Optics lab access are welcome to contact the author(s) at [email protected], [email protected], +36/70-9426259, +36/20/448-2180, Idokep Kft., Bartok Bela str. 65/b., 1224 Hungary, to test the theories above for possible joint publication and R&D. Any such contact is much appreciated.

We theorize that the probability wave of the individual particles (emitted from the source, but not yet detected, i.e. in a double-slit setup) oscillates almost freely not only in 3-dimensional space, but also in the dimension of Time.
Therefore, the individual particles can easily interfere with the next and the previous particles in the repeated process of emissions as well, interact with each other (in future and past hyperspace), and return to create the interference pattern in the present. This means that even though an individual particle has already been detected, its probability wave still exists in its relative future (in the 'present'), and the next emitted particle can interact with it (as its probability wave also fluctuates into its relative past (in hyperspace)). We propose that this theory (which we call '_Multi-dimensional hyperwaves_', referring to the individual particle's freedom being extended to Time, higher dimensions and maybe even to hyperspace) is much simpler, yet provides a more elegant way of explaining the interference development phenomenon than, for example, the elementary waves theory. And this theory, however extraordinary and controversial it may sound, should not create any paradoxes after all (even though it may seem to imply an already-existing future, it does not). Particles can even interfere with both the next and previous instances of individual emissions, and if we were to stop the experiment at will (no future individual emissions), the wavefunction should still be preserved, at least partially, by past-hyperspace interactions with the already emitted individual particles in the sequence. Our proposal may be examined experimentally by, for example, carefully increasing and decreasing the time between the individual emissions, and looking for statistical anomalies (or simply some type or kind of change in the distribution of particle manifestations) in the evolution of the interference pattern. If such a correlation is revealed, we propose, a _not-yet-named_ constant could be derived that would describe the functional dependence (or simply linear ratio) between the units we use to measure space and time as we know them.

### Implications on Hyperspace remote-sensing

Fluctuation of the probability wave not only in space, but also in the dimension of time, means that it is only the statistical (mean) average of the idler photons' apparent path that lies on the given hypersurface. And since the idler photons do not always stay on the given hypersurface, their first interaction in remote spacetime may happen outside of it. So some of the data collected from hyperspace (or from the hypersurface of the future light cone) may indeed originate slightly off course. If the theory is correct, the most crucial implication would be the possibility that - when scanning along the hypersurface of the future light cone - we may obtain information from _within_ the light cone [relative to the local observer]. This would, unless countered, threaten a violation of causality. However, we theorize that fluctuations in opposite directions in time, with uniform distribution, will ultimately cancel out, and the extractable information will always reflect average quantum properties along the statistical average (or mean) path, defined by the probability wave of the idler photon's apparent in-line propagation.
Even though ideas of extracting future-related or Faster-Than-Light (FTL) information from hyperspace using quantum entanglement have generally been refuted in the last ten years, in this paper we show that the original 'Delayed Choice Quantum Eraser' experiment, performed by Yoon-Ho Kim, R. Yu, S. P. Kulik and Y. H. Shih, designed by Marlan O. Scully & Drühl, in 1982-1999, still features various hidden topological properties that may have been overlooked by previous analyses, and which prohibit, in principle, such extraction of future-related or real-time information from the detection of the signal particle on the delayed choice of its entangled idler twin(s).
Received September 14, 2011, in final form November 16, 2011

In 1994, Parsafar and Mason (PM) proposed the following EOS by using a series expansion of the internal energy [11]

\\[P\\left(\\frac{V}{V_{0}}\\right)^{2}=C_{0}+C_{1}\\left(\\frac{V_{0}}{V}\\right)+C_{2}\\left(\\frac{V_{0}}{V}\\right)^{2}. \\tag{1}\\]

Here, \\(V_{0}\\) is the volume at zero pressure, and \\(C_{0}\\), \\(C_{1}\\), \\(C_{2}\\) are the three coefficients of the PM EOS. In 1997, Shanker, Singh and Kushwah (SSK) proposed the following EOS [12, 13]

\\[P=D_{0}+D_{1}\\left(\\frac{V_{0}}{V}\\right)+D_{2}\\left(\\frac{V_{0}}{V}\\right)^{2}, \\tag{2}\\]

where \\(D_{0}\\), \\(D_{1}\\), \\(D_{2}\\) are the three coefficients of the SSK EOS. It can be seen that the SSK EOS can be expressed in both volume-analytic and pressure-analytic forms. In 1994, Parsafar and Mason proposed the following linear isotherm regularity (LIR) EOS for gases and liquids based on the Lennard-Jones (LJ) (12-6) potential [14]

\\[(Z-1)\\left(\\frac{V}{V_{0}}\\right)^{2}=A_{0}+A_{2}\\left(\\frac{V_{0}}{V}\\right)^{2}. \\tag{3}\\]

Here, \\(Z\\) is the compressibility factor, which is equal to \\(PV/RT\\). The upper density limit of the LIR [14] is less certain, but seems to be the freezing line for liquids (\\(T<T_{\\rm c}\\)) and at least about twice the Boyle density for supercritical fluids. The LIR EOS has been extended to mixtures [15] and to other forms [16, 17, 18] through adopting different potential functions, including the exponential-6 [16], LJ (6-3) [17], LJ (885-4) [18], and LJ (12-6-3) [19] potentials. Recently, Parsafar, Spohr and Patey (PSP) [19] extended equation (3) to the following form with three parameters, based on an effective near-neighbor pair interaction of an LJ (12-6-3) potential:

\\[(Z-1)\\left(\\frac{V}{V_{0}}\\right)^{2}=A_{0}+A_{1}\\left(\\frac{V_{0}}{V}\\right)+A_{2}\\left(\\frac{V_{0}}{V}\\right)^{2}. \\tag{4}\\]

The PSP EOS can be equivalently reformulated in truncated virial form

\\[P=\\frac{RT}{V}+\\frac{Q_{1}}{V^{3}}+\\frac{Q_{2}}{V^{4}}+\\frac{Q_{3}}{V^{5}}. \\tag{5}\\]

Parsafar et al. [19] claimed that the PSP EOS (4) can be applied to all fluids and solids, and their application to solids [19] does not reveal any pressure or temperature limitations. However, we noticed that the PM EOS (1) and the PSP EOS (4), (5) are physically wrong at high-pressure conditions for some solids. This is because the coefficients \\(C_{2}\\) in equation (1) and \\(A_{2}\\) in equation (4) should be positive for all solids to ensure the physically correct tendency at high pressure, \\(P\\rightarrow\\infty\\) as \\(V\\to 0\\). However, the values of \\(C_{2}\\) for most solids studied in this paper are negative; and the values of \\(A_{2}\\) for the solids NaCl and CaO studied by Parsafar et al. [19], and for most solids studied in this paper, are also negative. This leads to an unphysical tendency, \\(P\\rightarrow-\\infty\\) as \\(V\\to 0\\). This incorrect tendency makes the PM [11] and PSP [19] EOSs inapplicable to high-pressure conditions. We may preliminarily analyze the reason for the failure of the two EOSs as follows. Holzapfel [20] has pointed out that the limiting behavior of an EOS as the volume tends to zero should be the Thomas-Fermi (TF) model, \\(P\\propto V^{-5/3}\\). The repulsion terms in the PM [11] and PSP [19] EOSs behave as \\(P\\propto V^{-4}\\) and \\(P\\propto V^{-5}\\), respectively. Their exponents 4 and 5 are far larger than 5/3, and are thus too stiff for solids.
In order to fit the experimental \\(P-V\\) data in the low and middle pressure ranges, the optimized \\(C_{2}\\) and \\(A_{2}\\) should take on negative values. In this work, we propose a generalized LIR (GLIR) EOS based on a near-neighbor pair potential of the extended Lennard-Jones (\\(m_{1}\\),\\(n_{1}\\)) type. The GLIR EOS contains three parameters and can overcome the defect appearing in the PM EOS (1) and PSP EOS (4). In section 2, the three-parameter GLIR EOS is proposed. In section 3, equations (1) and (2) and their modified versions, the PSP EOS (4) and the GLIR EOS are applied to 28 solids within wide pressure ranges of hundreds of GPa and at ambient temperature, and the results are analyzed and discussed. In section 4, the conclusion is presented.

## 2 Analytic equations of state

We adopt the effective pair interaction of an extended Lennard-Jones (\\(m_{1}\\),\\(n_{1}\\)) type potential [10, 17, 19, 20]

\\[\\varepsilon\\left(r\\right)=\\frac{\\varepsilon_{0}}{m_{1}-n_{1}}\\left[n_{1}\\left(\\frac{r_{\\rm e}}{r}\\right)^{m_{1}}-m_{1}\\left(\\frac{r_{\\rm e}}{r}\\right)^{n_{1}}\\right]. \\tag{6}\\]

It is well known that the effective potentials for metals usually have oscillating tails due to Friedel oscillations of the electron density, and the Lennard-Jones (LJ) potentials are not really appropriate for a correct reproduction of the energetics of metals. However, many works [10, 11, 12, 13, 14, 15, 16, 17, 18, 19] have shown that LJ potentials can mimic many properties of metals in some compression ranges. By adopting the nearest-neighbor assumption [11], the total configurational energy of a solid is

\\[U=\\frac{N\\delta\\varepsilon_{0}}{2\\left(m_{1}-n_{1}\\right)}\\left[n_{1}\\left(\\frac{r_{\\rm e}}{r}\\right)^{m_{1}}-m_{1}\\left(\\frac{r_{\\rm e}}{r}\\right)^{n_{1}}\\right]=\\frac{N\\delta\\varepsilon_{0}}{2\\left(m_{1}-n_{1}\\right)}\\left[n_{1}\\left(\\frac{V_{0}}{V}\\right)^{m_{1}/3}-m_{1}\\left(\\frac{V_{0}}{V}\\right)^{n_{1}/3}\\right], \\tag{7}\\]

where \\(V=a^{3}/\\gamma\\), \\(V_{0}=\\left(r_{\\rm e}\\right)^{3}/\\gamma\\), \\(a\\) is the nearest-neighbor distance, and \\(\\delta\\) is the mean coordination number [10, 19]. Following Parsafar and Mason [11], the internal pressure can be obtained as the volume derivative of equation (7):

\\[P_{\\rm int}=-\\frac{\\partial U}{\\partial V}=\\frac{m_{1}n_{1}N\\delta\\varepsilon_{0}}{6\\left(m_{1}-n_{1}\\right)V_{0}}\\left[\\left(\\frac{V_{0}}{V}\\right)^{m_{1}/3+1}-\\left(\\frac{V_{0}}{V}\\right)^{n_{1}/3+1}\\right]. \\tag{8}\\]

Let us substitute equation (7) into the following thermodynamic equation of state:

\\[P=T\\left(\\frac{\\partial P}{\\partial T}\\right)_{V}-\\left(\\frac{\\partial U}{\\partial V}\\right)_{T}. \\tag{9}\\]

After integration, we derive the equation

\\[P=\\frac{RT}{V}+A_{1}\\left(\\frac{V_{0}}{V}\\right)^{n_{1}/3+1}+A_{2}\\left(\\frac{V_{0}}{V}\\right)^{m_{1}/3+1}. \\tag{10}\\]

Here, \\(A_{1}\\) and \\(A_{2}\\) are functions of temperature. In order to obtain an extended LIR EOS, we limit the parameters \\(m_{1}\\) and \\(n_{1}\\) to satisfy the relationship \\(m_{1}=2n_{1}\\), i.e.

\\[m_{1}/3=2m,\\qquad n_{1}/3=m. \\tag{11}\\]

Then, equation (10) takes the following form:

\\[P=\\frac{RT}{V}+A_{1}\\left(\\frac{V_{0}}{V}\\right)^{m+1}+A_{2}\\left(\\frac{V_{0}}{V}\\right)^{2m+1}. \\tag{12}\\]
By using the definition of the compressibility factor, \\(Z=PV/RT\\), equation (12) can be reformulated as the generalized LIR (GLIR) EOS

\\[\\left(Z-1\\right)\\left(\\frac{V}{V_{0}}\\right)^{m}=B_{0}+B_{1}\\left(\\frac{V_{0}}{V}\\right)^{m}, \\tag{13}\\]

\\[B_{0}=\\frac{A_{1}V_{0}}{RT},\\qquad B_{1}=\\frac{A_{2}V_{0}}{RT}. \\tag{14}\\]

It can be seen that the LIR EOS (3) is included in the GLIR EOS (13) as the special case \\(m=2\\). For \\(m=1\\), equation (13) reduces to a truncated virial EOS. Although the PSP EOS (4) has the same number of parameters as the three-parameter GLIR EOS (13), equation (13) with the adjustable parameter \\(m\\) is more flexible and more accurate than equation (4). Furthermore, we found in our calculations that the PM [11] and SSK [12, 13] EOSs can be reformulated in the following forms:

\\[P\\left(\\frac{V}{V_{0}}\\right)^{4}=C_{2}+C_{1}\\left(\\frac{V}{V_{0}}\\right)+C_{0}\\left(\\frac{V}{V_{0}}\\right)^{2}, \\tag{15}\\]

\\[P\\left(\\frac{V}{V_{0}}\\right)^{2}=D_{2}+D_{1}\\left(\\frac{V}{V_{0}}\\right)+D_{0}\\left(\\frac{V}{V_{0}}\\right)^{2}. \\tag{16}\\]

We name these forms the PMR and SSKR EOSs. Although the PMR and SSKR EOSs are mathematically equivalent to the PM and SSK EOSs, they differ physically. This is because all of equations (1), (2) and (15), (16) can be seen as Taylor expansions, but the expansion variable of equations (1), (2) is \\((V_{0}/V)\\), while that of equations (15), (16) is \\((V/V_{0})\\). At zero pressure, the values of both \\((V_{0}/V)\\) and \\((V/V_{0})\\) are equal to \\(1\\). At high pressure, the values of \\((V/V_{0})\\) are smaller than \\(1\\), and the Taylor expansions in equations (15), (16) converge quickly. However, the values of \\((V_{0}/V)\\) are larger than \\(1\\) at high pressure, and the Taylor expansions in equations (1), (2) converge slowly. Thus, the PMR and SSKR EOSs in equations (15), (16) are more accurate than the original PM and SSK EOSs in equations (1), (2).

## 3 Results and discussion

Now we apply six EOSs to 28 metallic solids: the GLIR (13), PM [11], PMR (15), SSK [12, 13], SSKR (16) and PSP [19] EOSs. All experimental data are taken from Kennedy and Keeler (1972) [21], except for W [22]. In table 1, we list the volume at zero pressure \\(V_{0}\\) and the average fitting errors of pressure for the 28 solids. It can be seen that the GLIR (13) yields the smallest fitting errors for 20 solids, and for the other 8 solids the errors are also fairly small. The fitting precision for different solids is fairly stable for the GLIR EOS (13), while unstable for the other five EOSs. The largest errors among the 28 solids for the six EOSs are 1.71% for Nb, 7.02% for Zr, 4.21% for Zr, 9.34% for Zn, 5.46% for Zn, and 4.98% for Ca, respectively. In the last line of the table, we list the total average error for the 28 solids. It can be seen that the GLIR EOS yields the best results with an average error of 0.57%; the PMR EOS yields the second-best results with an average error of 1.00%; the PSP EOS, PM EOS, SSKR EOS, and SSK EOS subsequently give worse results with average errors of 1.12%, 1.29%, 1.61% and 2.39%, respectively. In tables 2 and 3, we list the fitted parameters for the six EOSs. Table 2 shows that the values of \\(m\\) in the GLIR EOS (13) are smaller than 1 for 19 solids, and slightly larger than 1 for 10 solids.
This implies that the interactions in the metals are far softer than the LJ (12-6) potential: approximately approaching the LJ (6-3) potential for the 10 solids, and even softer than the LJ (6-3) potential for the other 20 solids.

[Table 1: volume at zero pressure \\(V_{0}\\) and average fitting errors of pressure for the 28 solids (columns GLIR, PM, PMR, SSK, SSKR, PSP); the table body was not recovered.]

Table 2 also shows that the parameter \\(B_{1}\\) in the GLIR EOS (13) always takes on positive values, and this ensures the correct tendency as the volume tends to zero. However, the values of \\(C_{2}\\) in the PM and PMR EOSs are negative for 18 and 25 solids, respectively. The values of \\(D_{2}\\) in the SSKR EOS and \\(A_{2}\\) in the PSP EOS are also negative for 2 and 18 solids, respectively. For these solids, the corresponding EOSs exhibit a physically incorrect tendency as the volume tends to zero. In comparison, the GLIR EOS (13) is not only the most precise one, but is also the unique EOS among the six studied in this work that does not exhibit a physically incorrect tendency.

In figures 1, 2 and 3, we plot the experimental compression data and the curves calculated using the GLIR, PMR, SSKR and PSP EOSs for 10 solids, including Cu, Mo, Ag, Ti, Ta, Zr, Ni, Nb, Th and Be. These figures show that the compression curves calculated from the GLIR and SSKR EOSs are correct at high pressure for the 10 solids, although for Zr the parameter \\(D_{2}\\) in the SSKR EOS takes on a negative value. The PMR EOS, however, yields incorrect compression curves at high pressure for 7 of the 10 solids, the exceptions being Cu, Ag, and Ni; the turning point lies in the range \\(V/V_{0}\\approx(0.3\\div 0.5)\\) for these 7 solids. Moreover, the PSP EOS also yields incorrect compression curves at high pressure for 9 of the 10 solids, the exception being Ag; the turning point is about \\(V/V_{0}\\approx 0.3\\) for the solids Cu and Ni, and about \\(V/V_{0}\\approx 0.5\\) for the other 7 solids. In these figures, we also plot the variation of the relative errors of pressure with the compression ratio \\(V/V_{0}\\). (A minimal sketch of the least-squares fitting used to obtain these parameters is given below.)
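To make the fitting procedure concrete, the following is a minimal least-squares sketch of our own (not the authors' code): for a trial exponent \\(m\\), \\((Z-1)(V/V_{0})^{m}\\) is linear in \\((V_{0}/V)^{m}\\), so \\(B_{0}\\) and \\(B_{1}\\) follow from a linear fit, and \\(m\\) itself is found by a scan. The units (GPa, cm\\({}^{3}\\)/mol, K) and the synthetic demo isotherm, loosely mimicking the Cu entry of table 2, are our assumptions.

```python
# Minimal GLIR fitting sketch; synthetic data, illustrative only.
import numpy as np

R = 8.3145e-3  # gas constant in GPa * cm^3 / (mol * K)

def fit_glir(P, V, V0, T):
    """Scan m; fit B0, B1 in (Z-1)(V/V0)^m = B0 + B1*(V0/V)^m by least squares."""
    Z = P * V / (R * T)
    best = (np.inf, None)
    for m in np.linspace(0.05, 2.5, 246):              # exponent grid, step 0.01
        y = (Z - 1.0) * (V / V0) ** m
        x = (V0 / V) ** m
        coef, *_ = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]),
                                   y, rcond=None)
        b0, b1 = coef
        P_fit = (R * T / V) * (1.0 + b0 * x + b1 * x ** 2)
        err = np.mean(np.abs(P_fit - P))               # mean absolute misfit (GPa)
        if err < best[0]:
            best = (err, (m, b0, b1))
    return best[1]

# Synthetic isotherm generated from eq. (12) with m = 0.9, then refitted:
V0, T = 7.1, 300.0
V = V0 * np.linspace(0.55, 1.0, 20)
x = (V0 / V) ** 0.9
P = (R * T / V) * (1.0 - 450.0 * x + 449.0 * x ** 2)
print(fit_glir(P, V, V0, T))   # -> approximately (0.9, -450.0, 449.0)
```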
It can be seen from these figures that, for solids Cu and Ag, the oscillations of relative errors from the SSKR EOS are the most prominent, and are the same from other three EOSs; for solids Ti and Zr, the \\begin{table} \\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \\hline & \\multicolumn{3}{c|}{GLIR} & \\multicolumn{3}{c|}{PM} & \\multicolumn{3}{c|}{PMR} \\\\ \\hline & \\(m\\) & \\(B_{0}\\) & \\(B_{1}\\) & \\(C_{0}\\) & \\(C_{1}\\) & \\(C_{2}\\) & \\(C_{0}\\) & \\(C_{1}\\) & \\(C_{2}\\) \\\\ \\hline \\hline Cu & 0.906 & -449.53 & 448.57 & -153.86 & 167.29 & -13.44 & -13.24 & 166.74 & -153.51 \\\\ \\hline Mo & 0.592 & -1731.45 & 1732.80 & -395.31 & 458.15 & -98.61 & -101.51 & 465.36 & -363.72 \\\\ \\hline Zn & 1.199 & -188.20 & 187.21 & -48.41 & 34.78 & 13.53 & 14.11 & 33.18 & -47.34 \\\\ \\hline Ag & 1.197 & -368.37 & 367.35 & -81.38 & 55.72 & 25.60 & 26.14 & 54.37 & -80.54 \\\\ \\hline Pt & 1.031 & -1000.33 & 999.46 & -272.52 & 263.83 & 8.69 & 9.54 & 261.87 & -271.40 \\\\ \\hline Pt & 0.416 & -1185.34 & 1184.03 & -126.25 & 164.13 & -37.40 & -40.20 & 171.78 & -131.30 \\\\ \\hline Ta & 0.527 & -1655.67 & 1654.63 & -218.00 & 369.91 & -88.75 & -90.88 & 375.05 & -284.06 \\\\ \\hline Au & 1.004 & -762.52 & 761.51 & -184.79 & 183.79 & 0.70 & 1.52 & 181.85 & -183.36 \\\\ \\hline Pd & 1.031 & -683.16 & 682.19 & -188.56 & 181.27 & 7.29 & 7.11 & 181.68 & 188.80 \\\\ \\hline Zr & 0.197 & -2763.5 & 2762.6 & -122.61 & 166.63 & -43.34 & -48.00 & 179.09 & -130.68 \\\\ \\hline Cr & 0.924 & -609.57 & 608.36 & -209.26 & 226.66 & -17.46 & -17.58 & 226.93 & -209.41 \\\\ \\hline Co & 0.729 & -733.72 & 732.70 & -257.16 & 318.67 & -61.49 & -61.29 & 318.20 & -256.89 \\\\ \\hline Ni & 0.893 & -565.39 & 564.35 & -211.11 & 234.21 & -23.09 & -22.88 & 233.72 & -210.83 \\\\ \\hline Nb & 0.582 & -1282.5 & 1281.3 & -250.03 & 331.05 & -81.09 & -82.18 & 333.60 & -251.51 \\\\ \\hline Cd & 1.223 & -216.51 & 215.61 & -36.99 & 23.21 & 13.76 & 13.84 & 23.00 & -36.86 \\\\ \\hline Al & 0.719 & -452.88 & 451.48 & -106.33 & 132.17 & -26.01 & -25.01 & 129.71 & -104.84 \\\\ \\hline Th & 0.613 & -701.54 & 700.18 & -65.47 & 81.33 & -15.71 & -16.41 & 83.24 & -66.73 \\\\ \\hline V & 0.569 & -939.67 & 938.80 & -223.73 & 293.70 & -69.84 & -70.98 & 296.38 & -225.29 \\\\ \\hline In & 1.058 & -240.83 & 239.89 & -38.61 & 36.62 & 1.930 & 2.430 & 35.31 & -37.75 \\\\ \\hline Be & 0.477 & -500.16 & 499.19 & -177.73 & 239.41 & -61.58 & -62.52 & 241.66 & -179.05 \\\\ \\hline Pb & 1.022 & -323.72 & 322.61 & -43.61 & 42.49 & 1.090 & 1.240 & 42.12 & -44.37 \\\\ \\hline Sn & 1.118 & -256.62 & 255.80 & -37.87 & 32.02 & 5.850 & 5.970 & 31.72 & -37.69 \\\\ \\hline Mg & 0.592 & -338.70 & 337.60 & -44.32 & 55.83 & -11.41 & -11.81 & 56.91 & -45.03 \\\\ \\hline Ca & 0.076 & -2812.2 & 2810.9 & -20.35 & 27.60 & -6.810 & -8.040 & 31.34 & -23.05 \\\\ \\hline Tl & 1.154 & -216.85 & 215.66 & -28.85 & 21.85 & 6.990 & 7.160 & 21.43 & -28.60 \\\\ \\hline Na & 0.540 & -112.14 & 111.09 & -6.740 & 8.370 & -1.540 & -1.660 & 8.740 & -7.030 \\\\ \\hline K & 0.419 & -147.58 & 146.11 & -2.900 & 3.670 & -0.670 & -0.750 & 3.980 & -3.160 \\\\ \\hline Rb & 0.457 & -115.00 & 113.29 & -1.780 & 2.230 & -0.360 & 2.440 & -1.970 & 1.210 \\\\ \\hline \\end{tabular} \\end{table} Table 2: Optimized values of coefficients for the GLIR, PM and PMR EOSs determined by fitting experimental compression data. The parameters for the GLIR EOS are dimensionless; and all parameters for PM and PMR EOSs are in GPa. 
Figure 1: (Color online) Comparison of compression curves of Cu (a), Mo (b), Ag (c), Ti (d), Mg (e), and Zr (f) calculated by using different equations with experimental data (e): solid line, PSP EOS; dashed line, SSKR EOS; dot line, PMR EOS; dot-dashed line, GLIR EOS. And comparison of percentage error of pressure calculated using different equations: +, PSP EOS; \\(\\square\\), SSKR EOS; \\(\\star\\), PMR EOS; \\(\\lozenge\\), GLIR EOS. \\begin{table} \\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \\hline & \\multicolumn{3}{c|}{SSK} & \\multicolumn{3}{c|}{SSKR} & \\multicolumn{3}{c|}{SSKR} & \\multicolumn{3}{c|}{PSP} \\\\ \\hline & \\(D_{0}\\) & \\(D_{1}\\) & \\(D_{2}\\) & \\(D_{0}\\) & \\(D_{1}\\) & \\(D_{2}\\) & \\(A_{0}\\) & \\(A_{1}\\) & \\(A_{2}\\) \\\\ \\hline \\hline Cu & 293.73 & \\(-\\)686.40 & 394.71 & 381.53 & \\(-\\)650.37 & 269.96 & 427.41 & \\(-\\)419.20 & \\(-\\)9.020 \\\\ \\hline Mo & 156.70 & \\(-\\)578.16 & 421.54 & 421.11 & \\(-\\)577.10 & 156.06 & 1316.8 & \\(-\\)1209.8 & \\(-\\)107.85 \\\\ \\hline Zn & 260.70 & \\(-\\)528.62 & 271.29 & 256.94 & \\(-\\)489.17 & 243.34 & 192.99 & \\(-\\)206.99 & 12.750 \\\\ \\hline Ag & 326.55 & \\(-\\)720.31 & 395.34 & 382.78 & \\(-\\)688.72 & 307.02 & 359.42 & \\(-\\)389.56 & 28.99 \\\\ \\hline Pt & 466.76 & \\(-\\)1193.8 & 727.49 & 720.07 & \\(-\\)1176.7 & 456.98 & 996.17 & \\(-\\)1008.9 & 11.81 \\\\ \\hline Ti & 8.260 & \\(-\\)121.04 & 112.44 & 114.63 & \\(-\\)127.03 & 12.22 & 585.21 & \\(-\\)536.06 & \\(-\\)49.24 \\\\ \\hline Ta & 72.99 & \\(-\\)344.84 & 271.82 & 272.46 & \\(-\\)346.38 & 73.90 & 1179.1 & \\(-\\)1066.7 & \\(-\\)113.03 \\\\ \\hline Au & 325.64 & \\(-\\)816.00 & 490.93 & 484.10 & \\(-\\)799.81 & 316.14 & 761.26 & \\(-\\)763.39 & 1.120 \\\\ \\hline Pd & 344.40 & \\(-\\)865.63 & 521.72 & 513.51 & \\(-\\)846.47 & 333.32 & 682.39 & \\(-\\)690.84 & 7.450 \\\\ \\hline Zr & \\(-\\)40.14 & \\(-\\)22.08 & 61.86 & 64.60 & \\(-\\)29.40 & \\(-\\)35.39 & 690.45 & \\(-\\)617.45 & \\(-\\)73.00 \\\\ \\hline Cr & 259.25 & \\(-\\)700.19 & 441.11 & 437.08 & \\(-\\)609.92 & 253.56 & 605.12 & \\(-\\)591.28 & \\(-\\)15.00 \\\\ \\hline Co & 161.67 & \\(-\\)516.19 & 354.61 & 354.11 & \\(-\\)515.05 & 161.00 & 663.78 & \\(-\\)618.34 & \\(-\\)46.25 \\\\ \\hline Ni & 241.20 & \\(-\\)661.39 & 420.41 & 417.51 & \\(-\\)564.68 & 237.36 & 553.90 & \\(-\\)537.02 & \\(-\\)17.83 \\\\ \\hline Nb & 70.990 & \\(-\\)313.58 & 242.46 & 241.83 & \\(-\\)312.12 & 70.140 & 1029.9 & \\(-\\)931.42 & \\(-\\)99.57 \\\\ \\hline Cd & 175.83 & \\(-\\)376.06 & 201.58 & 195.08 & \\(-\\)359.21 & 165.11 & 211.34 & \\(-\\)231.68 & 19.28 \\\\ \\hline Al & 64.27 & \\(-\\)207.75 & 143.33 & 144.62 & \\(-\\)210.93 & 66.190 & 39.50 & \\(-\\)370.24 & \\(-\\)26.01 \\\\ \\hline Th & 39.87 & \\(-\\)129.44 & 89.70 & 89.03 & \\(-\\)127.60 & 38.66 & 500.66 & \\(-\\)467.25 & \\(-\\)33.83 \\\\ \\hline V & 72.94 & \\(-\\)302.04 & 229.17 & 228.67 & \\(-\\)300.87 & 72.25 & 733.25 & \\(-\\)663.29 & \\(-\\)70.70 \\\\ \\hline In & 109.23 & \\(-\\)241.91 & 133.57 & 129.82 & \\(-\\)231.91 & 102.73 & 244.77 & \\(-\\)249.76 & 3.890 \\\\ \\hline Be & 26.94 & \\(-\\)175.10 & 148.11 & 148.97 & \\(-\\)177.14 & 28.14 & 338.68 & \\(-\\)303.43 & \\(-\\)36.13 \\\\ \\hline Pb & 103.88 & \\(-\\)239.29 & 136.04 & 132.39 & \\(-\\)229.89 & 97.94 & 276.25 & \\(-\\)287.00 & 11.52 \\\\ \\hline Sn & 110.10 & \\(-\\)251.61 & 142.07 & 138.48 & \\(-\\)242.59 & 104.52 & 256.41 & \\(-\\)268.25 & 10.89 \\\\ \\hline Mg & 21.76 & \\(-\\)77.32 & 55.60 & 55.56 & \\(-\\)77.22 & 21.69 & 238.92 & \\(-\\)222.07 & \\(-\\)17.46 \\\\ 
\\hline Ca & \\(-\\)14.73 & 6.430 & 8.020 & 8.840 & 3.940 & \\(-\\)12.93 & 220.27 & \\(-\\)198.94 & \\(-\\)20.39 \\\\ \\hline Tl & 85.87 & \\(-\\)200.18 & 114.59 & 112.74 & \\(-\\)195.66 & 83.15 & 210.86 & \\(-\\)225.44 & 13.47 \\\\ \\hline Na & 3.410 & \\(-\\)12.71 & 9.300 & 9.330 & \\(-\\)12.81 & 3.480 & 61.28 & \\(-\\)58.10 & \\(-\\)3.550 \\\\ \\hline K & 0.170 & \\(-\\)3.950 & 3.700 & 3.810 & \\(-\\)4.330 & 0.490 & 48.05 & \\(-\\)44.26 & \\(-\\)2.410 \\\\ \\hline Rb & 0.530 & \\(-\\)3.450 & 2.850 & 2.910 & \\(-\\)3.680 & 0.750 & 35.90 & \\(-\\)32.56 & \\(-\\)1.400 \\\\ \\hline \\end{tabular} \\end{table} Table 3: Optimized values of coefficients for the SSK, SSKR and PSP EOSs determined by fitting experimental compression data. The parameters for the PSP EOS are dimensionless; and all parameters for SSK and SSKR EOSs are in GPa. Figure 2: (Color online) Comparison of compression curves of Ni (a) and Na (b) calculated by using different equations with experimental data (\\(\\diamond\\)): solid line, PSP EOS; dashed line, SSKR EOS; dot line, PMR EOS; dot-dashed line, GLIR EOS. And comparison of percentage error of pressure calculated using different equations: +, PSP EOS; \\(\\square\\), SSKR EOS; \\(\\star\\), PMR EOS; \\(\\diamond\\), GLIR EOS. oscillations of relative errors from the PSP EOS and PMR EOS are more evident than the SSKR and GLIR EOSs; and for other solids, the oscillations from all four EOSs are equivalent with each other. It is notable that the relative errors from the GLIR EOS are most stable and fairly small for all 10 solids and for all compression ratio ranges. These results show that the GLIR EOS can be seen as the best one among six EOSs studied in this work. ## 4 Conclusion In conclusion, we develop a three-parameter GLIR based on the GLJ potential and the approach of Parsafar and Mason [14] in developing the LIR EOS. Comparing with other five EOSs popular in literature, the precision of the GLIR EOS developed in this paper is superior to other EOSs. The GLIR EOS is capable of overcoming the problem existing in other EOSs where the pressure becomes negative at high enough pressure conditions. ## Acknowledgements This work is supported by the Joint Fund of NSFC and CAEP under Grant No. 10876008, and by the Innovation Fund of UESTC under Grant No. 23601008. ## References * [1] Murnaghan F.D., Proc. Natl. Acad. Sci. U.S.A., 1944, **30**, 244; doi:10.1073/pnas.30.9.244. * [2] Birch F., Phys. Rev., 1947, **71**, 809; doi:10.1103/PhysRev.71.809. * [3] Birch F., J. Geophys. Res., 1978, **83**, 1257; doi:10.1029/JB083B03p01257. * [4] Rose J.H., Smith J.R., Guinea F., Ferrante J., Phys. Rev. B, 1984, **29**, 2963; doi:10.1103/PhysRevB.29.2963. * [5] Vinet P., Smith J.R., Ferrante J., Rose J.H., Phys. Rev. B, 1987, **35**, 1945; doi:10.1103/PhysRevB.35.1945. * [6] Kuchhal P., Kumar R., Dass N., Phys. Rev. B, 1997, **58**, 8042; doi:10.1103/PhysRevB.55.8042. * [7] Schlosser H., Ferrante J., Phys. Rev. B, 1989, **40**, 6405; doi:10.1103/PhysRevB.40.6405. * [8] Baonza V.G., Caceres M., Nunez J., Chem. Phys. Lett., 1994, **228**, 137; doi:10.1016/0009-2614(94)00935-X. * [9] Baonza V.G., Caceres M., Nunez J., J. Phys. Chem., 1994, **98**, 4955; doi:10.1021/j00070a001. * [10] Sun J.X., J. Phys.: Condens. Matter, 2005, **17**, L103; doi:10.1088/0953-8984/17/12/L01. 
Figure 3: (Color online) Comparison of compression curves of Th (a) and Be (b) calculated by using different equations with experimental data (\\(\\diamond\\)): solid line, PSP EOS; dashed line, SSKR EOS; dot line, PMR EOS; dot-dashed line, GLIR EOS. And comparison of percentage error of pressure calculated using different equations: +, PSP EOS; \\(\\square\\), SSKR EOS; \\(\\star\\), PMR EOS; \\(\\diamond\\), GLIR EOS.

* [11] Parsafar G.A., Mason E.A., Phys. Rev. B, 1994, **49**, 3049; doi:10.1103/PhysRevB.49.3049.
* [12] Shanker J., Singh B., Kushwah S.S., Physica B, 1997, **229**, 419; doi:10.1016/S0921-4526(96)00528-5.
* [13] Saxena S.K., J. Phys. Chem. Solids, 2004, **65**, 1561; doi:10.1016/j.jpcs.2004.02.003.
* [14] Parsafar G., Mason E.A., J. Phys. Chem., 1994, **98**, 1962; doi:10.1021/100058a040.
* [15] Parsafar G., Sohraby N., J. Phys. Chem., 1996, **100**, 12644; doi:10.1021/jp960239v.
* [16] Ghatee M.H., Shams-Abadi H., J. Phys. Chem. B, 2001, **105**, 702; doi:10.1021/jp001022a.
* [17] Ghatee M.H., Bahadori M., J. Phys. Chem. B, 2001, **105**, 11256; doi:10.1021/jp011592q.
* [18] Ghatee M.H., Niroomand-Hossieni F., J. Phys. Chem. B, 2004, **108**, 10034; doi:10.1021/jp036977i.
* [19] Parsafar G.A., Spohr H.V., Patey G.N., J. Phys. Chem. B, 2009, **113**, 11977; doi:10.1021/jp903519c.
* [20] Holzapfel W.B., Phys. Rev. B, 2003, **67**, 026102; doi:10.1103/PhysRevB.67.026102.
* [21] Kennedy G.C., Keeler R.N., American Institute of Physics Handbook, 3rd edn., McGraw-Hill, New York, 1972.
* [22] Hixson R., Fritz J.N., J. Appl. Phys., 1992, **71**, 1721; doi:10.1063/1.351203.
A three-parameter equation of state (EOS) without physically incorrect oscillations is proposed, based on the generalized Lennard-Jones (GLJ) potential and the approach of Parsafar and Mason [J. Phys. Chem., 1994, **98**, 1962] in developing the linear isotherm regularity (LIR) EOS. The proposed (GLIR) EOS includes the LIR EOS as a special case. The three-parameter GLIR, the Parsafar and Mason (PM) [Phys. Rev. B, 1994, **49**, 3049], Shanker, Singh and Kushwah (SSK) [Physica B, 1997, **229**, 419], and Parsafar, Spohr and Patey (PSP) [J. Phys. Chem. B, 2009, **113**, 11977] EOSs, as well as the reformulated PM (PMR) and SSK (SSKR) EOSs, are applied to 30 metallic solids over wide pressure ranges. It is shown that the PM, PMR and PSP EOSs for most solids, and the SSK and SSKR EOSs for several solids, have physically incorrect turning points, with the pressure becoming negative at sufficiently high compression. The GLIR EOS is capable not only of overcoming this problem of the other five EOSs, but also of giving results superior to them.

Keywords: three-parameter equation of state, metallic solids, high pressure, physically incorrect oscillation

PACS: 64.30.Ef, 64.10.+h, 05.70.Ce
# 3D Modeling of Heritage Objects By Fringe Projection and Laser Scanning Systems

H.-J. Przybilla\\({}^{\\,\\text{a}}\\)1, J. Peipe\\({}^{\\,\\text{b}}\\) \\({}^{\\,\\text{a}}\\) Lab. for Photogrammetry, Dept. of Surveying and Geoinformatics, Bochum University of Applied Sciences, Germany [email protected] \\({}^{\\,\\text{b}}\\) Institute for Photogrammetry and Cartography, Bundeswehr University Munich, Germany [email protected] Footnote 1: Corresponding author

## 1 Introduction

The recording and modeling of cultural heritage objects for the purpose of conservation, structural investigation, restoration, visualization, documentation, and archiving is, by common consent, an important task. Mobile and flexible optical 3D measuring systems based on techniques such as photogrammetry, fringe projection, laser scanning, and combinations of these image-based or range-based systems can successfully be applied to the measurement and virtual reconstruction of objects in art and cultural heritage. Recently, it was decided to establish a digital archive of the treasure of the Essen cathedral by means of optical 3D measurement methods. The treasure of the Essen cathedral is one of the most significant collections of ecclesiastic art in Germany. It originates from the treasure of the former convent of canonesses, which passed into the property of the parish after the secularisation in 1803. Since 1957 the diocese of Essen has been responsible for the treasure. Because only a few pieces have been lost in the course of history, the collection is exemplary in its completeness. The treasure includes approximately 250 objects, comprising some outstanding works, in particular from the Ottonian epoch (Figs. 1 & 2). A great number of significant works of art varying in size, shape, material, and complexity are to be measured. The artifacts provide more or less co-operative surfaces: a challenge for any measurement system. The 3D reconstruction is to be performed with high accuracy (0.1 to 1.0 mm, depending on the size of the object) at large scales (1:1 to 1:5). The paper gives an overview of both the main objects of interest among the treasure and the measurement systems and techniques to be used for the recording. A chapter dealing with difficulties and drawbacks when digitizing artefacts is included. Finally, the current state of the project is reported.

Figure 1: Otto-Mathilden-Cross, 10th century, different materials, 44.5 cm high, 29.5 cm wide (© Martin Engelbrecht)

Figure 2: Child's Crown, made for Otto III., 10th century, different materials, e.g. gold sheets, jewels, and pearls (© Martin Engelbrecht)

## 2 Measurement Systems

"We can safely say that at the moment, for all types of objects and sites, there is no single modeling technique able to satisfy all requirements, like high geometric accuracy, portability, full automation, photo-realism, low costs as well as flexibility and efficiency." (Remondino & El-Hakim, 2006)

The authors of this paper entirely confirm the above statement from their own practical experience. Therefore, several measurement techniques are proposed for the project: photogrammetry, the fringe projection technique, and laser scanning. Active sensors using fringe projection or laser techniques accomplish the very fast and precise digitization of an object surface, resulting in a dense 3D point cloud. The combination of image-based and range-based methods is appropriate for all tasks where a single technique by itself fails.
So the structure of an object can be measured by photogrammetry, and complex details of the object surface with a range sensor. Texture information for generating a realistic 3D model is provided by high-resolution digital colour images. Subsequent to the point cloud generation, software is used for a topologically correct surface reconstruction, for texturing and for visualization, in this case by means of the RapidForm and Geomagic software packages (INUS, 2007; Geomagic, 2007).

### Fringe Projection Systems

For the data acquisition of the treasure of the Essen cathedral, mainly fringe projection systems have been applied so far. At the beginning of the project, a Breuckmann optoTOP HE-600 was used to digitize the _Golden Madonna_, the oldest known statue of the Virgin Mary, dating from around the year 990 A.D. (Peipe & Przybilla, 2005). Since then a Breuckmann triTOS has been employed (Fig. 3; Breuckmann, 2007). The system consists of a mechanically stable base connecting a 1.3 megapixel camera and the fringe projection unit, and a high-end PC or notebook with the operating and object reconstruction software. The triangulation angle of the system amounts to 20 degrees, leading to a medium accuracy of about 0.2 mm when digitizing object surfaces. Base length and lenses can be changed to capture different fields of view. Calibration data and accuracy figures of the triTOS system were determined by test measurements published in a previous paper (Bange et al., 2007). A series of scans has to be generated to cover the complex artefacts, including a lot of detailed views to acquire more or less hidden regions, too. The single views can be combined within the triTOS software by a best-fit approach based on the iterative closest point (ICP) algorithm. Moreover, a network of reference points precisely determined by photogrammetric means may serve to merge all the scans of the object captured with the range-based system. Furthermore, a Nikon D2Xs digital SLR camera was used to take colour images for texturing the 3D model.

### Laser Scanning

In addition to the fringe projection system, a laser scanner will be applied for point cloud generation. Laser scanners have been very expensive, but newly developed devices may change this situation. For example, the MicroScan laser sensor is a powerful but affordable measuring tool (RSI, 2007; Fig. 4). The scanner is connected to the MicroScribe measuring arm, i.e. it is attached to the stylus of the 6DOF MicroScribe. The portable MicroScribe + MicroScan combination provides an accuracy of approx. 0.2 mm within a reach of about 1.5 m. 28,000 3D points per second are registered while the laser line is navigated over the object. The camera of the MicroScan records the deformations of the scanning line caused by shape variations of the object surface. Point clouds originating from different scans can be merged by the system software. In addition to the laser scanning, the MicroScribe measuring arm can also be utilized as a touch probe device for digitizing single 3D point positions. First results of laser scanner calibrations (by means of tools included in the system) and measurements show a good accordance with the specifications given by the manufacturer of the system. Further investigations have to be performed concerning calibration procedures, especially those defined by the German VDI/VDE guideline 2634 (VDI/VDE, 2002). An example of object scanning with the MicroScan, in comparison to an image taken with the Nikon camera, is shown in Fig. 5.
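For readers unfamiliar with the ICP-based best-fit merging of single views mentioned above, the following is a minimal sketch of one point-to-point ICP formulation. The commercial systems naturally use far more robust variants (outlier rejection, point-to-plane metrics); all names here are illustrative.

```python
# Minimal sketch of point-to-point ICP for merging two overlapping scans.
# The nearest-neighbour correspondence step and the closed-form SVD
# (Kabsch) solution for the rigid transform are the core of the method.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """Rigidly align source (N,3) to target (M,3); returns the aligned copy."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        # 1. Correspondences: nearest target point for each source point.
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. Best rigid transform via the Kabsch/SVD solution.
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        # 3. Apply the transform and iterate.
        src = src @ R.T + t
    return src
```

In practice the reference-point network determined photogrammetrically provides the coarse initial alignment that ICP then refines.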
Figure 4: MicroScan + MicroScribe during calibration

Figure 3: triTOS fringe projection system

## 3 State of the Project

As mentioned in chapter 2, the _Golden Madonna_ of the cathedral's treasure was digitized and modeled first. In the meantime, some more works of art have been surveyed (Figs. 6 & 7):

* the so-called _Elfenbeinpyxis_, an elliptical box (10.4 cm and 12.5 cm in diameter, 8.8 cm high), made of ivory, originating from the 5th/6th century
* the _Ludgeruskelch_, i.e. a ceremonial cup (12.2 cm high, 7 cm diameter at the top), made of copper and gold, originating from the 10th century
* the scabbard of a ceremonial sword (94 cm in length), made from wood and covered by golden plates, originating from the 10th century.

These objects are very different in size, structure and material. Therefore, the digitizing is complicated and time-consuming. Some difficulties appear in the case of:

- reflections, especially when capturing glittering material (gold etc.)
- more or less transparent materials (gems etc.)
- complex surfaces providing hidden parts.

Specifications of the data acquisition can be found in Tab. 1.

\\begin{table} \\begin{tabular}{|c|c|c|c|} \\hline & Elfenbein-pyxis & Ludgerus-kelch & Ceremonial sword \\\\ \\hline base length & 50 mm & 50 mm & 300 mm \\\\ \\hline measuring distance & 400 mm & 400 mm & 1000 mm \\\\ \\hline survey depth & 90 mm & 90 mm & 180 mm \\\\ \\hline fixed sensor, object on turn-table & & & \\\\ \\hline number of scans & 36 & 85 & 83 \\\\ \\hline measuring time & 8 h & 5 h & 7 h \\\\ \\hline fitting by ICP & during measurement & additional 5 h & additional 4 h \\\\ \\hline \\end{tabular} \\end{table} Table 1: Specifications of data acquisition

## 4 Data Processing and Visualization

Visualization of complex object structures cannot be done with common computer-aided design programs. Tools like those used for reverse engineering and rapid prototyping have to be applied to transform scan data into accurate digital models. Favourable results could be achieved with Geomagic Studio (Geomagic, 2007). This software package includes features such as:

- point processing (data filtering, noise reduction)
- polygon building
- curvature-based hole filling
- automatic edge sharpening
- surface smoothing
- texture mapping
- etc.

A serious problem when digitizing and processing complex objects is the large amount of data, especially if the modeled artefact is to be posted on the internet. Data reduction is therefore an important issue. Fig. 8 presents a detail of the _Elfenbeinpyxis_, beginning (left) with the original resolution and then after reduction to 25% and 3% of the primary data volume. Fig. 9 shows the same detail overlaid with photographic texture. Even a significant reduction of the 3D data is partly compensated by the overlaid texture. These experiences are confirmed by the results for other artefacts (Fig. 10). Data processing using reverse engineering software thus appears to be suitable for modeling complex cultural objects (see Remondino & El-Hakim, 2006).

Figure 5: Scan (left) and digital image (right)

Figure 6: Digitizing the _Elfenbeinpyxis_

Figure 7: Digitizing the ceremonial sword
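As an illustration of one very simple data-reduction strategy, the following is a minimal sketch of voxel-grid downsampling of a point cloud. The reverse engineering packages mentioned above use much more sophisticated, curvature-aware mesh decimation; the voxel size and array names here are illustrative only.

```python
# Minimal sketch: voxel-grid downsampling of a dense point cloud, keeping
# the centroid of the points in each occupied voxel. This is a crude
# stand-in for the decimation performed by production tools.
import numpy as np

def voxel_downsample(points, voxel_size=0.5):
    """Reduce an (N,3) point cloud to one centroid per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points that fall into the same voxel.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

dense = np.random.rand(100_000, 3) * 100.0   # stand-in for a scan, in mm
sparse = voxel_downsample(dense, voxel_size=2.0)
print(f"{dense.shape[0]} points reduced to {sparse.shape[0]}")
```

As the Fig. 8/9 comparison suggests, the acceptable reduction factor depends strongly on whether photographic texture is overlaid afterwards.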
## 5 Concluding Remarks

3D data from different sources are suitable for establishing a digital archive of the treasure of the Essen cathedral. Fringe projection systems have been successfully applied to generate 3D models of several important and valuable works of medieval art. In addition, the combination of a touch probe digitizer and a laser scanner provides adequate accuracy and a favourable price/performance ratio, and seems very promising for the quick data collection of complex objects. Based on the geometrical reconstruction of the artefacts and additional information, multimedia processing is planned. In the near future, the data will be included in a 3D information system supporting all kinds of art-historical activities and research, but also useful for the non-professional museum visitor interested in medieval art.
The treasure of the Essen cathedral in Germany is one of the few collections of medieval art completely preserved in the course of history, including approximately 250 objects such as crosses, crowns, swords, statues, manuscripts, and reliquaries. These precious works of art show a great variety in size, shape, material, and complexity. In this paper, ideas and concepts for recording the manifold objects of the treasure by optical methods such as photogrammetry, fringe projection and laser scanning are reported. The measurement systems used are described, calibration aspects and difficulties of data acquisition and processing are mentioned, and virtual models of the treasure are presented.

Keywords: Cultural Heritage, 3D Digitization, Laserscanning, Modeling, Documentation
# Hyperspectral Image Denoising Based on Multi-Stream Denoising Network

## 1 Introduction

Hyperspectral images (HSIs) are typically composed of many spectral channels ranging from the visible to the infrared spectrum. HSIs simultaneously acquire both spatial and spectral information, and thus provide richer scene information than other data sources. Nevertheless, HSIs are often affected by various types of noise introduced by the imaging equipment and the external environment during acquisition, conversion, transmission, compression, and storage. Noise not only affects the visualization of HSIs, but also limits their subsequent analysis and processing. Therefore, it is critical to remove the noise in HSIs before HSI analysis and processing.

To remove the noise, researchers have proposed many methods for HSI denoising [1][2]. Band-by-band denoising strategies are usually followed in traditional HSI denoising methods: each band is treated as a 2-D image and denoised with methods such as block matching and 3-D filtering (BM3D) [3] or weighted nuclear norm minimization (WNNM) [4]. However, these strategies generally lead to large spectral distortions because they disregard the spectral information. Different from these methods, the block matching and 4-D filtering (BM4D) [5] algorithm is a 3-D image denoising method suitable for HSIs. However, it fails to take into account the inconsistency of the noise distribution between different bands, and therefore does not perform well in the spectral domain.

Recently, deep learning has achieved great success in image denoising due to its powerful capabilities of feature learning and nonlinear mapping, and image denoising models based on convolutional neural networks (CNNs) are developing rapidly. However, most of these methods remove the noise assuming a specific noise level, and it is difficult to achieve good performance once the noise level changes: we often obtain over-denoised or under-denoised results. To address this problem, blind denoising techniques have been proposed. On the one hand, the denoising model can be optimized on a very large training dataset containing noisy images with various noise levels; however, it may be very tough for the network to learn all noise types at the same time. On the other hand, a noise estimate or noise-level label can be introduced to guide the denoising process. Zhang et al. [6] proposed a fast and flexible denoising network (FFDNet), which exhibited a relevant performance improvement in image denoising. FFDNet removes complex noise by combining noisy image estimation and noise level estimation. However, an unspecified noise level still deteriorates the performance. Therefore, it is challenging to design a generalized denoising model.

In this paper, we propose a blind denoising method for HSIs based on a Multi-Stream Denoising Network (MSDNet), which estimates the noise level autonomously instead of taking the noise level as input. In MSDNet, a noise estimation subnetwork is designed to produce the noise estimate, and a denoising subnetwork is then introduced to generate the final result. In particular, a multi-scale fusion module is developed in the noise estimation subnetwork to capture the noise at different scales. Experiments conducted on HSI datasets demonstrate that the proposed method is superior to other closely related methods.

## 2 Methodology

The framework of the proposed method is illustrated in Fig. 1. It consists of two subnetworks: 1) a noise estimation subnetwork; and 2) a denoising subnetwork.
In the remainder of this section, more details about each subnetwork and the loss function are described.

### Noise Estimation Subnetwork

To capture noise features, three multiscale modules with different kernel sizes are employed. Fig. 2 shows the architecture of a multiscale module. Each one consists of six blocks, and the output of each module can be defined as follows:

\\[\\textbf{M}_{i}=cat[B_{1},B_{2},\\dots,B_{6}], \\tag{1}\\]

where \\(M_{i},i=1,2,3\\) denotes the \\(i\\)th multi-scale module, \\(B_{j},j=1,2,\\dots,6\\) denotes the \\(j\\)th block, and \\(cat(\\cdot)\\) is the concatenation operator along the channel dimension. As shown in Fig. 1, the kernel sizes of the three multiscale modules are \\(3\\times 3\\), \\(5\\times 5\\), and \\(7\\times 7\\), respectively. The outputs of the three multiscale modules are concatenated together for the next step. We use the pyramid pooling module developed in [11] to further obtain a multiscale description. In order to fuse multiscale features for noise estimation, we concatenate four feature maps along the channel dimension to generate a feature map \\(P\\):

\\[\\textbf{P}=cat[f_{1},f_{2},f_{3},f_{4}], \\tag{2}\\]

where \\(f_{1},f_{2},f_{3},f_{4}\\) denote the four feature maps. Afterwards, aiming to learn the relationship between the channels of the feature map, an attention mechanism is employed: firstly, \\(P\\) is squeezed in the spatial domain to generate a vector \\(V\\in\\mathbb{R}^{4C\\times 1\\times 1}\\) by global max pooling. Then, \\(V\\) is fed into two fully-connected layers to generate the weight vector \\(S\\). Ultimately, channel-wise multiplication is applied between the feature map \\(P\\) and the weight vector \\(S\\):

\\[\\textbf{n}_{i}=f(P_{i},S_{i})\\quad i=1,2,\\dots,4C, \\tag{3}\\]

where \\(n_{i}\\) denotes the \\(i\\)th channel of the noise level estimation \\(N\\), and \\(f(\\cdot)\\) refers to channel-wise multiplication between the feature map \\(P_{i}\\in\\mathbb{R}^{H\\times W}\\) and the weight \\(S_{i}\\).

### Denoising Subnetwork

In the denoising subnetwork, we employ a 16-layer UNet framework which takes both the noise level estimation and the noisy image as input to produce the denoised image. All filters of the network are of size \\(3\\times 3\\) and the convolution layers are activated by the ReLU function, except the last one. The network obtains the denoised image \\(D\\) by learning the residual mapping of the noisy image \\(Y\\) as follows:

\\[\\textbf{D}=Y+f(Y,N;W_{unet}), \\tag{4}\\]

where \\(W_{unet}\\) denotes the network parameters of the denoising subnetwork.

Figure 1: Framework of the proposed image denoising method

Figure 2: Architecture of the multiscale module
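For concreteness, the following is a minimal PyTorch sketch of the channel attention of Eq. (3) and the residual mapping of Eq. (4). The layer sizes, the sigmoid gating on the weight vector, and the single-convolution stand-in for the 16-layer UNet are our own illustrative assumptions, not the exact architecture.

```python
# Minimal PyTorch sketch of the two pieces just described: the channel
# attention of Eq. (3) and the residual denoising of Eq. (4). Sizes and
# names are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze P to a vector V by global max pooling, map it through two
    fully-connected layers to weights S, then rescale P channel-wise."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, p):                      # p: (B, 4C, H, W)
        v = p.amax(dim=(2, 3))                 # global max pool -> (B, 4C)
        s = self.fc(v)                         # weight vector S
        return p * s[:, :, None, None]         # n_i = P_i * S_i, Eq. (3)

class ResidualDenoiser(nn.Module):
    """D = Y + f(Y, N; W), Eq. (4); one conv stands in for the UNet."""
    def __init__(self, bands=31):
        super().__init__()
        self.body = nn.Conv2d(2 * bands, bands, kernel_size=3, padding=1)

    def forward(self, y, n):
        return y + self.body(torch.cat([y, n], dim=1))

y = torch.randn(2, 31, 64, 64)                 # noisy patch
n = torch.randn(2, 31, 64, 64)                 # noise level estimate
print(ResidualDenoiser()(y, n).shape)          # torch.Size([2, 31, 64, 64])
```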
### Loss Function

In this paper, feature extraction and denoising are trained in an end-to-end framework. Thus, a mean squared error (MSE) loss and a perceptual loss are employed as the basic loss functions to guide the learning of the denoising model. The MSE loss is formulated as follows:

\\[\\mathcal{L}_{M}=MSELoss(G,D)=\\frac{1}{\\mathit{CHW}}\\sum_{t=1}^{CHW}(G_{t}-D_{t})^{2}, \\tag{5}\\]

where \\(G\\) and \\(D\\) denote the ground truth and the denoised image respectively, \\(CHW\\) denotes the number of pixels, and \\(G_{t}\\) and \\(D_{t}\\) denote the ground truth and the denoised image at pixel \\(t\\), respectively. Furthermore, the perceptual loss is formulated as follows:

\\[\\mathcal{L}_{P}=\\ell^{\\phi,j}(G,D)=\\frac{1}{C_{j}H_{j}W_{j}}\\|\\ \\phi_{j}(G)-\\phi_{j}(D)\\ \\|_{2}^{2}, \\tag{6}\\]

where \\(\\phi_{j}(\\cdot)\\) denotes the \\(j\\)th layer of VGG-19. The perceptual loss therefore captures more detailed information through the feature reconstruction of a CNN. In addition, we exploit an asymmetric loss as the noise estimation loss, introduced from CBDNet [7], to measure the quality of the noise estimate. The asymmetric loss penalizes over- and under-estimation of the noise level unequally, and is formulated as follows:

\\[\\mathcal{L}_{asymm}=\\sum_{i}|\\alpha-\\mathbb{I}_{(N_{i}-N_{i}^{\\prime})<0}|\\cdot(N_{i}-N_{i}^{{}^{\\prime}})^{2}, \\tag{7}\\]

where \\(\\mathbb{I}_{e}\\) denotes the indicator function, and \\(N_{i}\\) and \\(N_{i}^{{}^{\\prime}}\\) denote the noise level estimate and the ground truth at pixel \\(i\\), respectively. \\(\\alpha\\) is empirically set to 0.25. To sum up, the total loss function is given by:

\\[\\mathcal{L}=\\mathcal{L}_{M}+\\mathcal{L}_{P}+\\lambda_{asymm}\\mathcal{L}_{asymm}, \\tag{8}\\]

where \\(\\lambda_{asymm}\\) denotes the weight of the asymmetric loss.
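A minimal PyTorch sketch of the asymmetric loss of Eq. (7) and the total loss of Eq. (8) could look as follows. The VGG-19 feature extractor is abstracted into a user-supplied function, and the value of \\(\\lambda_{asymm}\\) shown is an assumption, since the paper leaves it as a parameter.

```python
# Minimal sketch of Eq. (7) and Eq. (8) in PyTorch. feat_fn stands in for
# a frozen VGG-19 slice; lambda_asymm = 0.5 is an assumed value.
import torch

def asymmetric_loss(n_est, n_true, alpha=0.25):
    """Eq. (7): with alpha = 0.25, under-estimates of the noise level get
    weight 1 - alpha = 0.75, over-estimates get weight alpha = 0.25."""
    diff = n_est - n_true
    weight = torch.full_like(diff, alpha)
    weight[diff < 0] = 1.0 - alpha   # under-estimates get the larger weight
    return (weight * diff ** 2).sum()

def total_loss(denoised, gt, n_est, n_true, feat_fn, lambda_asymm=0.5):
    l_mse = torch.mean((gt - denoised) ** 2)                     # Eq. (5)
    l_perc = torch.mean((feat_fn(gt) - feat_fn(denoised)) ** 2)  # Eq. (6)
    return l_mse + l_perc + lambda_asymm * asymmetric_loss(n_est, n_true)
```

The unequal weighting is what discourages the network from systematically under-estimating the noise level, which would leave residual noise in the output.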
## 3 Experiments and Analysis

### Experimental Setup

We conduct the training process on the ICVL dataset, which comprises 201 images collected at a spatial resolution of \\(1392\\times 1300\\) over 31 spectral channels. In order to expand the training dataset, each training image was cropped into multiple patches of size \\(64\\times 64\\times 31\\). Furthermore, we use the Pavia Center dataset to fine-tune the model, and we evaluate the model on the Pavia University dataset. These two datasets were acquired by the ROSIS sensor; after processing, the size of the Pavia Center image is \\(1096\\times 715\\times 102\\), while the size of the Pavia University image is \\(610\\times 340\\times 103\\). The Adam optimizer with a batch size of 64 is employed to optimize the proposed method. The learning rate was initialized to \\(10^{-4}\\) and we use a weight decay of 0.0005. The network was trained for 100 epochs with the above settings. As for the noise settings, noise was added in two cases: 1) adding AWGN with noise levels of 30, 50, and 70; 2) randomly adding AWGN with the noise level ranging from 10 to 70.

### Results and Analysis

We compare the proposed method with four other closely related denoising methods: BM4D [5], TDL [8], LRMR [9] and LRTV [10] on the Pavia University dataset. In order to evaluate the performance of these denoising methods, we use three quantitative evaluation indexes: PSNR, SSIM and SAM. Generally speaking, higher values of PSNR and SSIM indicate a better denoising effect, while a lower value of SAM indicates a better denoising effect. Table 1 lists the quantitative assessment results of our method and the other closely related denoising methods at different noise levels. As shown in the table, our method achieves the best performance in the quantitative evaluation. Fig. 3 shows the visual comparison between our method and the other denoising methods; our method effectively removes the noise while preserving the details of the image.

Figure 3: Denoising results at the \\(10^{th}\\) band of the image under noise level \\(\\sigma=30\\) on the Pavia University dataset.

## 4 Conclusion

In this paper, we propose a novel blind denoising method for HSIs based on a multi-stream denoising network (MSDNet), which consists of a noise estimation subnetwork and a denoising subnetwork. MSDNet estimates the noise level autonomously, thereby realizing blind denoising and improving the performance of HSI denoising. The comparison with other methods indicates that the proposed method achieves better denoising performance under different noise levels. In the future, we are committed to investigating hyperspectral image denoising under mixed noise, such as impulse noise and stripe noise. On the other hand, we will work on improving the accuracy of the noise level estimation to improve the denoising process.

## References

* [1] Q. Yuan, L. Zhang, and H. Shen, "Hyperspectral image denoising employing a spectral-spatial adaptive total variation model," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 50, no. 10, pp. 3660-3677, 2012.
* [2] Y. Zhao and J. Yang, "Hyperspectral image denoising via sparse representation and low-rank constraint," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 53, no. 1, pp. 296-308, 2015.
* [3] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image denoising by sparse 3-d transform-domain collaborative filtering," _IEEE Transactions on Image Processing_, vol. 16, no. 8, pp. 2080-2095, 2007.
* [4] S. Gu, L. Zhang, W. Zuo, and X. Feng, "Weighted nuclear norm minimization with application to image denoising," in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2014.
* [5] M. Maggioni, V. Katkovnik, K. Egiazarian, and A. Foi, "Nonlocal transform-domain filter for volumetric data denoising and reconstruction," _IEEE Transactions on Image Processing_, vol. 22, no. 1, pp. 119-133, 2013.
* [6] K. Zhang, W. Zuo, and L. Zhang, "FFDNet: Toward a fast and flexible solution for CNN-based image denoising," _IEEE Transactions on Image Processing_, vol. 27, no. 9, pp. 4608-4622, 2018.
* [7] S. Guo, Z. Yan, K. Zhang, W. Zuo, and L. Zhang, "Toward convolutional blind denoising of real photographs," in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2019.
* [8] Y. Peng, D. Meng, Z. Xu, C. Gao, Y. Yang, and B. Zhang, "Decomposable nonlocal tensor dictionary learning for multispectral image denoising," in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2014.
* [9] H. Zhang, W. He, L. Zhang, H. Shen, and Q. Yuan, "Hyperspectral image restoration using low-rank matrix recovery," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 52, no. 8, pp. 4729-4743, 2014.
* [10] W. He, H. Zhang, L. Zhang, and H. Shen, "Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 54, no. 1, pp. 178-188, 2016.
* [11] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, "Pyramid scene parsing network," in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, July 2017.
\\begin{table} \\begin{tabular}{c c|c c c c c c} \\hline Noise & Index & Noisy & BM4D & TDL & LRMR & LRTV & Proposed \\\\ \\hline \\hline \\multirow{3}{*}{\\(\\sigma\\) = 30} & PSNR & 18.59 & 33.43 & 33.54 & 27.14 & 29.29 & **34.93** \\\\ & SSIM & 0.193 & 0.854 & 0.849 & 0.578 & 0.694 & **0.915** \\\\ & SAM & 0.875 & 0.173 & 0.152 & 0.407 & 0.326 & **0.102** \\\\ \\hline \\multirow{3}{*}{\\(\\sigma\\) = 50} & PSNR & 14.16 & 31.70 & 31.81 & 24.01 & 26.25 & **32.46** \\\\ & SSIM & 0.083 & 0.784 & 0.788 & 0.436 & 0.531 & **0.874** \\\\ & SAM & 1.093 & 0.207 & 0.146 & 0.613 & 0.498 & **0.127** \\\\ \\hline \\multirow{3}{*}{\\(\\sigma\\) = 70} & PSNR & 11.23 & 30.12 & 30.38 & 22.07 & 24.18 & **30.68** \\\\ & SSIM & 0.045 & 0.731 & **0.751** & 0.232 & 0.298 & 0.743 \\\\ & SAM & 1.207 & 0.238 & 0.186 & 0.695 & 0.545 & **0.154** \\\\ \\hline \\multirow{3}{*}{blind} & PSNR & 17.14 & 31.16 & 26.86 & 24.63 & 28.86 & **32.81** \\\\ & SSIM & 0.186 & 0.734 & 0.532 & 0.469 & 0.595 & **0.869** \\\\ \\cline{1-1} & SAM & 1.048 & 0.330 & 0.496 & 0.777 & 0.454 & **0.124** \\\\ \\hline \\end{tabular} \\end{table} Table 1: Quantitative assessment results of different methods under several noise levels on Pavia University dataset. “Blind” means corrupted by Gaussian noise with unknown \\(\\sigma\\) at each band.
Hyperspectral images (HSIs) have been widely applied in many fields, such as military, agriculture, and environment monitoring. Nevertheless, HSIs commonly suffer from various types of noise during acquisition. Therefore, denoising is critical for HSI analysis and applications. In this paper, we propose a novel blind denoising method for HSIs based on a Multi-Stream Denoising Network (MSDNet). Our network consists of a noise estimation subnetwork and a denoising subnetwork. In the noise estimation subnetwork, a multi-scale fusion module is designed to capture the noise at different scales. Then, the denoising subnetwork is utilized to obtain the final denoised image. The proposed MSDNet obtains a robust noise level estimation, which is capable of improving the performance of HSI denoising. Extensive experiments on an HSI dataset demonstrate that the proposed method outperforms four closely related methods.

Yan Gao\\({}^{1,2}\\), Feng Gao\\({}^{1,2,*}\\), Junyu Dong\\({}^{1,2}\\) \\({}^{1}\\)College of Information Science and Engineering, Ocean University of China \\({}^{2}\\)Institute of Marine Development, Ocean University of China

Keywords: hyperspectral image, image denoising, multi-scale fusion, noise estimation.

Footnote †: This work was supported in part by the National Key Research and Development Program of China under Grant 2018AAA0100602, in part by the National Natural Science Foundation of China under Grant U1706218, and in part by the Key Research and Development Program of Shandong Province under Grant 2019GHY112048. (Email: [email protected])
# GalaxAI: Machine learning toolbox for interpretable analysis of spacecraft telemetry data

Ana Kostovska\\({}^{1,2,\\dagger}\\), Matej Petkovic\\({}^{1,2,\\dagger}\\), Tomaz Stepisnik\\({}^{1,2,\\dagger}\\), Luke Lucas\\({}^{3}\\), Timothy Finn\\({}^{4}\\), Jose Martinez-Heras\\({}^{5}\\), Pance Panov\\({}^{1,2}\\), Saso Dzeroski\\({}^{2}\\), Alessandro Donati\\({}^{4}\\), Nikola Simidjievski\\({}^{1,2,6}\\), Dragi Kocev\\({}^{1,2}\\) \\({}^{1}\\)_Bias Variance Labs, Ljubljana, Slovenia_ \\({}^{2}\\)_Jozef Stefan Institute, Ljubljana, Slovenia_ \\({}^{3}\\)_LSE Space GmbH, Gilching, Germany_ \\({}^{4}\\)_ESOC, European Space Agency, Darmstadt, Germany_ \\({}^{5}\\)_Solein Engineering, Darmstadt, Germany_ \\({}^{6}\\)_University of Cambridge, Cambridge, UK_ \\(\\dagger\\)_- Contributed equally and should be considered as joint first authors._ _Corresponding authors: [email protected], [email protected], [email protected], [email protected]_

## I Introduction

Spacecraft operate in extremely challenging and unforgiving environments. This calls for careful planning of their operations and close monitoring of their status and health [1]. The spacecraft's monitoring includes analysing housekeeping telemetry data that measure and describe the spacecraft's status, its activities, and its environment. These include temperature values at different locations, radiation values, power consumption estimates, status/command execution of active onboard equipment, and performed computational activities [2, 3, 4, 5, 6, 7, 8]. Analysing telemetry data is complex and nontrivial, since such data are typically: high dimensional (the number of features easily reaches the thousands); multimodal (measured from different onboard components at different times); heterogeneous (the variables describing the status can be of different data types); temporally dependent (the housekeeping data are typically multidimensional time series); incomplete (due to different sampling periods and timings, not all values of all variables are always retrieved); and contaminated with obvious outliers (extreme abnormal values caused by errors in data conversion or transmission) [2]. Based on the analysis of these telemetry data, the spacecraft mission-planning and operations teams make decisions about the spacecraft's next operations - what activities it will perform (in terms of its mission) and when it will perform them.

In this paper, we present a machine learning toolbox for efficient and interpretable end-to-end data analysis of spacecraft telemetry data. We showcase its potential by analysing telemetry data of two spacecraft operated by the European Space Agency: Mars Express and INTEGRAL.

Mars Express (MEX), a long-lasting mission of the European Space Agency, has been exploring Mars since 2004. It is responsible for a wealth of scientific discoveries, including evidence of the presence of water (above and below the surface of the planet), an ample amount of three-dimensional renders of the surface, and a complete map of the chemical composition of Mars's atmosphere. The scientific payload of MEX consists of seven instruments, which together with the onboard equipment have to be kept within their operating temperature ranges (from room temperature for some instruments, to temperatures as low as \\(-180^{\\circ}C\\) for others).
In order to maintain these predefined operating temperatures, the spacecraft is equipped with an autonomous thermal system composed of 33 heater lines and coolers that consume a significant amount of the total generated electric power - leaving only a fraction to be used for science operations. Therefore, given the age and the current condition of MEX, monitoring and optimally planning this consumption has a direct consequence on the longevity of the spacecraft and its mission [9, 10, 11, 12, 13].

INTEGRAL is a space observatory designed to monitor and detect gamma-rays with high sensitivity. Since its launch in 2002, it has been responsible for detecting iron quasars, investigating high-energy gamma-ray bursts as evidence of black holes, supernova remnants and active galactic nuclei (AGNs), as well as providing imaging and spectroscopic observations of astronomical events in both the \\(X\\)-ray range and optical wavelengths. During its 64-hour orbit around Earth (with an apogee of \\(\\sim\\)140 000 km and a perigee of \\(\\sim\\)6 000 km), INTEGRAL passes through the Van Allen radiation belts, where radiation levels are high enough to potentially damage the onboard equipment. While the spacecraft is equipped with radiation sensors, these operate autonomously and are used for emergency instrument shutdowns, which are followed by lengthy recovery procedures. Accurately modeling and predicting the spacecraft's position w.r.t. these radiation belts is therefore important: it allows for better control over the activation/deactivation of onboard instruments, ultimately leading to optimal scientific output [14].

GalaxAI aims at addressing these tasks - by allowing for robust, accurate, and interpretable data analysis, it facilitates better and more informative mission planning. In the context of the MEX case study, this translates to better prediction of the thermal power consumption and therefore better estimation of the total available power, ultimately prolonging the mission itself. For INTEGRAL, on the other hand, better prediction of the Van Allen belt crossings means less recovery time and more time available for science. All of these predictive capabilities, coupled with visualisations and explainable model outputs, are integrated into GalaxAI - a versatile toolbox for better and informed decisions regarding the spacecraft's present and future status.

The paper is organized as follows. Section II briefly describes the (software) architecture of GalaxAI. Next, Section III presents the machine learning pipelines available in GalaxAI. Section IV discusses the graphical user interface of the toolbox. Finally, Section V concludes the paper.

## II Description of GalaxAI

GalaxAI follows a two-layer design, consisting of a back-end and a front-end layer (Figure 1). The cornerstone of GalaxAI, its machine learning (ML) framework, is implemented as part of the back-end layer. Besides modularity and easy maintenance, such an implementation also allows certain data/compute-intensive ML routines to be automated and executed on dedicated computing infrastructures. The ML framework consists of three major parts: (1) data preprocessing, feature engineering and selection; (2) model construction (learning); and (3) making predictions with a learned model. The first part includes various data preprocessing techniques and feature engineering algorithms designed and employed to pre-process the raw telemetry data pertaining to a particular spacecraft.
The second part focuses on learning predictive models suitable for a considered data analysis task. More specifically, GalaxAI employs and supports various state-of-the-art machine learning libraries, including scikit-learn [15] (for more 'traditional' machine learning methods), pytorch [16] (for deep neural network methods) and CLUS [17] (for constructing predictive clustering trees). Depending on the task at hand, GalaxAI allows for the use of different suitable methods. For instance, in the case of predicting MEX's thermal power consumption, this relates to different methods for multi-output regression such as ensembles of predictive clustering trees [18], gradient boosting ensembles [19], and fully-connected neural networks [20]. The third part of GalaxAI focuses on making predictions and visualising the findings. These range from simply plotting the predicted values to a more sophisticated analysis of the utility and relevance of the features used in the model construction phase. Moreover, GalaxAI also allows for running different simulation scenarios and 'what-if' analyses of spacecraft operations under different conditions, and for analysing the impact of specific sets of commands.

GalaxAI is also fully operable from the front-end layer through a Graphical User Interface (GUI). This enables users without any particular expertise in ML and software engineering to execute different parts of the data analysis pipeline, or the pipeline as a whole. The interface employs React [21], an open-source JavaScript framework used for front-end development of primarily web applications running on the Node.js runtime environment [22]. This is utilized together with Electron [23], an open-source platform for building desktop/offline applications. For visualizing the model outputs (i.e., predictions, feature importance diagrams, etc.), GalaxAI employs the interactive data visualization library Plotly.js [24].

## III Machine Learning Pipelines

We present two instantiations of GalaxAI, one for each of the spacecraft under consideration: GalaxAI-MEX, focusing on analyses of the thermal power consumption of the Mars Express spacecraft, and GalaxAI-INTEGRAL, for analyses of INTEGRAL's entry/exit times from the Van Allen belts.

### _GalaxAI-MEX_

Recall that the back-end consists of three main parts that cover different stages of the pipeline. Here, we first briefly describe the input data as well as the three stages of the GalaxAI-MEX pipeline.

#### III-A1 Input data

GalaxAI-MEX processes six heterogeneous types of data: _solar aspect angles (SAA)_, _detailed mission operation plans (DMOP)_, _flight dynamics timeline (FTL)_, _various events (EVT)_, _long-term data (LT)_ and _power data (PW)_. More specifically, SAA data give the orientation of the spacecraft with respect to the Sun, while DMOP data give the commands that are issued to the spacecraft, together with the subsystem where each command is issued. FTL data give the pointing events (towards Mars, the Earth, etc.) and EVT data list various MEX-orbit-related events, such as entering/exiting umbra and passing through the extreme points (apo- and pericenter) of the orbit. Finally, LT data contain the values of physical quantities that can be computed far into the future (e.g., the distance between Mars and the Sun), and PW data give the electrical currents through each of the 33 electrical heaters onboard the spacecraft.
#### III-A2 Data preprocessing

Given the heterogeneous raw data, the preprocessing within GalaxAI-MEX includes data alignment, feature construction, aggregation of the power data, and data cleansing. The data is first aligned to a given time granularity (e.g., one entry per 15 minutes), as the entries from the various data files are time-stamped but recorded at different (and irregular) intervals. The next step, _feature construction_, creates the features (often by joining different data sources) used for learning the predictive models. This step is necessary since the data in its raw format is not directly usable by a machine learning algorithm. The _aggregation of the power data_ includes computing the average electrical current (e.g., for every 15 minutes) for each of the thermal consumers. Finally, _data cleansing_ removes/imputes records with missing values. Details describing the feature construction procedure(s) are given by [11]. At the end of this stage, the constructed data set can be readily used for model learning. In addition to the data set itself, a metafile is created that contains all the information necessary for reproducing the learning experiments.

#### III-A3 Learning models

The data set constructed in the previous stage is used here for constructing a predictive model. To this end, GalaxAI-MEX implements several different machine learning methods. Namely, it implements unified wrappers for XGBoost [19], PCT-based ensembles [18], (deep) fully-connected neural networks [20], as well as for all models implemented in the scikit-learn toolbox [15]. Moreover, it implements feature ranking methods to provide a better understanding of the models and the predictions. In particular, it provides three feature ranking scores [25, 26] that calculate feature importance/relevance: (1) the random forest mechanism, (2) GENIE3, and (3) Symbolic scores. The first score can be applied to arbitrary types of machine learning models, since it is based on randomly permuting feature values and estimating the difference in the errors caused by the permutation. The GENIE3 and Symbolic scores can be calculated only for tree-based ensemble models. All three scores can be calculated for subsets of data examples selected by the user. The default values of the hyperparameters for most of the machine learning methods were selected in a comprehensive experimental study that used the MEX data between 2008 and 2020. Once a predictive model has been learned, it is ready for use in making predictions for unseen (and/or future) data. The learned model is stored together with an additional meta file (containing all the parameters used for learning the model), allowing for shared analysis with other users as well as reproducible analysis. For an extensive study on the performance of these models in the context of predicting MEX's thermal power consumption, we refer the reader to [11, 13].

#### III-A4 Making predictions

At the final stage, the constructed predictive models are employed for making predictions. GalaxAI-MEX employs various evaluation strategies for estimating the performance of the learned models and the quality of the predictions. Moreover, it includes a mechanism for interpreting model outputs in terms of feature importance diagrams, which depict the influence of the input features on the resulting predictions. These evaluation statistics, together with the model predictions, are the output of GalaxAI-MEX.
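To make the alignment and aggregation steps of the GalaxAI-MEX pipeline concrete, here is a minimal pandas sketch; the file names, column layout, and choice of 15-minute granularity are illustrative assumptions rather than the actual MEX data format.

```python
# Minimal sketch: resample irregularly time-stamped telemetry onto a
# common 15-minute grid and average the heater currents per interval.
# All file and column names are hypothetical.
import pandas as pd

def align_15min(df, how="mean"):
    """Resample a time-indexed frame onto a regular 15-minute grid."""
    return df.resample("15min").agg(how)

# Hypothetical raw inputs, each indexed by an irregular timestamp column.
power = pd.read_csv("power_raw.csv", parse_dates=["ut_ms"], index_col="ut_ms")
saa = pd.read_csv("saa_raw.csv", parse_dates=["ut_ms"], index_col="ut_ms")

power_15 = align_15min(power)          # mean current per heater line
saa_15 = align_15min(saa)              # mean solar aspect angles

# Join on the shared grid and drop intervals with missing records
# (the cleansing step); the result is ready for model learning.
dataset = power_15.join(saa_15, how="inner").dropna()
```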
### _GalaxAI-INTEGRAL_

Similarly as before, we first describe the raw data used in the pipeline and its preprocessing, representation, and preparation for use by machine learning algorithms. Then, we describe the implemented machine learning methods and their use to analyze the INTEGRAL data.

#### III-B1 Input data

To determine when the spacecraft enters the Van Allen belts, we rely on the onboard IREM measurements, taken every 8 seconds. Since they can be very noisy, we take median values from bins with a coarser time granularity (5-15 minutes). The times of entries into and exits from the belts are determined by thresholding these IREM measurements. More specifically, when the count rate is above 600 electron counts per second, the spacecraft is determined to be inside the belts. The orbit of each revolution of the spacecraft is defined by 12 orbital elements: (1) perigee time, (2) perigee altitude, (3) apogee time, (4) apogee altitude, (5) the longitude above which perigee is located, (6) semi-major axis, (7) eccentricity, (8) inclination, (9) right ascension of the ascending node, (10) argument of perigee, (11) period, and (12) period difference from the previous revolution. We also take into account the eclipse times, when the spacecraft is shadowed from the Sun by the Earth or the Moon. The orbital elements and eclipse times are available for several months into the future and form the basis from which we engineer the features that the models use to predict entry/exit times.

#### III-B2 Data representation and preprocessing

Since we are interested in the orbital position of the spacecraft, all timestamps are first transformed to _phase_ values relative to the current revolution. The phase values range from 0 at perigee to 1 at the next perigee. For this task, we consider two data representations - _positional_ and _per-revolution_. The former, positional representation is similar to the one proposed by Finn et al. [14]. Here, the data is ordered in a series whose examples describe the state of the spacecraft using the orbital elements and the IREM counts (or binary indicators of whether INTEGRAL is in the belts or not). Thus one can consider two different tasks: regression (when predicting the IREM counts) or classification (when predicting the binary indicator). In the _per-revolution_ representation, each example describes one revolution of the spacecraft using the 12 orbital elements, together with the times when the spacecraft enters and exits Earth's umbra and penumbra and the penumbra of the Moon. If any of these events do not occur for a given revolution, we use the apogee time as a fill-in value, because apogee is furthest removed from the events of interest (belt entry/exit), which happen near perigee. This yields a total of 18 features. In this representation, we can directly predict the entry/exit times (or altitudes) for a given revolution. This gives us two target variables (predictands), and we can treat the problem as a multi-target regression task. Each revolution takes approximately 64 hours; in the positional representation with a 15-minute granularity, there are 264 timestamps (examples) during each revolution, which corresponds to a single example in the per-revolution representation. The per-revolution representation is therefore much more compact. By performing a comprehensive set of experiments, we found that it provides better predictions for the entry and exit times.
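The thresholding rule and phase transformation described above can be sketched as follows. The 5-minute binning, the assumption that each perigee-to-perigee revolution begins and ends inside the belts, and all variable names are illustrative.

```python
# Minimal sketch: turn binned IREM counts into belt exit/entry phases for
# one revolution (inside the belts when the median rate exceeds 600 c/s).
import numpy as np
import pandas as pd

THRESHOLD = 600.0  # median electron counts per second

def belt_exit_entry_phases(irem, perigee, next_perigee):
    """For one perigee-to-perigee revolution, return the phase at which
    the spacecraft leaves the belts after perigee (exit) and the phase at
    which it re-enters them before the next perigee (entry)."""
    med = irem.resample("5min").median()          # robust to noisy samples
    inside = (med > THRESHOLD).to_numpy()
    phase = ((med.index - perigee) / (next_perigee - perigee)).to_numpy()
    changes = np.flatnonzero(np.diff(inside.astype(int)))
    if changes.size < 2:
        return np.nan, np.nan                     # no clean crossing found
    exit_phase = phase[changes[0] + 1]            # first inside -> outside
    entry_phase = phase[changes[-1] + 1]          # last outside -> inside
    return exit_phase, entry_phase
```

The returned phases are exactly the two per-revolution targets used in the multi-target regression formulation.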
#### III-B3 Learning models

Similarly to GalaxAI-MEX, GalaxAI-INTEGRAL implements several machine learning methods: (1) the \\(k\\)-nearest neighbor regressor (KNN), (2) random forest ensembles of regression trees (RF), (3) extreme gradient boosting ensembles of regression trees (XGB), (4) gradient boosting ensembles of regression trees with quantile loss (GB), (5) fully connected neural networks (FCNN), and (6) recurrent neural networks (RNN) with gated recurrent units. For some of the methods (KNN, RF, and GB), GalaxAI-INTEGRAL employs the _scikit-learn_ [15] implementations; for XGB, it uses the _xgboost_ Python library [19]. GalaxAI-INTEGRAL implements both XGB and GB since GB supports quantile regression, which allows models to predict the conditional median instead of the mean and can help deal with noisy data. The neural network models are implemented in the Pytorch framework [16]. While RNNs, in particular, are well suited for time-series data by design, the remaining methods require additional engineering to take the temporal aspect into account. To this end, we add historical information to each example, i.e., each example has access to the features of the previous \\(n\\) examples and the targets of the previous \\(m\\) examples. We refer to these values as _feature history_ and _autoregression history_, respectively (a minimal sketch of this construction is given below).

An important consideration for the methods is the number of target variables: in the positional representation, there is only _one_ target (the IREM count rate or in-belt indicator), while in the per-revolution representation, there are _two_ targets - the entry and exit altitudes/phases. Most of the methods used can handle predicting two target variables with a single model (global approach). The exceptions are XGB and GB, where GalaxAI-INTEGRAL constructs a separate model for each target variable (local approach). Prior to model learning, the data are standardized; the model predictions are inversely transformed at the end to obtain values on the original scale. In terms of handling missing values, examples with missing targets are excluded from the learning set in the learning phase. In turn, when evaluating a model, the prediction errors are only calculated on non-missing values. When using the autoregression history, missing target values can result in missing feature values as well. In such cases, the missing feature values are imputed with the mean value for that variable in the learning set.
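A minimal sketch of the feature/autoregression history construction just described, with mean imputation for the first revolutions that lack a full history, could look like this; column names and lag counts are illustrative.

```python
# Minimal sketch: augment each per-revolution example with the features of
# the previous n revolutions and the targets of the previous m revolutions.
import pandas as pd

def add_history(df, feature_cols, target_cols, n=2, m=2):
    """df: one row per revolution, ordered by time."""
    out = df.copy()
    for lag in range(1, n + 1):                  # feature history
        for c in feature_cols:
            out[f"{c}_lag{lag}"] = df[c].shift(lag)
    for lag in range(1, m + 1):                  # autoregression history
        for c in target_cols:
            out[f"{c}_lag{lag}"] = df[c].shift(lag)
    # Early revolutions lack full history; impute with learning-set means.
    return out.fillna(out.mean(numeric_only=True))
```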
## IV Interaction with data and models

The machine learning pipelines that are executable through the GalaxAI toolbox are well-structured, documented, and accessible via a command-line interface, albeit one mostly suited for data science practitioners. Such usage scenarios can create serious, non-trivial challenges for engineers and operators who do not have prior experience working with ML-based frameworks. These challenges include the choice of the predictive model, choosing and setting model parameters, interpretability of the model, and explainability of its findings. The latter two, in particular, are very important when it comes to increasing the trustworthiness and facilitating the utility of predictive models in practice, especially when working with black-box models such as neural networks.

Figure 2: GUI of GalaxAI-MEX (left-hand side) and GalaxAI-INTEGRAL (right-hand side)

Figure 3: GalaxAI-INTEGRAL: Exploratory data analysis plots.

GalaxAI addresses these challenges by employing a user-friendly graphical user interface (GUI) for executing the ML pipeline(s), allowing for both visual exploratory data analysis and visualization of the model results. In particular, GalaxAI facilitates seamless execution of the ML pipelines by providing pre-selected learning methods with optimal parameters (selected based on a comprehensive experimental study). Next, the interactive nature of the visualizations enables the domain experts to perform exploratory data analysis on the preprocessed data and to interpret the obtained models and predictions. Moreover, GalaxAI allows for performing various 'what-if' analysis scenarios by excluding data examples and/or features. The 'what-if' analysis provides additional means to integrate existing expert knowledge into the learning process, thus further improving the results.

Figure 2 depicts the initial viewscreen of GalaxAI, with respect to GalaxAI-MEX and GalaxAI-INTEGRAL. The users can perform several tasks related to the data analysis pipeline: (i) pre-process and analyse data, (ii) use pre-trained models to perform predictions and analyse the results, and (iii) execute the complete end-to-end pipeline from loading raw data, to learning a predictive model, to analysing the results. For executing each task, the user only needs to provide the input data/model required for the specific analysis scenario. For easier usage, most of the parameters are already set by default to their optimal values, but the users can always change them and explore other parameters based on their experience with the domain and the data.

#### IV-A1 Exploratory Data Analysis

Within GalaxAI, we have implemented different interactive diagrams (histograms and boxplots) that enable users to explore the data. More specifically, the diagrams allow users to select time ranges for visualization as well as to select several variables at the same time. Figure 3 depicts these diagrams as implemented in GalaxAI-INTEGRAL for selected features' distributions (similar diagrams are also available in GalaxAI-MEX).

#### IV-A2 Predictions Visualization

In terms of visualising the model output, GalaxAI implements several diagrams pertaining to visualization of the obtained predictions and visualization of the influence/importance of the descriptive features. The former involves an interactive scatterplot for visual inspection of the predicted values. Figure 4 shows such a diagram as implemented within GalaxAI-MEX, for visualizing the predicted thermal power consumption of MEX. The diagrams can visualise the predictions of multiple thermal powerlines simultaneously or plot their cumulative predicted value at each time point. The latter gives a more general overview, allowing for a quick assessment of the predictive analysis. Moreover, in scenarios where the test data set provided at input contains the true values of the predictands, both the true and predicted values are displayed, coupled with error bars at each time point depicting their discrepancy. Similar diagrams are also available within GalaxAI-INTEGRAL.

Figure 4: GalaxAI-MEX: Scatterplot visualizing the model predictions for NPWD2562 and NPWD2532 for 25.08.2008

Figure 5: GalaxAI-MEX: Doughnut charts visualizing the importance of the descriptive features (per feature category).
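As an illustration of the kind of prediction diagram described above, here is a minimal sketch using Plotly's Python API (the GUI itself renders with Plotly.js); all data, names, and axis labels are made up.

```python
# Minimal sketch: prediction-vs-truth scatterplot with error bars, in the
# spirit of the GalaxAI prediction diagrams. Data are synthetic.
import numpy as np
import plotly.graph_objects as go

t = np.arange(96)                       # 15-minute slots over one day
truth = 2.0 + 0.5 * np.sin(t / 8.0)     # stand-in for a measured powerline
pred = truth + np.random.normal(0, 0.1, t.size)

fig = go.Figure([
    go.Scatter(x=t, y=truth, name="observed"),
    go.Scatter(x=t, y=pred, name="predicted",
               error_y=dict(type="data", array=np.abs(pred - truth))),
])
fig.update_layout(xaxis_title="time slot", yaxis_title="current [A]")
fig.show()
```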
#### IV-B3 Feature Importance Visualization

GalaxAI allows for visual inspection of the feature influence within the predictive models. More specifically, it implements a special type of interactive 'doughnut' chart (see Figure 5) and pie charts (see Figure 6) for global and local visualization of the importance of the predictive features (calculated as three scores) for the predictive task at hand. These charts provide the means for better explainability of both models and predictions. Namely, a feature is important when a model relies on it for predictions; thus, by observing the importance of each feature used for learning a model, one can explain, to a certain extent, the model's predictions. In the context of GalaxAI-MEX, the visual feature-importance analysis is performed with 34 doughnut charts (Figure 5) - one for each of the 33 power lines and one that aggregates the feature importance across all power lines. These plots provide global, compressed insight into the model's behavior with respect to different feature categories. Alternatively, the user can select each doughnut (corresponding to a powerline), which results in plotting the 10 most important features (across all feature groups) for predicting that particular powerline (Figure 6). For a more detailed look at individual feature importance diagrams, the user can also select a feature category, which will render them in a pie chart (Figure 6). Analogously, these charts are also available in GalaxAI-INTEGRAL, with the detailed feature visualisation available by default. Finally, GalaxAI supports interactive updates of these charts with respect to a selected time interval of interest (by controlling the slider at the bottom of Figure 4), thus providing even more insight into the behaviour of the predictive model and the resulting predictions.

Figure 6: GalaxAI-MEX: Charts visualizing the importance of the selected feature groups for NPWD2872.

## V Conclusions

Spacecraft monitoring and operation involve many challenging tasks and decisions - most often based on the analysis of large volumes of complex, multimodal, and heterogeneous telemetry data. These analyses, in turn, are used for monitoring the spacecraft's health as well as for short- and long-term operations planning. They therefore need to be very accurate but, more importantly, they need to provide a better understanding of the spacecraft's status and support the decisions of the mission operators and engineers.

In this work, we present GalaxAI - a versatile machine learning toolbox for accurate, efficient, and interpretable end-to-end data analysis of spacecraft telemetry data. It implements various machine learning pipelines that are well-structured, documented, and accessible via a command-line interface useful for data science practitioners. It also offers a user-friendly graphical interface for executing the underlying machine learning pipelines and performing visual exploratory data analysis and model visualizations. We show the utility of GalaxAI on two use-cases from two spacecraft: i) analysis and planning of Mars Express thermal power consumption, and ii) predictive analysis of INTEGRAL's crossings through the Van Allen belts. The interactive nature of the visualizations enables the domain experts to perform exploratory data analysis and interpret the obtained models and predictions. Moreover, GalaxAI allows for performing various 'what-if' analyses, thus providing means to integrate existing expert knowledge into the learning process and, in turn, to further enhance it.
## V Conclusions

Spacecraft monitoring and operation involve many challenging tasks and decisions - most often based on the analysis of large volumes of complex, multimodal, and heterogeneous telemetry data. These analyses, in turn, are used for monitoring the spacecraft's health as well as for short- and long-term operations planning. Therefore, they need to be very accurate but, more importantly, they need to provide a better understanding of the spacecraft's status and support the decisions of the mission operators and engineers. In this work, we presented GalaxAI - a versatile machine learning toolbox for accurate, efficient, and interpretable end-to-end data analysis of spacecraft telemetry data. It implements various machine learning pipelines that are well-structured, documented, and accessible through a command-line interface useful for data science practitioners. It also offers a user-friendly graphical interface for executing the underlying machine learning pipelines and performing visual exploratory data analysis and model visualizations. We showed the utility of GalaxAI on two use-cases from two spacecraft: i) analysis and planning of Mars Express thermal power consumption, and ii) predictive analysis of INTEGRAL's crossings through the Van Allen belts. The interactive nature of the visualizations enables the domain experts to perform exploratory data analysis and interpret the obtained models and predictions. Moreover, GalaxAI allows for performing various 'what-if' analyses, thus providing means to integrate existing expert knowledge into the learning process and, in turn, to further enhance it. While in this work we showcase two use-cases, GalaxAI is general, modular, and easily extensible to other data analysis tasks, missions, and spacecraft.

## Acknowledgment

The authors acknowledge the financial support of ESA through the project GalaxAI: Machine learning for space operations. Also, AK, MP, TS, PP, NS and DK acknowledge the support of the Slovenian Research Agency through the research program No. P2-0103 and research project No. J2-9230.
We present GalaxAI - a versatile machine learning toolbox for efficient and interpretable end-to-end analysis of spacecraft telemetry data. GalaxAI employs various machine learning algorithms for multivariate time series analysis, classification, regression, and structured output prediction, and is capable of handling high-throughput heterogeneous data. These methods allow for the construction of robust and accurate predictive models, which are in turn applied to different tasks of spacecraft monitoring and operations planning. More importantly, besides the accurate building of models, GalaxAI implements a visualisation layer, providing mission specialists and operators with a full, detailed, and interpretable view of the data analysis process. We show the utility and versatility of GalaxAI on two use-cases concerning two different spacecraft: i) analysis and planning of Mars Express thermal power consumption, and ii) prediction of INTEGRAL's crossings through the Van Allen belts.

Index terms: machine learning; interpretable data analysis; Mars Express; INTEGRAL.
# GNSS Signal Jamming as Observed From Radio Occultation

Dong L. Wu

Manuscript received 17 September 2023; revised 28 January 2024; accepted 1 April 2024. Date of publication 1 April 2024; date of current version 25 April 2024. This work was supported by NASA through the Global Navigation Satellite System (GNSS) Research, Living with a Star (LWS), and Commercial SmallSat Data Acquisition (CSDA) programs. The author is with the Climate and Radiation Lab (Code 613), NASA Goddard Space Flight Center, Greenbelt, MA 20771 USA (e-mail: [email protected]). Digital Object Identifier 10.1109/JSTARS.2024.3385738

## I Introduction

Jamming of Global Navigation Satellite System (GNSS) signals poses a great threat to safety services in civilian air traffic [1, 2, 3] and to regional police and medical emergency operations. It is relatively easy to jam the low-power GNSS signals (\\(-160\\) dBW) and thereby confuse and cause problems for the receivers. Because the Global Positioning System (GPS) market is expanding from $76B in 2021 to an expected $300B by 2029 [4], and because of the increased military use of GNSS-navigated weapon systems [5, 6, 7], jamming of its receivers continues to be a major issue in GNSS applications. State-sponsored jamming from electronic warfare, especially of GPS signals [8, 9], has become common in conflict zones. The International Telecommunication Union has issued a warning on potential impacts from the harmful interference and urges the international community to take necessary steps to mitigate the problem [10]. Different from the broad L-band radio frequency interference (RFI) seen from space [11, 12], the jamming of GNSS receivers is an intentional RFI, often concentrated in a narrow band targeted at a specific transmission frequency such as GPS [13]. These jamming or RFI signals must be flagged and removed in order to produce useful science data in GNSS-reflectometry (GNSS-R) observations [14, 15, 16, 17]. Although GNSS radio occultation (GNSS-RO) is largely based on phase measurements, the RO tracking can be adversely impacted by weak GNSS signals, i.e., low SNR. Because of atmospheric bending and diffraction effects, the RO SNR decreases with limb tangent height and eventually diminishes in a completely occulted situation. To increase the number of RO observations with deep penetration into the lower atmosphere, the RO antennas require a special design with a higher gain than the POD (precise orbit determination) antennas. In the presence of flex power operation from GNSS or jamming from the ground, the number and quality of RO trackings can be significantly impacted. Detection of jamming from SmallSats/CubeSats has been demonstrated with GNSS-POD and GNSS-R measurements. Using the L1 C/A signals collected from the POD antennas on LEO satellites, Roberts et al. [16] developed an algorithm that can detect and geo-register RFI using variations in a time series of the SNR measurement that are distinguishable from ionospheric effects. The algorithm is particularly good at fast-varying RFI or jamming and produces multiyear global maps of hotspots of these sources. For slowly varying high-power sources, the algorithm might have difficulty distinguishing them from other natural variability. Furthermore, studying the GNSS-R delay Doppler map measurement noise, Chew et al. [17] obtained GNSS RFI maps from 2017 to 2022 and found the approximate locations of regional hotspots in the subtropical latitudes over land.
The technique can be used to track and evaluate jamming sources at a relatively high revisit rate from an 8-SmallSat constellation. The objective of this study is to derive a long-term GNSS jamming record using the SNR measurements from RO antennas onboard recent LEO SmallSat/CubeSat constellations (see Table I). We leverage the RO radiometry algorithm developed for remote sensing of atmospheric water vapor in the planetary boundary layer [18] to obtain the enhanced RO L1 SNR from very low tangent heights. Multisatellite records of GNSS jamming are obtained to trace the SNR variability back to 2006, when the COSMIC-1 GNSS-RO constellation was established.

## II GNSS RO Observations

As a limb sounding technique, GNSS-RO can penetrate clouds to profile temperature and water vapor in the lower atmosphere. Using the so-called open-loop (OL) operation, the GNSS-RO signal can be tracked far behind the Earth's shadow, sometimes producing a significant SNR at a straight-line height (\\(\\text{H}_{\\text{SL}}\\)) between \\(-100\\) km and \\(-300\\) km below the surface. Different from the closed-loop (CL) operation, in which a phase-locked loop (PLL) circuit is used to track the signal frequency based on previous measurements, in OL operation the receiver relies on a modeled reference signal frequency, derived from orbit dynamics, receiver clock drift, and estimated atmospheric bending effects, to track the anticipated RO signal regardless of SNR fluctuations. The CL operation works well in situations where the SNR is strong, but a long period of low SNR would prevent the PLL from updating the reference signal and result in a loss of tracking of the GNSS signal. The introduction of OL operation helps to increase the number of RO soundings, because an intermittent loss of tracking often terminates CL tracking at a low \\(\\rm{H_{SL}}\\). In addition, the OL observations allow a study of jamming signals from the measurements at these low \\(\\rm{H_{SL}}\\). As shown in Fig. 1, RO profiles are generated from setting/rising limb observations of the radio signal sent by GNSS transmitters. There is little bending at the top of the atmosphere, but the dense air in the lower atmosphere can refract the radio wave propagation and cause bending. As a result, the occulted signal at the receiver on a low-Earth orbit (LEO) is from a transmitter behind Earth's shadow. The bending angle information is used to infer the atmospheric refractivity and derive the temperature/humidity profile. Nowadays, a growing number of SmallSat/CubeSat constellations apply OL tracking for GNSS-RO observations (see Table I). A typical RO SNR(\\(\\rm{H_{SL}}\\)) profile is shown in Fig. 2(a); it is normalized by its free-space value (SNR\\({}_{0}\\)), i.e., the mean at \\(\\rm{H_{SL}}>50\\) km. The normalization of SNR (\\(\\rm{S_{RO}}=SNR/SNR_{0}\\)) is necessary for the RO radiometry, as the RO amplitude can vary greatly from profile to profile. Depending on where the occultation occurs relative to the antenna pattern, SNR\\({}_{0}\\) can differ by a factor of 2-4, even when the GNSS transmitter is the same. The normalized SNR can be used to compare the signal powers from different RO receivers and GNSS transmitters, and jamming variations from different periods of time. More details on the \\(\\rm{S_{RO}}\\) algorithm can be found in [18].
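As a concrete illustration of this normalization, the following numpy sketch computes \\(\\rm{S_{RO}}=SNR/SNR_{0}\\) with \\(SNR_{0}\\) taken as the free-space mean above 50 km, as described above; the synthetic profile stands in for real RO data.

```python
# Minimal sketch of the S_RO normalization; the profile below is synthetic.
import numpy as np

def normalize_snr(snr: np.ndarray, h_sl: np.ndarray) -> np.ndarray:
    """Return S_RO = SNR / SNR0, with SNR0 = mean SNR at H_SL > 50 km."""
    free_space = h_sl > 50.0          # km; little atmospheric bending up here
    snr0 = snr[free_space].mean()     # free-space reference value
    return snr / snr0

h_sl = np.linspace(80.0, -300.0, 600)   # straight-line height [km]
snr = np.where(h_sl > -20.0, 700.0, 30.0) \
      + np.random.default_rng(1).normal(0.0, 5.0, 600)  # toy SNR profile
s_ro = normalize_snr(snr, h_sl)

# Deeply occulted samples, as used below for jamming detection:
print(s_ro[h_sl < -140.0].mean())
```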
## III Jamming on GNSS RO Signals

To identify the jamming power on GNSS signals, we use the \\(\\rm{S_{RO}}\\) values at \\(\\rm{H_{SL}}<-140\\) km. At this deeply occulted height, the \\(\\rm{S_{RO}}\\) measurement is close to the receiver noise, because the GNSS transmitter power is mostly blocked by Earth. Some remnants of the contribution from deep refraction by atmospheric water vapor may remain, but this contribution diminishes sharply with \\(\\rm{H_{SL}}\\) and is scattered mostly over oceans [18]. Therefore, at low \\(\\rm{H_{SL}}\\), jamming signals can be detected when they are strong enough to rise above the measurement noise.

Fig. 1: Schematic of GNSS-RO observations from different \\(\\rm{H_{SL}}\\). At a very low \\(\\rm{H_{SL}}\\), the RO SNR has little signal from GNSS transmitters, and any enhanced SNR or noise is likely from jamming sources at the surface.

Fig. 2: (a) Typical normalized L1 SNR profile (namely, \\(\\rm{S_{RO}}=SNR/SNR_{0}\\)). (b) Typical RO antenna pattern showing the profiles of high SNR and low SNR measurements. \\(\\rm{S_{RO}}\\) values are used to handle SNR variability through the normalization.

To illustrate the significance of jamming signals over the background measurement noise, we map out the \\(S_{\\rm RO}\\) averaged at \\(\\rm H_{SL}<-140\\) km for 2019-2022 using all available GPS-RO data from COSMIC-1, MetOp, TSX, TDX, KOMPSAT-5, and PAZ. Unfortunately, the COSMIC-1 mission was winding down in 2019, and the global RO coverage came mostly from the other satellites. Despite the reduced number of samples in recent years, as shown in Fig. 3, jamming signals are clearly seen in North and Central Africa (CA), the Mediterranean, Eastern Europe, and the Middle East. The \\(S_{\\rm RO}\\) enhancement spread to Russia in 2022, which is likely related to the Russo-Ukrainian War. It is clear that these enhanced \\(S_{\\rm RO}\\) values are significantly greater than the atmospheric residuals or other measurement noise seen mostly over oceans.

Fig. 3: Global maps of annual mean L1 \\(S_{\\rm RO}\\) averaged from \\(\\rm H_{SL}<-140\\) km for 2019-2022 using the GPS-tracking RO data acquired by COSMIC-1, MetOp-A/B/C, TSX, TDX, KOMPSAT-5, and PAZ.

Fig. 4 shows a distribution of enhanced \\(S_{\\rm RO}\\) at \\(\\rm H_{SL}=-140\\) km in January 2022 from COSMIC-2 over southern Europe and Africa, where the hotspot values are significantly above the oceanic background, where atmospheric residuals would be largest. The monthly mean \\(S_{\\rm RO}\\) values are calculated separately for GPS and GLONASS signals, showing that the jamming on GPS signals is much stronger than on GLONASS. Two regions are identified to track the long-term variation of jamming power: the Mediterranean Sea and Middle East (MSME) and CA. In 2020, significant jamming on GPS is found over Turkey, Syria, Bulgaria, and Somalia, whereas in 2021, strong GPS jamming is evident in Azerbaijan, Turkey, the Mediterranean Sea, Tunisia, and South Sudan.

Fig. 4: Jamming power distribution. (a) GPS and (b) GLONASS jamming power as observed by COSMIC-2 at \\(\\rm H_{SL}=-140\\) km in January 2022, showing hot spots over the MSME and CA. The two regions are highlighted by the red boxes.

Fig. 5: Time series of the monthly jamming power in (a) MSME and (b) CA, as observed by GNSS-RO sensors at \\(\\rm H_{SL}<-140\\) km. The earlier period of MetOp-A/B \\(S_{\\rm RO}\\) data is excluded from the time series because of noisy data during the OL testing period. The small annual variation in COSMIC-1 \\(S_{\\rm RO}\\) before 2017 is likely due to residuals of the deep refraction from atmospheric water vapor, as seen in Fig. 3.
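The global maps described above can be assembled, in principle, by averaging each profile's deeply occulted \\(S_{\\rm RO}\\) samples and binning the per-profile means onto a latitude-longitude grid. The following numpy sketch shows one such gridding step; the function and its inputs are illustrative assumptions, not the operational processing code.

```python
# Sketch of gridding per-profile mean S_RO (H_SL < -140 km) onto a global
# lat/lon grid; inputs are assumed to come from many RO profiles.
import numpy as np

def grid_jamming_power(lats, lons, s_ro_deep, res_deg=2.0):
    """Bin per-profile deeply-occulted S_RO means onto a global grid."""
    nlat, nlon = int(180 / res_deg), int(360 / res_deg)
    acc = np.zeros((nlat, nlon))   # accumulated S_RO per cell
    cnt = np.zeros((nlat, nlon))   # sample count per cell
    i = ((np.asarray(lats) + 90.0) / res_deg).astype(int).clip(0, nlat - 1)
    j = ((np.asarray(lons) + 180.0) / res_deg).astype(int).clip(0, nlon - 1)
    np.add.at(acc, (i, j), np.asarray(s_ro_deep))
    np.add.at(cnt, (i, j), 1.0)
    with np.errstate(divide="ignore", invalid="ignore"):
        grid = acc / cnt
    return np.where(cnt > 0, grid, np.nan)  # NaN where no samples fall

# Example with random profile locations and a uniform background level:
rng = np.random.default_rng(2)
m = grid_jamming_power(rng.uniform(-90, 90, 1000),
                       rng.uniform(-180, 180, 1000),
                       rng.normal(0.05, 0.01, 1000))
```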
The time series of RO signals from the MSME and CA regions show that the GPS jamming began to increase substantially in 2017, whereas the jamming on GLONASS had only a moderate increase (see Fig. 5). The increased jamming on GPS signals appears to be consistent with the frequent military use of GNSS-navigated systems in the regions of conflict. The trends of GPS jamming power variations are also consistent among the measurements from independent GNSS-RO receivers, despite differences in receiver type and LEO altitude. Although the jamming is ubiquitous at all local times in these regions, the daytime power appears to be slightly greater than the nighttime power. Interestingly, the GPS jamming power shows a sharp decrease in the MSME and CA regions since the start of the Russo-Ukrainian War (February 2022). The decrease is evident in the COSMIC-2 and MetOp data, although the average \\(\\mathrm{S_{RO}}\\) jamming power is still above the pre-2017 level. Note that an enhanced \\(\\mathrm{S_{RO}}\\) in a monthly mean requires a sustained jamming power, which often comes from state-sponsored electronic warfare. The sociopolitical impacts of the Russo-Ukrainian War are complex. Demands for electronic warfare could shift from one region to another as conflicts develop, causing changes in a wider region.

## IV Summary

In this study, we developed an algorithm to detect and monitor long-term variations of GNSS jamming power, using the normalized GNSS-RO SNR (\\(\\mathrm{S_{RO}}\\)) measurements from deeply occulted heights (\\(\\mathrm{H_{SL}}<-140\\) km). The algorithm was applied to all GPS-RO L1 SNR observations from COSMIC-1, MetOp, TSX, TDX, KOMPSAT-5, and PAZ since 2006. Global distributions of \\(\\mathrm{S_{RO}}\\) at \\(\\mathrm{H_{SL}}<-140\\) km reveal a significant enhancement in North and Central Africa, the Mediterranean, Eastern Europe, and the Middle East during 2019-2022. We derived the monthly mean \\(\\mathrm{S_{RO}}\\) for two conflict zones, MSME and CA, where GPS jamming was frequently used by state-sponsored electronic warfare. The time series of these \\(\\mathrm{S_{RO}}\\) data show that the jamming power started to increase in 2017 but has decreased sharply since the Russo-Ukrainian War started. Because of complex sociopolitical factors, the war's impacts on GPS jamming in the surrounding regions warrant further investigation.

## Acknowledgment

The author would like to thank the UCAR COSMIC Data Analysis and Archive Center (CDAAC) for data processing and distribution.

## References

* [1] 2022. [Online]. Available: [https://safeairspace.net/](https://safeairspace.net/)
* [2] 2019. [Online]. Available: [https://www.timesofisrael.com/flights-at-ben-gurion-airport-suffter-weeks-of-interruption-to-gps-systems/](https://www.timesofisrael.com/flights-at-ben-gurion-airport-suffter-weeks-of-interruption-to-gps-systems/)
* [3] 2022. [Online]. Available: [https://safety4sea.com/us-marad-gps-interference-incidents-reported-in-the-eastern-mediterranean-sea/](https://safety4sea.com/us-marad-gps-interference-incidents-reported-in-the-eastern-mediterranean-sea/)
* [4] 2022. [Online]. Available: [https://www.databridgemarketresearch.com/reports/global-gps-global-positioning-systems-market](https://www.databridgemarketresearch.com/reports/global-gps-global-positioning-systems-market)
* [5] 2021. [Online].
Available: [https://ops.group/blog/wp-content/uploads/2021/05/opinfo_FAa_Information_Note_ Turkey_.-_PKK_UAS_Attack_..19_MAV_2021_..FINAL.pdf](https://ops.group/blog/wp-content/uploads/2021/05/opinfo_FAa_Information_Note_ Turkey_.-_PKK_UAS_Attack_..19_MAV_2021_..FINAL.pdf)
* [6] 2022. [Online]. Available: [https://www.flyingmag.com/airlines-report-miss-gps-jamming-in-four-regions](https://www.flyingmag.com/airlines-report-miss-gps-jamming-in-four-regions)
* [7] 2020. [Online]. Available: [https://eursaintames.com/Russia-is-jamming-gps-systems-of-powerful-f-22-rators-f-35-jets-in-middle-east/](https://eursaintames.com/Russia-is-jamming-gps-systems-of-powerful-f-22-rators-f-35-jets-in-middle-east/)
* [8] M. J. Murrian, L. Narula, P. A. Iannucci, S. A. Budzien, B. W. O'Hanlon, and T. E. Humphreys, "GNSS interference monitoring from low earth orbit," _Signal Process._, 2020.
* [9] M. J. Murrian et al., "First results from three years of GNSS interference monitoring from low Earth orbit," _Navigation, J. Inst. Navigation_, vol. 68, no. 4, pp. 673-685, Dec. 2021, doi: 10.1002/navi.449.
* [10] 2022. [Online]. Available: [https://www.itu.int/hub/2022/08/warning-harmful-interference-mss/](https://www.itu.int/hub/2022/08/warning-harmful-interference-mss/)
* [11] P. N. Mohammed, M. Aksoy, J. R. Piepmeier, J. T. Johnson, and A. Bringer, "SMAP L-band microwave radiometer: RFI mitigation prelaunch analysis and first year on-orbit observations," _IEEE Trans. Geosci. Remote Sens._, vol. 54, no. 10, pp. 6035-6047, Oct. 2016.
* [12] E. Daganzo-Eusebio, R. Oliva, Y. H. Kerr, S. Nieto, P. Richaume, and S. M. Mecklenburg, "SMOS radiometer in the 1400-1427-MHz passive band: Impact of the RFI environment and approach to its mitigation and cancellation," _IEEE Trans. Geosci. Remote Sens._, vol. 51, no. 10, pp. 4999-5007, Oct. 2013.
* [13] R. H. Mitch et al., "Signal characteristics of civil GPS jammers," in _Proc. 24th Int. Tech. Meeting Satell. Division Inst. Navigation_, Portland, OR, USA, Sep. 2011, vol. 5, pp. 1907-1919.
* [14] P. Voosen, "NASA overcomes military's GPS tweaks to peer inside hurricanes," _Science_, Jun. 2019, doi: 10.1126/science.aay678.
* [15] J. Querol, A. Alonso-Arroyo, R. Onrubia, D. Pascual, H. Park, and A. Camps, "SNR degradation in GNSS-R measurements under the effects of radio-frequency interference," _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 9, no. 10, pp. 4865-4878, Oct. 2016.
* [16] T. M. Roberts, T. K. Meehan, J. Y. Tien, and L. E. Young, "Detection and localization of terrestrial-band RF with GNSS receivers," _IEEE Trans. Geosci. Remote Sens._, vol. 60, 2022, Art. no. 5801311, doi: 10.1109/TGRS.2021.3109524.
* [17] C. Chew, T. Maximillian Roberts, and S. Lowe, "RFI mapped by space-borne GNSS-R data," _NAVIGATION: J. Inst. Navigation_, vol. 70, no. 4, Dec. 2023, Art. no. navi.618, doi: 10.33012/navi.618.
* [18] D. L. Wu, J. Gong, and M. Ganeshan, "GNSS-RO deep refraction signals from moist marine atmospheric boundary layer (MABL)," _Atmosphere_, vol. 13, 2022, Art. no. 953, doi: 10.3390/atmos13060953.

Dong L. Wu received the B.S. degree in space physics from the University of Science and Technology of China, Hefei, China, in 1985, and the M.S. and Ph.D. degrees in atmospheric science from the University of Michigan, Ann Arbor, MI, USA, in 1993 and 1994, respectively.
He is the Project Scientist of NASA's Total and Spectral Solar Irradiance Sensor (TSIS) mission at Goddard Space Flight Center (GSFC). His research interests include remote sensing of atmospheric clouds and winds. He was the Principal Investigator (PI) of Goddard's IceCube project (a CubeSat flight demonstration of an 883-GHz radiometer for cloud measurements). He was a Principal Research Scientist and the Supervisor of the Aerosol and Cloud Group, Jet Propulsion Laboratory (JPL), California Institute of Technology, Pasadena, CA, USA, during 1994-2011. He was a Co-Investigator of the Microwave Limb Sounder (MLS) during 1994-2008, CloudSat during 2006-2010, the Multi-angle Imaging SpectroRadiometer (MISR) since 2008, and NASA's Global Navigation Satellite Systems (GNSS) since 2007. He has authored and coauthored more than 180 papers in peer-reviewed journals. Dr. Wu was the recipient of the NASA Exceptional Achievement Medal in 2001, 2008, and 2022, the JPL Ed Stone Award for Outstanding Research Paper in 2006, and the Robert H. Goddard Award for Science in 2019.
GNSS signal jamming is found to have increased significantly in recent years, and its impact is evident in Global Navigation Satellite System (GNSS) radio occultation (RO) measurements, such as those from COSMIC-2. This article presents an algorithm that applies RO radiometry to detect and monitor long-term variations of GNSS jamming power from deeply occulted heights (\\(\\text{H}_{\\text{SL}}<-140\\) km). At these heights, the RO signal amplitude is at its noise level because the GNSS transmitter is far behind the Earth's shadow. Thus, any enhanced RO amplitude from these heights is considered a jamming signal. The algorithm was successfully applied to two conflict zones, the Mediterranean Sea and Middle East and Central Africa, where Global Positioning System (GPS) jamming was frequently used by state-sponsored electronic warfare. The time series of normalized RO amplitude in these regions show a steady increase of the GPS jamming power since 2017, but a sharp decrease since the start of the Russo-Ukrainian War.

Index terms: Global navigation satellite system (GNSS), jamming, radio occultation (RO), time series.
# MSFA-Frequency-Aware Transformer for Hyperspectral Images Demosaicing

Haijin Zeng\\({}^{1}\\), Kai Feng\\({}^{2}\\), Shaoguang Huang\\({}^{3}\\), Jiezhang Cao\\({}^{4}\\), Yongyong Chen\\({}^{5}\\), Hongyan Zhang\\({}^{6}\\), Hiep Luong\\({}^{1}\\), and Wilfried Philips\\({}^{1}\\)

\\({}^{1}\\)IMEC-IPI-UGent, \\({}^{2}\\)NWPU, \\({}^{3}\\)CUG, \\({}^{4}\\)ETH Zurich, \\({}^{5}\\)HIT, \\({}^{6}\\)WHU

[email protected]

## 1 Introduction

Hyperspectral imaging (HI) captures light across a broad range of spectral bands, including those within the visible spectrum and beyond the near-infrared. This provides much higher spectral resolution than the three channels of RGB, leading to more accurate material characterization than is achievable through RGB imaging. This capability makes HI a valuable tool in numerous fields, including medical imaging, astronomy, food quality control, remote sensing, precision agriculture, and pharmaceuticals [10, 53, 54, 12]. However, its employment in computer vision is limited by slow acquisition times attributed to spatial or spectral scanning. To address this issue, snapshot HI systems [3, 26], such as computed tomography [31, 18] and light-field imaging [4, 16], have been introduced recently; they capture both spectral and spatial information rapidly. These snapshot HI systems can be realized as snapshot mosaic HI systems or Multi-Spectral Filter Array (MSFA) cameras [33]. The latter use an MSFA to acquire spectral information in a single 2D image sensor exposure, similar to RGB cameras. However, MSFA cameras employ larger Color Filter Arrays (CFAs), such as \\(3\\times 3\\), \\(4\\times 4\\), or \\(5\\times 5\\) [27]. MSFA cameras, designed with tiny Fabry-Perot interferometry filters on top of CMOS or InGaAs sensors to obtain wavelength selectivity via a multiple-beam interference process, have become increasingly available to researchers and professionals at more accessible prices. Prominent examples of such cameras include the IMEC SNAPSHOT, XIMEA Snapshot USB3, and SILIOS CMS series [3]. However, to make optimal use of the spatial and spectral information provided by MSFA cameras, it is necessary to apply effective spectral demosaicing methods that can estimate a fully defined hyperspectral image (HSI). Demosaicing large MSFAs presents a challenge due to the larger mosaic pattern and weaker inter-channel correlation in comparison to Bayer filter cameras. Although several demosaicing methods have been proposed, they exhibit limited demosaicing capability for high frequency details, resulting in the persistence of periodic artifacts. This may be because current CNN-based methods inadequately account for long-range dependencies [6] and for the MSFA periodic information that is also critical for HSI demosaicing.

Figure 1: Overview of previous methods and our frequency-aware HSI demosaicing framework. In contrast to current demosaicing methods that do not differentiate between the facile low-pass components and arduous high-pass details, we propose a frequency-aware demosaicing framework, which employs a customized transformer to reconstruct the hard high-pass components and data-independent but stable traditional interpolation-filtering to recover low-pass parts expeditiously. The proposed approach yields a significant improvement in the reconstruction of details.
_Motivation of Using Transformer:_ During the MSFA imaging process, the entire spectral domain is sampled and compressed into a single band, resulting in spatial-spectral confusion. Specifically, the nearest neighbors with similar spectral information are stored in a periodic MSFA pattern, causing spectrally similar neighbors to lie several pixels apart, in contrast to RGB. This confusion occurs both across adjacent bands and throughout the entire spectral domain, and current CNN-based methods are unable to eliminate it. Non-local similarity has been identified as a critical factor in addressing spatial-spectral confusion [36, 44]. However, the receptive field of convolution limits its ability to leverage non-local information. In contrast, the Transformer architecture can exploit long-range non-local similarity and significantly improve reconstruction outcomes [13, 19, 21, 34, 45, 46, 37, 40]. Additionally, current methods struggle with detail recovery, while the Transformer has demonstrated exceptional capability in detecting subtle spatial differences [28, 49, 50]. Motivated by these observations and the inherently high-frequency nature of details in HSI, we propose an efficient HSI demosaicing network that employs a Transformer model and models MSFA information. Our method reconstructs the high-pass and low-pass components of the HSI separately. Firstly, we utilize a Fourier zero-padding-based low-pass filter to quickly reconstruct the low-pass components, which are easier to recover. Secondly, we introduce a novel MSFA-Frequency-aware Transformer, named _MaFormer_, which focuses on the hard high-frequency details by concurrently modeling non-local dependencies and MSFA information. This enables us to recover the high-frequency details with reduced artifacts, as illustrated in Fig. 1. Finally, we integrate a joint spatial-frequency regularization term into the network, which utilizes both the MSFA pattern and frequency information to improve the reconstruction of details while preserving the fidelity of the low-pass components. In summary, our contributions are three-fold:

1. We propose a novel MSFA-frequency-aware HSI demosaicing framework that amalgamates the benefits of traditional methods with a transformer to reconstruct HSIs with precise details and fewer artifacts.
2. By simultaneously incorporating non-local and MSFA periodic modeling, we present MaFormer, a tailored transformer designed specifically to demosaic the challenging high-pass HSI.
3. Our FDM-Net outperforms state-of-the-art methods by a large margin and produces highly accurate details.

## 2 Related Work

### HSI Demosaicing

Research on multispectral image demosaicing has been conducted in various studies [1, 5, 20, 24, 25, 38, 39, 41, 42]. Current demosaicing methods can be categorized into interpolation-based, matrix-factorization/recovery-based, and deep learning approaches. Interpolation- and matrix-based methods [1, 5, 38, 42, 39] rely on spectral-spatial priors to reconstruct missing spectral and spatial information. This paper focuses on the latest learning-based approaches for HSI demosaicing. CNNs have gained widespread popularity in various low-level image processing tasks, including image deblurring [15, 32, 50], denoising [14, 52], and super-resolution [9, 47]. Although CNNs have been effectively employed in demosaicing [23, 35, 48], their application has been predominantly limited to the Bayer pattern, which has a predominant green band.
In contrast, spectral demosaicing necessitates the representation of multispectral correlations to enable CNN utilization. Consequently, researchers have introduced four distinct methods, namely DsNet [20], SpNet [25], In-Net [41], and MCAN [24]. In particular, In-Net [41] applies a deep network employing a bilinearly interpolated MSI cube as input, while MCAN [24] proposes an end-to-end network that models joint spatial-spectral correlations in mosaic images. However, their capability to restore high frequency details is still restricted. Additionally, these methods do not account for long-range dependencies.

### Vision Transformer

As the dominant architecture in NLP, the Transformer [43] is designed for sequence modeling by incorporating a self-attention mechanism. It has also demonstrated remarkable performance in various vision-related tasks [11, 21, 34, 37]. However, the use of transformers in image restoration [8, 13] often involves dividing the input image into small, fixed-sized patches and processing each patch independently, which leads to two main issues [7, 37]. Firstly, the restored image may exhibit border artifacts around the edges of each patch; secondly, border pixels in each patch lose information that could have otherwise contributed to better restoration. Recently, the Swin Transformer [37] has emerged as a promising solution, incorporating the benefits of both CNNs and Transformers: on the one hand, it inherits the advantage of CNNs in processing large images due to its local attention mechanism; on the other hand, it retains the capability of Transformers in modeling long-range dependencies using the shifted window scheme [34, 37, 7].

## 3 Method

### 3.1 Frequency-Aware Demosaicing Network

The proposed MSFA-frequency-aware demosaicing network (FDM-Net) for HSI is depicted in Fig. 2 (b). The method was inspired by the recognition that, while low pass structural information can be efficiently reconstructed by most demosaicing techniques, the main challenge lies in the recovery of high pass detail and texture information. However, previous methods have not sufficiently differentiated between these two components and instead use a single integrated model to reconstruct both high and low pass components simultaneously. To address this issue, we first decompose the HSI cube into its high pass and low pass components. Then, we customize an MSFA-Conv-based Swin Transformer network (MaFormer), which performs non-local and MSFA periodic modeling simultaneously, to reconstruct the high pass components, and a sinc-interpolation block to reconstruct the low pass components. Finally, we merge the reconstructed high and low pass components to obtain the final demosaiced HSI. Specifically, let \\(\\mathcal{Y}\\in\\mathbb{R}^{M\\times N}\\) denote a mosaiced image, where \\(M\\) and \\(N\\) are the image height and width, respectively. Firstly, the low pass components \\(\\hat{\\mathcal{X}}_{LF}\\) are reconstructed using Fourier zero-padding (Lanczos windowed sinc [22]) followed by a guided filter, a low-pass filter that is very accurate on smooth data,

\\[\\hat{\\mathcal{X}}_{LF}=\\mathrm{Filter}(\\mathrm{Sinc}(\\mathcal{Y}),\\mathrm{Sinc}(\\mathcal{Y})(:,:,0))\\in\\mathbb{R}^{M\\times N\\times C}, \\tag{1}\\]

where \\(u(x,y)=\\sum_{m,n}v_{m,n}\\,\\textit{sinc}(x-m)\\,\\textit{sinc}(y-n)\\) is the sinc interpolation of \\(v(x,y)\\), and \\(\\textit{sinc}(t):=\\omega_{t}\\sin(\\pi t)/(\\pi t)\\) for \\(t\\neq 0\\), \\(\\textit{sinc}(0):=1\\), with the Lanczos window \\(\\omega_{t}=\\frac{n}{\\pi t}\\sin(\\pi t/n)\\) if \\(\\|t\\|<n\\).
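For concreteness, the following numpy sketch implements the Lanczos-windowed sinc kernel defined above (with window support \\(a=n\\)) and the 1D interpolation \\(u(x)=\\sum_{m}v_{m}\\,\\textit{sinc}(x-m)\\); the 2D case is separable, and the guided-filter refinement of Eq. (1) is omitted. This is an illustration of the interpolation, not the authors' released implementation.

```python
# Minimal sketch of Lanczos-windowed sinc interpolation (assumed a = n = 3).
import numpy as np

def lanczos(t: np.ndarray, a: int = 3) -> np.ndarray:
    """sinc(t) = w_t * sin(pi t)/(pi t), with window w_t = a/(pi t) * sin(pi t/a)."""
    t = np.asarray(t, dtype=float)
    out = np.sinc(t) * np.sinc(t / a)   # np.sinc(x) = sin(pi x)/(pi x)
    return np.where(np.abs(t) < a, out, 0.0)

def interpolate_1d(v: np.ndarray, x: np.ndarray, a: int = 3) -> np.ndarray:
    """u(x) = sum_m v_m * sinc(x - m), evaluated at fractional positions x."""
    m = np.arange(len(v))
    return (v[None, :] * lanczos(x[:, None] - m[None, :], a)).sum(axis=1)

v = np.sin(np.linspace(0, np.pi, 16))   # sparse samples of one band
x = np.linspace(0, 15, 61)              # 4x denser target grid
u = interpolate_1d(v, x)                # smooth low-pass reconstruction
```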
Subsequently, we propose a customized transformer for reconstructing the high pass details \\(\\hat{\\mathcal{X}}_{HF}\\). Our primary objective is to develop an effective and efficient module for the recovery of high pass details and textures, which presents a significant challenge. To address this, we select the transformer network, as it has demonstrated outstanding performance in distinguishing even subtle spatial differences by characterizing sequential spatial data [28, 17]. Fig. 3 shows that the reconstructed low pass component contains clear smoothed structural information, while the high pass component learned by our MaFormer effectively captures the details and textures of the HSI. The demosaiced HSI, obtained by aggregating the high pass and low pass components produced by our FDM-Net, is shown to be highly similar to the ground truth.

Figure 2: Overview of the imaging process of an MSFA camera and our frequency-aware demosaicing framework: FDM-Net.

Figure 3: Illustration of the low pass part and high pass component reconstructed by our FDM-Net. The output of our FDM-Net is generated by adding the low pass and high pass components it reconstructs.

### 3.2 Customized High Frequency Transformer

The proposed _MaFormer_ is the cornerstone of our FDM-Net; it adopts an overall architecture resembling a U-Net, as illustrated in Fig. 4 (a). Comprising an encoder, a bottleneck, and a decoder, _MaFormer_ employs downsampling and upsampling through transpose convolutions. This architectural choice, which differs from stacking modules layer by layer without scaling, has been shown to enhance the performance of the algorithm and to increase the receptive field of the basic CNN and the proposed MSFA convolution, as detailed in Sec. 3.3. However, downsampling inevitably leads to a loss of information, which we address by incorporating residual connections between the encoder and decoder stages. Specifically, the first module of MaFormer is the MSFA-Conv, as illustrated in Fig. 4 (a). Its input is the raw mosaic data \\(\\mathcal{Y}\\in\\mathbb{R}^{M\\times N}\\), which is sampled from the latent HSI \\(\\mathcal{X}\\in\\mathbb{R}^{M\\times N\\times C}\\). Here, \\(M\\) and \\(N\\) denote the height and width of the observed raw data, respectively, and \\(C\\) denotes the number of channels. Firstly, the MSFA-Conv extracts feature maps \\(\\mathcal{X}_{0}\\in\\mathbb{R}^{M\\times N\\times 2C}\\) from \\(\\mathcal{Y}\\). Secondly, \\(\\mathcal{X}_{0}\\) is fed into three paired STMC and Downsample blocks, resulting in hierarchical feature maps. The Downsample layer is implemented using a \\(2\\times 2\\) convolution without bias, which generates a downsampled feature map with double the channels and half the spatial resolution. We denote the outputs of these three paired STMC and Downsample groups as \\(\\mathcal{X}_{i},i=1,2,3\\), respectively. Thirdly, the bottleneck processes \\(\\mathcal{X}_{3}\\) using a pure STMC without any sampling. Subsequently, a symmetric decoder is designed, as in a classical U-Net. It also consists of three STMC blocks and an MSFA-Conv, but the downsample layers are replaced with transpose convolutions, which upsample the spatial dimensions of the intermediate feature maps. Finally, the high-frequency information, such as details and textures of the latent hyperspectral image, i.e., \\(\\hat{\\mathcal{X}}_{HF}\\in\\mathbb{R}^{M\\times N\\times C}\\), is learned and reconstructed by an MSFA-Conv block.
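The following PyTorch sketch captures the U-shaped layout just described: a head convolution, three encoder STMC+Downsample pairs, a bottleneck, and a mirrored decoder with transpose convolutions and skip connections. The STMC block is reduced to a residual-convolution placeholder here (its actual design is given in Sec. 3.3), so this is a structural sketch rather than the real MaFormer.

```python
# Structural sketch of MaFormer's U-shape; STMC is a placeholder block.
import torch
import torch.nn as nn

class STMCPlaceholder(nn.Module):
    """Stand-in for the STMC block of Sec. 3.3 (here just a residual conv)."""
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.GELU())
    def forward(self, x):
        return x + self.body(x)

class MaFormerSkeleton(nn.Module):
    def __init__(self, c=32, bands=16):
        super().__init__()
        self.head = nn.Conv2d(1, c, 3, padding=1)       # MSFA-Conv in the paper
        self.encs = nn.ModuleList([STMCPlaceholder(c * 2 ** i) for i in range(3)])
        self.downs = nn.ModuleList([                    # 2x2 conv, no bias:
            nn.Conv2d(c * 2 ** i, c * 2 ** (i + 1), 2, stride=2, bias=False)
            for i in range(3)])                         # half size, double channels
        self.bottleneck = STMCPlaceholder(c * 8)
        self.ups = nn.ModuleList([
            nn.ConvTranspose2d(c * 2 ** (i + 1), c * 2 ** i, 2, stride=2)
            for i in reversed(range(3))])
        self.decs = nn.ModuleList([STMCPlaceholder(c * 2 ** i)
                                   for i in reversed(range(3))])
        self.tail = nn.Conv2d(c, bands, 3, padding=1)   # high-pass HSI output

    def forward(self, y):                               # y: (B, 1, M, N) mosaic
        x = self.head(y)
        skips = []
        for enc, down in zip(self.encs, self.downs):
            x = enc(x)
            skips.append(x)                             # encoder-decoder skips
            x = down(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.ups, self.decs, reversed(skips)):
            x = dec(up(x) + skip)
        return self.tail(x)

out = MaFormerSkeleton()(torch.randn(1, 1, 64, 64))     # -> (1, 16, 64, 64)
```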
### 3.3 MSFA-based Half Swin Transformer

The crucial component of the proposed _MaFormer_ is the integrated Swin Transformer & MSFA-Conv (STMC) module. Fig. 4 (b) illustrates the STMC module used to process the input tensor \\(\\mathcal{X}_{0}\\in\\mathbb{R}^{M\\times N\\times 2C}\\). Firstly, \\(\\mathcal{X}_{0}\\) is linearly projected via a \\(1\\times 1\\) convolution layer, after which it is split into two sub-feature maps along the channel orientation,

\\[\\mathcal{X}_{0}=[\\mathcal{X}_{0}^{p}\\in\\mathbb{R}^{M\\times N\\times C}, \\mathcal{X}_{0}^{nl}\\in\\mathbb{R}^{M\\times N\\times C}]. \\tag{2}\\]

Then, \\(\\mathcal{X}_{0}^{p}\\) passes through the _Periodic Branch_ to model periodic MSFA information, while \\(\\mathcal{X}_{0}^{nl}\\) passes through the _Non-local Branch_ to model non-local dependencies.

Figure 4: The architecture of our MaFormer. (a) MaFormer consists of an encoder, a bottleneck, and a decoder. MaFormer is built up of STMCs. (b) STMC is composed of a parallel group convolution, which includes a periodic branch and a non-local branch. (c) The Swin Transformer used in the non-local branch. (d) The weights prediction block of MSFA-Conv. (e) The MSFA-driven convolution: MSFA-Conv.

#### 3.3.1 Periodic Branch

The _Periodic Branch_ employs the _MSFA-Conv_ block to apply an MSFA-driven convolution operator, as shown in Fig. 4 (e). This operator refines the features based on the relative positions of the elements in the input, which are determined by the MSFA pattern. Specifically, for an element with index \\((i,j)\\) under a \\(p\\times p\\) MSFA pattern, its relative position is denoted as \\((m,n)=(i\\bmod p,j\\bmod p)\\). The relative position matrix \\(R\\), with elements \\((m,n)\\), is then fed into an MSFA attention weights prediction block (MWP), which consists of two \\(1\\times 1\\) convolution layers and one \\(\\mathrm{ReLU}\\) activation function, as shown in Fig. 4 (d). The MWP generates an MSFA-driven convolution kernel with weights \\(W\\) as follows:

\\[W=\\mathrm{Conv}\\,1\\times 1(\\mathrm{ReLU}(\\mathrm{Conv}\\,1\\times 1(R))). \\tag{3}\\]

The kernel \\(W\\) assigns the same weights to elements with the same relative positions, allowing neighboring elements sampled at the same wavelength to share similar spectral distributions. The resulting feature map

\\[F_{0}^{p}=\\mathrm{Conv}\\,8\\times 8(\\mathcal{X},W)\\in\\mathbb{R}^{M\\times N\\times 2C}, \\tag{4}\\]

is obtained by convolving the input \\(\\mathcal{X}\\) with the kernel \\(W\\) using an \\(8\\times 8\\) convolution operation. This MSFA-Conv block is designed to model periodic information in the input effectively and efficiently. Then, the feature map \\(F_{0}^{p}\\) is aggregated by MSFA pooling instead of normal pooling (e.g., maximum or average pooling). Specifically, MSFA pooling aggregates the feature points of \\(F_{0}^{p}\\) with the same relative position to get \\(F_{1}^{p}\\in\\mathbb{R}^{\\frac{M}{2}\\times\\frac{N}{2}\\times 2C}\\), where

\\[F_{1}^{p}\\left(i,j,k\\right)=c\\sum_{s=0}^{\\frac{m}{4}-1}\\sum_{t=0}^{\\frac{n}{4}-1}F_{0}^{p}\\left(i+4s,j+4t,k\\right), \\tag{5}\\]

with \\(c=\\frac{1}{m/4\\times n/4}\\). After that, two \\(3\\times 3\\) Residual Convolution (RConv) blocks are used to refine the feature map \\(F_{1}^{p}\\),

\\[F_{2}^{p}=\\mathrm{RConv}(\\mathrm{RConv}(F_{1}^{p})). \\tag{6}\\]
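A minimal PyTorch sketch of the MSFA weight prediction (MWP) of Eq. (3) is given below: the relative-position matrix \\(R\\) of a \\(p\\times p\\) pattern is mapped through Conv1x1 -> ReLU -> Conv1x1 to one set of \\(8\\times 8\\) kernel weights per MSFA position, so that pixels sharing the same position inside the MSFA period share weights. The hidden width and channel counts are illustrative assumptions.

```python
# Sketch of the MSFA-driven weight prediction block (MWP) of Eq. (3).
import torch
import torch.nn as nn

class MWP(nn.Module):
    def __init__(self, p: int = 4, hidden: int = 16, kernel_elems: int = 64):
        super().__init__()
        self.p = p
        # two 1x1 convolutions with one ReLU, as described in the text
        self.net = nn.Sequential(nn.Conv2d(2, hidden, 1), nn.ReLU(),
                                 nn.Conv2d(hidden, kernel_elems, 1))

    def forward(self) -> torch.Tensor:
        i, j = torch.meshgrid(torch.arange(self.p), torch.arange(self.p),
                              indexing="ij")
        # R holds the relative position (m, n) = (i mod p, j mod p) as channels
        r = torch.stack((i, j)).float().unsqueeze(0)    # shape (1, 2, p, p)
        return self.net(r)                              # weights per position

w = MWP()()   # (1, 64, 4, 4): one 8x8 kernel (64 weights) per MSFA position
```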
#### 3.3.2 Non-local Branch

The _Non-local Branch_ computes MSA [43] within position-specific shifted windows with cross-window connections, using the Swin Transformer [37]. Given an input \\(\\mathcal{X}_{0}^{nl}\\in\\mathbb{R}^{M\\times N\\times C}\\) split from \\(\\mathcal{X}_{0}\\), the _Non-local Branch_ partitions it into non-overlapping \\(K\\times K\\) local windows and reshapes it into \\(\\mathcal{X}_{0}^{nl}\\in\\mathbb{R}^{\\frac{MN}{K^{2}}\\times K^{2}\\times C}\\), where \\(K\\) denotes the window size. To take the cross-window connections into account, regular and shifted window partitioning are used alternately [37], as shown in Fig. 4 (b). Then, the self-attention of each local window \\(X_{nl}\\in\\mathbb{R}^{K^{2}\\times C}\\) is computed, i.e.,

\\[Q_{nl}=X_{nl}W_{Q},\\quad K_{nl}=X_{nl}W_{K},\\quad V_{nl}=X_{nl}W_{V}, \\tag{7}\\]

where \\(Q\\in\\mathbb{R}^{K^{2}\\times d},K\\in\\mathbb{R}^{K^{2}\\times d},V\\in\\mathbb{R}^{K^{2}\\times d}\\) are the _query_, _key_, and _value_, and \\(W_{Q},W_{K},W_{V}\\) are projection matrices shared across the different partitioned local windows. Then, the local self-attention of each window is computed as follows:

\\[\\text{Attention}(Q_{nl},K_{nl},V_{nl})=\\mathrm{SoftMax}\\left(\\frac{Q_{nl}K_{nl}^{T}}{\\sqrt{d}}+B\\right)V_{nl}, \\tag{8}\\]

where \\(d\\) is the _query_/_key_ dimension and \\(B\\in\\mathbb{R}^{K^{2}\\times K^{2}}\\) is a learnable relative positional encoding. Subsequently, the attention feature maps are fed into a LayerNorm (LN) layer and then pass through two fully connected layers with GELU, i.e., an MLP. In addition, a residual connection is added to each input of LN, as shown in Fig. 4 (c), and the output is reshaped back to size \\(M\\times N\\times C\\). Finally, the outputs of the _Periodic Branch_ in Eq. (6) and the _Non-local Branch_ in Eq. (8) are concatenated, and a fully connected layer is used to fuse the information between the two branches and generate the final output \\(X_{1}\\in\\mathbb{R}^{M\\times N\\times 2C}\\) of STMC. Following STMC, the \\(\\mathrm{Downsample}\\) operator samples \\(X_{1}\\) to half the spatial resolution with double the channels. Overall, our MaFormer combines the non-local modeling ability of the Swin Transformer block with the MSFA periodic modeling ability of MSFA-Conv. Furthermore, we enhance the integrated periodic and non-local modeling ability by stacking our STMC in a downsample & upsample U-Net style, together with transpose convolutions. In addition to taking extra MSFA information into account, this is also computationally cheaper than global standard MSA, because the split and concatenation operations within STMC act as a group convolution with two groups.
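The window attention of Eqs. (7)-(8) can be sketched in a few lines of PyTorch, as below; for brevity the relative positional encoding \\(B\\) is a single learnable \\(K^{2}\\times K^{2}\\) matrix rather than Swin's indexed relative-position table, and the window partitioning/shifting is assumed to have been done beforehand.

```python
# Compact sketch of windowed self-attention with an additive position bias.
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    def __init__(self, dim: int, window: int):
        super().__init__()
        self.scale = dim ** -0.5                  # 1 / sqrt(d) of Eq. (8)
        self.qkv = nn.Linear(dim, 3 * dim)        # shared W_Q, W_K, W_V
        self.bias = nn.Parameter(torch.zeros(window ** 2, window ** 2))  # B

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_windows, K*K, dim), the tokens of each partitioned window
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale + self.bias
        return attn.softmax(dim=-1) @ v           # Eq. (8) per window

windows = torch.randn(16, 8 * 8, 32)              # 16 windows of 8x8 tokens
out = WindowAttention(dim=32, window=8)(windows)  # same shape as input
```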
### 3.4 Joint Spatial and Frequency Loss

The demosaicing of HSI is an ill-posed inverse problem, where a single observed mosaic image can correspond to multiple HSIs. To mitigate this difficulty, as in regularization-based optimization methods [51, 52], we propose the incorporation of a joint spatial and frequency loss as a constraint in the optimization procedure, to decrease the potential solution space. _Firstly_, consider that in the low frequency reconstruction phase, our method leverages classical interpolation to reconstruct the low frequency components quickly, and a guided filter to remove noise and refine the reconstructed low frequency parts. To preserve the accuracy of the known information at the sampled pixels, we introduce an MSFA-based \\(L_{1}\\) loss to regularize the sampled pixels,

\\[L_{1}^{s}=\\|(x-\\hat{x})\\odot M\\|_{1}, \\tag{9}\\]

where \\(M\\) is the MSFA sample mask, \\(x,\\hat{x}\\) are the ground truth and demosaiced HSI, respectively, and \\(\\odot\\) denotes element-wise multiplication. _Secondly_, based on our observations, the lower frequency component within the high frequency part of the HSI is relatively easy to reconstruct, while the main challenge lies in the reconstruction of the higher frequency component, which contains complex details and textures. To enhance the network's ability to model these challenging cases, we introduce the Focal Frequency Loss (FFL) [29] instead of using a frequency loss directly. The FFL focuses the network on the most challenging frequencies during training,

\\[L_{\\mathrm{FFL}}=\\frac{1}{MN}\\sum_{u=0}^{M-1}\\sum_{v=0}^{N-1}w(u,v)\\left|F_{\\hat{x}}(u,v)-F_{x}(u,v)\\right|^{2}, \\tag{10}\\]

where \\(M\\times N\\) is the image size, \\((u,v)\\) denotes the coordinate of a spatial frequency on the frequency spectrum, and \\(w(u,v)\\) is the weight for the spatial frequency at \\((u,v)\\), defined as

\\[w(u,v)=\\left|F_{\\hat{x}}(u,v)-F_{x}(u,v)\\right|^{\\alpha},\\]

where \\(\\alpha\\) is a scaling factor for flexibility (\\(\\alpha=1\\)), and

\\[F(u,v)=\\frac{1}{MN}\\sum_{x=0}^{M-1}\\sum_{y=0}^{N-1}f(x,y)\\cdot e^{-i2\\pi\\left(\\frac{ux}{M}+\\frac{vy}{N}\\right)}. \\tag{11}\\]

No gradient flows through the spectrum weight matrix; it only serves as a weight for each frequency. The FFL can be understood as a weighted average of the frequency differences between the reference and demosaiced images. By using the FFL, the loss function is re-weighted to give priority to the reconstruction of the most complex details and textures with challenging frequencies, and to down-weight easier cases. In addition, the focus region is dynamically updated, which improves the immediately challenging frequencies and results in a gradual refinement of the generated images. _Subsequently_, an \\(L_{1}\\) loss is also added in the image domain, to ensure that complete and global texture information can be learned by our network, i.e.,

\\[L_{1}^{c}=\\|x-\\hat{x}_{l}-\\hat{x}_{h}\\|_{1}, \\tag{12}\\]

where \\(\\hat{x}_{l}\\) and \\(\\hat{x}_{h}\\) are the predicted low frequency part and high frequency cube, respectively. _Finally_, our joint frequency and spatial loss function for hyperspectral image demosaicing is formulated as follows:

\\[L=\\alpha_{1}L_{1}^{s}+\\alpha_{2}L_{\\mathrm{FFL}}+\\alpha_{3}L_{1}^{c}, \\tag{13}\\]

where \\(\\alpha_{1}=0.1,\\alpha_{2}=1,\\) and \\(\\alpha_{3}=1\\).
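Putting Eqs. (9)-(13) together, a sketch of the joint loss under the stated weights is shown below. The FFL weight follows Eq. (10) with \\(\\alpha=1\\) and no gradient through the weight matrix; applying the FFL between the predicted and target high-pass components is our reading of the text and is marked as an assumption in the code.

```python
# Sketch of the joint spatial-frequency loss of Eq. (13); not training code.
import torch

def focal_frequency_loss(pred, target, alpha: float = 1.0):
    fp, ft = torch.fft.fft2(pred), torch.fft.fft2(target)  # per-band spectra
    diff = (fp - ft).abs()
    w = diff.detach() ** alpha      # Eq. (10) weight, no gradient through it
    return (w * diff ** 2).mean()

def total_loss(x, x_hat_low, x_hat_high, msfa_mask,
               a1: float = 0.1, a2: float = 1.0, a3: float = 1.0):
    x_hat = x_hat_low + x_hat_high
    l1_sampled = ((x - x_hat) * msfa_mask).abs().mean()     # Eq. (9)
    # Assumption: FFL between predicted high pass and the residual target.
    l_ffl = focal_frequency_loss(x_hat_high, x - x_hat_low) # Eq. (10)
    l1_cube = (x - x_hat).abs().mean()                      # Eq. (12)
    return a1 * l1_sampled + a2 * l_ffl + a3 * l1_cube      # Eq. (13)
```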
\\begin{table}
\\begin{tabular}{c c c c c c c c c c}
\\hline \\hline
Datasets & Method & WB [5] & BTES [38] & PPID [39] & GRMR [42] & In-Net [41] & MCAN [24] & **FDM-Net** & **FDM-Net-L** \\\\
\\hline
\\multirow{4}{*}{ARAD 901} & PSNR \\(\\uparrow\\) & 29.27 & 29.52 & 36.87 & 29.35 & 44.98 & 41.60 & **52.41** & 51.65 \\\\
 & SSIM \\(\\uparrow\\) & 0.956 & 0.947 & 0.969 & 0.960 & 0.993 & 0.988 & **0.998** & 0.997 \\\\
 & SAM \\(\\downarrow\\) & 0.093 & 0.089 & 0.090 & 0.096 & 0.011 & 0.020 & **0.004** & 0.004 \\\\
 & MRAE \\(\\downarrow\\) & - & - & - & - & 0.014 & 0.022 & **0.005** & 0.005 \\\\
\\hline
\\multirow{4}{*}{ARAD 903} & PSNR \\(\\uparrow\\) & 30.95 & 31.06 & 39.10 & 30.99 & 41.09 & 36.92 & **48.78** & 48.63 \\\\
 & SSIM \\(\\uparrow\\) & 0.965 & 0.955 & 0.977 & 0.967 & 0.985 & 0.936 & **0.993** & 0.993 \\\\
 & SAM \\(\\downarrow\\) & 0.089 & 0.078 & 0.063 & 0.085 & 0.041 & 0.065 & **0.011** & 0.011 \\\\
 & MRAE \\(\\downarrow\\) & - & - & - & - & 0.037 & 0.071 & **0.012** & 0.012 \\\\
\\hline
\\multirow{4}{*}{ARAD 907} & PSNR \\(\\uparrow\\) & 33.85 & 33.49 & 38.81 & 33.96 & 43.50 & 45.29 & **51.81** & 50.84 \\\\
 & SSIM \\(\\uparrow\\) & 0.943 & 0.929 & 0.959 & 0.944 & 0.984 & 0.991 & **0.997** & 0.997 \\\\
 & SAM \\(\\downarrow\\) & 0.113 & 0.134 & 0.079 & 0.113 & 0.033 & 0.030 & **0.012** & 0.013 \\\\
 & MRAE \\(\\downarrow\\) & - & - & - & - & 0.043 & 0.038 & **0.015** & 0.017 \\\\
\\hline
\\multirow{4}{*}{50 HSIs} & PSNR \\(\\uparrow\\) & 31.17 & 30.94 & 35.98 & 31.38 & 42.88 & 43.22 & **49.23** & 48.60 \\\\
 & SSIM \\(\\uparrow\\) & 0.912 & 0.892 & 0.937 & 0.922 & 0.981 & 0.986 & **0.996** & 0.995 \\\\
 & SAM \\(\\downarrow\\) & 0.158 & 0.176 & 0.121 & 0.150 & 0.034 & 0.034 & **0.013** & 0.014 \\\\
 & MRAE \\(\\downarrow\\) & - & - & - & - & 0.043 & 0.044 & **0.017** & 0.018 \\\\
\\hline
Average & TIME (s) & 0.11 & 0.12 & 0.75 & 15.18 & 0.014 & 0.015 & 0.054 & 0.029 \\\\
\\hline \\hline
\\end{tabular}
\\end{table}

Table 1: Demosaicing results and running time compared with other methods. FDM-Net achieves SOTA results.

Figure 5: Visual comparison of **HSI demosaicing** methods (False color, R: 2, G: 11, B: 16).

Figure 6: Visual comparison of **HSI demosaicing** methods (False color, R: 2, G: 11, B: 16).

Previous methods have either yielded over-smooth results, which compromise fine-grained structures, or have introduced undesired chromatic artifacts and blotchy textures that are not present in the ground truth. Furthermore, Fig. 7 demonstrates the efficacy of FDM-Net in processing real data captured with an IMEC HSI camera. For more detailed results and discussion, please refer to the **supplementary material**.

### Ablation Study on Pipeline Components

In order to examine the necessity of each component within our pipeline, we performed an ablation study on two distinct configurations of our method: 1) blind demosaicing using the MaFormer model alone, and 2) the full FDM-Net with \\(N_{i}\\) STMC blocks, where \\(N_{i}\\) is fine-tuned for \\(i=1,2,3,4\\). We then evaluate the performance of these pipelines. The qualitative results of our ablation study are depicted in Figs. 8 and 10, while the quantitative results are presented in Tabs. 2 and 3. Comparing the blind demosaicing approach to the frequency-driven demosaicing method, we found that the latter demonstrated improvement in all metrics. This provides evidence that the method described in Sec. 3.1, which separates high and low frequency components, is superior to methods that do not.
Additionally, we observed that increasing the value of \\(N_{i}\\) from 1 to 2 resulted in a performance improvement of 1.59 dB, but it also increased the running time by a factor of 1.86.

## 6 Conclusion

This paper has proposed a novel HSI demosaicing method that is driven by both high and low frequencies. The proposed method leverages Fourier zero-padding to quickly reconstruct the easy low frequency part, while a customized transformer architecture effectively handles the challenging task of high pass HSI demosaicing. By introducing a joint spatial and frequency loss, our approach enhances high frequency modeling while ensuring stable low frequency reconstruction. Extensive evaluations of the proposed approach on a large testing dataset comprising 50 HSI cubes demonstrate that it achieves SOTA performance, outperforming the previous SOTA by 6.01 dB. Overall, the results indicate that focusing on the hard high frequency components has the potential to improve the accuracy and reliability of HSI demosaicing in various applications. Further research could explore the possibility of incorporating additional priors.

\\begin{table}
\\begin{tabular}{c c c c c c c}
\\hline \\hline
\\multirow{2}{*}{High Frequency} & \\multirow{2}{*}{Low Frequency} & \\multirow{2}{*}{Method} & \\multirow{2}{*}{PSNR \\(\\uparrow\\)} & \\multirow{2}{*}{SSIM \\(\\uparrow\\)} & \\multirow{2}{*}{SAM \\(\\downarrow\\)} & \\multirow{2}{*}{MRAE \\(\\downarrow\\)} \\\\
\\cline{1-1} \\cline{5-7} & & & & & & \\\\
\\hline
✗ & ✗ & MaFormer & 47.64 & 0.994 & 0.014 & 0.019 \\\\
✓ & ✓ & FDM-Net & **49.23** & **0.996** & **0.013** & **0.017** \\\\
\\hline \\hline
\\end{tabular}
\\end{table}

Table 2: The PSNR, SSIM, SAM (lower is better), and MRAE (lower is better) scores for the **ablation studies** of frequency.

Figure 8: Visual comparison of **ablation studies**; **w/** means high+low frequency based demosaicing using MaFormer, **w/o** is blind demosaicing using MaFormer. Zoom in for a better view.

\\begin{table}
\\begin{tabular}{c c c c c c c}
\\hline \\hline
Model Size & STMC Block & PSNR \\(\\uparrow\\) & SSIM \\(\\uparrow\\) & SAM \\(\\downarrow\\) & MRAE \\(\\downarrow\\) & Time (s) \\\\
\\hline
Light & \\(N_{i}\\)=1 & 48.60 & 0.995 & 0.014 & 0.018 & **0.029** \\\\
Normal & \\(N_{i}\\)=2 & **49.23** & **0.996** & **0.013** & **0.017** & 0.053 \\\\
\\hline \\hline
\\end{tabular}
\\end{table}

Table 3: The PSNR, SSIM, SAM, and MRAE of the **ablation studies** of model size for the proposed FDM-Net, i.e., FDM-Net with normal size, \\(N_{i}=2,i=1,2,3,4\\), and the light FDM-Net with \\(N_{i}=1\\).

Figure 10: Visual comparison of **ablation studies**; FDM-Net is the normal size with \\(N_{i}\\)=2, FDM-Net-L is a light one with \\(N_{i}\\)=1.

Figure 7: Illustration of the results on real data captured by ourselves with an IMEC \\(4\\times 4\\) hyperspectral camera.

Figure 9: Illustration of the spectral density curves on the ARAD 907 HSI cube. The curve for FDM-Net appears to be the most similar to the curve for GT, indicating that the spectral properties of FDM-Net are more closely aligned with the ground truth than those of the other methods.
[Truncated table: per-image comparison of **FDM-Net (Ours)**, **MCAN**, and **InNet** in terms of PSNR, SSIM, SAM, and MRAE. Only the header and the beginning of the first row survive the extraction: ARAD1K090116 with FDM-Net at **51.648** / **0.997** / **0.004** / **0.005** versus MCAN at 41.596 / 0.988 / 0.020; the remainder of the table is lost.]

Figure 11: Visual comparison of **HSI demosaicing** methods (False color, R: 2, G: 11, B: 16).

## References

* [1] Giancarlo A Antonucci, Simon Vary, David Humphreys, Robert A Lamb, Jonathan Piper, and Jared Tanner. Multi-spectral snapshot demosaicing via non-convex matrix completion. In _2019 IEEE Data Science Workshop (DSW)_, pages 227-231. IEEE, 2019.
* [2] Boaz Arad, Radu Timofte, Rony Yahel, Nimrod Morag, Amir Bernat, Yuanhao Cai, Jing Lin, Zudi Lin, Haoqian Wang, Yulun Zhang, et al. NTIRE 2022 spectral recovery challenge and data set. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 863-881, 2022.
* [3] Boaz Arad, Radu Timofte, Rony Yahel, Nimrod Morag, Amir Bernat, Yaqi Wu, Xun Wu, Zhihao Fan, Chenjie Xia, Feng Zhang, et al. NTIRE 2022 spectral demosaicing challenge and data set. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 882-896, 2022.
* [4] Elena Beletkaia and Jose Pozo. More than meets the eye: Applications enabled by the non-stop development of hyperspectral imaging technology. _PhotonicsViews_, 17(1):24-26, 2020.
* [5] Johannes Brauers and Til Aach. A color filter array based multispectral camera. In _12. Workshop Farbbildverarbeitung_, pages 55-64. Ilmenau, 2006.
* [6] Yuanhao Cai, Jing Lin, Haoqian Wang, Xin Yuan, Henghui Ding, Yulun Zhang, Radu Timofte, and Luc Van Gool. Degradation-aware unfolding half-shuffle transformer for spectral compressive imaging. _arXiv preprint arXiv:2205.10102_, 2022.
* [7] Hu Cao, Yueyue Wang, Joy Chen, Dongsheng Jiang, Xiaopeng Zhang, Qi Tian, and Manning Wang. Swin-Unet: Unet-like pure transformer for medical image segmentation. In _Computer Vision - ECCV 2022 Workshops, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part III_, pages 205-218. Springer, 2023.
* [8] Jiezhang Cao, Yawei Li, Kai Zhang, and Luc Van Gool. Video super-resolution transformer. _arXiv preprint arXiv:2106.06847_, 2021.
* [9] Jiezhang Cao, Jingyun Liang, Kai Zhang, Wenguan Wang, Qin Wang, Yulun Zhang, Hao Tang, and Luc Van Gool. Towards interpretable video super-resolution via alternating optimization. In _Computer Vision - ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XVIII_, pages 393-411. Springer, 2022.
* [10] Xun Cao, Tao Yue, Xing Lin, Stephen Lin, Xin Yuan, Qionghai Dai, Lawrence Carin, and David J Brady. Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world. _IEEE Signal Processing Magazine_, 33(5):95-108, 2016.
* [11] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In _Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part I_, pages 213-229. Springer, 2020.
* [12] Chein-I Chang. _Hyperspectral data exploitation: theory and applications_. John Wiley & Sons, 2007.
* [13] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao.
Hyperspectral imaging systems that use multispectral filter arrays (MSFA) capture only one spectral component at each pixel, and hyperspectral demosaicing is used to recover the non-measured components. While deep learning methods have shown promise in this area, they still suffer from several challenges, including limited modeling of non-local dependencies, lack of consideration of the periodic MSFA pattern that could be linked to periodic artifacts, and difficulty in recovering high-frequency details. To address these challenges, this paper proposes a novel demosaicing framework, the MSFA-frequency-aware Transformer network (FDM-Net). FDM-Net integrates a novel MSFA-frequency-aware multi-head self-attention mechanism (MaFormer) and a filter-based Fourier zero-padding method to separately reconstruct the challenging high pass components and the comparatively easy low pass components. The advantage of MaFormer is that it can leverage the MSFA information and the non-local dependencies present in the data. Additionally, we introduce a joint spatial and frequency loss to transfer MSFA information and enhance training on frequency components that are hard to recover. Our experimental results demonstrate that FDM-Net outperforms state-of-the-art methods by more than 6 dB in PSNR, and successfully reconstructs high-fidelity details.
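One natural reading of the joint spatial and frequency loss described above is an L1 term in the pixel domain plus an L1 term on the 2D spectra of prediction and ground truth. The sketch below follows that reading; the relative weighting `lam` is our placeholder, not the paper's exact formulation.

```python
import numpy as np

def joint_spatial_frequency_loss(pred: np.ndarray,
                                 target: np.ndarray,
                                 lam: float = 0.1) -> float:
    """L1 loss in the pixel domain plus L1 loss between 2D spectra.

    pred, target: (bands, H, W) HSI cubes; lam weights the frequency
    term that emphasises the hard-to-recover high-frequency content.
    """
    spatial = np.mean(np.abs(pred - target))
    f_pred = np.fft.fft2(pred, axes=(-2, -1))
    f_target = np.fft.fft2(target, axes=(-2, -1))
    frequency = np.mean(np.abs(f_pred - f_target))
    return float(spatial + lam * frequency)
```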
Super-Resolution Reconstruction of Hyperspectral Images via Low Rank Tensor Modeling and Total Variation Regularization

## 1 Introduction

Hyperspectral images (HSIs) are recordings of the reflectance of light of real world scenes or objects over hundreds of spectral bands ranging from the ultraviolet to the infrared wavelength [1, 2]. The abundant spectral bands of HSIs capture fine spectral differences between the various materials of interest and make many computer vision tasks more successfully achievable. However, due to the constraints of imaging hardware, signal to noise ratio (SNR) and acquisition time, the acquired hyperspectral images unfortunately have low spatial resolution, which hinders high precision processing in many fields including mineralogy, manufacturing, medical diagnostics, and surveillance. Hence, the task of reconstructing a high resolution (HR) hyperspectral image from an observed low resolution (LR) hyperspectral image or sequence is a valuable research issue. The problem of hyperspectral image super-resolution (HSSR) can be solved by designing various traditional signal processing techniques, including the works [3, 4, 5]. In recent years, applying prior information from HR auxiliary images in the HSSR process has become more and more popular [6, 7]. However, such HR images are not always easy to obtain due to the limitations of remote sensing systems. Therefore, super-resolution of a single HSI cube has attracted increased interest in many practical scenarios. In this paper, we consider a single HSI cube as a tensor with three modes (width, height, and band) and then discover the hidden spatial-and-spectral structures using tensor modelling to enhance its spatial resolution. Specifically, the spectral bands of an HSI have strong correlations, and each band, considered as a matrix, exhibits relatively strong spatial correlation; this spatial-and-spectral correlation can be modelled by a low-rank tensor penalty. Additionally, for each voxel, from the spatial viewpoint its intensity is almost equal to those in its neighbourhood, and the same holds from the spectral viewpoint; we describe this local spatial-and-spectral smoothness property using 3D total variation. As such, the HSSR task resorts to solving an optimization problem, which can be efficiently solved by combining the LLA strategy and ADMM.

## 2 HSSR via Total Variation and Low-Rank Regularizations

In this section, we first introduce the observation model. Then, we utilize 3D TV to describe the local smoothness of a hyperspectral image, and adopt a tensor folded-concave penalty to characterize the global correlation of a hyperspectral image. Finally, a novel regularization model is derived for the HSSR task.

### Observation model

The low spatial resolution hyperspectral image can be generated by the following observation model: \\[\\mathcal{I}=DS\\mathcal{X}+\\mathbf{e},\\] where the tensor \\(\\mathcal{I}\\) denotes the observed LR image, \\(D\\) is a downsampling operator, \\(S\\) is a blurring operator, \\(\\mathcal{X}\\) is the HR image to be reconstructed and \\(\\mathbf{e}\\) represents the observation noise.
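As a concrete companion to the observation model, the sketch below simulates \\(\\mathcal{I}=DS\\mathcal{X}+\\mathbf{e}\\) with a Gaussian kernel standing in for the blurring operator \\(S\\) and factor-2 decimation for the downsampling operator \\(D\\); the kernel width and noise level are illustrative choices, not values prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(hr_cube: np.ndarray, factor: int = 2,
            blur_sigma: float = 1.0, noise_sigma: float = 0.01) -> np.ndarray:
    """Simulate I = D S X + e for an (H, W, bands) hyperspectral cube."""
    # blur each band spatially; sigma 0 along the band axis keeps spectra intact
    blurred = gaussian_filter(hr_cube, sigma=(blur_sigma, blur_sigma, 0))
    low_res = blurred[::factor, ::factor, :]   # spatial decimation only
    return low_res + noise_sigma * np.random.randn(*low_res.shape)

# e.g. a 256 x 256 x 146 cube becomes 128 x 128 x 146, as in Section 4
```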
Since this is an ill-posed problem, some regularization terms of \\(\\mathcal{X}\\) based on prior knowledge, denoted by \\(\\mathfrak{R}(\\mathcal{X})\\), can be introduced to restrict the solution space: \\(\\hat{\\mathcal{X}}=\\arg\\min_{\\mathcal{X}}\\{\\|DS\\mathcal{X}-\\mathcal{I}\\|^{2}+ \\lambda\\mathfrak{R}(\\mathcal{X})\\}\\), where \\(\\lambda\\) is a scalar parameter that makes a trade-off between the fidelity term and the regularization term.

### 3D TV regularization

Total variation (TV) [5] is often used to preserve local spatial consistency in image recovery and to suppress image noise. Since the HR hyperspectral image to be reconstructed is treated as a tensor, its local spatial-and-spectral consistency, or smoothness, is characterized by the 3D total variation \\(TV(\\mathcal{X})=\\sum_{ijk}\\left(|x_{ijk}-x_{i-1,j,k}|+|x_{ijk}-x_{i,j-1,k}|+|x_{ijk}-x_{i,j,k-1}|\\right),\\) where \\(x_{ijk}\\) is the \\((i,j,k)\\)-th entry of the tensor \\(\\mathcal{X}\\).

### Low-rank regularization

The spatial-and-spectral correlation of a hyperspectral image implies that each unfolded matrix of the corresponding tensor is low rank. Hence, following the work [8], the low-rank property of a three-order tensor can be measured by a weighted sum of three ranks: \\[\\text{Rank}(\\mathcal{X})=\\sum_{i=1}^{3}\\alpha_{i}\\text{Rank}(\\mathcal{X}_{(i)}), \\tag{1}\\] where \\(\\alpha_{i}\\geqslant 0\\) and \\(\\sum_{i=1}^{3}\\alpha_{i}=1\\). Since the optimization problem with the rank constraint (1) is intractable, and the matrix nuclear norm is exploited as a tight convex surrogate of the matrix rank [9], one can replace the rank function (1) with the following tensor nuclear norm: \\[\\|\\mathcal{X}\\|_{*}=\\sum_{i=1}^{3}\\alpha_{i}\\|\\mathcal{X}_{(i)}\\|_{*}, \\tag{2}\\] where \\(\\|\\mathbf{Z}\\|_{*}:=\\sum_{k=1}^{\\min(m,n)}\\sigma_{k}(\\mathbf{Z})\\) denotes the nuclear norm of a matrix \\(\\mathbf{Z}\\) of size \\(m\\times n\\), and \\(\\mathcal{X}_{(i)}\\) is the \\(i\\)-th unfolded matrix of the tensor \\(\\mathcal{X}\\) [8]. Although the convex nuclear norm (2) performs well in various tensor recovery problems, studies such as [9] have shown that the nuclear norm over-penalizes large singular values, and thus leads to a modeling bias in low rank structure estimation. The folded-concave penalty [10] can be used to remedy this modeling bias, as shown in works such as [10, 11]. Thus, we shall utilize one of the folded-concave penalties, the minimax concave plus (MCP) penalty, of the form: \\[P_{\\lambda}(t)=\\begin{cases}a\\lambda^{2}/2&\\text{if }|t|\\geqslant a\\lambda\\\\ \\lambda|t|-\\frac{t^{2}}{2a}&\\text{otherwise}.\\end{cases} \\tag{3}\\] Following [11], the folded-concave norm of a matrix \\(X\\) is defined as \\(\\|X\\|_{P_{\\lambda}}:=\\sum_{j=1}^{r}P_{\\lambda}(\\sigma_{j}(X))\\)1, where \\(\\sigma_{j}(X)\\) is the \\(j\\)-th singular value of \\(X\\) and \\(r\\) is its rank. As such, the tensor MCP penalty is defined by applying the MCP penalty function to each unfolded matrix \\(\\mathcal{X}_{(i)}\\): Footnote 1: Note that \\(\\|X\\|_{P_{\\lambda}}\\) is nonconvex with respect to \\(X\\). \\[\\|\\mathcal{X}\\|_{P_{\\lambda}}=\\sum_{i=1}^{N}\\alpha_{i}\\|\\mathcal{X}_{(i)}\\|_{P_{\\lambda}}.
\\tag{4}\\]

### Proposed model

Based on the previous discussions, we now derive the following regularization model for the HSSR task: \\[\\min_{\\mathcal{X}}\\|DS\\mathcal{X}-\\mathcal{I}\\|_{F}^{2}+\\lambda_{1}TV(\\mathcal{X})+\\lambda_{2}\\mathcal{L}(\\mathcal{X}), \\tag{5}\\] where the scalars \\(\\lambda_{1}\\) and \\(\\lambda_{2}\\) are regularization parameters, and \\(\\mathcal{L}(\\mathcal{X})\\) is the low-rank measure (1) or (4) applied to \\(\\mathcal{X}\\).

## 3 Optimization algorithm

We first rewrite (5) in the following equivalent form by introducing \\(N\\) auxiliary variables \\(\\{\\mathcal{M}_{i}\\}_{i=1}^{N}\\): \\[\\begin{split}&\\min_{\\mathcal{X},\\{\\mathcal{M}_{i}\\}_{i=1}^{N}}\\ \\ \\|DS\\mathcal{X}-\\mathcal{I}\\|_{F}^{2}+\\lambda_{1}TV(\\mathcal{X})+\\sum_{i=1}^{N}\\lambda_{2}\\mathcal{L}(\\mathcal{M}_{i})\\\\ & s.t.\\ \\ \\mathcal{X}_{(i)}=\\mathcal{M}_{i(i)},\\ i=1,2,\\ldots,N\\end{split} \\tag{6}\\] Based on ADMM [12], the augmented Lagrangian function is written as follows: \\[\\begin{split}& L(\\mathcal{X},\\mathcal{Y}_{i},\\mathcal{M}_{i})=\\|DS\\mathcal{X}-\\mathcal{I}\\|_{F}^{2}+\\lambda_{1}TV(\\mathcal{X})\\\\ &+\\sum_{i=1}^{N}\\lambda_{2}\\mathcal{L}(\\mathcal{M}_{i(i)})+\\sum_{i=1}^{N}\\frac{\\rho}{2}\\|\\mathcal{M}_{i(i)}-\\mathcal{X}_{(i)}+\\frac{\\mathcal{Y}_{i(i)}}{\\rho}\\|_{F}^{2},\\end{split} \\tag{7}\\] where \\(\\{\\mathcal{Y}_{i}\\}_{i=1}^{N}\\) are the Lagrangian parameters. We break (7) into three subproblems and iteratively update each variable while fixing the others. Let \\(k\\) denote the \\(k\\)-th iteration step: **Subproblem 1:** \\[\\begin{split}&\\mathcal{X}^{(k+1)}=arg\\min_{\\mathcal{X}}\\ \\ \\|DS\\mathcal{X}-\\mathcal{I}\\|_{F}^{2}+\\lambda_{1}TV(\\mathcal{X})\\\\ &+\\sum_{i=1}^{N}\\frac{\\rho}{2}\\|\\mathcal{M}_{i(i)}^{k}-\\mathcal{X}_{(i)}+\\frac{\\mathcal{Y}_{i(i)}^{k}}{\\rho}\\|_{F}^{2}\\end{split} \\tag{8}\\] The well-known gradient method can easily be applied to solve this subproblem. **Subproblem 2:** \\[\\begin{split}&\\{\\mathcal{M}_{i}^{(k+1)}\\}_{i=1}^{N}=arg\\min_{\\{\\mathcal{M}_{i}\\}_{i=1}^{N}}\\sum_{i=1}^{N}\\lambda_{2}\\mathcal{L}(\\mathcal{M}_{i(i)})\\\\ &+\\sum_{i=1}^{N}\\frac{\\rho}{2}\\|\\mathcal{M}_{i(i)}-\\mathcal{X}_{(i)}^{k}+\\frac{\\mathcal{Y}_{i(i)}^{k}}{\\rho}\\|_{F}^{2}\\end{split} \\tag{9}\\] The solution of this subproblem depends on the choice of the low rank term \\(\\mathcal{L}(\\mathcal{X})\\). We first consider the case of the nuclear norm, i.e., \\[\\sum_{i=1}^{N}\\lambda_{2}\\alpha_{i}\\|\\mathcal{M}_{i(i)}\\|_{*}+\\sum_{i=1}^{N}\\frac{\\rho}{2}\\|\\mathcal{M}_{i(i)}-\\mathcal{X}_{(i)}^{k}+\\frac{\\mathcal{Y}_{i(i)}^{k}}{\\rho}\\|_{F}^{2} \\tag{10}\\] According to [8], its closed-form solution is expressed as \\[\\mathcal{M}_{i}=fold_{i}\\Big[S_{\\lambda_{2}\\alpha_{i}/\\rho}\\Big(\\mathcal{X}_{(i)}^{(k+1)}-\\frac{\\mathcal{Y}_{i(i)}^{(k)}}{\\rho}\\Big)\\Big] \\tag{11}\\] For a given matrix \\(X\\), the singular value shrinkage operator \\(S_{\\tau}(X)\\) is defined by \\(S_{\\tau}(X):=U_{X}D_{\\tau}(\\Sigma_{X})V_{X}^{T}\\), where \\(X=U_{X}\\Sigma_{X}V_{X}^{T}\\) is the singular value decomposition of \\(X\\) and \\([D_{\\tau}(A)]_{ij}=sgn(A_{ij})(|A_{ij}|-\\tau)_{+}\\). For the MCP case, we adopt the same idea as [10, 11] to solve the resulting nonconvex problem. More precisely, we use the local linear approximation (LLA) algorithm to transform the MCP penalization problem into a series of weighted nuclear norm penalization problems. The resulting optimization problems can then be solved as well.
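Below is a minimal NumPy sketch of the building blocks just introduced: the anisotropic 3D total variation and the singular value shrinkage operator \\(S_{\\tau}\\), together with the weighted variant that one LLA step reduces the MCP penalty to (the weights follow the \\(W_{i}\\) formula stated in the next paragraph; the scaling constants are illustrative).

```python
import numpy as np

def tv3d(X: np.ndarray) -> float:
    """Anisotropic 3D total variation of an (H, W, bands) tensor."""
    return (np.abs(np.diff(X, axis=0)).sum()
            + np.abs(np.diff(X, axis=1)).sum()
            + np.abs(np.diff(X, axis=2)).sum())

def svt(X: np.ndarray, tau: float) -> np.ndarray:
    """Singular value thresholding: the proximal map of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt   # U @ diag(s_shrunk) @ Vt

def weighted_svt(X: np.ndarray, lam: float, a: float, rho: float) -> np.ndarray:
    """One LLA step for MCP: each singular value gets its own threshold
    w_j / rho with w_j = (lam - sigma_j / a)_+, so large singular values
    are shrunk less than under the plain nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    w = np.maximum(lam - s / a, 0.0)
    return (U * np.maximum(s - w / rho, 0.0)) @ Vt
```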
More precisely, **subproblem 2** can be written as \\[\\{\\mathcal{M}_{i}^{(k+1)}\\}_{i=1}^{N}=arg\\min_{\\{\\mathcal{M}_{i}\\}_{i=1}^{N}}\\sum_{i=1}^{N}\\lambda_{2}\\alpha_{i}Q_{P_{\\lambda}}(\\sigma(\\mathcal{M}_{i(i)})|\\sigma(\\mathcal{X}^{k})) \\tag{12}\\] \\[+\\sum_{i=1}^{N}\\frac{\\rho}{2}\\|\\mathcal{M}_{i(i)}-\\mathcal{X}_{(i)}^{k}+\\frac{\\mathcal{Y}_{i(i)}^{k}}{\\rho}\\|_{F}^{2},\\] where \\(Q_{P_{\\lambda}}(\\sigma(X)|\\sigma(X^{k}))\\) is the local linear approximation of \\(\\|X\\|_{P_{\\lambda}}\\) when \\(X^{k}\\) is given. The solution of this optimization problem is then \\(\\mathcal{M}_{i(i)}=S_{\\alpha_{i}/\\rho,W_{i}}(\\mathcal{X}_{(i)}-\\frac{\\mathcal{Y}_{i(i)}}{\\rho})\\), where the weight matrix \\(W_{i}\\) is given by \\(W_{i}=Diag((\\lambda-(\\sigma(\\mathcal{X}_{(i)})/a))_{+})\\) for some fixed \\(a>1\\). **Subproblem 3:** \\[\\mathcal{Y}_{i}^{(k+1)}=\\mathcal{Y}_{i}^{(k)}+\\rho(\\mathcal{M}_{i}^{(k+1)}-\\mathcal{X}^{(k+1)}), \\tag{13}\\] where \\(\\rho\\) is a parameter associated with the convergence rate, kept at the fixed value 1.05.

## 4 Experimental Study

We now test the proposed method on an HSI dataset. The reference image without noisy bands is a \\(256\\times 256\\times 146\\) hyperspectral image acquired over Moffett Field, CA, in 1994 (AVIRIS). The blurring kernel is a Gaussian kernel, and the LR image is generated by downsampling the original HR image by a factor of 2, i.e., the LR image is of size \\(128\\times 128\\times 146\\). We compare our method with three other popular methods: the bicubic method described in [13], NARM proposed in [14] and the Sparse Representation method by Yang et al. [15]. The reconstructed results of the test HSI for a specific band (band 100) are shown in Fig. 1. One can observe that the bicubic interpolation blurs the image and the high-frequency spatial details are lost. The other methods provide better reconstruction visual effects. Additionally, our proposed method, shown in Fig. 1(e) and (f), outperforms the other ones. It is also interesting to note that the folded-concave penalization, i.e., the MCP, outperforms the nuclear norm variant. To further evaluate the quality of the proposed reconstruction strategy, several image quality measures have been employed, including the peak signal to noise ratio (PSNR), the spectral angle mapper (SAM), and the relative dimensionless global error in synthesis (ERGAS). It is known that the larger the PSNR, the better the image quality; the lower the SAM and ERGAS values, the smaller the spectral distortion. It can be seen from Table 1 that the proposed method, with both the nuclear norm and the folded-concave penalty, outperforms the competing ones. Again, the MCP penalization provides the best reconstruction results, which illustrates the advantage of the folded-concave penalty over the convex nuclear norm penalty.

## 5 Conclusion

In this paper, we propose a novel method for hyperspectral image super-resolution by tensor structural modelling. The proposed method considers the global correlation and local smoothness of a hyperspectral image by combining low-rank and total variation regularizations imposed on a tensor. Experimental results reveal that the proposed methods outperform the compared methods, and especially that the folded-concave penalization is superior to the nuclear norm penalization for the HSSR task.

## References

* [1] Y. Gu, Y. Zheng, and J. Zhang, "Integration of spatial-spectral information for resolution enhancement in hyperspectral images," _IEEE Trans. Geosci. Remote Sens._, vol. 46, no. 5, pp.
1347-1357, 2008.
* [2] T. Akgun, Y. Altunbasak, and R. M. Mersereau, "Super-resolution reconstruction of hyperspectral images," _IEEE Trans. Image Process._, vol. 14, no. 11, pp. 1860-1875, 2005.

\\begin{table} \\begin{tabular}{|c||c||c||c|} \\hline Quantitative Measures & PSNR & SAM & ERGAS \\\\ \\hline Bicubic & 33.0236 & 0.1248 & 126.0507 \\\\ \\hline NARM & 33.1197 & 0.1297 & 124.3686 \\\\ \\hline Sparse Representation & 35.7409 & 0.1651 & 117.4637 \\\\ \\hline Nuclear Norm Penalty & _36.9567_ & _0.0843_ & _95.0166_ \\\\ \\hline MCP Penalty & **37.8732** & **0.0720** & **88.5562** \\\\ \\hline \\end{tabular} \\end{table} Table 1: Quantitative Measures for Different SRR Methods

* [3] R. Y. Tsai and T. S. Huang, "Multi-frame image restoration and registration," _Advances in Computer Vision and Image Processing_, vol. 1, pp. 317-339, 1987.
* [4] S. P. Kim, N. K. Bose, and H. M. Valenzuela, "Recursive reconstruction of high resolution image from noisy undersampled multiframes," _IEEE Transactions on Acoustics, Speech, and Signal Processing_, vol. 38, no. 2, pp. 1013-1027, 1990.
* [5] Z. Guo, T. Wittman, and S. Osher, "L1 unmixing and its application to hyperspectral image enhancement," _Proc. SPIE Conference on Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XV_, 2009.
* [6] N. Akhtar, F. Shafait, and A. Mian, "Sparse spatio-spectral representation for hyperspectral image super-resolution," _Proc. ECCV 2014, LNCS 8695_, pp. 63-78, 2014.
* [7] L. Loncan, S. Fabre, L.B. Almeida, et al., "Hyperspectral pansharpening: a review," _IEEE Geosci. Remote Sens. Mag._, vol. 3, no. 3, pp. 27-46, 2015.
* [8] J. Liu, P. Musialski, P. Wonka, and J. Ye, "Tensor completion for estimating missing values in visual data," _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 35, pp. 208-220, 2013.
* [9] F. Bunea, Y. She, and M. Wegkamp, "Optimal selection of reduced rank estimators of high-dimensional matrices," _Ann. Stat._, vol. 52, no. 4, pp. 1282-1309, 2011.
* [10] J. Fan, L. Xue, and H. Zou, "Strong oracle optimality of folded concave penalized estimation," _Ann. Stat._, vol. 41, no. 3, pp. 828-849, 2014.
* [11] W. Cao, Y. Wang, C. Yang, X. Chang, Z. Han, and Z. Xu, "Folded-concave penalization approaches to tensor completion," _Neurocomputing_, vol. 152, pp. 261-273, 2015.
* [12] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," _Found. Trends Mach. Learn._, vol. 3, pp. 1-122, 2011.
* [13] R. G. Keys, "Cubic convolution interpolation for digital image processing," _IEEE Trans. Acoust. Speech Signal Process._, vol. ASSP-29, no. 6, pp. 1153-1160, 1981.
* [14] W. Dong, L. Zhang, R. Lukac, and G. Shi, "Sparse representation based image interpolation with nonlocal autoregressive modeling," _IEEE Trans. Image Process._, vol. 22, no. 4, pp. 1382-1394, 2013.
* [15] J. Yang, J. Wright, T. S. Huang, and Y. Ma, "Image super-resolution via sparse representation," _IEEE Trans. Image Process._, vol. 19, no. 11, pp. 2861-2873, 2010.

Figure 1: Visual comparison of different super-resolution reconstruction methods.
In this paper, we propose a novel approach to hyperspectral image super-resolution by modeling the global spatial-and-spectral correlation and the local smoothness properties of hyperspectral images. Specifically, we utilize the tensor nuclear norm and tensor folded-concave penalty functions to describe the global spatial-and-spectral correlation hidden in hyperspectral images, and 3D total variation (TV) to characterize the local spatial-and-spectral smoothness across all hyperspectral bands. Then, we develop an efficient algorithm for solving the resulting optimization problem by combining the local linear approximation (LLA) strategy and the alternating direction method of multipliers (ADMM). Experimental results on one hyperspectral image dataset illustrate the merits of the proposed approach. Shiying He\\({}^{1,3}\\), Haiwei Zhou\\({}^{1}\\), Yao Wang\\({}^{1,3}\\), Wenfei Cao\\({}^{2}\\), and Zhi Han\\({}^{3}\\)+ \\({}^{1}\\)School of Mathematics and Statistics, Xi'an Jiaotong University \\({}^{2}\\)School of Mathematics and Information Science, Shaanxi Normal University \\({}^{3}\\)Shenyang Institute of Automation, Chinese Academy of Sciences Footnote †: This work was supported in part by the Natural Science Foundation of China under grant numbers 11501440, 61273020 and 61303168. _(Corresponding author: Yao Wang, email: [email protected].)_ Keywords: Hyperspectral images, Super-resolution reconstruction, nuclear norm, Folded-concave penalty, 3D total variation.
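To make the alternating scheme of Section 3 concrete, the following schematic performs one ADMM sweep over the three subproblems. It is our illustration under simplifying assumptions: the TV term is folded into the user-supplied data-term gradient, and the gradient step size is a placeholder, so this is a sketch of the structure rather than the authors' implementation.

```python
import numpy as np

def unfold(T, mode):
    """Mode-i matricisation of a 3-way tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for a 3-way tensor of the given shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def svt(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def admm_sweep(X, M, Y, grad_smooth, lam2, alpha, rho, step=1e-2):
    """One pass of (8), (9)/(11) and (13): a gradient step on X,
    per-mode SVT updates of the auxiliary tensors M_i, dual ascent on Y_i."""
    penalty_grad = sum(rho * (X - M[i] - Y[i] / rho) for i in range(3))
    X = X - step * (grad_smooth(X) + penalty_grad)
    for i in range(3):
        M[i] = fold(svt(unfold(X - Y[i] / rho, i), lam2 * alpha[i] / rho),
                    i, X.shape)
        Y[i] = Y[i] + rho * (M[i] - X)
    return X, M, Y
```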
# At-Site Assessment of a Regional Design Criterium for Water-Demand Peak Factor Evaluation

Gabriella Balacco, Andrea Gioia, Vito Iacobellis and Alberto Ferruccio Piccinini

Dipartimento di Ingegneria Civile, Ambientale, del Territorio, Edile e di Chimica, Politecnico di Bari, 70125 Bari, Italy; [email protected] (A.G.); [email protected] (V.I.); [email protected] (A.F.P.)

Correspondence: [email protected]; Tel.: +39-080-5963791

Received: 12 November 2018; Accepted: 18 December 2018; Published: 24 December 2018

## 1 Introduction

Water demand is the driving force of water distribution network (WDN) operation, and its correct assessment is an essential and crucial task in the context of WDN design [1]. The technical literature has tried to define the peak water demand by adopting several approaches for many years. Firstly, a deterministic or top-down approach [2] was adopted: starting from the water sources and working down to the nodal water demands, several researchers defined and proposed peak values within ranges depending on climate variability, geographical position, etc. [3; 4; 5], or depending on population by means of empirical expressions [6; 7; 8; 9; 10; 11]. Recently, the path taken by the scientific literature has been more focused on probabilistic variability, considering the random nature of water demand [12; 13; 14; 15]. Since water demand is characterized by a pulsed nature, the temporal scale adopted for its analysis is very important for a correct peak demand estimation. Water demand exhibits random fluctuations at fine temporal scales, and increasing the time scale leads to neglecting major peaks that may arise during the adopted time interval [15]. The effect of the sampling interval has been widely investigated [11; 16], showing that the finest temporal scale (1 s) is essential for water peak detection in a single household or a few households. A larger sampling resolution can be adopted for numerous households and for towns, since the entity of these fluctuations decreases as the spatial scale increases [1]. Buchberger and Wu [17] proposed a Poisson Rectangular Pulse (PRP) model to estimate the probability distribution of water demand for the final branches of a WDN. Four single-family residences monitored for one year [18] showed how residential water demand can be represented with rectangular pulses thanks to signal smoothing and pulse separation. Pulses, subdivided into deterministic (washing machines, dishwashers, water closets, etc.) and random servers (showers, cleaning, cooking, etc.), were analyzed in terms of intensity, duration and frequency; however, the variance of the daily pulse counts appeared to be too high for a Poisson process. With the aim of going beyond this limit, Creaco et al. [14] presented a Poisson model for water demand generation in which, unlike the previously mentioned work, the mutual dependence of pulse duration and intensity is modelled.
Nevertheless, water demand for several users may still arise from the overlapping of several rectangular pulses, and more or less complicated pulse-based approaches could be feasible when the purpose is the estimation of the peak water demand for large residential areas. On the other hand, the scientific literature also shows that the marginal distribution of peak factor data may be satisfactorily represented by a Log-Normal, a Gumbel or a Log-Logistic distribution, all laws that are usually adopted to describe extreme events and which fit well the recorded time series of maximum water demands. Starting from these considerations, and given the availability of randomly (not continuously) sampled flow data at aggregation intervals of 3, 5, and 10 min for about 150 municipalities in Puglia, Balacco et al. [15] defined a regional relationship between a fixed peak coefficient quantile and population. Such a study was inspired by Zhang et al. [22], who derived the Gumbel asymptotic distribution of the extreme values of a Poisson Rectangular Pulse (PRP) representation of residential water demand. The approach proposed in Reference [22] offers interesting perspectives with regard to the physical parametrization of the peak factor distribution. On the other hand, it leaves open the problem of verifying the hypotheses that lie beneath the adopted stochastic structure of the process. In this framework, the continuously recorded time series of three towns in Puglia (Southern Italy), Roccaforzata, Palagianello and Palagiano, with population ranging between 1800 and 16,000, were exploited to verify the fit of a Gumbel distribution to recorded data and to check the validity of the regional relationship derived from the pulse-based representation of the water use process.

## 2 Case Study

Acquedotto Pugliese (Puglia Aqueduct, AQP in the following lines) supplies drinking water and manages the whole WDN of Puglia in Southern Italy. Today AQP is the biggest water supply network in Europe. AQP derives water from the Sele spring (superficial water and groundwater), located on the western hillslope of the Apennine watershed, and from other sources in contiguous regions. This study exploits the dataset recorded in three small towns, Palagiano, Palagianello and Roccaforzata, located in Puglia (Southern Italy), including continuous flow data of drinking water demand for two years: 2015 and 2016. The flow data are extracted from the remote-control system of AQP. Records have been collected every 10 min by flow meters positioned on the feeding pipes of the networks. The field campaigns were performed after intense works for leak reduction; thus we assume the presence of a physiological level of leaks. A preliminary analysis of the data [23] highlighted a daily periodicity, as well as a weekly periodicity, in water demand. As is well known, daily variability and peak water demand are strongly influenced by the habits and activities of the inhabitants; however, an analysis of the daily demand pattern (Figure 1) shows a certain synchronicity of the main daily peak in all three towns. Observing the daily pattern for Palagiano, a singular peak can be observed in the morning (05:00-06:00), probably due to the dominant working activity represented by agriculture.
Moreover, a peak demand was always detected in the morning during both the week working days and the weekends (Figure 2), even if for the latter the peak is delayed by about one hour. The highest peak occurs on Sundays, while the water demand becomes more uniform during the remaining hours of the day; the maximum water volume is instead observed on Saturdays.

## 3 Hourly Peaks Frequency Analysis

Given the random nature of the factors that influence the peak water demand, it seems suitable to consider the peak coefficients as random variables and then to characterize their behavior through a probabilistic approach. Recent scientific studies, aimed at improving WDN design procedures, use this type of approach to provide the peak coefficient evaluation for an assigned frequency of occurrence [19]. In particular, they show cases where either the log-normal or the Gumbel law is able to represent the trend of the peak coefficient for a small town of about 1200 inhabitants. Within such a framework, for each of the time series considered here, we estimated the hourly peak coefficient as defined by the following expression: \\[Cph\\ =\\frac{Q_{max}(h)}{Q_{m}} \\tag{1}\\] Figure 1: Typical daily patterns in hourly water demand [23]. Figure 2: Typical daily patterns in hourly water demand for the seven days of the week [23]. where \\(Q_{max}(h)\\) is the daily maximum of the hourly flow rate and \\(Q_{m}\\) is the average daily flow (reported in Table 1). We compared the observed values of each annual series of data available with those deriving from the application of the Gumbel law. Figure 3 shows a good fit between the empirical quantiles of the peak coefficient and the Gumbel distribution for every annual observed time series. The Gumbel parameters were estimated by means of the classical method of moments, assuming the sample values of the mean \\(\\mu\\) and the standard deviation \\(\\sigma\\) reported in Table 1.

## 4 Instantaneous Peaks Frequency Analysis

In order to investigate the instantaneous variability of water demand, the instantaneous peak flow factor is defined as: \\[Cpi=\\frac{Q_{max}(i)}{Q_{m}} \\tag{2}\\] where \\(Q_{max}(i)\\) is the instantaneous peak flow and \\(Q_{m}\\) is the average daily flow (reported in Table 1). Considering, for each of the towns investigated, the two years of observed peak factor data, good performances are obtained by fitting the Gumbel probability distribution (see Figure 4) with parameters obtained through the method of moments. The sample mean (\\(\\mu\\)) and standard deviation (\\(\\sigma\\)) are reported in Table 2. \\begin{table} \\begin{tabular}{c c c c c} \\hline \\hline \\multirow{2}{*}{**Town**} & \\multirow{2}{*}{**Inhabitants**} & \\multirow{2}{*}{**Q\\({}_{\\text{m}}\\) (L/s)**} & \\multicolumn{2}{c}{**2015–2016**} \\\\ \\cline{3-5} & & & \\(\\mu\\) & \\(\\sigma\\) \\\\ \\hline Palagiano & 16,067 & 30.54 & 1.50 & 0.11 \\\\ Palagianello & 7857 & 14.14 & 1.58 & 0.15 \\\\ Roccaforzata & 1827 & 6.81 & 1.32 & 0.14 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: Sample mean and standard deviation of the hourly peak coefficients. Figure 3: Hourly peak coefficients and fitted Gumbel distribution.

## 5 Regional Distribution of the Instantaneous Peak Factor

Due to the randomness of water demand, recent literature (see, for example, Reference [24]) shows that evaluating the peak factor with a purely deterministic approach is today considered less acceptable.
This consideration can be confirmed by observing the sample data and their dispersion reported in Figure 5. Apart from the empirical evidence, in this context Zhang et al. [22] developed a theoretical reliability-based methodology for the estimation of the instantaneous peak factor (\\(Cp_{i}\\)) for residential water use, using a probabilistic approach based on the Poisson Rectangular Pulse (PRP) representation and leading to an extreme value distribution of the Gumbel type. Under this hypothesis, the water consumption is characterized by a rectangular water pulse of random duration, with mean equal to \\(\\tau\\), mean intensity equal to \\(\\alpha\\) and mean arrival rate of water pulses at a single home equal to \\(\\lambda\\); hence \\(\\rho=\\lambda\\times\\tau\\) is the daily average utilization factor for a single-family home. Following this approach, the instantaneous peak flow factor is evaluated as follows: \\[Cp_{i}(N,F)=\\psi^{*}\\left(1+\\zeta_{F}\\sqrt{\\frac{1+{\\theta_{q}}^{2}}{\\psi^{*}\\rho\\ N}}\\right)\\text{ with }\\zeta_{F}=-\\frac{\\sqrt{6}}{\\pi}[0.5772+\\ln\\ln(1/F)] \\tag{3}\\] where \\(N\\) is the number of homes in the neighborhood; \\(\\zeta_{F}\\) is the \\(F\\)-th percentile (frequency factor) of the Gumbel distribution given by Chow et al. [25]; \\(\\rho\\) is the daily average utilization factor for a single-family home; \\(\\theta_{q}\\) is the coefficient of variation of the PRP indoor water demand pulse; and \\(\\psi^{*}\\) is the dimensionless peak hourly demand factor. It is worth noting that, due to the structure of this equation, the instantaneous peak demand factor \\(Cp_{i}\\) tends to \\(\\psi^{*}\\), the dimensionless hourly peak factor, for increasing \\(N\\). In other terms, the instantaneous peak factor converges to the hourly peak coefficient for growing population. Considering the 99.9th percentile and assuming \\(\\psi^{*}\\) equal to 1.8, a suitable value for Italian towns of large population [23], and \\(\\theta_{q}\\) equal to 0.55, as in Zhang et al. [22], a regional behavior of the instantaneous peak flow factor was found by Balacco et al. [15] using data extracted from 150 towns in Puglia: \\[Cp_{i}(P)=1.8+\\frac{1.8}{\\sqrt{P}} \\tag{4}\\] Figure 4: Comparison between the instantaneous peak observed data and the Gumbel distribution. where \\(P\\) is the population in thousands. In Figure 5 the regional relationship (green curve) is shown along with the observed values extracted from the measurement campaign conducted on 150 towns in Puglia (grey rhombs) and with the peak flow factors (red triangles) evaluated as the 99.9th percentile of the at-site Gumbel distribution (described in Section 4) fitted to the time series recorded in Roccaforzata, Palagianello and Palagiano. The behavior represented in Figure 5 confirms the adequacy of the theoretical regional curve and also allows for other considerations, discussed in the following section.

## 6 Fitting the Probability Distribution of the Peak Factor to the Observed Local Values

By exploiting Equation (3) we derived the peak factor probability distribution \\(F(Cp_{i})\\) for a town with \\(N\\) homes; such an expression can easily be turned into a function of the town population (\\(P\\)) if the average number of inhabitants per home is known.
\\[F(Cp_{i})=\\exp\\left[-\\exp\\left(-\\left(\\frac{\\pi(Cp_{i}/\\psi^{*}-1)}{\\sqrt{6\\left(1+{\\theta_{q}}^{2}\\right)/\\left(\\psi^{*}\\rho N\\right)}}+0.5772\\right)\\right)\\right] \\tag{5}\\] We also derived the theoretical relationships between the parameters of the theoretical \\(Cp_{i}\\) distribution and the mean and standard deviation of the population of the peak factors: \\[\\psi^{*}=\\mu \\tag{6}\\] \\[\\theta_{q}=\\sqrt{\\frac{\\sigma^{2}\\rho N}{\\mu}-1} \\tag{7}\\] Then, in order to apply such a model to our data, we assumed \\(\\rho\\) equal to 0.045, as in Reference [22], and we evaluated \\(N\\) from the population by considering that in Italy the average number of people per home is 2.6. Finally, we evaluated the remaining two parameters \\(\\theta_{q}\\) and \\(\\psi^{*}\\) as a function of the sample mean and standard deviation of the instantaneous peak factors reported in Table 2, by means of Equations (6) and (7). The resulting values of \\(\\theta_{q}\\) and \\(\\psi^{*}\\) are reported in Table 3. Figure 5: Comparison between \\(Cp_{i}\\) sample values (grey rhombs), the regional curve (in green) and the 99.9th percentile peak factors (red triangles) obtained from the at-site Gumbel distributions. The values shown in Table 3 seem to suggest an interesting dependence of the coefficient of variation of the PRP process on population, which in any case needs to be further assessed in future research involving time series recorded in other towns. Moreover, the estimated \\(\\psi^{*}\\) value is always lower than the value of 1.8 adopted in Equation (4), which was assumed equal to the asymptotic hourly peak factor for growing population. In both cases such variability could be due to the sample inter-annual variability of the average peak factor. Nevertheless, the 99.9th percentile estimates provided by the regional relationship in Equation (4) still look robust and reliable values to be suggested for design purposes.

## 7 Conclusions

In the last few decades, the improvement of living conditions and the large infrastructure investments of industrialized countries have led to a significant increase in water consumption. Due to population growth, increasing energy demand, improving living standards, and changes in the global food system and land use, freshwater demand is significantly increasing in many areas of the world. Water supply systems are also expected to be affected by fluctuations of climate and by changes in the variability of temperature and precipitation. In this context, global or local variations of the spatial and temporal dynamics of the water cycle may greatly increase the gaps between water supply and water demand. In this study, instantaneous flow data of water consumption for three towns located in the Puglia Region (Southern Italy), collected at time steps of 10 min for two years (2015 and 2016), were exploited. As expected, an analysis of the two years of observed data revealed the existence of patterns in which it is possible to identify daily periodicities in hourly water demands, as well as weekly periodicities in daily water demands. Moreover, the frequency analysis conducted on the instantaneous peak factors confirmed that the Gumbel distribution is suitable to represent the stochastic behavior of the peak water demand; in particular, exploiting the approach proposed by Zhang et al.
[22], we derived a physically based regional relationship able to provide a robust evaluation of the design value of the instantaneous peak factor depending on population and suitable for refinements based on deeper at-site investigations of the variability of water demand.

The authors' contributions are equal. This research received no external funding. The Authors thank AQP for providing the water demand data, and in particular Antonio Carbonara for his kind and absolute willingness to provide information and details on the data provided. In addition, the Authors thank the anonymous Reviewers for their valuable comments. The authors declare no conflict of interest.

## References

* [1] Creaco, E.; Signori, P.; Papiri, S.; Ciaponi, C. Peak demand assessment and hydraulic analysis in WDN design. _J. Water Resour. Plan. Manag._ **2018**, _144_, 6. [CrossRef]
* [2] Bentley, S.; Walski, T.; Chase, D.; Savic, D.; Grayman, W.; Beckwith, S.; Koelle, E. _Advanced Water Distribution Modelling and Management_; Haestad: Waterbury, CT, USA, 2007.
* [3] Fair, G.M.; Geyer, J.C. _Elements of Water Supply and Waste-Water Disposal_; John Wiley & Sons Inc.: London, UK, 1958.

\\begin{table} \\begin{tabular}{c c c c c} \\hline **Town** & **Inhabitants** & **N** & \\(\\theta_{q}\\) & \\(\\psi^{\\star}\\) \\\\ \\hline Palagiano & 16,067 & 6180 & 1.10 & 1.56 \\\\ Palagianello & 7857 & 3022 & 0.94 & 1.74 \\\\ Roccaforzata & 1827 & 703 & 0.43 & 1.54 \\\\ \\hline \\end{tabular} \\end{table} Table 3: Parameters of the \\(Cp_{i}\\) distribution according to the data series recorded at Roccaforzata, Palagianello and Palagiano.

* [4] Linaweaver, F.P.; Geyer, J.C.; Wolff, J.B. _A Study of Residential Water Use_; John Hopkins University: Baltimore, MD, USA, 1967.
* [5] Barnes, D.; Bliss, P.J.; Gould, B.W.; Vallentine, H.R. _Water and Wastewater Engineering Systems_; Pitman Books Ltd.: London, UK, 1981.
* [6] Harmon, W.G. Forecasting Sewage at Toledo under Dry Weather Conditions. _Eng. News Rec._ **1918**, _80_, 1233.
* [7] Babbitt, H.E. _Sewerage and Sewage Treatment_, 3rd ed.; Wiley: New York, NY, USA, 1928; pp. 20-33.
* [8] Metcalf, L.; Eddy, H.P. _American Sewerage Practice, Volume I: Design of Sewers_, 3rd ed.; McGraw-Hill: New York, NY, USA, 1935.
* [9] Johnson, C.F. Relation between average and extreme sewage flow rates. _Eng. News Rec._ **1942**, _129_, 500-501.
* [10] Rich, L.G. _Low Maintenance Mechanically Simple Wastewater Treatment Systems_; McGraw-Hill: New York, NY, USA, 1980.
* [11] Tricarico, C.; De Marinis, G.; Gargano, R.; Leopardi, A. Peak residential water demand. _Water Manag._ **2007**, _60_, 115-121. [CrossRef]
* [12] Blokker, E.J.M.; Vreeburg, J.H.G.; van Dijk, J.C. Simulating Residential Water Demand with a Stochastic End-Use Model. _J. Water Resour. Plan. Manag. ASCE_ **2010**, _136_, 19-26. [CrossRef]
* [13] Omaghomi, T.; Buchberger, S. Estimating water demands in buildings. _Procedia Eng._ **2014**, _89_, 1013-1022. [CrossRef]
* [14] Creaco, E.; Farmani, R.; Kapelan, Z.; Vamvakeridou-Lyroudia, L.; Savic, D. Considering the mutual dependence of pulse duration and intensity in models for generating residential water demand. _J. Water Resour. Plan. Manag._ **2015**, _141_. [CrossRef]
* [15] Balacco, G.; Carbonara, A.; Gioia, A.; Iacobellis, V.; Piccinni, A.F. Evaluation of Peak Water Demand Factors in Puglia (Southern Italy). _Water_ **2017**, _9_, 96. [CrossRef]
* [16] Gato-Trinidad, S.; Gan, K. Characterizing maximum residential water demand. _Urban Water_ **2012**, _122_, 15-24.
* [17] Buchberger, S.G.; Wu, L. Model for instantaneous residential water demands. _J. Hydraul. Eng._ **1995**, _121_, 232-246. [CrossRef]
* [18] Buchberger, S.G.; Wells, G.J. Intensity, duration and frequency of residential water demands. _J. Water Resour. Plan. Manag._ **1996**, _122_, 11-19. [CrossRef]
* [19] Gargano, R.; Tricarico, C.; Granata, F.; Santopietro, S.; de Marinis, G. Probabilistic Models for the Peak Residential Water Demand. _Water_ **2017**, _9_, 417. [CrossRef]
* [20] Buchberger, S.G.; Nadimpalli, G. Leak estimation in water distribution systems by statistical analysis of flow readings. _J. Water Resour. Plan. Manag._ **2004**, _130_, 321-329. [CrossRef]
* [21] Pallavicini, I.; Magini, R. Experimental analysis of residential water demand data: Probabilistic estimation of peak coefficients at small time scales. In _Water Management Challenges in Global Change_; Ulanicki, B., Vairavamoorthy, K., Butler, D., Bounds, P.L.M., Memon, F.A., Eds.; Taylor & Francis Group: London, UK, 2007; ISBN 978-0-415-45415-5.
* [22] Zhang, X.; Buchberger, S.; Van Zyl, J. A theoretical explanation for peaking factors. In Proceedings of the ASCE EWRI Conferences, Anchorage, AK, USA, 15-19 May 2005.
* [23] Balacco, G.; Carbonara, A.; Gioia, A.; Iacobellis, V.; Piccinni, A.F. Investigation of Peak Water Consumption Variability at Local Scale in Puglia (Southern Italy). _Proceedings_ **2018**, _11_, 674. [CrossRef]
* [24] Gargano, R.; Di Palma, F.; De Marinis, G.; Granata, F.; Greco, R. A stochastic approach for the water demand of residential end users. _Urban Water J._ **2016**, _13_, 569-582. [CrossRef]
* [25] Chow, V.T.; Maidment, D.R.; Mays, L.W. _Applied Hydrology_; McGraw-Hill Inc.: New York, NY, USA, 1988; p. 572.
In this study, an analysis of the water supply variability of three towns in Puglia (Southern Italy), Roccaforzata, Palagianello and Palagiano, was carried out, based on time series continuously recorded over two years. The towns' population ranges between 1800 and 16,000 inhabitants, and the flow data, collected with time steps of 10 min, refer to drinking water in an urban environment. The frequency analysis was conducted on the hourly and instantaneous peak factors and confirmed that the Gumbel distribution is able to represent the stochastic behavior of the peak water demand. A physically based formulation of the distribution parameters was exploited in order to investigate the regional distribution of the peak factor for towns with different populations. _Water_ **2019**, _11_, 24; doi:10.3390/w11010024
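As a numerical companion to the analysis summarised above, the sketch below implements the method-of-moments Gumbel fit with its quantile function, and the regional curve of Equation (4). The values used in the usage lines are taken from the paper's Table 1 and town populations; the code itself is our illustration, not the authors' software.

```python
import numpy as np

def gumbel_fit_moments(samples):
    """Method-of-moments Gumbel fit: returns location mu0 and scale beta."""
    mean, std = np.mean(samples), np.std(samples, ddof=1)
    beta = np.sqrt(6.0) / np.pi * std
    mu0 = mean - 0.5772 * beta      # 0.5772 = Euler-Mascheroni constant
    return mu0, beta

def gumbel_quantile(mu0, beta, F):
    """Value with non-exceedance probability F under Gumbel(mu0, beta)."""
    return mu0 - beta * np.log(-np.log(F))

def regional_cpi(population_thousands):
    """Regional 99.9th-percentile instantaneous peak factor, Equation (4)."""
    return 1.8 + 1.8 / np.sqrt(population_thousands)

# synthetic hourly example matching Table 1 for Palagianello
# (mean 1.58, std 0.15 imply roughly Gumbel location 1.51, scale 0.12)
mu0, beta = gumbel_fit_moments(np.random.gumbel(1.51, 0.12, size=730))
print(gumbel_quantile(mu0, beta, 0.999))     # at-site 99.9th percentile
print(regional_cpi(16.067))                  # Palagiano: about 2.25
```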
# Leveraging Anchor-based LiDAR 3D Object Detection via Point Assisted Sample Selection

Shitao Chen\\({}^{*}\\), Haolin Zhang\\({}^{*}\\) and Nanning Zheng\\({}^{\\dagger}\\). * Shitao Chen (E-mail: [email protected]) and Haolin Zhang (E-mail: [email protected]) contributed equally to this work. \\(\\dagger\\) Corresponding author: Nanning Zheng (E-mail: [email protected]). Authors are with The National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi, China.

## I Introduction

**Background**: In automated systems, such as autonomous driving and intelligent transportation, it is crucial to perceive and understand the 3D surroundings, wherein 3D/bird's-eye view (BEV) object detection serves as a pivotal computer vision task. Owing to the accurate depth measurement capabilities of LiDAR sensors, 3D object detection using LiDAR-captured points has received widespread attention in recent years. According to their learning objectives [1], LiDAR-based 3D object detection methods can be categorized into anchor-based and anchor-free methods. Anchor-based methods [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] rely on predefined anchor boxes with fixed shapes, aiming to make predictions based on these anchor boxes and the input point clouds. In contrast, anchor-free methods [20, 21, 22, 23, 24, 25, 26, 27, 28, 29] directly predict 3D objects without using predefined anchor boxes. Intuitively, anchor-based models were presumed to be more easily learned than anchor-free models due to the prior knowledge provided by the anchor boxes, including the uniform shapes within the same category and the preset anchor positions covering the entire detection range. However, with the development of anchor-free methods [21, 30, 31], their performance has overtaken that of anchor-based methods, as is evident in public 3D object detection benchmarks [32, 33].

**Motivation**: Due to their deployment-friendly nature, some anchor-based methods (e.g. PointPillars [5]), accelerated by CUDA, have gained popularity in industrial applications. In recent years, researchers have proposed numerous methods that can enhance the accuracy of anchor-based detectors. Some methods employ multi-modality fusion [34, 35, 36, 37, 38, 39], feature representation enhancement [40, 41] or a cascaded second-stage refinement network [42, 43, 40, 44]. These methods greatly elevate the performance of anchor-based LiDAR 3D object detectors. However, they often introduce additional data throughput or network parameters, thereby increasing the computational complexity of the models and the costs of model deployment. Some other methods utilize general optimization techniques such as elaborate data augmentation [45], knowledge distillation [46, 47] or novel loss functions [48, 49, 50]. These methods do not fundamentally leverage the potential of the anchor-based detectors themselves.

Fig. 1: Illustration of anchor sample selection in different modalities (blue boxes indicate objects; best viewed in color). Anchor sample selection is required to assign samples into three subsets: positive, negative, and ignored.
(a) Sample selection in the image domain: the ambiguity of sample selection according to different measurement thresholds (\\(\\mathcal{T}_{pos}\\) and \\(\\mathcal{T}_{neg}\\)) is small, as the sample selection metric \\(\\mathcal{S}\\) = IoU\\({}_{box}\\) primarily reflects the range of object pixel features contained by the anchor. (b) Sample selection in the pointcloud domain: due to the sparse nature of the LiDAR pointcloud, the same IoU\\({}_{box}\\)-based sample selection scheme encounters greater ambiguity. For example, although anchor No.1 was selected as a positive sample with a high IoU\\({}_{box}\\), it lacks sufficient object point features compared to the ignored anchor No.2. Similarly, compared to the ignored anchor No.3, selecting anchor No.4 as a negative sample solely because of its lower IoU\\({}_{box}\\), despite it containing more object point features, is ambiguous. (c) To mitigate the ambiguity of sample selection in the pointcloud domain, the goal is to design a novel form of the sample selection metric (\\(\\mathcal{S}^{\\prime}\\)) that better suits the learning of object point features.

To essentially refresh anchor-based LiDAR 3D object detectors and narrow the performance gap between anchor-based and anchor-free detectors, it is necessary to analyze the limitations of existing anchor-based detectors.

**Analysis and Inference**: What restricts the basic performance of anchor-based LiDAR 3D object detection methods? To answer this question, we delve deeper into reviewing and analyzing the learning objectives of existing LiDAR-based detectors, by which it is fundamentally determined whether a detector is anchor-based or anchor-free. The major difference in learning objectives between anchor-based and anchor-free methods lies in the selection of positive and negative training samples. As shown in Figure 1 (a) and (b), the widely used anchor-based LiDAR 3D object detection methods [41, 44, 4, 5, 40] all adopt a training sample assignment strategy based on the Intersection over Union of boxes (IoU\\({}_{box}\\)), similar to 2D anchor-based object detection [51, 52] in the image domain. IoU\\({}_{box}\\) is an efficient means to define the degree of overlap between two bounding boxes, thereby depicting the spatial similarity between an anchor sample and an object. Given the dense pixel distribution in the image space, IoU\\({}_{box}\\) also predominantly signifies the number of semantic features encompassed by the anchor, thereby establishing a clear learning objective and facilitating sufficient feature learning. However, in the pointcloud domain, LiDAR-captured points typically exhibit sparsity due to factors such as the field of view angle, the number of scan lines, and occlusion. The sparsity of points results in incomplete object representations. Consequently, IoU\\({}_{box}\\) fails to measure the completeness of the semantic features contained within an anchor sample, which introduces ambiguity into the learning objective when relying only on IoU\\({}_{box}\\) to assign training samples in anchor-based LiDAR 3D object detectors. Without the involvement of anchors, the selection of training samples in anchor-free LiDAR 3D object detection methods does not exhibit significant ambiguity. Depending on the specific model framework, there are four typical categories of sample selection strategies in anchor-free methods: grid-based [20, 21, 26, 30, 31], point-based [24, 25], range-based [53, 54], and set-to-set assignment [18, 28, 55]. Although these methods differ in processing details, they offer little ambiguity in their learning objectives.
This is achieved by constraining positive samples to be located within or near the object center, converting sparse pointclouds into dense pseudo-images, or establishing one-to-one sample matching relationships. This indicates that a clear selection metric is important, and the core is to ensure that the selection metric can unequivocally assess sample quality. From the above analysis, two inferences are drawn: (1) The relatively high density of image pixels guarantees that IoU\({}_{box}\) can effectively define the quality of an anchor sample with little ambiguity, as it reflects both the spatial and semantic similarity between the anchor box and the object box. However, this is not always the case with sparse LiDAR pointclouds. (2) Anchor-free LiDAR 3D object detection methods perform sample selection based on the objects themselves or on pseudo-dense representations, leading to clearer sample selection measurement and less ambiguous feature learning than anchor-based methods.

**Problem Definition**: The existing IoU\({}_{box}\)-based sample assignment strategy in anchor-based LiDAR 3D object detection introduces ambiguity into the learning objective, and hence limits the basic performance of anchor-based methods from the optimization perspective. The primary challenge in resolving this ambiguity hinges on how to define a clearer selection metric for dividing positive and negative samples, which is the goal of this work, as shown in figure 1(c).

**Proposed Solutions**: To address the ambiguity of sample selection and leverage the performance of anchor-based LiDAR 3D object detectors, a series of solutions is proposed in this work. Firstly, a specific sample quality measurement, IoU\({}_{point}\) (Intersection over Union of points in two cuboids), is proposed by further analyzing the limitations of IoU\({}_{box}\)-based sample selection in anchor-based LiDAR 3D object detection. Statistical analysis and case studies are conducted to illustrate the defects of a single IoU\({}_{box}\) measurement and the significance of incorporating the IoU\({}_{point}\) measurement. Secondly, a novel training sample selection approach, termed _**P**oint **A**ssisted **S**ample **S**election (PASS)_, is proposed. PASS integrates IoU\({}_{box}\) and IoU\({}_{point}\) to provide a clearer assessment of anchor samples, facilitating unambiguous feature learning. Finally, extensive experiments are conducted on two widely-used datasets (the KITTI dataset [56] and the Waymo Open Dataset [32]), demonstrating that the application of PASS promotes the performance of anchor-based 3D object detectors. These results underscore the effectiveness of the proposed solutions.

The contributions of this work are summarized in three parts: (1) The learning ambiguity of IoU\({}_{box}\)-based sample selection is pointed out for the first time in the research field of anchor-based LiDAR 3D object detection. To better capture the semantic features contained within anchor samples in the LiDAR pointcloud domain, a novel sample quality measurement, IoU\({}_{point}\), is proposed. Statistical analysis and case studies are conducted for illustration and analysis. (2) A tailored training sample assignment method, named PASS, is proposed. This method offers a clearer selection metric between positive and negative anchor samples during model optimization, effectively reducing learning ambiguity. (3) Comparative experiments on several large-scale datasets validate the effectiveness of our proposed method.
From a practical perspective, PASS can be applied in a plug-and-play manner to train any anchor-based LiDAR 3D object detector, without introducing additional model parameters or inference time cost.

The remainder of the paper is organized as follows: Section II provides an overview of related research. Section III reviews and analyzes the existing IoU\({}_{box}\)-based sample selection, followed by an illustration of the proposed IoU\({}_{point}\) measurement and its significance. Section IV elaborates on the proposed training sample assignment method, PASS. In Section V, the effectiveness of the proposed method is evaluated through qualitative and quantitative analysis of 3D object detection experiments. Section VI concludes the paper.

## II Related Work

LiDAR sensors provide accurate depth estimation and fine-grained 3D object surfaces, surpassing the capabilities of 2D cameras. LiDAR 3D object detection methods take a LiDAR pointcloud as input and predict the 3D bounding boxes and classes of surrounding objects. These methods are categorized into anchor-based and anchor-free approaches according to their learning objectives, specifically the paradigm of training sample determination.

Anchor-based LiDAR 3D object detection methods require the pre-definition of prior 3D anchor boxes within the 3D space. Some methods adopt a one-stage framework, such as VoxelNet [2] and SECOND [4], which convert the LiDAR pointcloud into 3D voxels and apply 3D convolutions for feature extraction and object detection. PointPillars [5] simplifies the 3D voxels into 2D pillars for efficiency. PillarNet [6] improves these one-stage anchor-based detectors with a novel design of a pillar-based 3D detection network. Some other methods introduce a two-stage network to build a coarse-to-fine framework [27, 41, 42, 44, 57, 58]. The first stage predicts 3D proposals based on prior anchors, and the subsequent stage refines the proposals to output the final detection results. Despite variations in network or algorithm design among anchor-based methods, their learning objectives remain the same: classifying the object based on features around prior anchors and regressing the bounding boxes from the positive anchor samples. These methods commonly rely on Intersection over Union of boxes (IoU\({}_{box}\)) to assign anchors to the positive, ignored, or negative set. Such an IoU\({}_{box}\)-based measurement introduces little ambiguity in image 2D object detection [59], since IoU\({}_{box}\) roughly measures the spatial and semantic similarity at the same time in the context of dense pixels. Some studies [60, 61, 62] have even extended the IoU\({}_{box}\)-based measurement to further improve the effectiveness of model training. However, such a selection method encounters considerable ambiguity in the LiDAR pointcloud domain. Although IoU\({}_{box}\) can approximately measure the spatial similarity between the two boxes, the completeness of the object features contained by the anchor cannot be accurately represented due to the uneven distribution and sparsity of LiDAR points. As a result, some anchors with richer semantic features may be discarded under the IoU\({}_{box}\)-only measurement, while low-quality anchors may be learned as positive samples, affecting the accuracy and training convergence of models.

Anchor-free LiDAR 3D object detection methods exhibit less ambiguity in learning objectives compared to anchor-based methods.
PIXOR [20] uses grid cells in ground truths as the measurement of positive training samples. Such an inside-based assignment strategy is also adopted in [26]. CenterPoint [21] builds a Gaussian kernel at each object center to indicate a positive label. This center-based assignment strategy is applied in the latest anchor-free detectors [30, 31]. The point-based assignment strategy [24, 25] is also popularly adopted in anchor-free detectors: the points are first segmented into foreground and background, and the foreground points inside or near the ground truths are selected as positive samples for training. These assignment methods all share a common advantage: through object points or object centers, a clear positive-negative division boundary is defined, which reduces the ambiguity of network learning. In addition, transformer-style detectors [18, 28, 55] introduce a set-to-set assignment approach, which maps each positive sample to an object by Hungarian matching, keeping a one-to-one unambiguous manner. The range-based assignment strategy [53, 54] selects the positive samples inside the objects, which does not introduce much ambiguity because it works on the relatively dense range images generated from the LiDAR pointcloud.

Based on the above review and analysis, the learning objective of existing anchor-based LiDAR 3D object detection methods is not fully reasonable because of the ambiguity of IoU\({}_{box}\)-based anchor sample selection in LiDAR pointclouds. Resolving this ambiguity may help improve the effectiveness of feature learning and the detection performance of anchor-based detectors.

## III Analysis of Sample Selection in Anchor-based LiDAR 3D Object Detection

This section begins by providing the preliminary of the existing IoU\({}_{box}\)-based sample selection scheme. Subsequently, a hypothesis about the ambiguity of IoU\({}_{box}\)-based sample selection in the LiDAR pointcloud domain is presented and explained. Then, a new IoU\({}_{point}\)-based selection measurement is proposed and illustrated. Finally, statistical analysis and case studies are carried out to further demonstrate the ambiguity of IoU\({}_{box}\)-based sample selection and the importance of introducing an IoU\({}_{point}\)-based assessment.

### _Preliminary of Existing IoU\({}_{box}\)-based Sample Selection_

3D object detection aims to predict the cuboids (3D bounding boxes) that enclose the ground truth objects. Each cuboid can be represented as \([x^{g},y^{g},z^{g},l^{g},w^{g},h^{g},\theta^{g}]\), where \(\{x^{g},y^{g},z^{g}\}\) indicate the 3D center coordinates, \(\{l^{g},w^{g},h^{g}\}\) denote the length, width and height of the cuboid, and \(\theta^{g}\) is the orientation of the object. Anchor-based LiDAR 3D object detection relies on pre-defined cuboids with fixed shapes, represented as \([x^{a},y^{a},z^{a},l^{a},w^{a},h^{a},\theta^{a}]\). They are placed in the 3D pointcloud space as the prior anchor boxes. These anchors are assigned to the positive, negative, or ignored set via the training sample selection procedure. The learning objective entails recognizing the class of objects and predicting their center, shape, and orientation from the positive anchors. The negative anchors are classified as background. The ignored anchors remain untrained, acting as fuzzy samples that delineate the positive and negative boundaries. Existing state-of-the-art anchor-based LiDAR 3D object detection methods (e.g.
[4, 41, 42, 58, 44, 5]) rely on IoU\({}_{box}\)-based training sample selection. For a pair of ground truth box\({}_{gt}\) and anchor box\({}_{anchor}\), the selection metric \(\mathcal{S}^{(g,a)}\) is directly determined by IoU\({}_{box}\) as in Eq. 1.

\[\mathcal{S}^{(g,a)}=\text{IoU}_{box}=\frac{\mathcal{I}_{volume}\left(\text{box}_{gt},\text{box}_{anchor}\right)}{\mathcal{U}_{volume}\left(\text{box}_{gt},\text{box}_{anchor}\right)}\in[0,1] \tag{1}\]

where \(\mathcal{I}_{volume}()\) represents the volume of the intersection of the cuboids, and \(\mathcal{U}_{volume}()\) represents the volume of their union. For each object category, a positive threshold \(\mathcal{T}_{pos}\) and a negative threshold \(\mathcal{T}_{neg}\) are pre-defined. Each anchor is assigned according to \(\mathcal{S}^{(g,a)}\) by Eq. 2.

\[\begin{cases}\text{box}_{anchor}\rightarrow\text{positive}&\text{if }\mathcal{S}^{(g,a)}>\mathcal{T}_{pos}\\ \text{box}_{anchor}\rightarrow\text{negative}&\text{else if }\mathcal{S}^{(g,a)}<\mathcal{T}_{neg}\\ \text{box}_{anchor}\rightarrow\text{ignored}&\text{otherwise}\end{cases} \tag{2}\]

where \(\{\mathcal{T}_{pos},\mathcal{T}_{neg}\}\) are manually set. In existing anchor-based methods' settings, \(\{\mathcal{T}_{pos},\mathcal{T}_{neg}\}\) are generally set to {0.6, 0.45} for the car class, and {0.5, 0.35} for the pedestrian and cyclist classes.

### _Hypothesis and Proposed IoU\({}_{point}\)-based Measurement_

In image object detection, pixels serve as the primitive semantic units for feature learning. Since image pixels are relatively dense, IoU\({}_{box}\) can be broadly interpreted as a measure of both spatial and semantic similarity between the anchor and the ground truth object, leading to minimal ambiguity in training sample selection and subsequent feature learning, as depicted in Fig. 1(a). Comparatively, as shown in Fig. 1(b), LiDAR points, as the main source of input data and original semantic features for LiDAR 3D object detection, are often sparse and partially absent. In the pointcloud domain, IoU\({}_{box}\) can still reflect the spatial approximation of the anchor box to the ground truth box. However, it fails to depict the semantic approximation of the anchor to the ground truth due to the sparse distribution of scanned points associated with the object. Consequently, a hypothesis emerges regarding this inconsistency: IoU\({}_{box}\) potentially introduces ambiguity when it acts as the only measurement for training sample assignment in anchor-based LiDAR 3D object detection. An intuitive way to verify and resolve this ambiguity is to take the similarity of the pointclouds into account as an additional measurement alongside IoU\({}_{box}\). Referring to the form and characteristics of IoU\({}_{box}\), a new measurement, IoU\({}_{point}\), that depicts the overlap of two sets of points is proposed, as seen in Eq. 3.

\[\text{IoU}_{point}=\frac{\mathcal{I}_{\#points}\left(\text{box}_{gt},\text{box}_{anchor}\right)}{\mathcal{U}_{\#points}\left(\text{box}_{gt},\text{box}_{anchor}\right)}\in[0,1] \tag{3}\]

where \(\mathcal{I}_{\#points}()\) represents the number of points enclosed in the intersection of the cuboids, and \(\mathcal{U}_{\#points}()\) represents the number of points included in their union.
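To make Eq. 3 concrete, the following is a minimal Python sketch of the IoU\({}_{point}\) computation. For brevity it assumes axis-aligned cuboids; the actual anchors and ground truths carry a yaw angle \(\theta\), which would require a rotated point-in-box test. All function and variable names are illustrative and are not taken from the authors' code.

```python
import numpy as np

def points_in_box(points, box):
    """Boolean mask of points inside an axis-aligned cuboid.

    points: (N, 3) array of xyz coordinates.
    box: (cx, cy, cz, l, w, h) center and extents (yaw omitted for simplicity).
    """
    cx, cy, cz, l, w, h = box
    lower = np.array([cx - l / 2, cy - w / 2, cz - h / 2])
    upper = np.array([cx + l / 2, cy + w / 2, cz + h / 2])
    return np.all((points >= lower) & (points <= upper), axis=1)

def iou_point(points, box_gt, box_anchor):
    """Eq. 3: ratio of point counts in the intersection vs. union of two cuboids."""
    in_gt = points_in_box(points, box_gt)
    in_anchor = points_in_box(points, box_anchor)
    union = np.count_nonzero(in_gt | in_anchor)
    if union == 0:  # neither cuboid encloses any point
        return 0.0
    inter = np.count_nonzero(in_gt & in_anchor)  # points inside both cuboids
    return inter / union
```

Unlike the volume ratio of Eq. 1, this quantity is driven entirely by where the scanned points actually fall, so an anchor that overlaps an empty part of the ground truth box scores low even if its IoU\({}_{box}\) is high.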
### _Statistical Analysis and Case Studies_

To further analyze the effect of sample selection in anchor-based LiDAR 3D object detection, statistics on approximately one thousand training anchor samples for cars from the KITTI dataset [56] are drawn in figure 2. For better illustration and explanation, some typical cases are presented visually in figure 3. In figure 2(a), each scatter point represents an anchor, and the horizontal and vertical coordinates represent the IoU\({}_{box}\) and IoU\({}_{point}\) between the anchor and its corresponding ground truth, respectively. We plot the red dotted line to represent the threshold \(\mathcal{T}_{pos}\) for positive samples and the yellow dotted line to represent the threshold \(\mathcal{T}_{neg}\) for negative samples. The existing IoU\({}_{box}\)-based sample selection method separates these anchors directly by comparing \(\mathcal{S}\) = IoU\({}_{box}\) to \(\{\mathcal{T}_{pos},\mathcal{T}_{neg}\}\). However, observing the regions highlighted by numbered purple circles, the hypothesized ambiguity emerges. For instance, some anchors, even with high IoU\({}_{point}\) as indicated in the area of the No.1 circle, are assigned as negative samples because of their relatively low IoU\({}_{box}\). Conversely, anchors with low IoU\({}_{point}\) in the area of the No.3 circle are assigned to the ignored set only because of their slightly higher IoU\({}_{box}\). Such ambiguity is further elucidated through comparisons between case (c) and case (d) in figure 3. Moreover, some anchors with high IoU\({}_{box}\) but low IoU\({}_{point}\) are assigned to the positive set, as in the area of the No.4 circle, while anchors with similar IoU\({}_{box}\) but very high IoU\({}_{point}\) are assigned to the ignored set, as in the area of the No.2 circle. Such ambiguity is clarified by comparing case (a) and case (b) in figure 3. It can be seen that relying solely on IoU\({}_{box}\) fails to provide a comprehensive assessment of the quality of an anchor sample, especially regarding the completeness of the object features. Consequently, the IoU\({}_{box}\)-based sample selection method creates inherent learning ambiguity for anchor-based LiDAR 3D object detectors.

Fig. 2: Statistics of the training anchor sample distribution over the KITTI dataset. (a) The scatter diagram of the relation between IoU\({}_{box}\) and IoU\({}_{point}\). (b) The histogram of IoU\({}_{point}\) of the ignored anchor samples determined by the IoU\({}_{box}\)-based sample selection scheme. (c) The histogram of IoU\({}_{point}\) of the negative anchor samples determined by the IoU\({}_{box}\)-based sample selection scheme. (d) The histogram of IoU\({}_{point}\) of the positive anchor samples determined by the IoU\({}_{box}\)-based sample selection scheme.

Fig. 3: Visualized case studies of ambiguity associated with IoU\({}_{box}\)-based sample selection (best viewed in color). The green boxes and blue boxes represent the anchor samples and ground truths, respectively. Points in the intersection of the anchor sample and ground truth are indicated as red points. Points that belong to the ground truth but not to the intersection are indicated as blue points. Points that belong to the anchor but not to the intersection are represented as green points.
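As a hedged illustration of how the region statistics behind figure 2(a) could be tallied, the sketch below counts anchors whose IoU\({}_{box}\)-based assignment conflicts with their point-level evidence. The thresholds follow the car-class settings of Sec. III-A, while the 0.5 IoU\({}_{point}\) cut-off separating "point-rich" from "point-poor" anchors is an illustrative choice of ours, not a value from the paper.

```python
def tally_ambiguous_regions(samples, t_pos=0.6, t_neg=0.45, point_cut=0.5):
    """Count anchors per ambiguity region highlighted in figure 2(a).

    samples: iterable of (iou_box, iou_point) pairs, one per anchor.
    point_cut: illustrative cut-off between 'point-rich' and 'point-poor'.
    """
    regions = {"no1_negative_point_rich": 0,   # negative set, yet many object points
               "no2_ignored_point_rich": 0,    # ignored set, yet many object points
               "no3_ignored_point_poor": 0,    # ignored set, few object points
               "no4_positive_point_poor": 0}   # positive set, few object points
    for iou_box, iou_point in samples:
        point_rich = iou_point >= point_cut
        if iou_box < t_neg and point_rich:
            regions["no1_negative_point_rich"] += 1
        elif t_neg <= iou_box <= t_pos:
            key = ("no2_ignored_point_rich" if point_rich
                   else "no3_ignored_point_poor")
            regions[key] += 1
        elif iou_box > t_pos and not point_rich:
            regions["no4_positive_point_poor"] += 1
    return regions
```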
The histograms of IoU\({}_{point}\) for the ignored, negative, and positive anchor samples determined by the IoU\({}_{box}\)-based sample selection scheme are shown in figure 2(b), (c), and (d), respectively. From these distributions, it is evident that the proportion of anchors with the selection ambiguity mentioned above is non-negligible. Addressing this ambiguity is crucial, and introducing IoU\({}_{point}\) into the sample selection measurement is the main idea of our solution, which is detailed in the next section.

## IV Proposed Solution: Point Assisted Sample Selection

Based on the aforementioned analysis, this section introduces a novel method for sample selection in anchor-based LiDAR 3D object detection, named Point Assisted Sample Selection (PASS). This method incorporates the IoU\({}_{point}\) metric alongside the IoU\({}_{box}\) metric used in prior works to eliminate sample assignment ambiguity.

It is empirically unreasonable to use the plain average of IoU\({}_{box}\) and IoU\({}_{point}\) as the selection measurement. This is because, for objects with very sparse points, the value of IoU\({}_{point}\) is significantly smaller than that of IoU\({}_{box}\), potentially resulting in an insufficient number of positive training samples. Consequently, an upper boundary \(\mathcal{B}_{upper}\) and a lower boundary \(\mathcal{B}_{lower}\) are first defined based on the selection thresholds \(\{\mathcal{T}_{pos},\mathcal{T}_{neg}\}\) and a hyperparameter \(\mathcal{K}\), as shown in Eq. 4. The value of \(\mathcal{K}\) governs the dynamic margin of the boundaries. \(\{\mathcal{B}_{upper},\mathcal{B}_{lower}\}\) determine a specified interval, covering the positive and negative selection thresholds, as the adjustment range to which PASS is applied. For instance, with \(\{\mathcal{T}_{pos},\mathcal{T}_{neg}\}\) = {0.6, 0.45} and \(\mathcal{K}\) = 5, the interval becomes \(\{\mathcal{B}_{upper},\mathcal{B}_{lower}\}\) = {0.63, 0.42}.

\[\begin{cases}\mathcal{B}_{upper}=\mathcal{T}_{pos}+\dfrac{1}{\mathcal{K}}\left(\mathcal{T}_{pos}-\mathcal{T}_{neg}\right)\\ \mathcal{B}_{lower}=\mathcal{T}_{neg}-\dfrac{1}{\mathcal{K}}\left(\mathcal{T}_{pos}-\mathcal{T}_{neg}\right)\end{cases} \tag{4}\]

The previous IoU\({}_{box}\)-based measurement \(\mathcal{S}\) = IoU\({}_{box}\) retains its significance in PASS, as it provides an approximate assessment of the spatial similarity between the anchor and the ground truth. IoU\({}_{point}\), serving as a measure of point feature similarity, can be considered an assisting metric to adjust \(\mathcal{S}\). In PASS, the new selection measurement, denoted as \(\mathcal{S}^{\prime}\), is defined as shown in Eq. 5. In practical applications, it is observed that many background points originating from the ground plane are encompassed within the 3D bounding boxes of objects or anchors, causing unreasonable values of IoU\({}_{point}\). To mitigate this issue, removing points belonging to the ground plane before the calculation of IoU\({}_{point}\) is recommended. For this purpose, the RANSAC-based plane segmentation algorithm from the open-source 3D data processing library Open3D1 is employed in this work.
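The following is a minimal sketch of this preprocessing step using Open3D's documented `segment_plane` API; the distance threshold and iteration count below are illustrative defaults, not values reported in this work.

```python
import numpy as np
import open3d as o3d

def remove_ground_plane(points, distance_threshold=0.2):
    """Drop ground points via RANSAC plane fitting before computing IoU_point.

    points: (N, 3) numpy array of LiDAR xyz coordinates.
    Returns the non-ground subset of the points.
    """
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    # Fit the dominant plane (assumed to be the ground) with RANSAC.
    _, inliers = pcd.segment_plane(distance_threshold=distance_threshold,
                                   ransac_n=3,
                                   num_iterations=1000)
    # Keep everything except the plane inliers.
    non_ground = pcd.select_by_index(inliers, invert=True)
    return np.asarray(non_ground.points)
```

In a full pipeline, the returned non-ground points would serve as the input to the IoU\({}_{point}\) computation of Eq. 3.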
Footnote 1: Open3D [https://github.com/isl-org/Open3D](https://github.com/isl-org/Open3D)

\[\begin{split}\mathcal{S}^{\prime}&=\alpha\mathcal{S}+\beta\left(\text{IoU}_{point}\,\mathcal{B}_{upper}+\left(1-\text{IoU}_{point}\right)\mathcal{B}_{lower}\right)\\ &=\alpha\,\text{IoU}_{box}+\beta\left(\text{IoU}_{point}\,\mathcal{B}_{upper}+\left(1-\text{IoU}_{point}\right)\mathcal{B}_{lower}\right)\end{split} \tag{5}\]

where \(\alpha\) and \(\beta\) are hyperparameters that weight the IoU\({}_{box}\)-based measurement and the IoU\({}_{point}\)-assisted measurement, with \(\alpha+\beta=1\). The IoU\({}_{box}\)-based sample selection measurement in current anchor-based LiDAR 3D object detectors can be considered a special form of PASS with \(\alpha=1\) and \(\beta=0\). In this work, we simply average the weights of the two metrics by setting \(\alpha=\beta=\frac{1}{2}\). This not only eliminates redundant hyperparameters, but also leads to the following boundary constraints.

For \(\forall\mathcal{S}\leqslant\mathcal{T}_{neg}\), following Eq. 4 and Eq. 5, \(\mathcal{S}^{\prime}\) can be bounded as in Eq. 6.

\[\begin{split}\mathcal{S}^{\prime}&\leqslant\frac{1}{2}\left(\mathcal{T}_{neg}+\mathcal{B}_{upper}\right)\quad\text{since IoU}_{point}\leqslant 1\\ &=\frac{1}{2}\left(\mathcal{T}_{neg}+\mathcal{T}_{pos}\right)+\frac{1}{2\mathcal{K}}\left(\mathcal{T}_{pos}-\mathcal{T}_{neg}\right)\\ &\leqslant\frac{1}{2}\left(\mathcal{T}_{neg}+\mathcal{T}_{pos}\right)+\frac{1}{2}\left(\mathcal{T}_{pos}-\mathcal{T}_{neg}\right)\quad\text{since }\mathcal{K}\geqslant 1\\ &=\mathcal{T}_{pos}\end{split} \tag{6}\]

The above inequality proves that samples originally classified into the negative set by the IoU\({}_{box}\)-based measurement will not cross into the positive set under PASS; that is, they will remain in the negative set or move to the ignored set. Similarly, for \(\forall\mathcal{S}\geqslant\mathcal{T}_{pos}\), following Eq. 4 and Eq. 5, \(\mathcal{S}^{\prime}\) can be bounded as in Eq. 7.

\[\begin{split}\mathcal{S}^{\prime}&\geqslant\frac{1}{2}\left(\mathcal{T}_{pos}+\mathcal{B}_{lower}\right)\quad\text{since IoU}_{point}\geqslant 0\\ &=\frac{1}{2}\left(\mathcal{T}_{pos}+\mathcal{T}_{neg}\right)-\frac{1}{2\mathcal{K}}\left(\mathcal{T}_{pos}-\mathcal{T}_{neg}\right)\\ &\geqslant\frac{1}{2}\left(\mathcal{T}_{pos}+\mathcal{T}_{neg}\right)-\frac{1}{2}\left(\mathcal{T}_{pos}-\mathcal{T}_{neg}\right)\quad\text{since }\mathcal{K}\geqslant 1\\ &=\mathcal{T}_{neg}\end{split} \tag{7}\]

The above inequality proves that samples originally classified into the positive set by the IoU\({}_{box}\)-based measurement will not cross into the negative set under PASS. They will be kept in the positive set or assigned to the ignored set. For \(\forall\mathcal{S}\) with \(\mathcal{T}_{neg}\leqslant\mathcal{S}\leqslant\mathcal{T}_{pos}\), following Eq. 4 and Eq. 5, \(\mathcal{S}^{\prime}\) can be bounded as in Eq. 8 and Eq. 9.
\[\begin{split}\mathcal{S}^{\prime}&\leqslant\frac{1}{2}\left(\mathcal{T}_{pos}+\mathcal{B}_{upper}\right)\quad\text{since IoU}_{point}\leqslant 1\\ &=\frac{1}{2}\left(\mathcal{T}_{pos}+\mathcal{T}_{pos}+\frac{1}{\mathcal{K}}\left(\mathcal{T}_{pos}-\mathcal{T}_{neg}\right)\right)\\ &=\mathcal{T}_{pos}+\frac{1}{2\mathcal{K}}\left(\mathcal{T}_{pos}-\mathcal{T}_{neg}\right)\\ &\leqslant\mathcal{T}_{pos}+\frac{1}{\mathcal{K}}\left(\mathcal{T}_{pos}-\mathcal{T}_{neg}\right)\quad\text{since }\mathcal{K}\geqslant 1\\ &=\mathcal{B}_{upper}\end{split} \tag{8}\]

\[\begin{split}\mathcal{S}^{\prime}&\geqslant\frac{1}{2}\left(\mathcal{T}_{neg}+\mathcal{B}_{lower}\right)\quad\text{since IoU}_{point}\geqslant 0\\ &=\frac{1}{2}\left(\mathcal{T}_{neg}+\mathcal{T}_{neg}-\frac{1}{\mathcal{K}}\left(\mathcal{T}_{pos}-\mathcal{T}_{neg}\right)\right)\\ &=\mathcal{T}_{neg}-\frac{1}{2\mathcal{K}}\left(\mathcal{T}_{pos}-\mathcal{T}_{neg}\right)\\ &\geqslant\mathcal{T}_{neg}-\frac{1}{\mathcal{K}}\left(\mathcal{T}_{pos}-\mathcal{T}_{neg}\right)\quad\text{since }\mathcal{K}\geqslant 1\\ &=\mathcal{B}_{lower}\end{split} \tag{9}\]

The above inequalities prove that, for samples originally classified into the ignored set, \(\{\mathcal{B}_{upper},\mathcal{B}_{lower}\}\) strictly constrain the value of the PASS measurement \(\mathcal{S}^{\prime}\). These constraints ensure that PASS adjusts the assignment of ambiguous samples caused by the IoU\({}_{box}\)-based metric while avoiding the introduction of new ambiguity. Algorithm 1 shows the overall process of selecting positive and negative anchor samples via PASS.

```
Input:  G      — set of ground-truth boxes
        A      — set of anchor boxes
        K      — hyperparameter of PASS
        T_pos  — threshold for selecting positive samples
        T_neg  — threshold for selecting negative samples
Output: P      — set of positive samples
        N      — set of negative samples

 1: compute B_lower, B_upper based on {T_pos, T_neg, K}        (see Eq. 4)
 2: for each ground-truth box g in G do
 3:   for each anchor box a in A do
 4:     compute S(g,a) = IoU_box(g,a)                          (see Eq. 1)
 5:     if B_lower <= S(g,a) <= B_upper then
 6:       compute IoU_point(g,a)                               (see Eq. 3)
 7:       compute S'(g,a) based on
          {S(g,a), IoU_point(g,a), B_lower, B_upper}           (see Eq. 5)
 8:       update S(g,a) = S'(g,a)
 9:     end if
10:     record S(g,a) -> S(G,A)
11:   end for
12: end for
13: collect P and N based on {T_pos, T_neg, S(G,A)}            (see Sec. III-A)
14: return P, N
```

**Algorithm 1** Point Assisted Sample Selection (PASS)

## V Experiments and Analysis

In this section, extensive experiments on multiple datasets are conducted to evaluate the effectiveness of the proposed method. Firstly, the experimental datasets and implementation details are introduced. Secondly, comparative experiments are performed to validate the feasibility of the proposed method. Furthermore, the proposed method is applied to an advanced anchor-based detector and compared to state-of-the-art detectors on benchmarks. Subsequently, ablation studies are conducted to evaluate the effects of different configurations. Finally, qualitative analysis and discussion are provided.

### _Experimental Setup_

**Datasets description**: Two widely used 3D object detection datasets (the KITTI dataset [56] and the Waymo Open Dataset [32]) are chosen as the experimental database.
The KITTI dataset consists of 7481 training frames with both LiDAR data and 3D object annotations, and 7518 test frames with only LiDAR data. Following previous work [34, 44], the training frames are further divided into a _train_ set with 3712 frames and a _val_ set with 3769 frames. The objects are categorized into easy, moderate, and hard according to the box height and the occlusion ratio in the image view. 3D/BEV mean Average Precision (mAP) calculated with 11 or 40 recall positions is used as the official performance evaluation metric. The Waymo Open Dataset includes a total of 1000 annotated sequences (around 200K frames), of which 798 sequences serve as the _train_ set and 202 sequences as the _val_ set. 3D average precision (AP) and average precision weighted by heading accuracy (APH) are used as evaluation metrics. The dataset evaluates results at two difficulty levels: LEVEL_1 (L1) for object cuboids with more than 5 LiDAR points, and LEVEL_2 (L2) for object cuboids with 1-5 LiDAR points.

**Implementation details**: The experiments on the KITTI dataset are performed on a server with 6 NVIDIA Titan RTX GPUs, and the experiments on the Waymo Open Dataset are conducted on another server with one A800 GPU. Several advanced anchor-based LiDAR 3D object detection methods, including both one-stage detectors (SECOND [4], PointPillars [5], PillarNet [6]) and two-stage detectors (PV-RCNN [44], Focals-Conv [41], Graph-Vo [57], PV-RCNN++ [27]), are chosen as baselines. These models are re-implemented using OpenPCDet2 or their original codebases, and further equipped with our proposed PASS. For a fair comparison, the models' original training settings, parameters, and inference steps (as described in their papers or code) are kept unchanged. The only difference between our models and the baselines is the adoption of the proposed PASS. Additionally, the anchor-based PV-RCNN++ with PASS (named PASS-PV-RCNN++) is selected and trained as our proposed new detector, and compared with more state-of-the-art 3D object detectors [21, 23, 24, 25, 40, 42, 43, 58, 63, 64, 65, 66] on benchmarks. In the experiments verifying the proposed method and comparing it with other state-of-the-art methods, \(\mathcal{K}\) of PASS is set to 5. In the extended experiments, ablation studies on the effect of \(\mathcal{K}\) and the number of stages are carried out.

Footnote 2: OpenPCDet [https://github.com/open-mmlab/OpenPCDet](https://github.com/open-mmlab/OpenPCDet)

### _Verification_

To verify the effectiveness of PASS on anchor-based LiDAR 3D object detection, comparison experiments of the models with and without PASS are conducted. Table I shows the experimental results on the KITTI _val_ set. The results are reported as BEV mAP with 11 recall points. The IoU thresholds of true positive predictions are set to 0.7 for car detection and 0.5 for pedestrian and cyclist detection. From this table, it is found that PASS elevates the baselines by up to 1.38% mAP on car detection, 3.26% on pedestrian detection, and 3.65% on cyclist detection. Although PASS brings small precision drops on some metrics, it achieves an average mAP improvement of between 0.5% and 1.38% for all baseline detectors at the moderate level. Table II shows the experimental results on the Waymo Open Dataset _val_ set. Following previous work [44], for fast verification, only 20% of the training data of the _train_ set is used to train both the baseline detectors and the PASS-based detectors.
From the results, PASS guarantees a mean APH (mAPH) improvement of at least 0.3% and at most 2.0% on all baseline models. Especially on pedestrian and cyclist detection, detectors with PASS show great performance improvement. For example, PointPillars with PASS achieves an AP/APH improvement of around 3.5-4.5% on cyclist detection, and PV-RCNN++ with PASS achieves an AP/APH improvement of around 0.6-1.9% on pedestrian detection. The evaluation results prove that our proposed method can leverage the average performance of anchor-based LiDAR 3D object detection.

\begin{table} \begin{tabular}{|c|c|c||c|c|c||c|c|c||c|c|c|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{\#Stages} & \multicolumn{1}{c||}{Avg.} & \multicolumn{3}{c||}{Car (IoU=0.7)} & \multicolumn{3}{c||}{Pedestrian (IoU=0.5)} & \multicolumn{3}{c|}{Cyclist (IoU=0.5)} \\ \cline{3-12} & & Mod. & Easy & Mod. & Hard & Easy & Mod. & Hard & Easy & Mod. & Hard \\ \hline \hline SECOND [4] & One & 69.84 & 89.76 & 87.05 & **85.09** & 57.81 & 52.48 & 48.33 & 81.76 & 69.99 & **66.09** \\ **+ PASS** & One & **71.22** & **89.82** & **87.25** & 85.04 & **60.00** & **55.74** & **50.69** & **82.22** & **70.68** & 65.74 \\ \(\Delta\) & - & **+1.38** & **+0.66** & **+0.2** & -0.05 & **+2.19** & **+3.26** & **+2.36** & **+0.46** & **+0.69** & -0.35 \\ \hline \hline PointPillars [5] & One & 70.57 & **89.70** & **87.40** & **83.57** & 61.40 & 56.49 & **51.61** & 82.83 & 67.81 & 63.00 \\ **+ PASS** & One & **70.87** & 89.43 & 87.38 & **85.03** & 59.76 & **56.53** & 51.17 & **83.29** & **68.69** & **63.76** \\ \(\Delta\) & - & **+0.30** & -0.27 & -0.02 & **+1.36** & -0.67 & **+0.03** & -0.44 & **+0.36** & **+0.88** & **+0.76** \\ \hline \hline PillarNet [6] & One & 68.53 & 89.73 & 86.89 & 84.27 & 56.79 & 52.40 & 48.79 & **84.05** & **66.29** & **62.30** \\ **+ PASS** & One & **69.64** & **89.85** & **87.43** & **85.19** & **60.06** & **55.29** & **51.81** & 83.65 & 66.21 & 61.92 \\ \(\Delta\) & - & **+1.11** & **+0.12** & **+0.54** & **+0.92** & **+3.27** & **+2.89** & **+3.02** & -0.40 & -0.08 & -0.38 \\ \hline \hline PV-RCNN [44] & Two & 72.91 & 89.89 & 87.55 & 86.37 & **67.21** & 58.67 & 54.50 & 87.64 & **72.50** & 69.07 \\ **+ PASS** & Two & **73.41** & **90.11** & **88.02** & **87.45** & 66.53 & **60.34** & **55.21** & **91.05** & 71.88 & **69.33** \\ \(\Delta\) & - & **+0.50** & **+0.22** & **+0.47** & **+1.08** & **+1.05** & **+1.67** & **+4.69** & **+2.41** & **+0.62** & **+0.26** \\ \hline \hline Focals-Conv [41] & Two & 73.56 & **90.18** & **88.11** & **87.52** & 68.17 & 60.81 & 55.18 & 85.08 & 71.75 & 67.08 \\ **+ PASS** & Two & **74.35** & 89.98 & 87.41 & 86.87 & **68.98** & **61.60** & **56.39** & **87.64** & **74.05** & **70.73** \\ \(\Delta\) & - & **+0.79** & -0.20 & -0.70 & -0.65 & **+0.81** & **+0.79** & **+1.21** & **+1.36** & **+2.30** & **+3.65** \\ \hline \hline Graph-Vo [57] & Two & 75.66 & 90.25 & 88.76 & 87.83 & 70.53 & 54.06 & 58.21 & 86.96 & **74.17** & **72.72** \\ \hline \end{tabular} Avg.: Average. Mod.: Moderate. Best results are highlighted in **bold**. \end{table} TABLE I: Verification of applying PASS to anchor-based LiDAR 3D object detectors on the KITTI _val_ set.

\begin{table} \begin{tabular}{|c|c|c||c|c||c|c||c|c|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{\#Stages} & \multicolumn{1}{c||}{mAPH} & \multicolumn{2}{c||}{VEH (AP/APH)} & \multicolumn{2}{c||}{PED (AP/APH)} & \multicolumn{2}{c|}{CYC (AP/APH)} \\ \cline{3-9} & & L1/L2 & L1 & L2 & L1 & L2 & L1 & L2 \\ \hline \hline SECOND [4] & One & 60.0/54.3 & **71.0/70.3** & **62.6/62.0** & 65.2/54.2 & 57.2/47.5 & 57.1/55.6 & 55.0/53.5 \\ **+ PASS** & One & **61.1/55.3** & 70.3/69.7 & 61.9/61.4 & **66.9/56.0** & **58.8/49.1** & **59.1/57.6** & **56.9/55.4** \\ \(\Delta\) & - & **+1.1/+1.0** & -0.7/0.0 & -0.7/0.0 & +1.7/**+1.8** & **+1.6/+1.6** & **+2.0/+2.0** & **+1.9/+1.9** \\ \hline \hline PointPillars [5] & One & 56.0/50.7 & 70.4/69.8 & 62.7/61.6 & 62.4/63.5 & 58.2/40.6 & 55.5/51.8 & 53.2/49.8 \\ **+ PASS** & One & **58.0/52.6** & 70.6/69.9 & 62.3/61.7 & **66.2/47.7** & **58.4/42.0** & 58.9/56.3 & **56.7/54.1** \\ \(\Delta\) & - & **+2.0/+1.9** & **+0.2/+0.1** & **+0.1/+0.1** & NA/+1.4 & **+0.2/+1.4** & **+3.6/+2.5** & **+3.5/+4.3** \\ \hline \hline PV-RCNN [44] & Two & 66.7/60.9 & 75.7/74.7 & 67.4/66.8 & 72.0/61.2 & 63.7/54.0 & 65.9/64.3 & 63.4/61.8 \\ **+ PASS** & Two & **67.3/61.5** & **75.5/74.7** & **67.5/66.8** & **72.8/61.7** & **64.5/54.5** & **67.4/65.6** & **64.9/63.2** \\ \(\Delta\) & - & **+0.6/+0.6** & **+0.1/NA** & **+0.1/NA** & **+0.7/+0.5** & **+0.8/+0.5** & **+1.5/+1.3** & **+1.5/+1.4** \\ \hline \hline PV-RCNN++ [27] & Two & 68.8/62.7 & **77.0/76.5** & **69.3/68.8** & 69.9/61.6 & 60.8/55.1 & 70.0/68.7 & 67.5/66.3 \\ **+ PASS** & Two & **69.1/63.3** & 76.8/76.3 & 69.2/68.6 & **70.7/61.9** & **62.7/54.7** & **70.6/69.1** & **68.1/66.7** \\ \(\Delta\) & - & **+0.3/+0.4** & -0.2/+0.2 & -0.1/+0.2 & **+0.8/+0.6** & **+1.9/+1.6** & **+0.6/+0.4** & **+1.6/+0.4** \\ \hline \end{tabular} \(\#\)Stages: number of stages. VEH: Vehicle. PED: Pedestrian. CYC: Cyclist. NA: not applicable. Best results are highlighted in **bold**. \end{table} TABLE II: Verification of applying PASS to anchor-based LiDAR 3D object detectors (using 20% of the training data) on the Waymo Open Dataset _val_ set.

### _Comparison with State-of-the-Art_

By applying PASS to the anchor-based version of PV-RCNN++, PASS-PV-RCNN++ is proposed. First, the total training data of the KITTI dataset is re-separated into 80% training data and 20% validation data. PASS-PV-RCNN++ is optimized on the training data, and the model with the best performance on the validation data is chosen as the final detector. We evaluate this detector on the KITTI test data via the official online benchmark. It is noted that the official KITTI leaderboard is ranked by moderate mAP. Since there is no anchor-based version of PV-RCNN++ on the KITTI test benchmark, we also train a PV-RCNN++ detector using the same training settings and submit it to the online benchmark. The performance of our proposed detector and other state-of-the-art detectors submitted to the online benchmark, with results for the three classes, is shown in Table III. The results are computed as 3D mAP with 40 recall positions.
Our proposed detector (PASS-PV-RCNN++) achieves the best average moderate mAP and cyclist moderate mAP among all anchor-based competitors, and even surpasses several advanced anchor-free detectors listed in the table. Compared to the baseline PV-RCNN++, PASS enables the baseline to gain an average moderate improvement of 0.87%. Furthermore, PASS-PV-RCNN++ is trained on the whole training set of the Waymo Open Dataset and tested on the _val_ set. Table IV shows the detection performance of PASS-PV-RCNN++, PV-RCNN++, and other state-of-the-art methods. Our proposed detector (PASS-PV-RCNN++) ranks first on mAPH of object detection, AP/APH of vehicle detection, and AP/APH of cyclist detection among all anchor-based detectors, and surpasses the baseline PV-RCNN++ on mAPH by a margin of 1.0%/0.9%. Additionally, it exceeds some anchor-free detectors (e.g. CenterPoint) on vehicle detection and achieves comparable vehicle detection performance to the anchor-free version of PV-RCNN++ (PV-RCNN++\({}^{\dagger}\)). These comparison results demonstrate that PASS can lead anchor-based LiDAR 3D object detection methods to a new state-of-the-art, and shorten the performance gap between anchor-based and anchor-free detectors.

\begin{table} \begin{tabular}{|c|c|c||c|c|c||c|c||c|c||c|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{\#Stages} & \multirow{2}{*}{Anchor} & \multicolumn{1}{c||}{mAPH} & \multicolumn{2}{c||}{VEH (AP/APH)} & \multicolumn{2}{c||}{PED (AP/APH)} & \multicolumn{2}{c||}{CYC (AP/APH)} & \multirow{2}{*}{Ref.} \\ \cline{4-10} & & & L1/L2 & L1 & L2 & L1 & L2 & L1 & L2 & \\ \hline CenterPoint [21] & One & No & 73.5/67.6 & 76.6/6.0 & 68.9/68.4 & 79.0/73.4 & 71.0/65.8 & 72.1/71.0 & 69.5/68.5 & CVPR’21 \\ Graph-Cc [57] & Two & No & 77.8/71.6 & 60.8/60.1 & 72.3/71.19 & 82.9/73.5 & 75.0/69.7 & 77.2/76.0 & 74.7/3.3 & ECCV’22 \\ PV-RCNN++ [27] & Two & No & 75.9/69.5 & 79.3/78.8 & 72.3/76.0/72.8 & 81.3/76.3 & 73.2/68.0 & 73.7/27.2 & 71.2/70.2 & IJCV’23 \\ GD-MAE [66] & Two & No & 77.6/71.6 & 80.2/79.8 & 72.4/72.0 & 83.1/76.7 & 75.5/69.4 & 77.2/76.2 & 74.4/73.4 & CVPR’23 \\ \hline \hline SECOND [4] & One & Yes & 63.17/5.72 & 72.3/71.7 & 63.9/63.3 & 68.7/58.2 & 70.7/51.3 & 60.6/59.3 & 58.3/57.1 & Sensors’18 \\ PointPillars [5] & One & Yes & 63.5/73.8 & 72.1/71.5 & 63.6/63.3 & 68.7/65.6 & 82.0/53.0 & 64.6/59.3 & 61.9/59.9 & CVPR’19 \\ PV-RCNN [44] & Two & Yes & 69.6/63.3 & 77.7/56.9 & 69.6/68.4 & 75.0/65.6 & 66.5/7.6 & 67.6/68.64 & 65.4/64.0 & CVPR’20 \\ Part-\(A^{2}\) [42] & Two & Yes & 70.6/36.3 & 77.1/76.5 & 68.5/68.0 & 75.2/66.6 & 62.5/58.6 & 68.6/67.4 & 66.1/64.9 & TPAMI’20 \\ \hline \end{tabular} \end{table} TABLE IV: Comparison with state-of-the-art LiDAR 3D object detectors on the Waymo Open Dataset _val_ set.

### _Ablation Study_

**The effect of \(\mathcal{K}\)**: As \(\mathcal{K}\) is the only hyperparameter in the proposed PASS, its effect is explored here based on the baseline model PointPillars [5]. Different values of \(\mathcal{K}\) in {2, 3, 4, 5, 6, 7, 8} are used to train the detector. As defined in Eq. 4, a small \(\mathcal{K}\) represents a large margin of the dynamic boundary, while a large \(\mathcal{K}\) indicates a small margin. Table V shows the experimental results. It can be seen that the detector achieves the best performance when \(\mathcal{K}\) is set to 5. Overall, the improvement brought by PASS is quite robust to the choice of \(\mathcal{K}\).

\begin{table} \begin{tabular}{|c|c||c||c|c|c||c|c|c||c|c|c|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{\(\mathcal{K}\)} & \multicolumn{1}{c||}{mAP} & \multicolumn{3}{c||}{Car (IoU=0.7/0.5)} & \multicolumn{3}{c||}{Pedestrian (IoU=0.5/0.25)} & \multicolumn{3}{c|}{Cyclist (IoU=0.5/0.25)} \\ \cline{3-12} & & Avg. & Easy & Mod. & Hard & Easy & Mod. & Hard & Easy & Mod. & Hard \\ \hline \hline Baseline & - & 64.44 & 86.50/96.57 & 77.56/94.44 & 74.70/93.68 & 52.59/73.14 & 47.01/69.26 & 42.87/65.38 & 80.00/87.85 & 61.52/70.55 & 57.23/66.63 \\ + PASS & 2 & 65.43 & 87.28/**97.36** & 77.99/94.55 & 75.05/93.72 & 53.47/**76.04** & 47.80/**71.63** & 44.15/**68.01** & 81.87/88.97 & 62.74/71.65 & 58.60/67.49 \\ + PASS & 3 & 64.95 & 87.62/95.57 & 78.32/94.52 & 75.07/93.43 & 53.98/**74.06** & 49.24/70.93 & 45.43/67.76 & 77.70/88.21 & 60.64/71.88 & 56.61/68.25 \\ + PASS & 4 & 65.58 & 86.68/96.73 & 77.76/94.47 & 74.97/93.43 & 54.84/89.73/93.49 & 19.70/45.48 & 41.81/66.44 & 80.09/**90.22** & 62.94/73.51 & 86.33/68.90 \\ + PASS & 5 & **67.16** & 87.47/96.93 & **78.53/94.39** & **75.60**/93.62 & **57.72**/73.65 & **51.28/70.76** & **46.53**/67.21 & **38.56/89.26** & 63.96/70.95 & 59.53/66.91 \\ + PASS & 6 & 65.88 & 87.33/95.67 & 78.34/**94.58** & 75.39/**93.74** & 54.82/74.17 & 48.94/70.26 & 44.64/66.74 & 81.87/89.63 & 62.73/73.17 & 58.88/68.93 \\ + PASS & 7 & 66.76 & 87.29/95.78 & 78.31/94.42 & 75.45/93.58 & 56.06/73.79 & 50.09/69.46 & 45.55/65.82 & 82.30/89.10 & **65.06/74.18** & **60.69/70.07** \\ + PASS & 8 & 64.65 & 87.03/95.54 & 77.82/94.36 & 75.06/93.41 & 54.11/73.20 & 48.55/69.04 & 44.14/65.04 & 78.17/88.39 & 60.65/70.77 & 56.36/66.58 \\ \(\Delta_{avg}\) & -1.33 & **-0.78** & -0.37 & **-0.37** & **-0.37** & **-0.38** & **-0.44** & **-0.53** & **-0.51** & **-0.37** & **-0.37** & **-0.36** \\ \(\Delta_{avg}\) & -1.43 & **-0.78** & **-0.37** & **-0.37** & **-0.37** & **-0.37** & **-0.37** & **-0.37** & **-0.37** & **-0.37** & **-0.37** & **-0.37** \\ \(\Delta_{avg}\) & -1.42 & **-0.79** & **-0.47** & **-0.49** & **-0.06** & **-0.45** & **-0.37** & **-0.42** & **-0.47** & **-0.42** & **-0.41** & **-0.79** \\ \hline \end{tabular} Avg.: Average. Mod.: Moderate. \(\Delta_{avg}\)/\(\Delta_{max}\): the average/maximum mAP difference. The best results are highlighted in **bold**. \end{table} TABLE V: Ablation study on the effect of hyperparameter \(\mathcal{K}\) of PASS on the KITTI _val_ set.

**The effect of the number of stages**: Following Graph-Vo [57], a two-stage anchor-based 3D object detector is separated into the detector without the second stage (i.e., SECOND [4]) and with the second stage (the overall Graph-Vo). The effect of PASS on the different stages of the anchor-based 3D object detector is ablated in Table VI.
It is observed that PASS continuously boosts the performance of the detector, from the first stage alone to the first plus the second stage, which shows that the number of stages of a 3D object detector does not restrict the application of PASS.

### _Qualitative Analysis_

To examine the effect of the proposed sample selection method, case studies on training sample selection are conducted on the KITTI _train_ set. Some typical cases are visualized in figure 4, including both nearby objects with dense points and far objects with sparse points. In case (a) and case (b), the two anchors both contain a large number of 3D points on the same object. Under the IoU\({}_{box}\)-based sample selection, the anchor sample in case (a) would be assigned to the positive set, while the anchor sample in case (b) would be assigned to the ignored set. However, PASS keeps both in the positive set for consistent learning. In case (c) and case (d), the two anchors contain few object point features. The IoU\({}_{box}\)-based sample selection would assign them to the negative set and the ignored set separately. In contrast, PASS consistently classifies both anchor samples into the negative set, eliminating the ambiguity of anchor assignment. Cases (e)(f) and (g)(h) are similar to the previous examples. These cases show that PASS can better assign training samples and bring unambiguous optimization objectives to the learning of 3D object detectors.

Fig. 4: Qualitative results on training sample selection (best viewed in color). \(\mathcal{S}\) and \(\mathcal{S}^{\prime}\) are calculated by the IoU\({}_{box}\)-based and the PASS-based sample selection measurement, respectively. The green boxes and blue boxes represent the anchor samples and ground truths, respectively. Points in the intersection of the anchor sample and ground truth are indicated as red points. Points that belong to the ground truth but not to the intersection are indicated as blue points. Points that belong to the anchor but not to the intersection are represented as green points, and the rest of the background points are represented as black points.

Furthermore, the performance of the proposed PASS-based 3D object detector in complex traffic scenes is tested and visualized. Figure 5 presents the qualitative results of PASS-PV-RCNN++ on the KITTI _test_ set. The detected bounding boxes of car, pedestrian, and cyclist are colored in green, blue, and yellow, respectively. The first row shows the reference RGB images and the second row shows the pointclouds and 3D detection results. As observed, our PASS-based detector makes high-quality predictions in multiple challenging scenes, including intersections, narrow roads, dense crowds, and mixed traffic. In figure 5(a), the proposed detector captures cars with different poses at an intersection. However, the detector misidentifies a pole in the pointcloud as a pedestrian, which reflects the difficulty of the scene as well as the object-similarity problem caused by sparse points. In figure 5(b), the proposed detector precisely detects the occluded vehicles, conducive to the safe passage of the ego vehicle through such narrow roads. It is noted that a faraway car with sparse points is also identified by the detector. In figure 5(c), the crowded pedestrians in the scene are captured by the proposed detector. However, a pedestrian with an unusual posture is mistakenly recognized as a cyclist, which mirrors the difficulty that complicated point deformation brings to 3D object detection. In figure 5(d), the proposed detector detects multi-class objects in the mixed traffic environment, providing complete perception results for the hybrid traversal of the ego vehicle. Especially for some faraway small objects (e.g. the far cyclist in the scene), both accurate category identification and 3D localization are achieved by the PASS-based detector.

### _Extended Experiment on PointPillars_

PointPillars [5], as a widely-deployed anchor-based LiDAR 3D object detector, has garnered significant attention in both research and industrial domains. Numerous methods [43, 50, 67] have endeavored to improve PointPillars and experimented on the KITTI _test_ set. In our study, we apply PASS to PointPillars and submit the detection results of the PASS-based PointPillars (PASS-PointPillars) to the KITTI _test_ benchmark. Table VII shows the car detection performance comparison of various versions of the PointPillars detector. Remarkably, the proposed PASS-PointPillars achieves the best moderate 3D mAP and easy BEV mAP among all competitors, and surpasses the baseline model by a large margin (e.g. a 2.14% improvement on easy 3D mAP). It is noteworthy that the proposed PASS only affects the optimization of PointPillars and introduces no extra time cost at inference. Specifically, it maintains detection speeds of 75.4 Hz and 43.1 Hz when deployed with TensorRT and tested on a Jetson Xavier TX and on a PC with a single 2080Ti GPU, respectively.

### _Discussion_

**Manual hyperparameters**: Although PASS offers a new perspective on anchor sample selection in anchor-based LiDAR 3D object detection, it still relies on multiple manual hyperparameters (e.g. the location and shape of anchors \([x^{a},y^{a},z^{a},l^{a},w^{a},h^{a},\theta^{a}]\), the selection thresholds \(\{\mathcal{T}_{pos},\mathcal{T}_{neg}\}\), and \(\mathcal{K}\) in PASS). These hyperparameters make model learning inflexible. Future research could revisit these manual hyperparameters, attempting to achieve more efficient model learning with dynamically self-adaptive parameters or hyperparameter-free schemes.

**Learning efficiency**: PASS improves the performance of current anchor-based LiDAR 3D object detectors without extra inference burden. However, it increases training time by a factor of 1.5 to 3, which is unfriendly to models with a large number of network parameters. Hence, it is worth exploring a more concise and efficient training sample selection approach in future research.

**Characteristics of LiDAR pointclouds**: Currently, many network and algorithm designs for LiDAR-based 3D vision tasks draw inspiration from, or are directly adopted from, camera-based 2D/3D vision methodologies. Although these designs prove effective in the image domain, they may not be inherently suitable for LiDAR pointclouds due to the different characteristics of images and pointclouds. It is worth considering how to design tailored methods or frameworks for 3D vision tasks based on LiDAR pointclouds.

## VI Conclusion

In this work, the ambiguity of IoU\({}_{box}\)-based sample selection in existing anchor-based LiDAR 3D object detection methods is pointed out and thoroughly analyzed. To address this issue, a novel anchor sample selection method called PASS is proposed.
PASS can be applied to any anchor-based LiDAR 3D object detector as a plug-and-play optimization component without introducing extra inference time cost. The experimental results on multiple widely-used datasets consistently demonstrate that, with the application of PASS, anchor-based LiDAR 3D object detectors gain a considerable average mAP improvement and achieve new state-of-the-art performance.

## Acknowledgments

This work was supported by the National Key R&D Program of China (Grant No. 2022YFB2502900) and the National Natural Science Foundation of China (Grant No. 62088102).

Fig. 5: Qualitative results on challenging scenes from the KITTI _test_ set (best viewed in color). The top row displays the reference images, and the bottom row shows the corresponding LiDAR pointclouds with detection results from the proposed PASS-PV-RCNN++. The green, blue, and yellow boxes represent the predictions of car, pedestrian, and cyclist, respectively. Certain cases that are worth further discussion (see Section V-E) are highlighted and zoomed in.

## References

* [1] J. Mao, S. Shi, X. Wang, and H. Li, "3D object detection for autonomous driving: A comprehensive survey," _International Journal of Computer Vision_, pp. 1-55, 2023.
* [20] B. Yang, W. Luo, and R. Urtasun, "PIXOR: Real-time 3D object detection from point clouds," in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2018.
In European Conference on Computer Vision, pp. 18-34. Cited by: SSI. * [45]Y. Wang, W. Luo, and R. Urtasun (2018) Pixor: real-time 3d object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1-56. Cited by: SSI. * [46]Y. Wang, A. Fathi, A. Kundu, D. A. Ross, C. Pantofaru, T. Funkhouser, and J. Solomon (2020) Pillar-based object detection for autonomous driving. In European Conference on Computer Vision, pp. 18-34. Cited by: SSI. * [47]Y. Wang, W. Luo, and R. Urtasun (2018) Pixor: real-time 3d object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1-56. Cited by: SSI. * [48]Y. Wang, A. Fathi,Conference on Computer Vision and Pattern Recognition_, 2022, pp. 5428-5437. * [42] S. Shi, Z. Wang, J. Shi, X. Wang, and H. Li, \"From points to parts: 3d object detection from point cloud with part-aware and part-aggregation network,\" _IEEE transactions on pattern analysis and machine intelligence_, vol. 43, no. 8, pp. 2647-2664, 2020. * [43] Z. Li, F. Wang, and N. Wang, \"Lidar r-cnn: An efficient and universal 3d object detector,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021, pp. 7546-7555. * [44] S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang, and H. Li, \"Pv-rcnn: Point-voxel feature set abstraction for 3d object detection,\" in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2020, pp. 10 529-10 538. * [45] P. An, J. Liang, J. Ma, Y. Chen, L. Wang, Y. Yang, and Q. Liu, \"Rs-aug: Improve 3d object detection on lidar with realistic simulator based data augmentation,\" _IEEE Transactions on Intelligent Transportation Systems_, vol. 24, no. 9, pp. 10 165-10 176, 2023. * [46] H. Cho, J. Choi, G. Baek, and W. Hwang, \"itkd: Interchange transfer-based knowledge distillation for 3d object detection,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2023, pp. 13 540-13 549. * [47] S. Zhou, W. Liu, C. Hu, S. Zhou, and C. Ma, \"Unidistill: A universal cross-modality knowledge distillation framework for 3d object detection in bird's-eye view,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2023, pp. 5116-5125. * [48] H. Sheng, S. Cai, N. Zhao, B. Deng, J. Huang, X.-S. Hua, M.-J. Zhao, and G. H. Lee, \"Rethinking jou-based optimization for single-stage 3d object detection,\" in _European Conference on Computer Vision_. Springer, 2022, pp. 544-561. * [49] Q. Ming, L. Miao, Z. Ma, L. Zhao, Z. Zhou, X. Huang, Y. Chen, and Y. Guo, \"Deep dive into gradients: Better optimization for 3d object detection with gradient-corrected iou supervision,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2023, pp. 5136-5145. * [50] H. Zhang, D. Yang, J. Isaacs, Z. Nain, J. H. Park, and H.-Y. Jung, \"3d harmonic loss: Towards task-consistent and time-friendly 3d object detection on edge for v2x orchestration,\" _IEEE Transactions on Vehicular Technology_, vol. 72, no. 12, 2023. * [51] S. Ren, K. He, R. Girshick, and J. Sun, \"Faster r-cnn: Towards real-time object detection with region proposal networks,\" _Advances in neural information processing systems_, vol. 28, 2015. * [52] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, \"Ssd: Single shot multibox detector,\" in _European Conference on Computer Vision_. Springer, 2016, pp. 21-37. * [53] G. P. 
Meyer, A. Laddha, E. Kee, C. Vallespi-Gonzalez, and C. K. Wellington, \"Lascent: An efficient probabilistic 3d object detector for autonomous driving,\" in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2019, pp. 12 677-12 686. * [54] L. Fan, X. Xiong, F. Wang, N. Wang, and Z. Zhang, \"Rangpeed: In defense of range view for lidar-based 3d object detection,\" in _Proceedings of the IEEE/CVF international conference on computer vision_, 2021, pp. 2918-2927. * [55] I. Misra, R. Girdhar, and A. Joulin, \"An end-to-end transformer model for 3d object detection,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2021, pp. 2906-2917. * [56] A. Geiger, P. Lenz, and R. Urtasun, \"Are we ready for autonomous driving? the kitti vision benchmark suite,\" in _2012 IEEE conference on computer vision and pattern recognition_. IEEE, 2012, pp. 3354-3361. * [57] H. Yang, Z. Liu, X. Wu, W. Wang, W. Qian, X. He, and D. Cai, \"Graph r-cnn: Towards accurate 3d object detection with semantic-decorated local graph,\" in _European Conference on Computer Vision_. Springer, 2022, pp. 662-679. * [58] I. Koo, I. Lee, S.-H. Kim, H.-S. Kim, W.-j. Jeon, and C. Kim, \"Pg-rcnn: Semantic surface point generation for 3d object detection,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2023, pp. 18 142-18 151. * [59] S. Ren, K. He, R. Girshick, and J. Sun, \"Faster r-cnn: Towards real-time object detection with region proposal networks,\" _Advances in neural information processing systems_, vol. 28, 2015. * [60] H. Li, Z. Wu, C. Zhu, C. Xiong, R. Socher, and L. S. Davis, \"Learning from noisy anchors for one-stage object detection,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2020, pp. 10 588-10 597. * [61] S. Zhang, C. Chi, Y. Yao, Z. Lei, and S. Z. Li, \"Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection,\" in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2020, pp. 9759-9768. * [62] B. Zhu, J. Wang, Z. Jiang, F. Zong, S. Liu, Z. Li, and J. Sun, \"Autoassign: Differentiable label assignment for dense object detection,\" _arXiv preprint arXiv:2007.03496_, 2020. * [63] Y. Zhang, Q. Hu, G. Xu, Y. Ma, J. Wan, and Y. Guo, \"Not all points are equal: Learning highly efficient point-based detectors for 3d lidar point clouds,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2022, pp. 18 953-18 962. * [64] Q. He, Z. Wang, H. Zeng, Y. Zeng, and Y. Liu, \"Svga-net: Sparse voxel-graph attention network for 3d object detection from point clouds,\" in _Proceedings of the AAAI Conference on Artificial Intelligence_, vol. 36, no. 1, 2022, pp. 870-878. * [65] Z. Liu, X. Zhao, T. Huang, R. Hu, Y. Zhou, and X. Bai, \"Tanet: Robust 3d object detection from point clouds with triple attention,\" in _Proceedings of the AAAI conference on artificial intelligence_, vol. 34, no. 07, 2020, pp. 11 677-11 684. * [66] H. Yang, T. He, J. Liu, H. Chen, B. Wu, B. Lin, X. He, and W. Ouyang, \"Gdm-race: generative decoder for mae pre-training on lidar point clouds,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2023, pp. 9403-9414. * [67] E. Erezile, E. Vartsever, M. Liu, Z. Yang, H. Zhang, P. Topcam, M. Listl, Y. K. Cayli, and A. 
Knoll, \"3d object detection with a self-supervised lidar scene flow backbone,\" in _European Conference on Computer Vision_. Springer, 2022, pp. 247-265.
3D object detection based on LiDAR point clouds and prior anchor boxes is a critical technology for autonomous driving environment perception and understanding. Nevertheless, an overlooked practical issue in existing methods is the ambiguity of training sample allocation based on box Intersection over Union (IoU\({}_{box}\)). This problem impedes further enhancement of the performance of anchor-based LiDAR 3D object detectors. To tackle this challenge, this paper introduces a new training sample selection method, named Point Assisted Sample Selection (PASS), that uses the point cloud distribution to measure anchor sample quality. The method has been rigorously evaluated on two widely used datasets. Experimental results demonstrate that applying PASS raises the average precision of anchor-based LiDAR 3D object detectors to a new state-of-the-art, proving the effectiveness of the proposed approach. The code will be made available at [https://github.com/XJTU-Haolin/Point_Assisted_Sample_Selection](https://github.com/XJTU-Haolin/Point_Assisted_Sample_Selection).

3D object detection, Sample selection, Anchor assignment, LiDAR point cloud.
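The exact PASS formulation is defined in the paper body rather than reproduced here. Purely as a hedged, illustrative toy of the general idea — supplementing geometric box IoU with a term describing how well an anchor covers the object's actual points when ranking anchor samples — one could score anchors as sketched below. The function names, the axis-aligned BEV simplification, and the blending weight `alpha` are all assumptions for illustration, not the paper's method.

```python
import numpy as np

def bev_iou(a, b):
    # Axis-aligned BEV IoU between boxes (x1, y1, x2, y2); a simplification,
    # since real LiDAR detectors use rotated-box IoU.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def points_in_box(points, box):
    # Boolean mask of BEV points (N, 2) that fall inside an axis-aligned box.
    return ((points[:, 0] >= box[0]) & (points[:, 0] <= box[2]) &
            (points[:, 1] >= box[1]) & (points[:, 1] <= box[3]))

def anchor_quality(anchor, gt_box, points, alpha=0.5):
    # Illustrative score: blend geometric IoU with the fraction of the
    # ground-truth object's points that the anchor actually covers.
    gt_mask = points_in_box(points, gt_box)
    coverage = points_in_box(points[gt_mask], anchor).mean() if gt_mask.sum() else 0.0
    return alpha * bev_iou(anchor, gt_box) + (1 - alpha) * coverage
```

In a real detector, the codebase's rotated-box IoU would replace `bev_iou`, and the resulting score would feed the positive/negative anchor assignment instead of box IoU alone.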
Cointegration of SARS-CoV-2 Transmission with Weather Conditions and Mobility during the First Year of the COVID-19 Pandemic in the United States

+ Footnote †: We thank the support of NSF #1761839, #1852042, #2149956, an internal CEACSE grant, and the Office of the Vice Chancellor of Research at the University of Tennessee at Chattanooga.

Hong Qin (corresponding author), Dpt. of Comp. Sci. & Eng., U. of Tennessee, Chattanooga, TN, U.S.A., [email protected]; Syed Tareq, Dpt. of Civil & Chem. Eng., U. of Tennessee, Chattanooga, TN, U.S.A., [email protected]; William Torres, Dpt. of Comp. Sci., Adelphi U., Garden City, NY, U.S.A., [email protected]; Megan Doman, Dpt. of Comp. Sci. & Eng., U. of Tennessee, Chattanooga, TN, U.S.A., [email protected]; Cleo Falvey, Ctr. for Comp. & Integ. Biol., Rutgers State U. of New Jersey, Camden, NJ, U.S.A., [email protected]; Jamare Moore, Dpt. of Biology, Norfolk State U., U.S.A., [email protected]; Mengjun Xie, Dpt. of Comp. Sci. & Eng., U. of Tennessee, Chattanooga, TN, U.S.A., [email protected]

## I Introduction

Coronavirus infectious diseases are often seasonal and are influenced by weather [1, 2]. Since its emergence, SARS-CoV-2 has quickly established its presence in the population, and COVID-19 has been predicted to last for years to come [3]. It is therefore of great interest whether the SARS-CoV-2 pandemic will become a seasonal infectious disease. Because aerosols and droplets are major routes of transmission for coronaviruses, weather is expected to influence their transmission, and low humidity and low temperatures are expected to be associated with increased viral transmission [2, 4, 5]. Standard correlation methods have previously been used to examine the potential relationship between weather conditions and COVID-19 transmission. For example, based on the Pearson correlation method, daily COVID-19 cases in the U.S.A. were weakly correlated with air temperature and social distancing indices [6]. Based on Spearman rank correlation tests, air temperature and other weather conditions correlate with daily COVID-19 cases and deaths in Madrid, Spain [7]. Based on an additive model and piecewise linear regression, ambient temperature correlated with COVID-19 cases in 122 cities in China [8]. A significant Spearman correlation was reported between average daily temperature and reported COVID-19 cases in Jakarta, Indonesia [9]. Based on descriptive statistics, U.S. states with absolute humidity in a lower range tend to have high numbers of daily COVID-19 cases [10]. Standard regression methods can lead to spurious correlations in time series data, and for this reason the cointegration method was developed to examine the relationship between time series [11]. It is worth emphasizing that daily COVID-19 cases and weather conditions are time series data. Hence, we chose to use cointegration to examine the relationship between COVID-19 transmission, weather conditions, and confounding mobility data. The mobility data, generated from cellphone usage, were designed to describe social distancing practices in communities. We also chose to use the effective reproductive number, Rt, also known as the time-varying reproduction number, to describe the transmission of SARS-CoV-2, instead of the daily cases. The Rt represents the average number of secondary infections caused by each new infection.
Estimation of Rt can account for reporting delays and daily fluctuations in the reported cases of COVID-19 [12]. The initial wave of the pandemic is expected to be heavily influenced by the susceptibility of the population to a new pathogen and is hence less likely to be informative about weather conditions [1]. Therefore, we excluded the initial wave and ran our analysis from May 1, 2020, to February 15, 2021, for 290 days. We stopped the analysis on February 15, 2021, when about 5% of the U.S. population had received the second dose of the COVID-19 vaccine.

## II Materials and Methods

### _Data Sources_

We obtained confirmed cases for each county in the U.S.A. from the COVID-19 Data Repository by the Center for Systems Science and Engineering at Johns Hopkins University [13]. We retrieved the Apple mobility report from its website on November 16, 2021, and the Google community mobility report from its website on November 17, 2021. We wrote a Python script to parse out the driving mobility for each county in the U.S.A. We found 2638 counties in the Apple mobility report and 2837 counties in the Google mobility report, with 2834 counties shared by the three data sets of COVID-19 cases and the Apple and Google mobility reports. Among these counties, many did not have good-quality case numbers during the 290-day window of our analysis and were later dropped from further analysis (detailed in the results and Table 1). We downloaded weather data from the Copernicus Climate Change Service Climate Data Store in GRIB format [14] in November 2021. The ERA5-Land hourly data include 2-meter temperature and 2-meter dew-point temperature in Kelvin. The 2-meter temperature (T2m) is the temperature at 2 meters above the surface of the Earth. The 2-meter dew-point temperature is the temperature to which air at 2 meters above the surface of the Earth would have to be cooled for saturation to occur, and is a measure of the humidity of the air. Both were estimated by interpolating between the lowest model level and the Earth's surface, taking account of the atmospheric conditions. ERA5-Land data are provided on a 0.1 degree x 0.1 degree latitude-longitude grid, corresponding to a resolution of about 9 km. The ERA5 data are released with a delay of about three months relative to the date we retrieved the data. We wrote a Python script to parse weather conditions for specified geographic locations from the ERA5-Land hourly data in GRIB format. For each county, we parsed a 0.6 x 0.6 degree area with the reported latitude and longitude as the center, and estimated the average measurement at 16:00 on each day. Both T2m and dewpoint are measured in Kelvin.

### _Time Series Analyses_

We applied the Augmented Dickey-Fuller (ADF) test to assess the stationarity of the time series data, using the _adf.test_ function from the R _tseries_ package [15]. Time series with an ADF p-value of less than 0.05 are considered stationary and were excluded from the cointegration test. We applied the Johansen test to examine the long-term cointegration of two or three non-stationary time series with the _ca.jo_ function of the R _urca_ package [16]. We used a critical value of 0.01 to select the number of cointegration vectors r, and the first non-rejection of the null hypothesis is taken as the estimate of r. We explored lags from two days to 20 days; the minimum lag of the _ca.jo_ test function is two days.
We excluded the data sets that resulted in numeric errors during the cointegration test, likely due to frequent missing values.

### _Computing_

It is computationally intensive to extract weather conditions for thousands of locations covering the period of the pandemic from the downloaded GRIB file. It is also computationally intensive to estimate Rt for thousands of counties in the U.S.A. We ran hundreds of parallel jobs using a high-performance computing cluster, Tennessee-117, at the SimCenter at the University of Tennessee at Chattanooga (UTC). Job management scripts were written to handle the computing jobs. For the Johansen cointegration tests at all counties, we ran R scripts on a Lambda Linux workstation provided by the College of Engineering and Computer Science at UTC.

Table 1: Percentages of counties with cointegrated time-series factors.

| Lag | Rt & T2m (n=1642) | Rt & Dewpoint (n=175) | Rt & A-driving | Rt & G-workplace | Rt, T2m & A-driving | Rt, Dewpoint & A-driving | Rt, T2m & G-workplace | Rt, Dewpoint & G-workplace |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2 | **65.4%** | **93.1%** | 72.2% | 49.9% | **39.3%** | **51.9%** | **28.2%** | 39% |
| 3 | 27.9% | 65.7% | **83.7%** | **61.9%** | 13.4% | 35.9% | 17.8% | **42.6%** |
| 4 | 7.4% | 30.3% | 70.4% | 57.7% | 8.0% | 18.3% | 6.8% | 25.0% |
| 5 | 2.2% | 12% | 43.0% | 41.4% | 3.9% | 14.5% | 3.7% | 10.3% |
| 6 | 0.1% | 1.1% | 10.3% | 26.2% | 1.5% | 3.1% | 0.6% | 3.7% |

Results of the cointegration tests are presented in columns, and rows indicate different values of the lag parameter in the Johansen test. In the parentheses in the header, n represents the total number of counties with available nonstationary data sets. The highest percentage of cointegrated counties in each test is highlighted in bold. For two-factor tests, a cointegration rank of 2 was chosen at the critical value of 0.01 for the Johansen test; for three-factor tests, a rank of 3 was chosen.

## III Results

Below, we first illustrate the cointegration analyses using two examples, then present the results at the county level, and finally show the spatial patterns at the state level.

### _Cointegration can Reveal the Potential Relationship between Virus Transmission, Weather, and Mobility Variables_

The effective reproductive rate, weather conditions, and daily mobility reports are all time series. For time series, standard regression methods tend to lead to spurious correlations. Cointegration commonly occurs when a linear combination of several nonstationary time series variables results in a stationary signal whose mean and variance do not change over time.
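The authors run these tests in R (_tseries_ and _urca_). Purely as a hedged illustration of the same ADF-filter-then-Johansen workflow, the sketch below uses Python's statsmodels equivalents; the deterministic-term setting and the lag mapping (urca's lag K corresponds to k_ar_diff = K - 1 here) are assumptions, not a reproduction of the paper's exact configuration.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def is_nonstationary(series, alpha=0.05):
    # Mirror of the paper's ADF pre-filter: keep a series only if the ADF
    # test fails to reject the unit-root null (p-value >= alpha).
    return adfuller(series, autolag="AIC")[1] >= alpha

def johansen_rank(data, k_ar_diff=2, conf_col=2):
    # data: (T, k) array of k nonstationary series. conf_col=2 selects the
    # 99% critical values, matching the paper's 0.01 level. det_order=0
    # (constant term) is an assumption; the paper does not state it.
    res = coint_johansen(data, det_order=0, k_ar_diff=k_ar_diff)
    rank = 0
    for trace_stat, crit_vals in zip(res.lr1, res.cvt):
        if trace_stat > crit_vals[conf_col]:
            rank += 1          # reject "rank <= r", try the next r
        else:
            break              # first non-rejection estimates r
    return rank, res.evec[:, 0]

# Usage on hypothetical county series rt, dewpoint, driving (equal length):
# if all(map(is_nonstationary, (rt, dewpoint, driving))):
#     rank, vec = johansen_rank(np.column_stack((rt, dewpoint, driving)))
#     signal = np.column_stack((rt, dewpoint, driving)) @ vec  # as in Fig. 1
```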
We used data sets from two counties as examples to illustrate the principle of the cointegration analysis (Figures 1A and 1B). In both counties, we can see major peaks and valleys for the Rt, dewpoint, and Apple driving mobility variables (the first three sub-figures in Figures 1A and 1B). The cointegrated signals are linear combinations of the three time series variables based on the eigenvector of the Johansen test (the last sub-figure in Figures 1A and 1B). The cointegrated signals become relatively flat with fluctuating noise over time, and can be considered stationary based on the ADF test (p-values = 0.04 and 0.01 for McKenzie, ND and Clay, MN, respectively).

### _Weather and Mobility Variables Cointegrate with the Effective Reproductive Rate at the County Level_

We examined the cointegration of the effective reproductive rate Rt with the weather variables (T2m and dewpoint) and the mobility variables (Apple driving and Google workplace mobility estimates) at all available counties in the U.S.A. (Table 1). Because cointegration tests are only valid for non-stationary input, all of the time series data were first tested for stationarity, and we filtered out counties in which the input factors are stationary based on the ADF test with a p-value of less than 0.05. We studied the effect of the lag parameter of the Johansen cointegration test from two days to 20 days; the two-day lag is the minimum allowed lag in the Johansen test. We did not observe any qualitative changes in the results when the lag parameter was longer than six days. To cointegrate with Rt, we found that the optimal lag is 2 days for T2m and dewpoint, and 3 days for Apple driving and Google workplace mobility (Table 1). We found that dewpoint cointegrated more frequently with Rt than T2m, with the cautionary note that the 175 nonstationary dewpoint data sets were far fewer than the 1642 nonstationary T2m data sets. We observed that Apple driving mobility cointegrates with Rt more frequently than Google workplace mobility. The three-factor cointegration results also support that dewpoint and Apple driving mobility are more informative about Rt than the other two variables.

### _Regional Patterns of Cointegration Results at the State Level_

To examine the regional patterns in the cointegration of weather and mobility with Rt, we visualized the percentages of cointegrated counties in each state (Figures 2 and 3). The optimal lag parameters based on these state-level visualizations are consistent with the overall analysis in Table 1. Clusters of states with similar percentages of cointegrated counties can be observed in the Rt and T2m test, and in the Rt and Apple driving mobility test (first and second columns of Figure 2). Clustering of states with similar effects is not obvious in the Rt and Google workplace test, and is inconclusive in the Rt and dewpoint test because many states do not have available nonstationary dewpoint data sets. Clusters of states with similar effects are more conspicuous in the three-factor cointegration tests (Figure 2). Overall, the regional patterns for the cointegration of temperature with Rt are more conspicuous than those of dewpoint. Both Apple driving and Google workplace mobility generally share comparable regional patterns for cointegration with Rt, but there is a clear difference between these mobility proxies in some states.
For example, in Alaska, Google workplace mobility cointegrates with Rt better than Apple driving mobility, in the sense that Alaska appears redder in the fourth column than in the corresponding cells of the third column in Figure 2.

## IV Discussion and Conclusion

Overall, our research detected strong cointegration of Rt, mobility, and dewpoint with two or three days of lag. However, we are aware of the limitations of the cointegration method. Because cointegration tests require non-stationary time series, we had to exclude many data sets that could not pass the ADF stationarity test. The 290-day window of the present study may not be long enough to address long-term cointegration effects. However, longer time windows for studying the pandemic in the U.S. carry additional complexities. During the second year of the pandemic in the U.S.A., vaccination rates in the population were likely to have a significant role in reducing the spread of the virus [17]. In addition, new variants, such as Delta and Omicron, are known to cause new peaks of daily cases [18]. There is also evidence that mobility data became noisy and unreliable as a proxy of social distancing practice as the pandemic went on [19]. Therefore, our choice of the 290-day window enabled us to focus on the roles of a relatively small set of confounding factors, especially the weather and mobility variables. Dewpoint is a proxy of humidity. Interestingly, our results suggest that humidity cointegrates with Rt better than temperature. SARS-CoV-2 is expected to be more transmissible in drier air, and most SARS-CoV-2 transmission is expected to occur indoors. Indoor temperatures are often controlled in the U.S.; indoor humidity, however, is generally less controlled in most U.S. households. In one study, indoor and outdoor absolute humidity were highly correlated in Greater Boston, Massachusetts [20]. Interestingly, we found that dewpoint and temperature cointegrate with Rt as effectively as the mobility measurements. It has been argued that weather conditions have much less effect on virus transmission than human mobility and social distancing [21]. It is plausible that weather and human behavior are correlated, in the sense that people tend to stay indoors in cold weather, during which period the low humidity associated with winter acts synergistically to drive up viral transmission [22]. We are aware that a cointegration relationship does not necessarily imply comparable importance in the real world. Our model is a simplification of a complex pandemic, studying a subset of the factors that influence the spread of novel pathogens. Other factors have already been shown to impact Rt but were not included, such as vaccination, masking protocols, effective contact tracing, and the quarantine of infected individuals [22].

Figure 2: Percentage of counties in which the effective reproductive rate (Rt) cointegrates with the temperature at two meters (T2m), dewpoint, Google workplace mobility (G-workplace), and Apple driving mobility (A-driving) in the U.S., with the lag parameter ranging from two days to six days. Each state is colored based on the percentage of counties with the two cointegrated factors described in the first row. Cointegration of Rt and T2m generally has a lag of 2 days. Cointegration of Rt and dewpoint generally has an optimal lag of 2 days, but is also substantial at a lag of 3 days. Cointegrations of Rt with the two mobility estimates have an optimal lag of 3 or 4 days. The critical value of the cointegration tests was chosen at 1%. Augmented Dickey-Fuller (ADF) tests were first applied to all three factors.
States in deep red indicate high percentages of counties with cointegrated factors, and states in light green indicate low percentages. Factors were treated as stationary if their ADF test p-values were less than 0.05 and were excluded from the Johansen cointegration test. States with no available non-stationary factors are colored in gray in each column.

Therefore, future studies will be needed to disentangle the roles of weather conditions and mobility on viral transmission. Overall, we concluded that temperature and dewpoint cointegrated with the effective reproductive rate of SARS-CoV-2 at levels comparable to those of the mobility measurements at the county level during the first year of the pandemic in the U.S.A. Our results align with previous literature suggesting that COVID-19 spread is seasonal and that social distancing measures can curb the spread of novel pandemics. Therefore, our research adds to a growing body of literature that can inform policy decisions and reduce the exponential spread of novel diseases [2].

## Acknowledgment

We thank the helpful comments and discussion from Landen Bauder, Derek Campbell, and the many faculty and students who participated in the iCompBio programs. We thank the computing support from the SimCenter and the College of Engineering and Computer Science at the University of Tennessee at Chattanooga.

Figure 3: Regional patterns can be observed in the three-factor cointegration results at the state level. The best cointegrated three factors are the effective reproductive rate (Rt), Apple driving mobility (A-driving), and the dewpoint at two meters, with an optimal lag of 2 days. Each state is colored based on its percentage of counties in which the three factors are cointegrated. Each column represents a cointegration of the three factors given in the first row. Procedures are similar to those in Figure 2.

## References

* [1] R. E. Baker, W. Yang, G. A. Vecchi, C. J. E. Metcalf, and B. T. Grenfell, "Susceptible supply limits the role of climate in the early SARS-CoV-2 pandemic," _Science_, vol. 369, no. 6501, pp. 315-319, Jul 17, 2020.
* [2] C. J. Carlson, A. C. R. Gomez, S. Bansal, and S. J. Ryan, "Misconceptions about weather and seasonality must not misguide COVID-19 response," _Nature Communications_, vol. 11, no. 1, pp. 4312, 2020.
* [3] S. M. Kissler, C. Tedijanto, E. Goldstein, Y. H. Grad, and M. Lipsitch, "Projecting the transmission dynamics of SARS-CoV-2 through the postpandemic period," _Science_, vol. 368, no. 6493, pp. 860-868, May 22, 2020.
* [4] S. Riddell, S. Goldie, A. Hill, D. Eagles, and T. W. Drew, "The effect of temperature on persistence of SARS-CoV-2 on common surfaces," _Virology Journal_, vol. 17, no. 1, pp. 145, 2020.
* [5] S. Chen, K. Prettner, M. Kuhn, P. Geldsetzer, C. Wang, T. Barnighausen, and D. E. Bloom, "Climate and the spread of COVID-19," _Scientific Reports_, vol. 11, no. 1, pp. 9042, 2021.
* [6] X. Zhang, V. Maggioni, P. Houser, Y. Xue, and Y. Mei, "The impact of weather condition and social activity on COVID-19 transmission in the United States," _Journal of Environmental Management_, vol. 302, no. Pt B, pp. 114085, 2022.
* [7] M. A. Zoran, R. S. Savastru, D. M. Savastru, M. N. Tautan, L. A. Baschir, and D. V.
Tenciu, \"Assessing the impact of air pollution and climate seasonality on COVID-19 multiwaves in Madrid, Spain,\" _Environ Res,_ vol. 203, pp. 111849, Jan, 2022. * [8] J. Xie, and Y. Zhu, \"Association between ambient temperature and COVID-19 infection in 122 cities from China,\" _Sci Total Environ,_ vol. 724, pp. 138201, Jul 1, 2020. * [9] R. Tosepu, J. Gunawan, D. S. Effendy, O. A. I. Ahmad, H. Lestari, H. Bahar, and P. Asfian, \"Correlation between weather and Covid-19 pandemic in Jakarta, Indonesia,\" _Sci Total Environ,_ vol. 725, pp. 138436, Jul 10, 2020. * [10] S. Gupta, G. S. Raghuwanshi, and A. Chanda, \"Effect of weather on COVID-19 spread in the US: A prediction model for India in 2020,\" _Sci Total Environ,_ vol. 728, pp. 138860, Aug 1, 2020. * [11] C. W. J. Granger, and P. Newbold, \"Spurious regressions in econometrics,\" _Journal of Econometrics,_ vol. 2, no. 2, pp. 111-120, 1974/07/01/, 1974. * [12] S. Abbott, J. Helleweel, K. Sherratt, K. Gostic, J. Jickson, H. S. Badr, M. DeWitt, R. Thompson, EpiForcasts, and S. Funk, \"EpiNow2: Estimate Real-Time Case Counts and Time-Varying Epidemiological Parameters,\" 2020. * [13] E. Dong, H. Du, and L. Gardner, \"An interactive web-based dashboard to track COVID-19 in real time,\" _Lancet Infect Dis,_ vol. 20, no. 5, pp. 533-534, May, 2020. * [14] M. J. Sabater. \"ERA5-Land hourly data from 1981 to present. Copernicus Climate Change Service (C3S) Climate Data Store (CDS),\" November 14, 2021. * [15] A. Trapletti, and K. Hornik. \"tseries: Time Series Analysis and Computation Finance,\" [https://CRAN.R-project.org/package=tseries](https://CRAN.R-project.org/package=tseries). * [16] B. Pfaff, _Analysis of Integrated and Cointegrated Time Series with R_, second ed., New York: Springer, New York, 2008. * [17] D. W. Eyre, D. Taylor, M. Purver, D. Chapman, T. Fowler, K. B. Pouwels, A. S. Walker, and T. E. A. Peto, \"Effect of Covid-19 Vaccination on Transmission of Alpha and Delta Variants,\" _New England Journal of Medicine,_ vol. 386, no. 8, pp. 744-756, 2022. * [18] D. Tian, Y. Sun, H. Xu, and Q. Ye, \"The emergence and epidemic characteristics of the highly mutated SARS-CoV-2 Omicron variant,\" _J Med Virol_, Feb 3, 2022. * [19] R. Gotumukkala, S. Katragadda, R. T. Bhupatiraju, A. M. Kamal, V. Raghavan, H. Chu, R. Kolluru, and Z. Ashkar, \"Exploring the relationship between mobility and COVID- 19 infection rates for the second peak in the United States using phase-wise association,\" _BMC Public Health,_ vol. 21, no. 1, pp. 1669, 2021/09/14, 2021. * [20] J. L. Nguyen, J. Schwartz, and D. W. Dockery, \"The relationship between indoor and outdoor temperature, apparent temperature, relative humidity, and absolute humidity,\" _Indoor Air,_ vol. 24, no. 1, pp. 103-12, Feb, 2014. * [21] Q. Bukhari, J. M. Massaro, R. B. D'Agostino, S., and S. Khan, \"Effects of Weather on Coronavirus Pandemic,\" _Int J Environ Res Public Health,_ vol. 17, no. 15, Jul 27, 2020. * [22] S. Greffe, F. Espinasse, C. Duran, S. Labrune, M. Sirol, B. Mantalvan, M. C. Gramer, C. Babulle, G. Do Rosario, Q. Vauvillier, A. Huet, A. Van der Heidjen, J. Tysebaert, L. F. Kramarz, J. P. Rabes, G. Pellissier, T. Chinet, F. Moreau, and E. Rouveix, \"[Nasopharyngeal carriage of SARS-CoV-2 among health personnel with symptoms suggestive of COVID-19 in a University Hospital in the Paris suburbs],\" _Rev Med Interme,_ vol. 41, no. 8, pp. 510-516, Aug, 2020.
Correlation between weather and the transmission of SARS-CoV-2 may suggest its seasonality. We examined the cointegration of virus transmission with daily temperature, dewpoint, and the confounding factors of mobility measurements during the first year of the pandemic in the United States. Specifically, we examined the cointegration of the effective reproductive rate, Rt, of the virus with the dewpoint at two meters, the temperature at two meters, Apple driving mobility, and Google workplace mobility measurements. Dewpoint and Apple driving mobility are the factors that best cointegrate with Rt. The optimal lag is two days for cointegration between Rt and the weather variables, and three days for Rt and mobility. We observed clusters of states that share similar cointegration results, suggesting regional patterns. Our results support the correlation of weather with the spread of SARS-CoV-2 and its potential seasonality.

Cointegration, COVID-19, SARS-CoV-2, mobility, weather
# Automated Extraction of Break Lines in TLS Data of Real Environment

K. Kitamura (corresponding author), N. D'Apuzzo, N. Kochi, S. Kaneko

## 1 Introduction

In the concrete real situation, when we use TLS to take in the point clouds, we are often disturbed by the noises created by humans, cars and plants coming in between the TLS and the objects. There are already many valuable reports written on the extraction of break-lines of building structures out of the point clouds and on the production of their modelling (Stamos, 2002; Chen, 2007; Konno, 2007). Some of them even speculate on sensor noises (Jiang, 1999) or create noises artificially (Zhang, 2001; Mitra, 2003), but there is no report, so far, which deals effectively with the noises of humans, cars and plants like our present report. In the past, we removed the noises only manually, but this is of no avail any more with the recent developments of scanning technology, which create hundreds of millions of points in a short period of time. Besides, the ever growing size of the objects demands more than one scan. So, we have made an automatic system to cope with all the related problems. The results of the indoor and outdoor experiments of our system will be explained in the following order.

## 2 General Outline of Process

With our algorithm we extract the planar regions and break-lines in the following order. First: the production of a 2-dimensional range image out of the point cloud data. Second: the extraction of planar / curved surfaces as the pre-processing step of the segmentation. Third: the segmentation. Fourth: the extraction of the break-lines (see Figure 1). The parameters in the explanation below are values obtained experimentally from several samples.

Figure 1: Flow chart of Process

### Normal Vector

Here we explain the calculation of the normal vector of the local plane, used in the segmentation and its pre-processing steps. We calculate the local plane by the Least Squares Method, using the 3-dimensional coordinates of the n × n points in the neighborhood of the interest point. In our experiment we make "n" 5 or 7. We make this calculation for all the points in the cloud. Figure 2 shows how the local plane looks when we calculate with 5 × 5 points. The central point which we use for the calculation of the local plane is shown as a black dot (interest point) and the neighbor points as white points. The arrow mark indicates the direction of the scan to the plane. With both (a) and (b) we are scanning a plane. With (a), since the scan is done at an angle fairly close to the normal vector of the object plane, the distance between the points is very short, whereas with (b), since the scan is done at an angle oblique to the object plane, the distance between the points is greater in depth. In both cases, however, since the points lie on a plane, the local plane can be calculated accurately. Now, (c) and (d) are the results of a scan on an Edge region. The (c) shows the black dot at the border of the plane A, which is closer to the scanner, and the plane B, which is behind A. We call a part like this a "Jump Edge". The (d) shows that the black dot is at the border of the planes A and B, which are, unlike (c), connected to each other with the Edge as their boundary. The (e) shows a ragged plane, which is neither a planar nor a curved surface.
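The paper specifies only that the local plane is fitted by least squares over the n × n neighborhood. As a hedged sketch of one standard way to do this (a total-least-squares fit via SVD, returning the quantities that the later planarity tests use), consider:

```python
import numpy as np

def local_plane(points):
    # Total-least-squares plane fit to an n x n neighbourhood of 3D points
    # (rows of `points`). The right singular vector with the smallest
    # singular value of the centred coordinates is the plane normal.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                                 # unit normal of the plane
    residuals = np.abs((points - centroid) @ normal)
    # The maximum point-to-plane distance is what Chapter 4 later compares
    # against d_max = 0.02 m to judge the goodness of fit.
    return normal, residuals.max()
```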
### Segmentation and its Pre-Processing Step

There are three different methods for the segmentation of a point cloud. The Model Based Method fits the point cloud to existing models (Wang, 2003; Min, 2005). The Edge Based Method detects the Edges in a range image and determines the region surrounded by the Edges as one region (Zhang, 2001; Bellon, 2002; Sappa, 2001). And the Region Based Method labels all the points which are similar in local plane with the same label, judging from the normal vectors and laser intensity (Stamos, 2002; Yu, 2001; Chen, 2007; Konno, 2007). Both the Model Based and the Edge Based Methods are suited for the segmentation of a big, simple-form object such as a facade. However, as we are interested in the segmentation of detailed features, such as window frames, ceilings, columns or pillars, we apply the Region Based Method which uses the local normal vectors. Now, Konno and his group (Konno, 2007), also applying the Region Based Method, extract feature lines in order to determine the proper starting point of labeling. But since this requires the feature lines to be clear and exact, it can work only on a planar surface. Besides, in the real survey site, the boundary of the planes often cannot be extracted as an exact feature line, and this could easily undermine the accuracy of the labeling. Again, Stamos and Allen (Stamos, 2002) also have a challenge to clear. They determine the points used for the normal calculation by a fixed threshold value of the distance from the interest point. But as Figure 2 (b) indicates, when the scan is made at an oblique angle to the local plane, the distance between points gets longer even on the same plane, with the result that its difference from the jump edge (c) becomes inevitably blurred and the labeling result would not meet the necessary accuracy. In our system, however, we have cleared all these challenges. As a pre-processing step for labeling we have invented a system by which the starting point always appears on a planar or curved surface. We judge whether the starting point is on a planar or curved surface by examining (1) the fitting accuracy (goodness of local plane fitting) at the interest point, (2) the curvature, and (3) the distance to the neighbor points. By this examination we can determine the pertinent starting point of labeling, and the initial label is given only to a point which is judged to be on such a surface. Then, calculating the difference of the direction of the normal vector of the interest point and that of its neighboring points, we put the same label (number) on them if the difference is less than the threshold value. Next, we proceed to expand the labeling (Region Growing Procedure) to the points which have not yet been labeled. The judgment of the applicability of this labeling is made by using the distance between the interest point and the local plane which is calculated from the labeled points neighboring the interest point. And lastly, taking two different planes, we calculate their intersecting line and determine it as the break-line.

## 3 Range Image Production

The measuring sphere of the TLS is controlled by the azimuth angle "h" of the beam direction on the horizontal plane and by the elevation angle "v" on the vertical plane. And by scanning we can obtain for each point the 3-dimensional coordinates (x, y, z) and the reflection intensity as well as the color data of RGB.
As in Figure 3, if we look at this projection as the projection onto the plane with h and v as its axes, we have a 2-dimensional image (h, v) which corresponds to the 3-dimensional coordinates (x, y, z). We call this 2D image the "range image". As a result, for each point we have the 3-dimensional coordinates (x, y, z), the range image coordinates (h, v), the reflection intensity and RGB. With some TLS, which provide h and v at an equal interval in the projecting direction, we can use the data obtained by the TLS directly as the range image data. However, with data edited by the operator, it sometimes happens that some of the coordinate data are dropped from the range image or are not given at an equal interval. In such cases we must reconstruct the range image, and for this reason we always make the range image first in our algorithm. We make a regular grid with the designated resolution on the projection plane and register the point closest to each mesh intersection. With this in hand, we can later perform the calculation of the normal vectors and the segmentation process like the processing of a 2-dimensional image. When we scan an object from different angles, we must make a different range image each time. As to the resolution, usually we determine it by the sphere of the projecting direction and the number of the data points, but we can change the resolution if we so wish. At the top of Figure 4 you will find the 3-dimensional picture of the original point cloud data and at the bottom, the picture of the range image created from these data.

## 4 Planar / Curved Surface Extraction

This is a pre-processing step for determining the pertinent starting point of the labeling needed for the segmentation in the next chapter. The segmentation is the process of pasting the same label (number) on the points belonging to the same planar or curved surface. The determining factor is the difference of the direction of the normal vectors between an interest point and its neighbor points. But if the interest point is on the ragged plane (e) in Figure 2, or on or near an Edge (c) (d), the normal vector calculation is liable to error. So, if we make the labeling with such a point as the starting point, the result could be erroneous. We therefore use the next three judging methods to eliminate beforehand the points which do not belong to a planar / curved surface, and after that we proceed to paste the labels.

### Judging by Goodness of Fitting

To determine the accuracy of the fitting, we use the maximum distance from the local plane, which was fitted at the interest point, to each of the neighbor points used for the fitting. If the distance is longer than the threshold value d\({}_{max}\) which has been set up beforehand, we judge that the fitting made at the interest point is not correct. The purpose of this process is to eliminate the points on a ragged plane or the points on an Edge (Figure 2 (c) (d)). In our experiment we made the threshold value d\({}_{max}\) = 0.02 [m].

### Judging by Curvature

First, at the interest point, we obtain the standard deviations (\(\sigma_{N_x}\), \(\sigma_{N_y}\), \(\sigma_{N_z}\)) of the components of the normal vectors of the neighboring n × n points in the direction of each axis. Using them, we calculate the curvature C by the equations below.
Since near the Edge, as in Figure 2 (d), the direction of the normal vector is diversified, this value becomes bigger. Accordingly, the points in such a region are eliminated. As to this calculation, first, for the whole of the range image, we calculate the curvature of each point and obtain the maximum C\({}_{max}\) and minimum C\({}_{min}\). We also calculate the threshold value C\({}_{th}\) by the equations below. As the parameter we set \(\gamma\) within the range of 0.0 to 1.0. In our experiment we set n = 5 and \(\gamma\) = 0.5.

\[C = \sqrt{\sigma_{N_x}^2 + \sigma_{N_y}^2 + \sigma_{N_z}^2}\]
\[C_{th} = (C_{max} - C_{min}) \times \gamma + C_{min}\] (1)

Figure 4: Top: Scanned Data (3D). Bottom: Range Image (2D)

Figure 3: Coordinates of Range Image and 3-Dimensional Coordinates of an Object

### Judging by the Distance from the Surrounding

Lastly, we make the judgment by using the distance between the interest point and the neighbor points. At the Jump Edge, as shown in Figure 2 (c), the distance between the points changes greatly. So, we obtain the maximum distance and the minimum distance and use their ratio (maximum value / minimum value) for the judgment. In our case we made the threshold value R\({}_{th}\) = 2.0, and if the ratio was greater than this, we judged the point to be near the Jump Edge and eliminated it accordingly.

## 5 Segmentation

### Initial Labeling

We make this operation with the points which were judged to be on the planar / curved surface in the previous chapter. As explained there, we compare the direction of the normal vector of the interest point with that of its neighbor points, and if the difference is less than the threshold value \(\theta_{th}\), we paste the same label (number). We search for the point to be processed from the upper left side of the range image and make it the starting point. Since we use the range image, we can operate with the same algorithm as 2D labeling. Here we have \(\theta_{th}\) = 2.0 [deg].

### Region Growing Procedure

Now we work on the points which were not labeled by the initial labeling, in order to expand the region. In the previous operation the points near the Edges were not labeled, but some of them still lie on a neighboring plane, and the purpose of this process is to redeem them by labeling. First we look at a labeled point and an unlabeled point adjacent to it. Using the labeled points in the neighborhood (k × k) of the interest point, we calculate the local plane with the points which have the same label. If we have more than one label, we calculate the local plane for each one of them. We then calculate the distance between the interest point and the local plane of each label and select the closest label; but if the distance is bigger than the threshold value L\({}_{th}\), we ignore the point. Furthermore, if the change (\(\theta_{th}\)) of the direction of the normal vector of the local plane before and after the Region Growing Procedure is within its threshold value, and if the ratio (R\({}_{th}\)) of the distance between the interest point and the neighboring points to the average point interval of the neighbor region used to calculate the local plane is within its threshold value, we judge that the interest point belongs to the plane of the neighboring label and redeem it.
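As a condensed, illustrative sketch of this redemption test (one plane fit per candidate label and closest-plane assignment under a distance threshold; the normal-change and spacing-ratio checks are omitted for brevity, and all names are ours, not the paper's):

```python
import numpy as np

def redeem_point(p, neighbours, neighbour_labels, dist_th=0.01):
    # Try to attach an unlabeled point p to one of the labelled planes in
    # its k x k neighbourhood: fit a plane per label, keep the closest one
    # within dist_th (the paper's L_th = 0.01 m), else return -1 (noise).
    best_label, best_dist = -1, dist_th
    for lab in np.unique(neighbour_labels):
        pts = neighbours[neighbour_labels == lab]
        if len(pts) < 3:
            continue                              # too few points to fit a plane
        c = pts.mean(axis=0)
        n = np.linalg.svd(pts - c)[2][-1]         # plane normal (see the 2.x sketch)
        d = abs((p - c) @ n)                      # point-to-plane distance
        if d < best_dist:
            best_label, best_dist = lab, d
    return best_label
```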
Here we have k = 7, L\({}_{th}\) = 0.01 [m], \(\theta_{th}\) = 1.5 [deg], R\({}_{th}\) = 2.0. All other points which have not been labeled are considered noise, but if we enlarge the value of \(\theta_{th}\) for them, it may be possible to redeem them.

## 6 Break-Line Extraction

Picking up two labeled regions which are next to each other in the range image, we calculate their intersection line as the break-line (a geometric sketch of this computation is given after Figure 5 below). The positions of both ends of the line are located by projecting the points in close proximity to the intersecting line onto the intersecting line itself. But if two planes are parallel, or if they are separated in 3D space, they cannot have an intersecting line. Therefore, we developed the system to display at the same time the boundary line (the line connecting the points on the contours). Since the boundary line surrounds the labels, it always comes out as an enclosed circuit.

## 7 Result

We applied our algorithm to three kinds of objects: an indoor object (Figure 5), an intersection of streets (Figure 6) and a building (Figures 4 and 7). The results of their segmentation are shown with a different color for each plane of the same label. For the TLS we used a Topcon GLS-1000. For the PC we used CPU: Intel Core2 Duo P8700 2.53 GHz, RAM: 2.8 GB. Table 1 shows the number of points of each data set and the time required for processing. Table 2 shows the original data size (txt file format) and the size after the break-lines and boundaries are output in the dxf file format. Figures 5 and 6 show how the noises (black marks) of the pedestrians and cars passing between the TLS and the buildings are eliminated.

Figure 5: Indoor Sample. Top: Segmentation Result; black shows noise, and the area surrounded by the dotted line is the noise of pedestrians. Bottom: Break-line and Boundary
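As the hedged geometric sketch of the Chapter 6 step promised above (two fitted planes in, one bounded break-line segment out; the parallel or separated case falls back to the boundary lines by returning None):

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2, eps=1e-8):
    # Planes given as n . x = d. Returns a point on the intersection line
    # and its unit direction, or None for (near-)parallel planes.
    direction = np.cross(n1, n2)
    norm = np.linalg.norm(direction)
    if norm < eps:
        return None
    direction /= norm
    # Minimum-norm point satisfying both plane equations.
    A = np.stack([n1, n2])
    point = np.linalg.lstsq(A, np.array([d1, d2]), rcond=None)[0]
    return point, direction

def project_to_line(points, point, direction):
    # Locate the segment's end points by projecting the labelled points in
    # close proximity onto the intersection line, as in Chapter 6.
    t = (points - point) @ direction
    return point + t.min() * direction, point + t.max() * direction
```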
Table 1 shows each sample with its number of points and its processing time. The building sample took a long time because it had over ten million points, but the time can be shortened by relaxing the resolution density when producing the range image, as explained in Chapter 2; though the accuracy diminishes, the result is still clear enough to grasp the outlook. Table 2 shows the data size of input and output; we can see that the size can be compressed down to 1/4 - 1/9. As the building data have a CAD plan (Figure 7, bottom), we measured the deviance between the CAD plan and the boundary and break-lines produced by our algorithm (Figure 7, top). We selected 32 points at the corners of a window frame both from the CAD plan and from the PC output data and calculated the deviance with our Topcon software "Image Master" (formerly PI-3000; we have also developed human body measurement using this technology) for 3D measurement (Kochi, 2003, 2009; Kitamura, 2009). Table 3 shows the result.

Table 1: Number of Points and Processing Time

| Sample Name | Point Number (,000) | Processing Time |
| --- | --- | --- |
| Indoor | 181 | 9 sec |
| Intersection | 1276 | 1 min 34 sec |
| Building | 10673 | 38 min 12 sec |

Table 3: Comparison of the PC output data and the CAD plan; RMS and MAX [mm] of the deviance on the plan

|  | x | y |
| --- | --- | --- |
| RMS | 25.9 | 26.2 |
| MAX | 51.5 | 59.8 |

Figure 6: Intersection Sample. Top: Range Image. Middle: Segmentation Result; the area surrounded by the dotted line is the area of the noises of passing cars. Bottom: The top sample after the elimination of the noises

Figure 7: Top: Boundary and Break-lines produced by our algorithm. Bottom: CAD Plan

## 8 Recapitulation

We created an algorithm to segment the planes and to extract the break-lines, while automatically eliminating the noises of the pedestrians, cars and plants which come in between the TLS and the objects while scanning. We can see the elimination of the noises in the samples of an indoor scene (Figure 5) and an intersection (Figure 6). In the past, especially at a location with as much noise as an intersection, we had to spend several hours on manual work. With our new algorithm, however, we can do the same in 1.5 minutes, saving an enormous amount of time. As to the deviance in the building data, we found it to be 25-26 mm in RMS between the CAD data and the data obtained by our algorithm. This means that, with our algorithm at its present stage of development, we can already obtain results of draft-level accuracy from the point cloud data. Besides, by transposing the point cloud data to the break-lines or boundary lines, the data size can be compressed down to 1/4 - 1/9. This enables us to recognize the exterior features and shape of an object even without a high-performance PC, which is very helpful for understanding the form on the real site. The processing of a building took us nearly 40 minutes, but if we relax the intensity of the resolution in making the range image, we can reduce the time considerably. Our ultimate goal is the automated production of a perfect plan out of the point clouds. But as we cannot yet totally eliminate the measuring or calculation errors, our algorithm is not self-sufficient, at this point, for finalizing the total process. Nevertheless, it is practical enough for the pre-processing steps, such, for example, as assisting the understanding of the object form or making a draft image, and it has proved how efficiently it can simplify and expedite such processes. Our next step is to develop an algorithm which can automatically integrate the operations of more than one TLS scan, unifying all the point cloud data gathered from different angles (stand points).

## References

* Bellon, O. R. P., and Silva, L., 2002. New Improvements to Range Image Segmentation by Edge Detection. _IEEE Signal Processing Letters_.
* Chen, C., and Stamos, I., 2007. Range Image Segmentation for Modeling and Object Detection in Urban Scene. _The 6th International Conference on 3-D Digital Imaging and Modeling_, Montreal, Canada, August 21-23.
* Jiang, X. and Bunke, H., 1999. Edge Detection in Range Images Based on Scan Line Approximation.
_Computer Vision and Image Understanding_, Vol. 73, No. 2, pp. 183-199.
* Kitamura, K., Kochi, N., Watanabe, H., Yamada, M., and Kaneko, S., 2009. Human Body Measurement by Robust Stereo-matching. _9th Conference on Optical 3-D Measurement Techniques_, July 1-4, Vienna University of Technology, Vol. 2, pp. 254-263.
* Kochi, N., Ito, T., Noma, T., Otani, H., Nishimura, S., and Ito, J., 2003. PC-based 3D Image Measuring Station with digital camera, an example of its actual application on a historical ruin. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, Vol. XXXIV, Part 5/W12, pp. 195-199.
* Kochi, N., 2009. Photogrammetry. _Handbook of Optical Metrology: Principles and Applications_, Yoshizawa, T. (Ed.), Taylor and Francis, New York, Chapter 22.
* Konno, T., Konno, K., and Chiba, N., 2007. Ridge Lines Extraction by Hierarchical Plane Segmentation of Measured Point Clouds. _Nicograph_, Vol. 6, No. 4, pp. 197-206.
* Min, J., and Bowyer, K., 2005. Improved range image segmentation by analyzing surface fit patterns. _Computer Vision and Image Understanding_, Vol. 97, Issue 2, pp. 242-258.
* Mitra, N., and Nguyen, A., 2003. Estimating Surface Normals in Noisy Point Cloud Data. _Proc. of the 19th Annual Symposium on Computational Geometry_.
* Sappa, A. D., and Devy, M., 2001. Fast Segmentation by an Edge Detection Strategy. _Proc. 3rd International Conference on 3-D Digital Imaging and Modeling_, Quebec, Canada, pp. 292-299.
* Stamos, I., and Allen, P. K., 2002. Geometry and Texture Recovery of Scenes of Large Scale. _Computer Vision and Image Understanding_.
* Wang, H., and Suter, D., 2003. A model-based range image segmentation algorithm using a novel robust estimator. _Proc. 3rd International Workshop on SCTV_.
* Yu, Y., Ferencz, A., and Malik, J., 2001. Extracting objects from range and radiance images. _IEEE Trans. Visualization and Computer Graphics_, Vol. 7, No. 4, pp. 351-364.
* Zhang, Y., Sun, Y., Sari-Sarraf, H., and Abidi, M., 2001. Impact of Intensity Edge Map on Segmentation of Noisy Range Images. _Proc. SPIE Conf. on Three-Dimensional Image Capture and Applications III_, Vol. 3958, pp. 260-269.
We are researching a novel method to extract the break-lines of a building from the point clouds obtained with a scanner (Terrestrial Laser Scanner: TLS). The algorithm we have developed segments the point cloud into planar surfaces and extracts the intersection line of two adjacent planes as the break-line. Since the algorithm is not yet completely free from calculation errors and TLS measurement errors, it is used primarily for pre-processing steps, such as creating a plan or a model, assisting the understanding of an object's form, or making a rough sketch of an object. Even for these preliminary tasks, until now we had to resort to manual work and various applications. We have therefore developed a novel system that extracts the surfaces and break-lines fully automatically, which greatly improves the efficiency of the whole operation. We have also developed a system that converts the point cloud data into polygon data of surfaces and lines, greatly alleviating the burden of heavy data accumulation. In addition, to facilitate measurement with TLS in real environments, our algorithm segments the planes and extracts the break-lines of objects while automatically eliminating the disturbing noise created by pedestrians, cars, trees, and grass intervening between the TLS and the objects.

Keywords: TLS, Point Cloud, Segmentation, Surface, Extraction, Algorithm, Three-dimensional, Measurement
# A Novel Perception Entropy Metric for Optimizing Vehicle Perception with LiDAR Deployment

Yongjiang He, Peng Cao, Zhongling Su, and Xiaobo Liu

This work was supported in part by the National Natural Science Foundation of China under Grant No. 52172395 and the Natural Science Foundation of Sichuan, China, under Grant No. 2022NSFSC0476. (Corresponding author: Xiaobo Liu.) Yongjiang He, Peng Cao, and Xiaobo Liu are with the School of Transportation and Logistics, Southwest Jiaotong University, Chengdu, 610031, China (e-mail: [email protected]; [email protected]; [email protected]). Zhongling Su is with the Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China (e-mail: [email protected]).

## I Introduction

Light Detection and Ranging (LiDAR) can generate detailed 3D spatial data in real time, based on which traffic objects can be detected and tracked [1, 2, 3]. To advance the progress of connected and automated vehicles (CAVs), the deployment of LiDARs on vehicles or at roadside units is becoming increasingly prevalent for gathering detailed traffic information. However, differences in detection accuracy among various LiDAR deployments range from 5% to 10%, which can cause significant difficulties in motion planning, vehicle control, obstacle avoidance, and other autonomous driving functions [4, 5, 6]. Therefore, strategically deploying LiDAR systems to maximize their perception capabilities is imperative.

In the quest to identify the optimal LiDAR deployment, evaluating the perception performance of different deployments is essential. Consequently, developing an accurate and efficient evaluation metric for LiDAR perception performance is crucial. Current research on LiDAR perception performance evaluation metrics can be divided into two categories. The first category is object detection-based metrics, which evaluate LiDAR perception performance based on object detection algorithms [7, 8]. Metrics such as _Recall_, mean Average Precision (_mAP_), Intersection over Union (_IoU_), and _confidence_ are widely used to evaluate LiDAR performance. These metrics are considered the ground truth for evaluating LiDAR perception performance because they accurately reflect LiDAR's capabilities in vehicle detection, classification, and localization [9, 10, 11]. However, object detection-based metrics rely on complex deep learning and clustering algorithms, which require time-consuming training and significant computing power. The second category is point cloud data-based metrics, which encompass parameters such as point cloud coverage area, number, and density [4, 12, 13, 14]. These metrics are derived through straightforward statistical analyses, demanding minimal computational resources to offer a rapid assessment of LiDAR perception performance. However, existing point cloud data-based metrics overlook the impact of vehicle point cloud distribution on perception performance, resulting in an inability to accurately evaluate LiDAR perception performance.

Optimizing LiDAR deployment in real-world scenarios is challenging. One major issue is the lack of metrics that can simultaneously generate fast and accurate evaluations based on either object detection or point cloud data. This makes it difficult to effectively assess LiDAR perception performance [15].
Additionally, to enhance LiDAR perception capability, some studies have proposed optimal deployment configurations through simulation tools and optimization methods [16, 17]. However, existing simulation tools are limited to simulating LiDARs with uniformly distributed beams and cannot handle those with uneven beam distribution [6]. Therefore, it is necessary to develop a simulator capable of simulating various types of LiDAR, configuring deployments, and generating point clouds. Finally, previous optimization methods mainly focus on the number and placement of LiDARs, with limited studies addressing the tilt angle of LiDAR deployment [18]. The tilt angle alters the direction of LiDAR beams, influencing the distribution of point clouds on detected vehicles [19]. To maximize LiDAR perception capability, it is essential to develop an optimization model for LiDAR deployment that includes both placement and tilt angle.

To address these three challenges, this paper proposes a LiDAR deployment optimization framework, as shown in Fig. 1.

Fig. 1: LiDAR deployment optimization framework.

The main contributions of this study are summarized as follows:

1. We propose a novel metric called Perception Entropy based on Vehicle Grid Occupancy Probability (PE-VGOP). This metric leverages the distribution of vehicle point clouds to swiftly and accurately assess vehicle detection performance under different LiDAR deployments.
2. We develop a LiDAR deployment simulator based on Gazebo. The simulator can model various types of LiDAR, customize LiDAR deployments, simulate traffic scenarios, and generate point clouds.
3. We develop a LiDAR deployment optimization model focused on placement and tilt angle with the goal of maximizing perception entropy. To solve the proposed optimization model, we design a differential evolution-based particle swarm optimization algorithm.

## II Related Works

### _Point Cloud-based LiDAR Performance Evaluation_

Previous research on point cloud data-based LiDAR performance evaluation metrics has primarily focused on statistically analyzing the inherent physical properties of point clouds, such as quantity, density, and coverage area [20, 21, 22]. For example, Vijay _et al._ [23] divided the road surface into grids and evaluated LiDAR perception performance at different deployments by counting the number of covered grids. Jin _et al._ [24] used the density of point clouds in the Region of Interest (RoI) to evaluate LiDAR perception performance at various heights. Roos _et al._ [7] proposed dividing the 3D space occupied by vehicles into voxels and then converting the proportion of occupied voxels into perception entropy to evaluate vehicle detection performance. Similarly, Hu _et al._ [6] suggested converting the voxel occupancy probability of point clouds in the RoI into information gain, thus assessing LiDAR perception capability on CAVs. However, existing point cloud data-based metrics do not account for the impact of point cloud distributions on detection results, leading to inaccurate evaluations of LiDAR perception capabilities in many scenarios [20, 21]. For instance, in heavy traffic environments, these metrics might indicate very high LiDAR perception performance due to the large number of point clouds. However, these evaluations are misleading because the point clouds may be concentrated on a limited number of vehicles, causing many other vehicles to remain undetected due to sparse LiDAR points.
Therefore, this study aims to propose an evaluation metric that considers the point cloud distributions on vehicles, providing a more accurate assessment of LiDAR perception performance.

### _LiDAR Deployment Optimization_

Existing studies on LiDAR deployment optimization have mainly focused on the number and placement of LiDARs [25, 26, 27]. For example, Jiang _et al._ [28] proposed a greedy algorithm based on perceptual gain to optimize the placement of roadside multi-LiDARs and achieve optimal perception; Kim _et al._ [29] developed a genetic algorithm to optimize the placement of LiDARs on a vehicle, aiming to reduce dead zones and improve point cloud resolution; Ji _et al._ [19] proposed an optimization model for the positional relationship and point cloud coverage of roadside multi-LiDARs to optimize their placement on urban roads. Only a few works have focused on the tilt angle in LiDAR deployment optimization. For example, Mou _et al._ [26] proposed an optimization model that maximizes grid occupancy in the perception area to optimize LiDAR placement and tilt angle on CAVs. However, including the tilt angle as a parameter in LiDAR deployment dramatically increases the complexity and solving difficulty of the optimization model, because the tilt angle alters the original emission directions and perception space of all LiDAR beams [20]. For instance, Jin _et al._ [24] optimized a roadside LiDAR deployment by comparing the perception performance at different installation heights and tilt angles using a traversal approach, due to the lack of effective optimization models and solution algorithms. These gaps in models and algorithms result in limited perception capabilities of LiDAR in real-world applications. Therefore, this study aims to propose a LiDAR placement and tilt angle deployment optimization model and design an efficient algorithm to maximize LiDAR's vehicle perception capability.

## III Methodologies

### _Development of LiDAR Deployment Simulator_

This study focuses on mechanical rotating LiDARs, defining the vertical angles of the beams as \\(\\beta\\). Each laser beam, denoted as \\(\\beta_{i}\\in\\beta\\), generates varying horizontal angles (\\(\\alpha_{i}\\in\\alpha\\)) by horizontally rotating the beam at a specified resolution. Thus, a LiDAR can be formalized as a collection of spatial vectors represented by \\(\\beta\\) and \\(\\alpha\\):

\\[L(\\alpha,\\beta)=\\begin{bmatrix}\\sin{(\\alpha)}\\cos{(\\beta)}\\\\ \\cos{(\\alpha)}\\cos{(\\beta)}\\\\ \\sin{(\\beta)}\\end{bmatrix} \\tag{1}\\]

Eq. (1) represents the perceptual space of the LiDAR. The perception space changes with the LiDAR deployment and can be formalized as follows:

\\[L^{\\prime}(\\alpha,\\beta)=L(\\alpha,\\beta)\\cdot R_{tilt}+L_{lidar} \\tag{2}\\]

where \\(L_{lidar}\\) represents the location of the LiDAR, and \\(R_{tilt}\\) represents the tilt matrix of the LiDAR. The tilt matrices for rotating around the X and Y axes by a tilt angle \\(\\theta\\) are given by:

\\[R_{x}(\\theta)=\\begin{bmatrix}1&0&0\\\\ 0&\\cos(\\theta)&-\\sin(\\theta)\\\\ 0&\\sin(\\theta)&\\cos(\\theta)\\end{bmatrix} \\tag{3}\\]

\\[R_{y}(\\theta)=\\begin{bmatrix}\\cos(\\theta)&0&\\sin(\\theta)\\\\ 0&1&0\\\\ -\\sin(\\theta)&0&\\cos(\\theta)\\end{bmatrix} \\tag{4}\\]

\\[R_{tilt}=R_{x}(\\theta_{x})\\cdot R_{y}(\\theta_{y}) \\tag{5}\\]

The tilt matrix for Z-axis rotation does not need to be considered, since rotation of the LiDAR around the Z-axis does not affect the emission angles of the laser beams.
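As a concrete illustration of Eqs. (1)-(5), the following minimal NumPy sketch builds the deployed beam directions of a mechanical rotating LiDAR. The angle grids, tilt, and mounting height are illustrative values, not parameters from the paper.

```python
# A minimal NumPy sketch of Eqs. (1)-(5): build the unit direction vectors of
# a mechanical rotating LiDAR from its vertical angles (beta) and horizontal
# angles (alpha), then tilt and translate them into the deployment pose.
import numpy as np

def beam_directions(alpha, beta):
    """Eq. (1): unit vectors for every (alpha, beta) pair, shape (H*V, 3)."""
    a, b = np.meshgrid(np.radians(alpha), np.radians(beta), indexing="ij")
    return np.stack([np.sin(a) * np.cos(b),
                     np.cos(a) * np.cos(b),
                     np.sin(b)], axis=-1).reshape(-1, 3)

def rot_x(t):  # Eq. (3)
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):  # Eq. (4)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# Eq. (2): deployed perception space L' = L . R_tilt + L_lidar
alpha = np.arange(0.0, 360.0, 0.2)             # horizontal resolution 0.2 deg
beta = np.linspace(-15.0, 15.0, 16)            # a 16-beam LiDAR (illustrative)
R_tilt = rot_x(np.radians(10.0)) @ rot_y(0.0)  # Eq. (5), 10 deg tilt about X
L_lidar = np.array([0.0, 0.0, 2.0])            # mounted 2 m above the ground
rays = beam_directions(alpha, beta) @ R_tilt + L_lidar
print(rays.shape)  # (28800, 3) = 1800 horizontal steps x 16 beams
```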
Therefore, the point cloud collected by a LiDAR in a specific deployment configuration can be represented as:

\\[Points=d\\times L^{\\prime}(\\alpha,\\beta) \\tag{6}\\]

where \\(d\\) represents the propagation distance of the lasers from the LiDAR unit to the target.

To collect point cloud data across diverse LiDAR deployments, we developed a simulator by leveraging the secondary development capabilities of the Gazebo sensor model [30]. As illustrated in Fig. 2, the steps for point cloud data collection using the simulator are as follows:

_Step 1_: Define a LiDAR model as in Eq. (1) and set the deployment configurations as in Eqs. (2)-(5).

_Step 2_: Obtain vehicle locations from traffic simulation software or trajectory datasets, and generate the corresponding vehicle models in the Gazebo simulator.

_Step 3_: Publish laser beams at various angles as _ROS topic_ messages.

_Step 4_: Utilize Gazebo's physical sensor model for ray tracing and collision detection to determine the intersection points of the laser beams with the vehicles.

_Step 5_: Publish the intersection points as _ROS topic_ messages and save them as vehicle point clouds.

Fig. 2: Simulator for LiDAR deployment and vehicle point cloud collection.

The developed simulator can customize various LiDAR models and deployment configurations and collect vehicle point clouds in different scenarios. By evaluating the point clouds collected under different deployment configurations, LiDAR deployment optimization can be achieved.

### _Modeling of the PE-VGOP_

In this section, a novel metric named Perception Entropy based on Vehicle Grid Occupancy Probability (PE-VGOP) is introduced to assess LiDAR perception performance using the collected point clouds. The modeling process is detailed as follows:

Initially, the collected point clouds are transformed from the LiDAR coordinate system to the vehicle coordinate system to characterize their distributions on the vehicle. As shown in Fig. 3, the vehicle coordinate system is established as a right-hand system, with the vehicle's forward direction serving as the X-axis and the center of the vehicle's bounding box as the origin.

Fig. 3: Visualization of the vehicle coordinate system.

The transformation process can be formalized as follows:

\\[P_{veh}=(P_{lidar}-C_{box})\\cdot R_{yaw} \\tag{7}\\]

where \\(P_{veh}\\) and \\(P_{lidar}\\) represent point cloud coordinates in the vehicle and LiDAR coordinate systems, respectively. \\(C_{box}\\) represents the center coordinate of the vehicle's bounding box in the LiDAR coordinate system. \\(R_{yaw}\\) denotes the vehicle's rotation matrix, which is calculated as:

\\[R_{yaw}=\\begin{bmatrix}\\cos\\left(\\theta_{v}\\right)&-\\sin\\left(\\theta_{v}\\right)&0\\\\ \\sin\\left(\\theta_{v}\\right)&\\cos\\left(\\theta_{v}\\right)&0\\\\ 0&0&1\\end{bmatrix} \\tag{8}\\]

where \\(\\theta_{v}\\) represents the vehicle's heading angle.

Subsequently, the vehicle point clouds are projected from three-dimensional space onto three two-dimensional planes along the X, Y, and Z axes of the vehicle coordinate system to obtain three orthographic projection perspectives: front view, side view, and top view. Each projection plane is divided into identical grids.
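The following minimal NumPy sketch illustrates Eqs. (7)-(8) together with the three orthographic projections. The sample points, box center, and heading are illustrative values.

```python
# A minimal NumPy sketch of Eqs. (7)-(8): move LiDAR-frame points into the
# vehicle frame (bounding-box center as origin, heading as X-axis) and
# project them onto the top/side/front planes used by the VGOP.
import numpy as np

def to_vehicle_frame(p_lidar, c_box, heading):
    """Eq. (7): P_veh = (P_lidar - C_box) . R_yaw, heading in radians."""
    c, s = np.cos(heading), np.sin(heading)
    R_yaw = np.array([[c, -s, 0.0],  # Eq. (8)
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
    return (np.asarray(p_lidar, float) - np.asarray(c_box, float)) @ R_yaw

def project(p_veh):
    """Orthographic projections for the three VGOP views."""
    top = p_veh[:, [0, 1]]    # drop Z -> top view   (l x w)
    side = p_veh[:, [0, 2]]   # drop Y -> side view  (l x h)
    front = p_veh[:, [1, 2]]  # drop X -> front view (w x h)
    return top, side, front

pts = np.array([[10.2, 4.1, 0.6], [11.0, 3.8, 1.2]])  # LiDAR-frame hits
top, side, front = project(to_vehicle_frame(pts, c_box=[10.5, 4.0, 0.8],
                                            heading=np.radians(30.0)))
print(top)
```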
The state of each grid, either occupied or unoccupied by point clouds, is represented as \\(v_{i}\\) and is calculated as follows:

\\[v_{i}=\\begin{cases}1,&\\text{if }\\exists p_{j}\\in g_{i}\\\\ 0,&\\text{if }\\nexists p_{j}\\in g_{i}\\end{cases} \\tag{9}\\]

where \\(g_{i}\\) represents grid \\(i\\) in a projection plane and \\(p_{j}=(x_{j},y_{j},z_{j})\\) represents a point on the projection plane.

Additionally, the VGOP in each projection plane is used to represent the quantity, density, and distribution characteristics of the vehicle point clouds, as shown in Fig. 4.

Fig. 4: Example of VGOP with the grid dimensions \\(\\mu_{t}=\\mu_{s}=\\mu_{f}=0.05\\text{m}\\times 0.05\\text{m}\\).

These characteristics can be derived through statistical calculations as:

\\[\\left\\{\\begin{aligned} P\\left(v^{t}\\right)&=\\frac{\\sum\\limits_{i=1}^{N_{t}}(v_{i}^{t})}{N_{t}}\\\\ P\\left(v^{s}\\right)&=\\frac{\\sum\\limits_{i=1}^{N_{s}}(v_{i}^{s})}{N_{s}}\\\\ P\\left(v^{f}\\right)&=\\frac{\\sum\\limits_{i=1}^{N_{f}}(v_{i}^{f})}{N_{f}}\\end{aligned}\\right. \\tag{10}\\]

where \\(P(v^{t})\\), \\(P(v^{s})\\), and \\(P(v^{f})\\) represent the VGOP under the top view \\(t\\), side view \\(s\\), and front view \\(f\\), respectively. \\(N_{t}\\), \\(N_{s}\\), and \\(N_{f}\\) represent the numbers of grids in the top view, side view, and front view, calculated as follows:

\\[N_{t}=\\frac{l\\times w}{\\mu_{t}},N_{s}=\\frac{l\\times h}{\\mu_{s}},N_{f}=\\frac{w\\times h}{\\mu_{f}} \\tag{11}\\]

where \\(\\mu_{t}\\), \\(\\mu_{s}\\), and \\(\\mu_{f}\\) represent the grid dimensions in the top view, side view, and front view, respectively, and \\(l\\), \\(w\\), and \\(h\\) represent the length, width, and height of the vehicle, respectively.

Finally, the PE-VGOP is employed to quantify the amount of information contained within the vehicle point clouds, which is calculated as:

\\[E(veh)=-\\sum\\limits_{j\\in\\{t,s,f\\}}P\\left(v^{j}\\right)\\log_{2}\\left(P\\left(v^{j}\\right)\\right) \\tag{12}\\]

where \\(E(veh)\\) represents the PE-VGOP of a vehicle. As an alternative metric for evaluating vehicle detection performance, the effectiveness of PE-VGOP is demonstrated in Section IV.

### _LiDAR Deployment Optimization Model_

The optimization objective is defined as maximizing the perception entropy of all vehicles in the perception area for a LiDAR deployment with location \\(L_{lidar}\\) and tilt matrix \\(R_{tilt}\\), which is formulated as:

\\[\\begin{split} F(L_{lidar},R_{tilt})^{*}&=\\arg\\max\\sum\\limits_{i=1}^{N}E(veh_{i})\\mathbb{1}\\left(P(v_{i})\\right)\\\\ &-C(1-\\mathbb{1}(P(v_{i})))\\end{split} \\tag{13}\\]

\\[\\mathbb{1}\\left(P(v_{i})\\right)=\\begin{cases}1,&\\text{if }P(v_{i})\\geq\\delta\\\\ 0,&\\text{if }P(v_{i})<\\delta\\end{cases} \\tag{14}\\]

where \\(\\mathbb{1}\\left(P(v_{i})\\right)\\) represents the indicator function, which takes the value of 1 when \\(P(v_{i})\\geq\\delta\\) and 0 otherwise; \\(C\\) represents a constant loss; and \\(\\delta\\) represents the VGOP threshold, indicating that a vehicle can be detected when its VGOP exceeds this threshold and may otherwise incur a detection loss.

Considering the non-linearity and non-convexity of the optimization model, we developed a differential evolution-based particle swarm optimization algorithm (DE-PSO) to solve the model.
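Before detailing the solver, the following minimal NumPy sketch illustrates the PE-VGOP computation of Eqs. (9)-(12): points already expressed in the vehicle frame are rasterized into the three view grids, the occupancy ratios give the VGOPs, and their entropy terms are summed. The grid-indexing details and the random test points are illustrative assumptions.

```python
# A minimal NumPy sketch of Eqs. (9)-(12). Grid size 0.05 m follows the
# paper's case-study setting; the vehicle dimensions are illustrative.
import numpy as np

def vgop(points_2d, extent, mu=0.05):
    """Occupancy probability of one projection plane, Eqs. (9)-(10)."""
    ex, ey = extent                      # plane extents along its two axes [m]
    nx, ny = int(np.ceil(ex / mu)), int(np.ceil(ey / mu))
    occ = np.zeros((nx, ny), dtype=bool)
    # shift so the bounding-box center maps to the grid center
    idx = np.floor((points_2d + np.array([ex, ey]) / 2) / mu).astype(int)
    idx = idx[(idx[:, 0] >= 0) & (idx[:, 0] < nx) &
              (idx[:, 1] >= 0) & (idx[:, 1] < ny)]
    occ[idx[:, 0], idx[:, 1]] = True
    return occ.mean()                    # fraction of occupied grids

def pe_vgop(top, side, front, l, w, h):
    """Eq. (12): E(veh) = -sum_j P(v^j) log2 P(v^j) over the three views."""
    probs = [vgop(top, (l, w)), vgop(side, (l, h)), vgop(front, (w, h))]
    return -sum(p * np.log2(p) for p in probs if p > 0.0)

rng = np.random.default_rng(0)
l, w, h = 4.5, 1.8, 1.5
pts = rng.uniform([-l/2, -w/2, -h/2], [l/2, w/2, h/2], size=(200, 3))
print(pe_vgop(pts[:, [0, 1]], pts[:, [0, 2]], pts[:, [1, 2]], l, w, h))
```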
The algorithm procedure is shown in Algorithm 1, and the main steps are detailed as follows:

```
Input: iteration number \\(T\\); particle swarm size \\(N\\); fitness function \\(F(L_{lidar},R_{tilt})\\); inertia weight \\(w_{1}\\); differential weight \\(w_{2}\\); acceleration factors \\(a_{1}\\), \\(a_{2}\\); differential threshold \\(\\gamma\\)
Initialize: particle positions \\(P_{i}=(L_{i},R_{i})\\); velocities \\(V_{i}=0\\); personal best positions \\(P_{i}^{best}\\); global best position \\(G_{best}\\)
for \\(t=1\\) to \\(T\\) do
  for \\(i=1\\) to \\(N\\) do
    \\(r_{1}\\), \\(r_{2}\\) \\(\\leftarrow\\) random numbers between 0 and 1
    update the velocity of \\(P_{i}\\) using Eq. (15)
    \\(r_{3}\\) \\(\\leftarrow\\) random number between 0 and 1
    if \\(r_{3}<\\gamma\\) then
      apply the differential evolution strategy using Eq. (17)
    end if
    update the position of \\(P_{i}\\) using Eq. (16)
    apply the constraints to the position using Eq. (18)
    calculate \\(F(L_{i},R_{i})\\) using Eq. (13)
    if \\(F(L_{i},R_{i})>F(P_{i}^{best})\\) then
      \\(P_{i}^{best}=(L_{i},R_{i})\\)
    end if
    if \\(F(L_{i},R_{i})>F(G_{best})\\) then
      \\(G_{best}=(L_{i},R_{i})\\)
    end if
  end for
end for
```
**Algorithm 1** DE-PSO algorithm

#### III-B1 Update Velocities and Positions of Particle Swarms

During the iteration process, each particle moves towards its historical personal best position and the global best position, influenced by the inertia weight and the acceleration factors. The movement speed is calculated as:

\\[\\begin{split} V_{i}(t+1)=& w_{1}\\cdot V_{i}(t)+a_{1}r_{1}\\cdot(P_{i}^{best}-P_{i}(t))\\\\ &+a_{2}r_{2}\\cdot(G_{best}-P_{i}(t))\\end{split} \\tag{15}\\]

where \\(V_{i}(t+1)\\) and \\(V_{i}(t)\\) represent the velocity of particle \\(i\\) at time \\(t+1\\) and \\(t\\), respectively; \\(w_{1}\\) is the inertia weight; \\(a_{1}\\) and \\(a_{2}\\) are the cognitive and social coefficients, respectively; \\(r_{1}\\) and \\(r_{2}\\) are random numbers between 0 and 1; \\(P_{i}^{best}\\) is the personal best position of particle \\(i\\); \\(P_{i}(t)\\) is the current position of particle \\(i\\) at time \\(t\\); and \\(G_{best}\\) is the global best position among all particles. Based on the velocity, the position of each particle is updated as:

\\[P_{i}(t+1)=P_{i}(t)+V_{i}(t+1) \\tag{16}\\]

where \\(P_{i}(t+1)\\) represents the position of the particle at time \\(t+1\\).

#### III-B2 Differential Evolution Strategy

During each iteration, a random number \\(r_{3}\\) between 0 and 1 is assigned to each particle. If this number falls below the differential threshold \\(\\gamma\\), the differential evolution strategy is applied to that particle:

\\[V_{i}(t+1)=P_{i}(t)+w_{2}\\cdot(P_{j}(t)-P_{k}(t)) \\tag{17}\\]

where \\(w_{2}\\) represents the differential weight, and \\(P_{j}\\) and \\(P_{k}\\) represent the positions of two randomly selected particles \\(j\\) and \\(k\\), respectively.

#### III-B3 Constraints Strategy

The random parameters and the differential evolution strategy may lead to particle positions exceeding the constraint space. Eq. (18) addresses this issue by constraining the positions of particles that exceed the boundaries to their respective boundaries:

\\[P_{i,j}(t+1)=\\begin{cases}P_{j}^{-},&\\text{if }P_{i,j}(t+1)<P_{j}^{-}\\\\ P_{j}^{+},&\\text{if }P_{i,j}(t+1)>P_{j}^{+}\\\\ P_{i,j}(t+1),&\\text{otherwise}\\end{cases} \\tag{18}\\]

where \\(P_{i,j}\\) represents the position of particle \\(i\\) in dimension \\(j\\), and \\(P_{j}^{-}\\) and \\(P_{j}^{+}\\) represent the lower and upper bounds of dimension \\(j\\), respectively.
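The following minimal NumPy sketch runs the DE-PSO loop of Algorithm 1 and Eqs. (15)-(18). The quadratic fitness function is a stand-in for the perception entropy objective of Eq. (13) (evaluating the real objective requires the simulator), the hyper-parameters mirror the case-study settings reported in Section V, and Eq. (17) is interpreted so that a mutated particle moves to \\(P_{i}+w_{2}(P_{j}-P_{k})\\); all of this is illustrative rather than the authors' implementation.

```python
# A minimal NumPy sketch of DE-PSO: PSO velocity/position updates, an
# occasional differential-evolution mutation, and clipping to the bounds.
import numpy as np

rng = np.random.default_rng(0)
w1, w2, a1, a2, gamma = 0.7, 0.5, 0.3, 0.2, 0.1
low, high = np.array([2.0, 0.0]), np.array([4.5, 25.0])  # height [m], tilt [deg]

def fitness(x):  # stand-in for total perception entropy, Eq. (13)
    return -np.sum((x - np.array([3.5, 12.0])) ** 2)

N, D = 20, 2
P = rng.uniform(low, high, (N, D))           # particle positions (L, theta)
V = np.zeros((N, D))
P_best = P.copy()
F_best = np.array([fitness(p) for p in P])
g = P_best[np.argmax(F_best)]                # global best

for _ in range(100):
    r1, r2 = rng.random((N, 1)), rng.random((N, 1))
    V = w1 * V + a1 * r1 * (P_best - P) + a2 * r2 * (g - P)  # Eq. (15)
    mutate = rng.random(N) < gamma                           # DE trigger
    j, k = rng.integers(0, N, N), rng.integers(0, N, N)
    # Eq. (17) as a velocity, so the mutant position is P_i + w2 (P_j - P_k)
    V[mutate] = w2 * (P[j[mutate]] - P[k[mutate]])
    P = np.clip(P + V, low, high)                            # Eqs. (16), (18)
    F = np.array([fitness(p) for p in P])
    better = F > F_best
    P_best[better], F_best[better] = P[better], F[better]
    g = P_best[np.argmax(F_best)]

print(g)  # approaches the stand-in optimum (3.5 m, 12 deg)
```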
## IV Validation of PE-VGOP

This section validates the proposed PE-VGOP metric for evaluating LiDAR perception performance. To this end, the proposed PE-VGOP was used to assess the detection performance of 14,357 vehicles at different locations within 3,712 selected point cloud frames from the KITTI dataset [31]. These selected frames feature real-world point cloud data from a Velodyne 64-beam LiDAR, capturing diverse environments such as urban and rural areas. Additionally, we compared the PE-VGOP with state-of-the-art (SOTA) metrics, including MDG-P [24], S-MIG [6], and perception entropy [7]. In the comparison experiment, the vehicle detection performance of the PointPillars vehicle detection algorithm [32] was used as the ground truth, calculated as:

\\[Performance=conf\\times IoU \\tag{19}\\]

where \\(conf\\) denotes the confidence level that a vehicle is within the predicted bounding box, and \\(IoU\\) denotes the intersection over union between the predicted and ground truth bounding boxes. The \\(Performance\\) metric integrates prediction accuracy, as measured by \\(IoU\\), with prediction confidence, as measured by \\(conf\\).

Fig. 5 illustrates the evaluation results of LiDAR perception performance using the alternative metrics and the ground truth of the vehicle detection algorithm.

Fig. 5: Comparison of the proposed PE-VGOP and SOTA metrics in evaluating LiDAR perception performance.

Although all alternative metrics capture the general trend of diminishing LiDAR perception performance with increasing distance, there are discrepancies in the finer details. Specifically, the evaluation results of the SOTA metrics indicate that the LiDAR perception capability for vehicles follows a logarithmic decrease with increasing distance. However, the proposed PE-VGOP metric demonstrates a linear decrease in LiDAR perception capability, more closely aligning with the ground truth of the vehicle detection algorithm. This result indicates that the proposed PE-VGOP reflects LiDAR perception capabilities more accurately than the SOTA metrics. More importantly, using the PE-VGOP to evaluate 3,712 frames of data takes only 18 seconds, which is significantly faster than the vehicle detection algorithm.

As shown in Fig. 6, we also calculated the correlation between the alternative metrics and the ground truth of vehicle detection performance.

Fig. 6: The correlation between alternative metrics and the vehicle detection ground truth in evaluating LiDAR perception performance.

From Fig. 6(a) to 6(c), it is evident that the correlation between the SOTA metrics and the vehicle detection ground truth is notably low, with significant outliers. For instance, when the detection performance is around 0.7, the evaluation results of the SOTA metrics show significant variability. The reason for this issue is that the SOTA metrics primarily rely on the quantity or density of point clouds and occupied voxels for evaluation. However, these characteristics cannot accurately reflect vehicle detection performance, as they overlook the impact of point cloud distribution on vehicle detection results.
Specifically, vehicle localization and size detection rely more on the distribution characteristics of point clouds than on their quantity and density. In contrast, the proposed PE-VGOP metric exhibits a robust correlation with the detection ground truth, with correlation coefficients exceeding 0.98. The above results indicate that the proposed PE-VGOP can accurately and quickly evaluate LiDAR perception performance while demonstrating strong stability.

## V Case Studies of LiDAR Deployment Optimization

To verify the effectiveness of the proposed optimization model, we optimized the deployments of 16-beam, 32-beam, and 80-beam LiDARs. The optimization results for the three LiDAR deployments were validated in real-world scenarios and compared with the baseline deployments to assess vehicle detection performance.

### _Experimental Setup_

In the case studies, the scenario for LiDAR deployment optimization is a five-lane urban road. Given the limitations imposed by the roadside infrastructure and the scenario itself, we focused our optimization efforts solely on the LiDAR placement height and the tilt angle along the X-axis. Additionally, we utilized vehicle locations from the NGSIM I-80 vehicle trajectory dataset [33] as inputs to the simulator. This choice was made because the case study scenarios closely match the NGSIM dataset in terms of the number of lanes and road geometry, and the dataset provides accurate vehicle locations.

In the optimization process, the resolution of the vehicle grid is set at \\(\\mu_{t}=\\mu_{s}=\\mu_{f}=0.05\\text{m}\\times 0.05\\text{m}\\); the VGOP threshold \\(\\delta=0.005\\); the constant loss \\(C=-1\\); the iteration number \\(T=100\\); the particle swarm size \\(N=20\\); the inertia weight \\(\\omega_{1}=0.7\\) and the differential weight \\(\\omega_{2}=0.5\\); the acceleration factors \\(a_{1}=0.3\\) and \\(a_{2}=0.2\\); and the differential threshold \\(\\gamma=0.1\\). The optimized LiDAR models and parameters are listed in Table I. Considering the limitations of the experimental equipment and the operational complexity, the optimization space for the LiDAR deployment height ranges from 2 to 4.5 meters, while the optimization space for the tilt angle ranges from 0 to 25 degrees.

As illustrated in Fig. 7, two LiDAR units were used for each LiDAR model in these experiments, collecting point clouds under both the baseline deployments and the optimized deployments.

Fig. 7: The validation experiments in the real world.

Based on the recommendations of previous studies [3, 34, 35], an installation height of 2.0 meters without a tilt angle was selected as the baseline deployment. The LiDAR units were mounted on a gimbal platform through a universal joint, allowing adjustment of the mounting height and tilt angle to achieve both the baseline and optimal deployments. From the collected point cloud data, 1,000 frames were selected for each LiDAR in both the baseline and optimized deployments for subsequent analysis. An open-source annotation tool [36] was used to annotate the positions, sizes, and headings of the vehicles in the point cloud data. In the selected frames, 9,959, 19,949, and 27,497 vehicle samples were annotated for the 16, 32, and 80-beam LiDARs, respectively. These annotations were used as ground truth to evaluate the perception performance of the LiDARs before and after optimization.

### _Results and Analyses_

The LiDAR deployment optimization results are listed in Table II.
It is evident that a greater number of LiDAR beams is better suited to a higher installation height to achieve a broader field of view. Conversely, with fewer LiDAR beams, the lasers can be focused on the perception area by employing a larger tilt angle. With the deployments optimized in this manner, we analyzed the results in terms of vehicle point cloud distribution and detection performance.

#### V-B1 Vehicle Point Cloud Distribution

A visual comparison of the point clouds obtained from the LiDARs under the baseline and optimized deployments is shown in Fig. 8.

Fig. 8: Comparison of point clouds under the baseline LiDAR deployments and the optimized LiDAR deployments.

It is evident that the proposed LiDAR deployment optimization framework significantly increased the point cloud quantity. On the one hand, optimizing the LiDAR height reduces the shadow areas caused by vehicle obstructions, as shown in Fig. 8a and Fig. 8c. This reduction enhances the LiDAR's ability to detect distant vehicles, such as vehicle No. 11 in Fig. 8a and vehicle No. 2 in Fig. 8c. On the other hand, optimizing the tilt angle allows the laser beams to concentrate within the perception area, increasing the density of the vehicle point clouds (e.g., vehicles No. 20 to 23 in Fig. 8b).

#### V-B2 Vehicle Detection Performance

In this study, vehicle detection performance is evaluated by _Recall_, which reflects the proportion of detected vehicles to the total number of vehicles:

\\[Recall=\\frac{TP}{TP+FN} \\tag{20}\\]

where \\(TP\\) (true positives) represents the number of vehicles that were correctly detected and \\(FN\\) (false negatives) represents the number of vehicles that were not detected. In this study, _Recall_ is obtained by comparing the filtered detection results at various confidence levels with the annotated ground truth rather than by calculating IoU, because the LiDAR coordinate systems could not be aligned between the baseline and optimized deployments in the experiments, so IoU could not be calculated correctly.

Table III presents the vehicle detection _Recall_ under the baseline and optimized deployments. It is observed that after optimizing the LiDAR deployments, the vehicle detection performance of all three types of LiDAR improved substantially. When the detection confidence is \\(\\geq\\) 0.1, the RS-32 LiDAR showed the most significant improvement of 25%. The RS-16 and RS-80 LiDARs also show improvements of 8% to 17% at different detection confidence levels. These results indicate that the proposed LiDAR deployment optimization model can effectively enhance LiDAR perception capability.

Additionally, we provide a detailed comparison of the changes in _Recall_ with respect to perception distance at detection confidence \\(\\geq\\) 0.3, as shown in Fig. 9.

Fig. 9: Vehicle detection _Recall_ comparison for LiDAR deployments at various distances.

It can be observed that the RS-16 LiDAR shows improved vehicle detection performance across all perception ranges with the optimized deployment configuration. However, the enhancement in perception performance for the RS-32 and RS-80 LiDARs mainly pertains to distant vehicles. For example, compared to the baseline deployment, the optimized deployment of the RS-80 LiDAR resulted in a _Recall_ increase of close to 40% for distances ranging from 120 to 140 meters. However, the RS-80 LiDAR under the baseline deployment configuration performs better in detecting vehicles within 0 to 20 meters. The possible reason for this phenomenon is that the RS-80 LiDAR beams are unevenly distributed within the FOV and fewer beams are allocated for perceiving nearby vehicles (see Table I).
More critically, optimizing the LiDAR deployment through the tilt angle cannot resolve this issue. As shown in Fig. 8b, while the RS-80 LiDAR deployment optimization increases the point cloud distribution on distant vehicles, the point clouds near the LiDAR become sparser. This results in a decrease in detection performance for nearby vehicles. These results indicate that although optimizing LiDAR deployment can enhance its perception capabilities, it is largely limited by the LiDAR's inherent beam count and distribution. For instance, in most roadside scenarios, LiDAR beams emitted above the horizontal plane are unable to detect targets on the road. Although the tilt angle can adjust the distribution of LiDAR beams, it can also potentially exacerbate issues related to decreased perceptual ability due to uneven distribution, as shown in Fig. 8b. Therefore, it is crucial to consider the perception requirements of the targets during the LiDAR design phase to optimize the number and distribution of beams.

## VI Conclusion

This study proposed a LiDAR deployment optimization framework to enhance vehicle detection performance. Initially, we developed a simulator based on Gazebo to collect point cloud data under various LiDAR deployments. The simulator allows for customization of LiDAR models and deployments, simulation of traffic scenarios, and generation of vehicle point clouds. Subsequently, the PE-VGOP metric was introduced to evaluate LiDAR perception performance. A comparative experiment conducted on the KITTI dataset demonstrated that, taking the vehicle detection results as the ground truth, PE-VGOP provides a more accurate and faster evaluation of LiDAR perception performance than the SOTA metrics. Furthermore, we developed a LiDAR deployment optimization model and designed a DE-PSO solution algorithm focused on LiDAR placement and tilt angle, aimed at maximizing vehicle perception entropy. Field experiments using RS-16, RS-32, and RS-80 LiDARs demonstrated the effectiveness of the proposed optimization model. The results indicate that the proposed optimization framework achieved enhancements in vehicle detection _Recall_ of 8% to 25% under different detection confidence thresholds. Additionally, the experiments revealed that the effect of LiDAR deployment optimization on vehicle detection performance depends on the inherent beam distribution of the LiDAR.

In future work, we plan to utilize the proposed metric to explore LiDAR perception capability from the perspective of beam number and distribution. For example, we aim to develop a quantitative model to analyze the impact of LiDAR beam distribution and the randomness of vehicles in dynamic traffic flow on perception performance. More importantly, from the LiDAR design perspective, we plan to ensure that each laser scan maximizes the number of collected vehicle point clouds.

## Acknowledgments

This research was financially supported by the National Natural Science Foundation of China under Grant No. 52172395 and the Natural Science Foundation of Sichuan, China, under Grant No. 2022NSFSC0476.

## References

* [1] C. Xiang, C. Feng, X. Xie, B. Shi, H. Lu, Y. Lv, M. Yang, and Z. Niu, "Multi-sensor fusion and cooperative perception for autonomous driving: A review," _IEEE Intelligent Transportation Systems Magazine_, vol. 15, no. 5, pp. 36-58, 2023.
* [2] Y. Cao, C. Xiao, B. Cyr, Y. Zhou, W. Park, S. Rampazzi, Q. A. Chen, K. Fu, and Z.
M. Mao, "Adversarial sensor attack on lidar-based perception in autonomous driving," in _Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security_, 2019, pp. 2267-2281.
* [18] F. Zhan and B. Ran, "Data accuracy oriented method for deploying fixed and mobile traffic sensors along a freeway," _IEEE Intelligent Transportation Systems Magazine_, vol. 14, no. 1, pp. 173-186, 2022.
* [19] Y. Ji, Z. Yang, Z. Zhou, Y. Huang, J. Cao, L. Xiong, and Z. Yu, "Optimization of roadside sensors placement for cooperative vehicle-infrastructure system," in _2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC)_. IEEE, 2023, pp. 4813-4819.
* [20] T. Ma, Z. Liu, and Y. Li, "Perception entropy: A metric for multiple sensors configuration evaluation and design," _arXiv preprint arXiv:2104.06615_, 2021.
* [21] Y. Liu, P. Sun, N. Wergeles, and Y. Shang, "A survey and performance evaluation of deep learning methods for small object detection," _Expert Systems with Applications_, vol. 172, p. 114602, 2021.
* [22] D. Zhou, J. Fang, X. Song, C. Guan, J. Yin, Y. Dai, and R. Yang, "IoU loss for 2D/3D object detection," in _2019 International Conference on 3D Vision (3DV)_. IEEE, 2019, pp. 85-94.
* [23] R. Vijay, J. Cherian, R. Riah, N. De Boer, and A. Choudhury, "Optimal placement of roadside infrastructure sensors towards safer autonomous vehicle deployments," in _2021 IEEE International Intelligent Transportation Systems Conference (ITSC)_. IEEE, 2021, pp. 2589-2595.
* [24] S. Jin, Y. Gao, F. Hui, X. Zhao, C. Wei, T. Ma, and W. Gan, "A novel information theory-based metric for evaluating roadside lidar placement," _IEEE Sensors Journal_, vol. 22, no. 21, pp. 21 009-21 023, 2022.
* [25] Z. Liu, M. Arief, and D. Zhao, "Where should we place lidars on the autonomous vehicle?-An optimal design approach," in _2019 International Conference on Robotics and Automation (ICRA)_. IEEE, 2019, pp. 2793-2799.
* [26] S. Mou, Y. Chang, W. Wang, and D. Zhao, "An optimal lidar configuration approach for self-driving cars," _arXiv preprint arXiv:1805.07843_, 2018.
* [27] A. Qu, X. Huang, and D. Suo, "SEIP: Simulation-based design and evaluation of infrastructure-based collective perception," in _2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC)_. IEEE, 2023, pp. 3871-3878.
* [28] W. Jiang, H. Xiang, X. Cai, R. Xu, J. Ma, Y. Li, G. H. Lee, and S. Liu, "Optimizing the placement of roadside lidars for autonomous driving," in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2023, pp. 18 381-18 390.
* [29] T.-H. Kim and T.-H. Park, "Placement optimization of multiple lidar sensors for autonomous vehicles," _IEEE Transactions on Intelligent Transportation Systems_, vol. 21, no. 5, pp. 2139-2145, 2019.
* [30] N. Koenig and A. Howard, "Design and use paradigms for Gazebo, an open-source multi-robot simulator," in _2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, vol. 3. IEEE, 2004, pp. 2149-2154.
* [31] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," _The International Journal of Robotics Research_, vol. 32, no. 11, pp. 1231-1237, 2013. [Online]. Available: [https://doi.org/10.1177/0278364913491297](https://doi.org/10.1177/0278364913491297)
* [32] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O.
Beijbom, "PointPillars: Fast encoders for object detection from point clouds," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2019, pp. 12 697-12 705.
* [33] V. Alexiadis, J. Colyar, J. Halkias, R. Hranac, and G. McHale, "The next generation simulation program," p. 22, 2004.
* [34] J. Wu, H. Xu, Y. Zheng, and Z. Tian, "A novel method of vehicle-pedestrian near-crash identification with roadside lidar data," _Accident Analysis & Prevention_, vol. 121, pp. 238-249, 2018.
* [35] Z. Zhang, J. Zheng, H. Xu, X. Wang, X. Fan, and R. Chen, "Automatic background construction and object detection based on roadside lidar," _IEEE Transactions on Intelligent Transportation Systems_, vol. 21, no. 10, pp. 4086-4097, 2019.
* [36] E. Li, S. Wang, C. Li, D. Li, X. Wu, and Q. Hao, "SUSTech POINTS: A portable 3D point cloud interactive annotation platform system," in _2020 IEEE Intelligent Vehicles Symposium (IV)_. IEEE, 2020, pp. 1108-1115.

**Yongjiang He** earned his B.S. degree in Transportation Engineering from Shandong University of Science and Technology in 2018 and completed his M.S. degree in the same field at the same institution in 2021. He is currently pursuing a Ph.D. in Transportation Engineering at Southwest Jiaotong University. His primary research area is Intelligent Transportation Systems (ITS), with a focus on sensor-based traffic target detection and sensor deployment optimization. His work aims to contribute to the development of efficient and reliable solutions for enhancing traffic management and safety in urban environments.

**Peng Cao** earned his B.S. degree in Industrial Engineering from the University of Electronic Science and Technology of China, Chengdu, China, in 2009, followed by an M.S. in Management Science and Engineering from Tsinghua University, Beijing, China, in 2011, and a Ph.D. in Civil Engineering from Nagoya University, Nagoya, Japan, in 2014. He is now an Associate Professor at the School of Transportation and Logistics, Southwest Jiaotong University, Chengdu, China. His research focuses on traffic data collection and traffic flow optimization in the context of connected autonomous vehicles.

**Zhongling Su** earned her B.S. degree in Traffic Engineering from Southwest Jiaotong University, Chengdu, China, in 2020, followed by an M.S. in Traffic Engineering from Southwest Jiaotong University, Chengdu, China, in 2023. She is now an Assistant Engineer of High Performance Computing at the Shanghai Artificial Intelligence Laboratory, Shanghai, China. Her areas of research include Intelligent Transportation Systems, traffic perception, and LiDAR deployment optimization.

**Xiaobo Liu** earned his Ph.D. degree from the New Jersey Institute of Technology, Newark, New Jersey, in 2004. He is currently a professor with the School of Transportation and Logistics, Southwest Jiaotong University, Chengdu, China. His research focuses on transportation system analysis under the connected vehicle/autonomous vehicle environment and on intelligent logistics analysis.
He received the George Krambles Transportation Scholarship in 2003; the Most Outstanding Student Paper Award from the Institute of Transportation Engineers Metropolitan Section of New York and New Jersey in 2004; and the Stella Dafermos Best Paper Award from the Transportation Research Board Transportation Network Modeling Committee in 2018. He is a member of the Standing Committee on Transportation in the Developing Countries (AME40) at the Transportation Research Board.
Developing an effective evaluation metric is crucial for accurately and swiftly measuring LiDAR perception performance. One major issue is the lack of metrics that can simultaneously generate fast and accurate evaluations based on either object detection or point cloud data. In this study, we propose a novel LiDAR perception entropy metric based on the probability of vehicle grid occupancy. This metric reflects the influence of point cloud distribution on vehicle detection performance. Based on this, we also introduce a LiDAR deployment optimization model, which is solved using a differential evolution-based particle swarm optimization algorithm. A comparative experiment demonstrated that the proposed PE-VGOP offers a correlation of more than 0.98 with the vehicle detection ground truth in evaluating LiDAR perception performance. Furthermore, compared to the baseline deployment, field experiments indicate that the proposed optimization model can significantly enhance the perception capabilities of various types of LiDARs, including RS-16, RS-32, and RS-80. Notably, it achieves a 25% increase in detection _Recall_ for the RS-32 LiDAR.

Keywords: LiDAR deployment; evaluation metric; vehicle perception; perception entropy; optimization method
# Scenario-Based Sensed Human Motion Editing and Validation Through the Motion-Sphere

Bharatesh Chakravarthi, Ashok Kumar Patil, Jie Yeong Ryu, Adithya Balasubramanyam, and Young Ho Chai

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)

Virtual Environments Laboratory, Graduate School of Advanced Imaging Science, Multimedia and Film, Chung-Ang University, Seoul 06974, South Korea

Corresponding author: Young Ho Chai ([email protected])

This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government Ministry of Science and ICT (MSIT) (No. 2018-0-00599, Proxemics based Pervasive Interactions for the Wide area-High speed-Serial Motion Recognition).

## 1 Introduction

Recent advances in the field of inertial measurement unit (IMU) based motion capture (MoCap) systems have opened new avenues for human motion retargeting, activity recognition, movement analysis, and motion data visualization. A wide range of real-time MoCap systems and three-dimensional (3D) human-like character animation systems are available for synthesizing realistic human motion data. Computer graphics professionals and researchers have used distinctive methods to construct numerous subject-specific motion databases to benefit human-computer interaction researchers and the virtual reality user community [1, 2]. Despite the availability of MoCap systems and animation software, human motion data acquisition and its practical usage remain challenging. The IMU-based MoCap systems (e.g., Xsens, Perception Neuron, and Vicon) used to track and record human actions require a controlled environment, pre- and post-processing operations, and countermeasures to reduce sensor drift errors [3, 4]. The 3D character animation software (e.g., 3ds Max and MotionBuilder) used to synthesize human motion requires proficient skills and tedious effort. Further, platform dependency and a lack of customization increase the complexity, making the process difficult from an end-user perspective [5]. Human motion data-centered research relies greatly on preexisting motion databases (e.g., CMU MoBo and KIT Whole-Body) [6, 7, 8]. However, these databases lack the standard controlled methodology required for the quantitative evaluation of versatile problems. In addition, motion databases are subject to several factors, such as the diverse nature of MoCap systems, data sampling methods, ranges of motion, and discrete data file formats. Therefore, every user needs to select their data sets of interest individually and invest considerable effort into processing the data to be used [9, 10]. Applications of human activity recognition and human motion analysis require 3D articulated figures, humanoid models, and virtual avatars. Human motion reconstruction using these articulated models is subject to a large number of body joint degrees of freedom (DoF). In addition, it is crucial to address various types of constraints, such as the inherent body joint rotation limits, body segment overlap, and external entity contact. Overcoming these shortcomings during motion data acquisition remains a primary concern, and these factors limit the reusability of the existing motion data. Therefore, it is necessary to efficiently utilize the existing human motion data and manipulate it without losing the naturalness of human movement.
This paper presents a modular open-source platform to interactively author and edit full-body human motion data in a 3D space. First, we record a basic set of human motion sequences, called sensed unit motions, using an IMU-based motion capture system (Xsens MTw Awinda) in real time. The sensed motion data segments are collectively stored as a motion database. The user can then import and visualize the motion data on a humanoid model in an interactive 3D space, where the proposed authoring and editing system automatically maps the sensed motion data to the respective bone segments of the humanoid model. Next, users can choose the motion sequences of interest and edit them further based on predefined scenarios to synthesize an entirely new set of meaningful motion sequences. Finally, the edited motion can be visually validated using an open-source visualization tool called Motion-Sphere [11]. Figure 1 depicts the overview of the proposed human motion editing and validation system.

Figure 1: Overview of the proposed interactive human motion editing system to synthesize numerous variations of the existing motion data and automatic editing validation using Motion-Sphere.

The key contributions of the proposed interactive application system are summarized below:

* A seamless interactive interface to edit and modify existing MoCap data.
* A minimal sensed unit motion collection to synthesize a new set of meaningful motion sequences.
* A human body kinematics-incorporated motion editing solver with spatio-temporal constraints.
* An intuitive method to visually validate the edited motion data using Motion-Sphere.
* Motion data composition from scratch, in the absence of MoCap systems.

We introduce the concept of a minimal sensed unit motion collection and demonstrate the superiority of the proposed 3D human motion editing and authoring system with scenario-based reactive motion synthesis under different spatio-temporal constraints using an analytical kinematic and constraint solver. Furthermore, we consider human body kinematics a critical parameter for verifying the correctness of the resultant edited motion. In this regard, we validate the edited motion's kinematic correctness and plausibility using Motion-Sphere. Further, we evaluate the naturalness of the newly synthesized motion sequences against MoCap-captured data as the ground truth to determine the deviations in bone orientations incurred during the editing process. The proposed system serves as a comprehensive toolkit to synthesize numerous realistic variations of human motion data and improves the reusability of existing motion databases via simple editing operations. In addition, the proposed system addresses the ever-increasing demand for human motion data in various application domains, such as 3D character animation, computer vision, robotics, fitness, sports training, and rehabilitation.

The rest of the paper is organized as follows: the various human motion editing principles and practices are discussed in Section II. The proposed sensed motion editing system design is presented in Section III, followed by the motion editing validation study in Section IV. Next, the use-case assessment, comparative study, and discussion are presented in Section V, followed by the conclusion in Section VI.

## II Background

Human activity tracking and motion data acquisition are widely required in computer animation, human-computer interaction, and virtual reality.
With the surge in MoCap technology, several research studies have been conducted to formulate the processes needed to model human body dynamics and kinematics for synthesizing human motion. In addition, numerous investigations have been performed to develop feasible methods for editing and reusing existing human motion data, and numerous 3D character animation application systems have been reported for producing human-like character animation. Wang _et al._ [12] performed a comparative study outlining the different motion synthesis techniques used by researchers and animation professionals. In contemporary research, data-driven methods based on IMU MoCap systems are extensively used. Computer graphics and animation practitioners employ human-like character animation software, such as Autodesk Maya, 3ds Max, MotionBuilder, Adobe Animate, and Blender, to synthesize motion data artificially in different types of 3D file formats (*.mp, *.3ds, *.fbx, *.blend, and *.bvh) [13, 14]. Furthermore, wearable and ambient sensing technology-based systems are commonly used by the motion capture research community. Retroreflective or light-emitting diode marker-based optical sensors, video/markerless tracking methods, and IMU-based wearable sensors are used to record human actions and store them as motion datasets. Currently, several human motion databases [1, 8, 15] are available that offer motion data in different formats (*.trc, *.htr, *.amc/asf, *.mvn, and *.c3d) [16, 17]. The 3D human motion data constitute translatory and rotational motion related to the frontal, sagittal, and transverse planes. In practice, the motion data are either acquired using MoCap systems in real time or artificially synthesized using different application software. To utilize the synthesized motion data efficiently, the intended motion editing operations are then required [18]. Inverse kinematics (IK) principles serve as the best means to edit full-body human motion by enforcing user-defined constraints. Aristidou _et al._ [19] presented the most popular IK methods in terms of performance, computational cost, and smoothness. Gleicher _et al._ [20] compared various constraint-based editing methods and evaluated their suitability for online and offline editing applications. The most efficient techniques in the taxonomy of motion editing [20] are per-key methods, motion warping and displacement mapping, per-frame methods, and space-time methods. To date, many space-time motion editing techniques have been developed using motion warping to manipulate input motion sequences while satisfying specified constraints. A space-time constraint solver-based editing technique [21] considers the entire motion when making changes simultaneously. In contrast, Lee _et al._ [22] presented a multilevel B-spline fitting-based editing method that considers the inter- and intra-frame relationships. Tak _et al._ [23] proposed a per-frame Kalman filter framework to ensure kinematic and dynamic accuracy and to overcome the computational overhead of space-time optimization techniques. Shin _et al._ [24] proposed dimensionality reduction-based motion editing in a low-dimensional space, where the high-dimensional pre-captured motion data are represented as streams of curves. A naive user needs to invest substantial effort into understanding these complex editing principles.
Unfortunately, existing motion editing systems lack an interactive means to author and edit MoCap data irrespective of the motion data file format and of differences in the underlying motion data acquisition system. Also, there exists no means to visually validate the kinematic correctness of the resultant edited motion data.

## III Sensed Motion Editing System Design

The proposed system enables users to import and edit motion data from our unit motion collection and from existing open-source databases [25, 26, 27]. The import and pre-processing components of the editing module consist of open-source APIs to parse the motion data files (*.fbx, *.bvh, and *.txt). Users can perform editing operations using the proposed end-effector-based analytical kinematic solver and enforce a defined set of inherent body joint limits and external constraints to compose user-defined, scenario-based motion sequences. The sensed motion data can be edited independently and then combined to form complete and meaningful new motion sequences. Subsequently, the new motion sequences can be reconstructed and visualized on humanoid models, and editing operations can be performed recursively to compose the desired motion sequences. Users can perform several editing operations interactively at the per-frame, inter-frame, intra-frame, and full-frame levels. Further, a user can also synthesize motion data without a MoCap system in a custom environment, with the aid of a hierarchical kinematic humanoid model. The user is provided with all the necessary tools to synthesize the motion data in a user-friendly way. After completing the motion composition task, the user can export the motion data as positions and orientations (in quaternion form) in a textual file format [28]. The retrieved motion data can be reconstructed and visualized on different humanoid models. Thus, the proposed system serves two purposes: to author new motion from scratch in the absence of a MoCap system, and to edit existing sensed motion data interactively.

### Editing Module

The sensed motion editing system is a graphical user interface-based interactive framework developed using open-source software. A hierarchical humanoid model is used to map the motion data and edit the intended motion sequence through a series of mouse-based user interactions. The editing module enables the user to synthesize new 3D human motions consistent with the inherent body joint constraints. Our system allows forward kinematics (FK) and IK operations based on specific pose compositions; users can switch between the FK and IK modes to compose different poses. Figure 2(a) depicts a humanoid model in the attention pose, Figure 2(b) shows the hierarchical structure of the humanoid model, and Figure 2(c) shows the default configuration for each body joint DoF, with a 14-segment, 18-DoF model as an instance. Users can change and reconfigure the body joints' DoF based on the target motion plan. The humanoid model has a kinematically linked hierarchy with the pelvis segment as the root node. Users can scale the model to different heights and author different types of motion sequences. In addition, the editing module offers fixed-foot and free-foot modes to compose static and translating motions. The chest, neck, upper arm, and upper leg segments inherit their movement from the pelvis segment. The lower arm and lower leg segments inherit movements from their immediate parent segments.
The hand and foot segments are the child segments of the lower arms and lower legs, respectively. With the IK mode in use, the hand and foot segments are treated as end effectors to synthesize different poses, with an automatic orientation update for the elbow and knee segments. Each body joint segment possesses a local axis of rotation and DoF with which the user can make the model move naturally; for example, a chest forward-bend motion will result in automatic arm movement, with the hands touching the ground. With a reference motion scheme, the user can select and rotate joint segments with a restricted DoF along their local axes to author key poses and capture the associated data frame at different time intervals. A captured keyframe comprises the individual segment transformations in the form of unit quaternions (_w, x, y, z_). Next, the full-body motion data are estimated by applying spherical linear interpolation (SLERP) to the quaternion keyframes using Equation 1:

\[q=\frac{q_{a}\sin((1-t)\,\theta)+q_{b}\sin(t\,\theta)}{\sin(\theta)} \tag{1}\]

where \(q\) is the interpolated quaternion, \(q_{a}\) is the first quaternion keyframe, \(q_{b}\) is the second quaternion keyframe, \(t\) is a scalar ranging from 0.0 at \(q_{a}\) to 1.0 at \(q_{b}\), and \(\theta\) is the angle between \(q_{a}\) and \(q_{b}\) in quaternion space (half the rotation angle between the corresponding orientations), defined as

\[\theta=\arccos(q_{a_{w}}q_{b_{w}}+q_{a_{x}}q_{b_{x}}+q_{a_{y}}q_{b_{y}}+q_{a_{z}}q_{b_{z}}) \tag{2}\]

The motion editing phase starts by loading existing motion data and mapping it onto a humanoid model of choice. The user then configures all the parameters required to edit and compose poses, after which SLERP is applied between the edited keyframes at a defined time interval.

```
Initialize:
  Humanoid model
  B_s: set of bone segments (0 ... n-1)
  J_l: set of joint limits (DoFs)
  C:   set of constraints (spatio-temporal)
Result: a new motion sequence file
while bone segments are manipulated do
    for B_s = 0, 1, ..., (n-1) do
        M_e := 0
        foreach B_s transformation T do
            J_l <- KinematicSolver(T, B_s)
            C   <- ConstraintSolver(S, T)
            kf  <- kf_n
        end
        for kf_(0) to kf_(n) do
            Q_interpolated <- SLERP(Q_a, Q_b, t)
        end
    end
end
```
**Algorithm 1** Steps to Configure and Edit Motion Data

Algorithm 1 shows the steps for editing motion sequences and exporting the resultant motion data to a file. To synthesize fast, medium, or slow motion sequences, users can define the number of intermediate frames between two static key poses. The feedback-based analytical IK solver employed in the authoring module enables the user to synthesize kinematically valid motions with the inherent body joint constraints. The hand and foot segments can be parameterized to reach specific target positions while obeying the joint rotation constraints. The body joint-segment DoF constraints can be defined and restricted by user inputs before starting the motion composition phase. To minimize the computational cost and recursively solve scenario-based IK problems while composing poses, an analytical IK solver with reduced complexity is employed in this work.

Figure 2: Humanoid model used for authoring and editing: (a) attention pose, (b) body segments hierarchy, and (c) body joints DoF.

Figure 3: End-effector-based kinematics editing configuration: (a) right and left-hand segments as the end effectors and their change in position, and (b) IK configuration.
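Equations 1 and 2 translate almost line-for-line into code. The snippet below is a minimal sketch of the interpolation step in Python, assuming quaternions are stored as numpy arrays in (w, x, y, z) order; the shortest-arc sign flip and the small-angle fallback are standard numerical safeguards, not part of the equations above.

```python
import numpy as np

def slerp(qa, qb, t):
    """Spherical linear interpolation between unit quaternions qa, qb
    (numpy arrays [w, x, y, z]), following Equations 1 and 2."""
    dot = float(np.dot(qa, qb))          # Equation 2: theta = acos(qa . qb)
    if dot < 0.0:                        # take the shorter arc on the 4D sphere
        qb, dot = -qb, -dot
    theta = np.arccos(min(dot, 1.0))
    if theta < 1e-6:                     # nearly identical keyframes: linear fallback
        q = (1.0 - t) * qa + t * qb
    else:                                # Equation 1
        q = (np.sin((1.0 - t) * theta) * qa + np.sin(t * theta) * qb) / np.sin(theta)
    return q / np.linalg.norm(q)

# In-between frames at a user-defined frame count n:
# frames = [slerp(key_a, key_b, i / (n - 1)) for i in range(n)]
```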
To compose a pose using the IK solver, the user can move the end effectors (i.e., the right hand, left hand, right foot, and left foot segments) and author target- or goal-based poses in 3D space. Figure 3(a) depicts right and left arm movement poses authored with the end-effector-based analytical IK solver used in this system, where L\({}_{1}\) and L\({}_{2}\) indicate the lengths of the upper and lower arm segments, respectively, and \(\theta_{1}\) and \(\theta_{2}\) indicate the upper arm joint and elbow joint angles. Algorithm 2 depicts the steps and equations used to solve for \(\theta_{1}\) and \(\theta_{2}\), as in Figure 3(b); a worked sketch of this two-link solution is given at the end of this subsection. User-defined joint movement constraints can be enforced through the constraints configuration module of the editing system. The joint rotation limits offered by the analytical IK solver enable the user to author natural poses consistent with human body anatomy. In addition, an oriented bounding box (OBB) tree-based collision detection filter [29] is used to implement the body segment occlusion, foot-on-ground, and external entity contact constraints. Figure 4 shows the humanoid model with different constraints configured. Figure 4(a) depicts the body segment overlap constraint: when the right hand segment comes into contact with the right upper leg segment, the constraint violation is reflected by restricting further movement, so that no two body segments cross over. Figure 4(b) depicts the inherent body joint rotation constraints applied to the flexion and extension movements of the arm and leg segments, respectively. Figure 4(c) shows a target-hitting scenario with vertical bars as external obstacle constraints: whenever any body segment comes into contact with a vertical bar, the external contact constraint violation is indicated by a change in the bar's colour, and further cross-over movement is stopped.

Figure 4: Different types of constraints enforced for scenario-based editing: (a) body segments contact constraint, (b) inherent body joint rotation constraint, (c) external contact constraint, and (d) steps to synthesize new motion sequences based on scenarios.
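Algorithm 2 itself is not reproduced here, but the two-link configuration of Figure 3(b) admits the standard closed-form solution sketched below (a law-of-cosines derivation). The joint-limit values are illustrative assumptions, and a full solver would additionally resolve the limb's swivel about the shoulder-target axis.

```python
import numpy as np

def two_link_ik(L1, L2, target, elbow_limits=(0.0, np.deg2rad(150.0))):
    """Planar two-link analytical IK for the configuration of Figure 3(b).
    L1, L2: upper- and lower-segment lengths; target: (x, y) in the limb plane.
    Returns (theta1, theta2): shoulder and elbow angles."""
    x, y = target
    # Clamp the target onto the reachable annulus [|L1 - L2|, L1 + L2]
    d = np.clip(np.hypot(x, y), abs(L1 - L2) + 1e-9, L1 + L2 - 1e-9)
    # Elbow bend angle from the law of cosines, then the illustrative joint limit
    cos_t2 = (d * d - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    theta2 = np.clip(np.arccos(np.clip(cos_t2, -1.0, 1.0)), *elbow_limits)
    # Shoulder angle: direction to the target minus the inner-triangle angle
    theta1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(theta2),
                                           L1 + L2 * np.cos(theta2))
    return theta1, theta2
```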
### Sensed Unit Motion Conceptualization

The motion editing framework presented in this section serves as an intuitive tool to synthesize numerous variations of existing sensed human motion. We used the Xsens motion tracking system [30] to capture a basic set of subject-specific poses and create smaller motion sequences called unit motions. A unit motion comprises \(n\) motion frames, where each frame corresponds to the positions and orientations of the body segments represented as unit quaternions. Human motion data are characterized by keyframes depicting static poses, and a unit motion is the sequence of motion between any two given keyframes. The idea is to use a minimal set of unit motions, edit them selectively to synthesize numerous variations, and blend different combinations based on predefined scenarios to create completely new and meaningful motion sequences. The unit motions are edited in terms of joint rotations (swing and twist), target position-based pose composition, pose duplication, the addition or removal of motion chunks, the speed of the motion (by increasing or decreasing the number of frames), and the application of different types of constraints. Figure 4(d) shows the sequence of steps to synthesize new motion by editing a unit motion selected from the sensed unit motion database.

Users can select a unit motion from the data collection and perform edit and blend operations to create a meaningful motion sequence. The resultant edited motion sequences are stored along with the unit motions, thus increasing the number of motion data files. The proposed editing module has APIs to map and use motion data in different file formats, such as .bvh and .fbx files.

### Scenario-based motion editing

The human motion data available in several open-source databases are subjective, and reusing them effectively is a challenging task. In this section, we demonstrate a practical course of action to reuse MoCap data to synthesize variations of existing motion sequences. We captured 12 primary boxing hook motion sequences, from the hook-ready pose to different target-hitting poses, as shown in Figure 5. The sensed unit motion collection comprises a hook-ready pose reached from the attention pose (pose 1); chest segment flexion, extension, abduction, and adduction motion sequences with both hands in the hook-ready pose (poses 2 to 5); and seven target-hitting motion sequences using the right arm (poses 6 to 12). The targets were placed at different positions (right, left, up, down, long, middle, and short range) relative to the mid-line of the body. These 12 primary motion sequences are further edited independently, based on specific scenarios, to synthesize new motion sequences.

Figure 5: The 12 primary sensed boxing hook motion sequences: (1) attention pose to hook-ready pose, (2-5) chest segment abduction, adduction, flexion, and extension poses, (6-12) right arm hook poses to hit targets placed at different positions with respect to the body's mid-line.

Figure 6: Three boxing hook scenarios: (a) double-tap hook, (b) single constraint and multi-target hitting, and (c) multi-target and multi-constraint scenario.

Figure 6(a) is an example scenario wherein a single-tap hook motion is edited to synthesize a double-tap hook motion. The analytical kinematic solver employed in this work adopts the anatomical joint constraints, and the external contact constraints are enforced automatically when an end effector segment comes into contact with an external entity. In this example, when the right hand comes into contact with the target (a sphere) placed at a distant position, further movement of the right arm is restricted. Trim-and-join and chunk-size parameters are used to integrate pullback and target-reaching motion frames with the original motion sequence in a user-controlled environment. The second editing example addresses a hit-and-react scenario with two different target points (spheres) and an external constraint (a vertical bar). The targets are placed at distant positions, and the sensed unit motion is edited to reach the targets without violating the external entity contact constraint (i.e., neither arm segment crosses over the vertical bar while reaching the targets). As shown in Figure 6(b), the sequence of actions to reach the targets is realized using the end effectors by tweaking and updating the shoulder and wrist joint angles. In addition, the chest segment's horizontal adduction is fine-tuned to balance the body and help the end effector reach the target. The third example scenario has two reachable targets with two external contact constraints (a vertical bar and a horizontal bar), as illustrated in Figure 6(c).
In this example, the right arm motion is kinematically edited to reach one target (blue sphere) without violating the bar crossover constraints. Then, the sensed right arm motion is duplicated and applied to the left arm segment to reach the other target (green sphere).

## IV Motion-Sphere Based Editing Validation

The modular editing system is a graphical user interface-based application developed using Qt Designer 5.12.3 and the Visualization Toolkit (VTK), an open-source software system for 3D graphics, modeling, rendering, and visualization. Three different types of humanoid models (male biped, female biped, and skeletal biped) are used to author and edit human motion in 3D space. Users can perform mouse-based interactions to select and modify the body segments to compose different poses. The end-effector segments can be dragged and moved so that the model reaches a target position. The Autodesk FBX API and a BVH parser are used to import and edit existing motion data from open-source motion databases. Users can control and regulate the authoring and editing parameters through the drop-down menus and lists offered by the application.

Several variations of the boxing hook motion are synthesized using a small collection of sensed unit motions, as shown in Figure 8. The sensed unit motions are selectively chosen based on specific target-hitting scenarios and edited further to synthesize a new set of boxing hook motion sequences. The unit motion sequences are captured at 60-120 frames per second, depending on the type of motion. The full-body sensed motion sequences are composed of data segments mapped to 14 skeletal joints with 18 DoF. The end effectors are modified using the kinematics solver to make the body segments reach specific, desired target positions.

In this section, we present an experimental study to validate the motion editing system. The kinematic correctness and plausibility of the edited motions were compared and validated using the Motion-Sphere [31]. The Motion-Sphere is an intuitive tool used to visualize subtle movements of the human joints on a 3D unit sphere. Human motion visualization using Motion-Sphere helps quantify the body joint movements and supports visual comparative analysis. In addition, Motion-Sphere enables the user to visualize IMU-acquired human motion data as a trajectory and confirm its plausibility and kinematic correctness. Motion-Sphere offers kinematics constrained area (KCA) and kinematics unconstrained area (KUA) features to determine kinematic correctness, as shown in Figure 7. The KCA and KUA are represented by the highlighted and darkened areas of the Motion-Sphere, respectively. The trajectories representing the motion of a given bone segment should lie strictly within the KCA; conversely, if the trajectories of a rotation-constrained bone segment appear in the KUA, this indicates a violation of the joint's DoF and rotation constraints.

Figure 8: Examples of kinematically valid boxing motion variations: (1-6) right arm upper hook variations, (7-12) right arm short-range punch variations, and (13-18) leg segment kinematics editing variations.

Figure 7: Motion-Sphere based automatic editing validation: (1 and 4) right upper-arm and upper-leg motion represented as trajectories on the Motion-Sphere's front-right KCA, (2) right lower-arm motion visualization on the Motion-Sphere's front KCA, (3 and 6) left upper-arm and upper-leg motion visualization on the Motion-Sphere's front-left KCA, (5) right lower-leg motion visualization on the Motion-Sphere's back side.
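The KCA test lends itself to a compact numerical check. The sketch below assumes, purely for illustration, that each frame's bone orientation is mapped to a point on the unit sphere by rotating the bone's rest direction, and that a KCA can be approximated by a cone about a reference axis; the actual Motion-Sphere construction is the one defined in [11], [31].

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = [w, x, y, z]."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return (2.0 * np.dot(u, v) * u
            + (w * w - np.dot(u, u)) * v
            + 2.0 * w * np.cross(u, v))

def sphere_trajectory(frames, rest_dir=np.array([0.0, -1.0, 0.0])):
    """Map one bone's quaternion frames to points on the unit sphere."""
    return np.array([quat_rotate(q, rest_dir) for q in frames])

def violates_kca(trajectory, kca_axis, kca_half_angle_deg):
    """Flag frames whose trajectory point leaves the cone-approximated KCA."""
    axis = kca_axis / np.linalg.norm(kca_axis)
    cos_limit = np.cos(np.deg2rad(kca_half_angle_deg))
    return trajectory @ axis < cos_limit  # True where the DoF limit is violated
```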
Figure 9 shows a right leg backward-lift motion sequence in which the end effector (right lower leg) position is edited to make it reach a target position. The sphere in Figure 9(a) represents the sensed right lower-leg motion as a trajectory: although the maximum rotation of the right lower leg is constrained, the trajectory lies well outside the KCA of the Motion-Sphere, reflecting an error induced during data acquisition. It can also be noticed that, since it is a backward lift motion sequence, the trajectory lies on the back side of the Motion-Sphere. The motion is edited to correct the lower-leg kinematics, and the resultant trajectory appears in the KCA of the Motion-Sphere, as shown in Figure 9(b).

Figure 10 shows a right arm hook motion sequence in which the end effector (right hand) position is edited to make it reach a target position. The sphere in Figure 10(a) represents the motion data as a trajectory appearing in the KCA for the right upper arm, confirming that the motion holds valid. The same motion is further edited into a double-tap hook motion: the maximum rotation of the right upper arm is constrained, and the trajectory remains well within the KCA of the Motion-Sphere, as shown in Figure 10(b). It can also be noticed that, since it is a forward hook motion sequence, the trajectory lies on the front side of the Motion-Sphere.

## V Results and Discussion

In this section, we demonstrate the capability of users to employ the proposed system to synthesize realistic human motion data. In addition, we compare the resulting scenario-based edited motion data with data acquired (as ground truth) using a real-time motion capture system.

### Experimental Setup

We recorded real-time boxing motion sequences using a heterogeneous multi-sensor integrated motion capture system. The motion data acquisition setup consists of a 32-channel light detection and ranging (LiDAR) system (Ouster OS-0) [32] placed at a fixed position and 10 IMU (Xsens MTw Awinda) [30] sensors strapped onto the body segments (pelvis, chest, right upper and lower arm, left upper and lower arm, right upper and lower leg, and left upper and lower leg) of the subject performing the boxing action. The subject was well within the LiDAR's field of view (1.5 \(m\) away from the LiDAR system and within its 5 \(m\) sensing range), as shown in Figure 11. Furthermore, we estimated the body segments' orientations and joint positions using an open-source framework [33]. In this comparative study, we used the orientations and positions estimated from the LiDAR measurements and IMU sensors as the ground truth.

Figure 10: Motion-Sphere based editing validation: (a) right-arm hook motion authored and edited for a target-hitting scenario and its validation, and (b) double-tap hook motion editing validation using Motion-Sphere.

Figure 9: Motion-Sphere based editing validation: (a) before editing - right lower-leg sensed unit motion appearing in the KUA of the Motion-Sphere, depicting the error induced during data acquisition, and (b) after editing - right lower-leg edited motion appearing in the KCA of the Motion-Sphere.
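The vector-based idea behind this ground-truth estimation can be illustrated with a short sketch: anchor the skeleton at a LiDAR-tracked pelvis position and chain IMU-derived segment orientations along rest-pose bone vectors to obtain joint positions. This is a simplified illustration of the general approach, not the framework of [33] itself; the segment names and bone vectors here are hypothetical.

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = [w, x, y, z]."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return 2.0*np.dot(u, v)*u + (w*w - np.dot(u, u))*v + 2.0*w*np.cross(u, v)

def vector_based_positions(pelvis_pos, segment_quats, chain, bone_vectors):
    """Estimate joint positions by chaining IMU-derived segment orientations
    from a LiDAR-tracked pelvis position.
    chain: ordered segment names from the pelvis outwards,
           e.g. ["pelvis", "r_upperleg", "r_lowerleg"] (hypothetical names).
    bone_vectors: rest-pose offset of each segment's distal joint, in metres."""
    positions = {chain[0]: np.asarray(pelvis_pos, dtype=float)}
    for parent, child in zip(chain[:-1], chain[1:]):
        offset = quat_rotate(segment_quats[parent], bone_vectors[parent])
        positions[child] = positions[parent] + offset
    return positions
```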
### Editing System Use-Case Evaluation

Herein, we illustrate the user's capability to take a sensed unit motion as input and synthesize three motion variations. First, the user selects the required motion data (in this experiment, a punch boxing motion) from the sensed unit motion database, as shown in Figure 12(a). Next, the user can reconstruct and visualize the chosen motion data through a hierarchical humanoid model using our interactive motion editing application. Subsequently, the user can employ the 3D props offered by our application (balls and bars) to compose user-defined scenarios. In this study, a target-hitting scenario was used to synthesize three punch boxing motion variations, namely lower straight, upper straight, and straight mid punch actions (as depicted in Figure 12(b), (c), and (d)). Users can fix the position of the target contact point (green, blue, and purple balls) and edit the right arm segment's forward-backward motion and lateral swing to make the end effector (right hand) reach it. Furthermore, the user can adjust the orientations of all other body segments and the pelvis joint position accordingly. Table 1 shows the user interaction attributes required to compose new variations of the sensed input motion. Users can control the editing parameters and complete the editing process in a reasonable time; both the time required to compose the poses and the number of iterations decrease as the user becomes acquainted with our system. The user can ensure the kinematic correctness of the edited motion using the Motion-Sphere's KCA feature.

\begin{table}
\begin{tabular}{l|c|c|c|c}
\hline \hline Motion Variation & Frames & Iterations & Speed & Time \\
\hline \hline Lower straight punch & 190 & 6 & fast & 4 \(min\) \\
\hline Upper straight punch & 280 & 9 & medium & 6 \(min\) \\
\hline Straight mid punch & 510 & 11 & slow & 9 \(min\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: User interaction attributes to synthesize three variations of the input sensed unit motion.

Figure 11: Ground truth data acquisition setup: a multi-sensor (LiDAR and IMU) human motion capture environment to record real-time motion data.

Figure 12: Motion editing system use-case evaluation: (a) sensed boxing motion sequence along with the right upper-arm segment motion visualization on Motion-Sphere; (b), (c), and (d) the three motion variations synthesized using our editing system and their visual validation using Motion-Sphere.

### Comparative Study

The proposed scenario-based sensed human motion editing system is evaluated in terms of orientation and position accuracy. First, three boxing motion scenarios (as in Figure 6(a), (b), and (c)) performed by the subject were captured using the experimental setup shown in Figure 11. Then, we utilized the sensed unit motion data from the motion database to synthesize motion sequences similar to the real-time motion sequences performed by the subject. Next, we compared the orientations and positions of the respective body segments obtained from the sensor setup with those extracted using our editing system.
#### Orientation Accuracy Comparison

We evaluated the orientation estimation accuracy of the proposed editing system against the ground truth data to analyze the similarity between the synthesized and sensed motion data. Table 2 presents the angular deviations of the body segments for the three scenarios. The example scenarios exhibit a maximum angular deviation of 3.35\({}^{\circ}\) for the pelvis (scenarios 1 and 3), 5.10\({}^{\circ}\) for the chest (scenario 3), 4.25\({}^{\circ}\) for the right arm (scenario 1), 2.40\({}^{\circ}\) for the left arm (scenario 3), 4.02\({}^{\circ}\) for the right leg (scenario 1), and 3.18\({}^{\circ}\) for the left leg (scenario 1). The similarity between the synthesized and sensed motion increases as the user becomes acquainted with the proposed system's motion editing module.

#### Position Accuracy Comparison

A vector-based joint position estimation framework [33] was used to evaluate the proposed system's position estimation accuracy. The full-body skeleton joint positions tracked from the LiDAR's point cloud were fused with those estimated using the IMU sensors, and the resultant joint position data were used as the ground truth to validate the position accuracy of the proposed editing system. Figure 16(a), (b), and (c) compare the joint positions (pelvis joint, scenario 1; right shoulder joint, scenario 2; left shoulder joint, scenario 3) in 3D space, together with the corresponding errors between our system's data and the ground truth data; the slope depicts the sensor drift error trend. Table 3 presents the mean position errors of the positions estimated using our system against the real-time captured data. The example scenarios exhibit a maximum error of 2.95 \(cm\) for the pelvis (scenario 3), 2.85 \(cm\) for the chest (scenario 3), 2.65 \(cm\) for the right arm (scenario 1), 2.75 \(cm\) for the left arm (scenario 3), 3.25 \(cm\) for the right leg (scenario 1), and 3.58 \(cm\) for the left leg (scenario 1). The results show close agreement in the estimated body segment orientations and positions.
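The deviation measures reported in Tables 2 and 3 can be computed along the following lines. Since the paper does not spell out the exact formulas, this sketch takes the angular deviation as the mean geodesic rotation angle between ground-truth and edited quaternions, and the position error as the mean Euclidean distance; both are assumptions.

```python
import numpy as np

def mean_angular_deviation_deg(q_truth, q_edit):
    """Mean geodesic rotation angle (degrees) between two quaternion
    sequences, each of shape (frames, 4) in [w, x, y, z] order."""
    dots = np.abs(np.sum(q_truth * q_edit, axis=1))     # |cos(theta/2)|, sign-invariant
    angles = 2.0 * np.arccos(np.clip(dots, 0.0, 1.0))   # rotation angle per frame
    return np.degrees(angles).mean()

def mean_position_error_cm(p_truth, p_edit):
    """Mean Euclidean joint-position error in cm; inputs of shape
    (frames, 3), in metres."""
    return 100.0 * np.linalg.norm(p_truth - p_edit, axis=1).mean()
```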
### Discussion

Human motion analysis and human activity recognition require numerous sources of human motion data; notably, several variations of the same motion are significant in deep learning-based recognition systems. Despite the availability of real-time motion capture systems and human motion databases, extensive effort is required from the user to capture motion sequences every time, and utilizing the existing motion data and modifying it according to different user requirements remains a major challenge. The proposed open-source, scenario-based sensed human motion editing and validation system is a modular tool that can easily be used to synthesize numerous human motion variations using the concept of sensed unit motions. Through this comparative study, the orientation and position accuracy of the proposed system was evaluated against a real-time motion capture system. The results indicate that it is advantageous to capture a minimal set of motion sequences and employ them to synthesize a whole new set of motion sequences and numerous subtle variations. Our system also enables the user to visually validate the kinematic correctness and plausibility of the synthesized human motion using the Motion-Sphere. Although the proposed editing system allows the user to manipulate both orientation and position, it is limited to visually validating body segment orientation; at present, our system cannot be used to validate joint position manipulations.

\begin{table}
\begin{tabular}{c|c|c|c}
\hline \hline Body Segment & \multicolumn{3}{c}{Angular Deviation in degrees} \\
 & \multicolumn{3}{c}{Sensed (Ground Truth) _vs._ Edited (Our System)} \\
\hline
 & Scenario 1 & Scenario 2 & Scenario 3 \\
\hline \hline Pelvis & 3.35 & 3.21 & 3.35 \\
\hline Chest & 4.51 & 3.17 & 5.10 \\
\hline Right Arm & 4.25 & 2.53 & 2.72 \\
\hline Left Arm & 1.97 & 1.18 & 2.40 \\
\hline Right Leg & 4.02 & 2.74 & 3.14 \\
\hline Left Leg & 3.18 & 3.05 & 1.44 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Body segment angular deviations between the ground truth and our proposed system.

Figure 16: Joint position difference comparison: (a) pelvis joint position and its error plot (scenario 1), (b) right shoulder joint position and its error plot (scenario 2), and (c) left shoulder joint position and its error plot (scenario 3).

\begin{table}
\begin{tabular}{c|c|c|c}
\hline \hline Body Joint & \multicolumn{3}{c}{Mean Position Error in \(cm\)} \\
 & \multicolumn{3}{c}{Sensed (Ground Truth) _vs._ Edited (Our System)} \\
\hline
 & Scenario 1 & Scenario 2 & Scenario 3 \\
\hline \hline Pelvis & 2.58 & 1.65 & 2.95 \\
\hline Chest & 2.40 & 1.60 & 2.85 \\
\hline Right Arm & 2.65 & 1.40 & 2.10 \\
\hline Left Arm & 2.05 & 1.37 & 2.75 \\
\hline Right Leg & 3.25 & 1.95 & 2.50 \\
\hline Left Leg & 3.58 & 1.68 & 2.88 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Comparison of the joint positions estimated using our system with those obtained from the ground truth.

## VI Conclusion

With significant advancements in the various fields that require human motion analysis and human activity recognition, the demand for human motion data has increased manifold, and it is therefore imperative to synthesize realistic human motion data to cater to such demands. However, the existing open-source human motion databases exhibit several limitations and are not flexible enough for straightforward use. Additionally, recording human motion data using a motion capture system in a controlled environment is challenging and tedious. One possible solution is to reuse the existing motion data effortlessly to generate numerous variations of the motion sequences or an entirely new set of motion data. This paper introduces a comprehensive open-source tool that can be used effectively to author and edit human motion data, synthesizing numerous variations of the existing motion data as well as entirely new motion sequences. In addition, this tool serves as a basis for validating the kinematic correctness and plausibility of the edited motion data using Motion-Sphere. The concept of the sensed unit motion collection introduced in this paper demonstrates how a relatively small collection of existing motion data can be used to synthesize new motion sequences based on different scenarios: a user-defined scenario serves as a motion plan, and one can select the unit motion segments, edit them accordingly, and blend them to compose new motion sequences. The end-effector-based analytical kinematic and constraint solver employed in this study enables the user to edit motion data interactively, without prior expertise, in a user-friendly environment. Furthermore, the Motion-Sphere serves as a novel and intuitive method to validate the kinematic correctness of the edited motion and thus helps ensure the naturalness of the synthesized motion data.
Finally, we compared the newly synthesized motion data with the IMU motion capture data as ground truth to verify the realism of the resultant motion. In the future, we will consider open-source databases and employ the proposed editing and validation framework to synthesize numerous variations of the existing motion data and readily use them in deep learning-based human activity recognition and human motion analysis.

## References

* [1] J. Sedmidubsky, P. Elias, P. Budikova, and P. Zezula, "Content-based management of human motion data: Survey and challenges," _IEEE Access_, vol. 9, pp. 64241-64255, 2021.
* [2] C. Mandery, O. Terlemez, M. Do, N. Vahrenkamp, and T. Asfour, "Unifying representations and large-scale whole-body motion databases for studying human motion," _IEEE Trans. Robot._, vol. 32, no. 4, pp. 796-809, 2016.
* [3] M. Gleicher, "Animation from observation: Motion capture and motion editing," _ACM SIGGRAPH Comput. Graph._, vol. 33, no. 4, pp. 51-54, 1999.
* [4] M. Gleicher, "Comparing constraint-based motion editing methods," _Graph. Models_, vol. 63, no. 2, pp. 107-134, 2001.
* [5] M. Gleicher, "Motion editing with spacetime constraints," in _Proc. Symp. Interact. 3D Graph. (SI3D)_, 1997, pp. 139-148.
* [32] Ouster, _OS0 Hardware User Manual_. Accessed: Dec. 1, 2020. [Online]. Available: [https://data.ouster.io/downloads/hardware-user-manual/hardware-user-manual-revd-os0.pdf](https://data.ouster.io/downloads/hardware-user-manual/hardware-user-manual-revd-os0.pdf)
* [33] A. K. Patil, A. Balasubramanyam, J. Y. Ryu, B. Chakravarthi, and Y. H. Chai, "An open-source platform for human pose estimation and tracking using a heterogeneous multi-sensor system," _Sensors_, vol. 21, no. 7, p. 2340, Mar. 2021.

**Bharatesh Chakravarthi** received the B.E. degree in information science and the M.Tech. degree in computer networks engineering from Visvesvaraya Technological University, Karnataka, India, in 2011 and 2013, respectively. He is currently pursuing the Ph.D. degree in computer graphics and virtual reality with the Graduate School of Advanced Imaging Science, Multimedia and Film, Chung-Ang University, Seoul, South Korea. His research interests include human motion capture systems, human motion visualization, sensors, computer graphics, and virtual reality.

**Ashok Kumar Patil** received the M.C.A. degree from Visvesvaraya Technological University, Belagavi, India, in 2007, and the Ph.D. degree in CG/VR from the Graduate School of Advanced Imaging Science, Multimedia and Film, Chung-Ang University, Seoul, South Korea. He is currently a Postdoctoral Fellow with the Virtual Environments Laboratory, Chung-Ang University. His research interests include computer graphics, virtual reality, interactive systems, HCI, and automation in 3D reconstruction.

**Jae Yeong Ryu** received the master's degree in computer graphics and virtual reality from the Graduate School of Advanced Imaging Science, Multimedia and Film, Chung-Ang University, Seoul, South Korea, in 2021, where he is currently pursuing the Ph.D. degree with the Virtual Environments Laboratory. His research interests include virtual reality and motion recognition.

**Adithya Balasubramanyam** received the Ph.D. degree in computer graphics and virtual reality from Chung-Ang University, Seoul, South Korea.
He is currently a Faculty Member with the Department of Computer Science and Engineering, PES University, Bengaluru, India, and a research member of the Virtual Environments Laboratory, Chung-Ang University. His research interests include human motion tracking, virtual reality, and robotics.

**Young Ho Chai** received the M.S. and Ph.D. degrees in mechanical engineering from Iowa State University, the Ph.D. in 1997. From 2006 to 2007, he was with the Louisiana Immersive Technologies Enterprise (LITE), University of Louisiana at Lafayette, Lafayette, LA, USA. He is currently a Professor with the Graduate School of Advanced Imaging Science, Multimedia and Film, Chung-Ang University, Seoul, South Korea, where he heads and leads the Virtual Environments Laboratory. His research interests include spatial sketching, HCI, HAR, and motion recognition.
_Abstract_—Synthesizing realistic human motion data using a real-time motion capture system in a controlled environment is a critical challenge. In addition, effectively manipulating existing motion data is another primary concern, and using such modified data in human motion analysis and activity recognition systems is prone to errors. This paper presents a simple and comprehensive system to effortlessly author, edit, and validate human motion data. The system enables a novice user to interactively edit existing motion data using a humanoid model in a three-dimensional space, based on user-defined scenarios, and to synthesize numerous variations of the motion sequences. A modular concept of scenario-based sensed unit motion editing has been adopted to demonstrate the proposed system. We employed an efficient analytical kinematic and constraint solver to enforce the inherent body joint limits and external constraints while editing, so as to synthesize complete and meaningful motion sequences. Furthermore, we substantiated the proposed sensed unit motion editing framework through a visual validation study using an open-source, intuitive visualization tool called the Motion-Sphere. Finally, we compared the resultant synthesized motion against real-time motion capture system data to verify the deviations in body segment orientation and position accuracy.

_Index Terms_—Data visualization, motion authoring, motion capture system, motion editing, motion reconstruction, motion-sphere, kinematics, and constraints.
# From TLS point cloud data to geometrical genesis determination of ribbed masonry vaults

L. Agustin-Hernandez\({}^{1}\), R. Argiolas\({}^{2}\), V. Bagnolo\({}^{2}\), M. Sancho Mir\({}^{1}\)

\({}^{1}\) Department of Architecture, University of Zaragoza, Zaragoza, Spain - [email protected]

\({}^{2}\) DICAAR, University of Cagliari, Cagliari, Italy - [email protected]

## 1 Introduction

The entire history of construction is characterised by the presence of geometric rules used to govern architecture. The geometric rule, and consequently the proportional one, was invested with the role of a real science even before the advent of construction science; the geometric rule played such a central role that, thanks to the simplicity and aesthetic canon that distinguish it, it long remained in architectural construction practice despite the progress of the physical and mathematical sciences (Rondelet, 1832). An emblematic case is that of the diagrams for the layout of Gothic vaults, often quite complex elements which in the historical treatises are reduced to relatively simple graphic schemes. Consider Villard's travel notebook (Villard de Honnecourt, XIII sec.), perhaps the first example of a geometric construction scheme put down on paper. He affirms with undeniable certainty rules that have not been (for us) scientifically demonstrated, and this not for lack of mathematical knowledge, as demonstrated by Giordano Nemorario's reasoning on perpetual motion (Clagett, 1981), but out of conviction, almost faith, in the geometric rule. Villard formulates his rule of the three arches on the convenience of keeping constant the curvature of the arches at the base of the geometric construction. Villard's is only the first of a long series of geometric constructions that find their genesis in the identification of the curvatures of the ribs. The importance of the curvatures of the ribs is such that Willis (1842) asserts that knowing these curvatures is sufficient to completely define the entire geometry of the vault.

During the fifteenth and sixteenth centuries, Sardinian culture underwent important changes with the intense process of Hispanicization. The island's architecture was also strongly influenced by the Spanish one, and today Sardinia preserves numerous testimonies of this era, especially in religious architecture. The question remains whether the practices and models of Iberian architecture were literally imported into Sardinia or whether they were adapted on the island with their own distinctive variations and peculiarities. An architectural component that can certainly give us a measure of the Spanish influence in Sardinian architecture is that of the ribbed vaulted systems, which had widespread diffusion on the island in those centuries. The aim of the present contribution is to search for the proportional and geometrical characteristics that the Sardinian late Gothic systems have in common with those of the region of Aragon, precisely because of the strong cultural and architectural influence that the latter had on Gothic architecture in Sardinia. To do this, surveys and geometric analyses were carried out on two case studies, the Church of Santa Lucia in Cagliari and the Chapel of San Miguel in Zaragoza. In both churches, two cross vaults in succession cover the nave.
The analysis focused on these vaults and included a laser scanner survey, in order to obtain a point cloud of sufficient precision to carry out studies on the geometry of the vaulted systems. The analysis includes the identification of the intrados profiles of the ribs and therefore the definition of the curvatures and centres of all the arches making up the vaults. Finally, the results are presented by means of summary diagrams and comparison tables. The summary representation chosen is based on the work of Juan Carlos Palacios in his "La Cantería Medieval" (2009), in which the author collects the methods of representation used by various treatises of the 16th century; this type of representation involves matching the planimetric diagram to the ribs, or just their soffit profiles, by flipping them over and showing the various comparisons between the curvatures as well as the start and end points of each profile.

## 2 State of the art

### Digital survey techniques

The analysis of complex architecture such as Gothic vaulting systems requires precise dimensional data. The analogue instruments, such as measuring rods or plumb lines, used in traditional surveys have two major limitations: accessibility to the vaults to be surveyed and, if access is via scaffolding, for example, an excessively high margin of error. The introduction of digital surveying systems, in particular laser scanning and photogrammetry, offers not only a higher degree of precision and greater speed, but also the possibility of surveying very high vaults from ground level. These advantages, combined with ease of use and availability, have made laser scanning and photogrammetry fundamental tools for surveying the historical heritage. In the case of vaulted systems, it is now possible to survey both their details and their entire configuration at the same time, with levels of precision in the order of a millimetre; a level of detail that, under the right conditions, can be seen in both the point clouds and the resulting meshes (Corns et al., 2017; Tallon, 2014). The digital nature of the information collected offers the additional advantage of being able to apply data analysis and manipulation techniques that were previously impossible (Di Mascio, 2015). As mentioned above, the time needed to complete the surveys is also considerably reduced thanks to digital tools, from weeks in the case of manual surveys to just a few days, with a significant improvement also in terms of error elimination (Richens and Herdt, 2009). In spite of the advantages analysed here, the transition from analogue to digital methods does not eliminate the need to collect documentary information in order to design the survey in the best possible way, which is the only way to achieve an acceptable level of accuracy (Mitchell, 1992). This is fundamental when analysing vaulted systems: although at first glance these may appear to have particularly simple geometries, the accuracy of digital survey systems has often revealed a complexity far greater than initially assumed. An example is the analysis that Willis (1842) suggests for the definition of the curvatures of the ribs in the vaults and the identification of the centres of curvature in relation to the impost lines (Figure 1); a real verification of these theories, extremely difficult in the past, is now possible thanks to digital survey techniques (Webb et al., 2016).
This is possible thanks to the versatility of these survey systems, which allow us to obtain two-dimensional outputs, such as orthophotos and sections, and at the same time three-dimensional models, such as point clouds and meshes, which can not only be analysed but also navigated (Bagnolo et al., in press).

### Geometrical analysis of the vaults and their representation

In the transition from High to Late Gothic, the representations of vaults became more complex and elaborate, while the techniques tended to facilitate their layout and construction. The search for standardisation and simplification characterises all Gothic architecture, and in this sense the tendency to build complex ribbed vaults using as few curvatures as possible is emblematic (Tellia, 2013). Analysing the historical treatises that refer to Gothic vaulting systems, a particularly frequent technique of representation emerges: the planimetric scheme connected, by means of projections, to the traces of the ribs folded down onto one side of the plan; these traces appear both as complete representations and as simple intrados profiles. One reason for the choice of this representation technique is undoubtedly the importance of keeping the curvatures as constant as possible; this is clear from the very first examples of geometric constructions indicated for the construction of vaulted systems, first and foremost Villard's so-called rule of three arches (XIII sec.). Drawings of plans and folded-down curvatures are widely used not only in texts proposing vault construction techniques but also, in later times, in analytical studies; we can therefore find examples of treatises that use this technique to define the shapes of vaults, others to indicate how the stones should be cut, or even to carry out static analyses. An interesting example comes to us from Ribes (1708), who proposes a collection of 40 plates, each containing the description of a vault, organised in order of increasing complexity (Figure 2).

Figure 1: Curvatures of the ribs in a drawing by Willis.

In particular, his drawings respect polar or axial symmetries, according to coordinated axes or according to diagonals; the keys are represented in elevation and the ribs lie on planes perpendicular to the plane of the imposts (Tellia, 2013). Ribes reworks and re-proposes some of the concepts already proposed in the past by illustrious examples such as Rodrigo Gil, Vandelvira and Gelabert, all from the mid-1500s. Contemporaries of the latter are also Hernán Ruiz and Francisco de Luna, both authors of interesting works on stereotomy and on the cutting of ashlars; in particular, in 1560 Ruiz represents a Gothic vault by drawing the plan and the elevations of the arches, of which he only reports the directions (Navascués Palacio, 1974). The last example to be given is that of the drawing of vaults for static analysis. Starting with the studies of Willis (1835), the first to propose a theory on the possible geometric genesis of the ribs (Huerta, 2009), techniques of graphic analysis for their static verification were developed over time (Planat, 1887; Wittmann, 1879). Precisely because of the importance of his theories on geometric genesis, Willis is recognised as a central figure in the study of vaulted systems, with particular reference to the pivotal role given to curvatures, going deeper into what was described by De l'Orme (1561), one of his main references (Huerta, 2016).
## 3 Cases of Study

### The Chapel of San Miguel (La Parroquieta), Zaragoza

The Chapel of San Miguel, known by the nickname "La Parroquieta" and built on the site of the old mosque of Saraqusta, is located at the northwest end of the Cathedral of La Seo del Salvador in Zaragoza and has an access independent of the Cathedral itself (Figure 3). "It is actually the funerary chapel commissioned by the Archbishop of Zaragoza, Don Lope Fernandez de Luna" (Borras Gualis et al., 2019, p. 94). With approximate dimensions of 22.50 x 10.00 m and a height of 17.00 m up to the viewpoint that crowns the building, it has a nave of two bays covered by ogival vaults, made of stone, a rare occurrence in the city of Zaragoza, where there are no quarries nearby and most of the heritage architecture is in brick and plaster. The perimeter walls are of double leaf, containing stairs and corridors, covered by brick vaults that in turn lock the outer leaf to the inner one, which avoids the construction of buttresses. "The work was carried out between 1374 and 1381, responding to a rectangular plan with two sections covered with ribbed vaults; the main chapel opens by a pointed arch and is covered with a magnificent Mudejar armature" (Lopez Guzman, 2016, p. 274) (Figure 4).

In height, the building is divided into three parts. In the basement, a space of approximately square plan, 10.00 m per side, preserves the capitals of what could have been a stone masonry cross vault; at present, two diaphragm arches support a one-way beam floor. It is accessed from the ground floor, behind the solid arch where the sarcophagus of Don Lope is located, and through the north wall. The second space, at street level, is the chapel already described, with a double height: at the top, over the access, is the choir, the third space of the building, which, like the basement, is reached by stairs within the north wall. The double leaf of the walls, particularly interesting from a constructive point of view, housed another series of corridors, such as the one that linked the cathedral with the Archbishop's Palace, of which photographs are still preserved. Outside, we can observe brick masonry among the most elaborate and valuable in Aragon, which contrasts with the interior in the materials used: outside, brick and ceramics are employed almost exclusively, both constructively and in the decorative elements, as opposed to the stone inside. The complex is topped off with an accessible upper gallery above the vaults of the chapel, in turn covered with Arabic tile (Figure 5).

Figure 4: The vaults covering the nave of the Chapel of San Miguel.

Figure 3: Plan of the Chapel of San Miguel.

### The church of Santa Lucia, Cagliari

The church of Santa Lucia (Figure 6), part of the Convent of the Clarisse Nuns, is located in Cagliari in the historic district of Castello, an important area of the city which until the 19th century housed all the main powers of the city government. In the absence of reliable documentary sources, it is not possible to date the building with precision, but the presence of numerous elements that it shares with other churches in the same district ascribes it to the end of the 16th century.
A useful document for dating the church of Santa Lucia is certainly the contract for the construction of the nearby church of Santa Maria del Monte di Pietà, dating back to 1571, in which we find the explicit request to build the vault of the presbytery with shapes similar to those of the stellar vault of the presbytery in the Church of Santa Lucia (Mereu, 1999). The church, entirely built in limestone masonry, consists of a single nave divided into two vaulted bays, ending in a quadrangular presbytery with a lower section than the nave, without transept and with two side chapels (Figure 7). The moulded ribs of the vaults branch off from worked corbels. The keystones take the form of pendant bosses and constitute the most elaborate and decorative element of the vaulted system. The presbytery is shaped like a _capilla mayor_ and is surmounted by a star vault with five keystones set on a quadrangular base. The presbytery vault of Santa Lucia adheres to the languages of the same architectural avant-gardes of the central Iberian and Mesoamerican spheres.

In the first vaulted bay, above the current entrance, there is a high choir, supported by a net vaulted ceiling with lunettes, built in later times to replace the original vaulted system, of which two springers of the arch can still be identified in the masonry. On the right, for those entering from the main entrance, there are two rectangular side chapels, while on the left there were openings, now obliterated, which connected the church with the convent; only a connecting portal is still present. The first chapel has a barrel vault with lunettes, while the second chapel is covered by a ribbed cross vault to which a half cross vault is added. The sacristy adjacent to the presbytery is covered with a barrel vault. All the perimetral and diaphragm arches on which the late Gothic vaults are set are ogival arches. The ribbed cross vaults of the church of Santa Lucia show a minimalism typical of simple cross vaults, familiar to the Catalan, Valencian, and Majorcan architectural traditions. In the decorations we find instead a mixture that takes up the Spanish Gothic and the influences of the Italian Renaissance, known as the Plateresque style, present in capitals, cornices, and balustrades (Schirru, 2014).

## 4 Methodology

The first step is the collection of historical and documentary data; this is indispensable for a correct interpretation of the characteristics of the churches and for planning the surveys. In addition to the necessary historical and documentary investigations, laser scanning surveys were carried out on both buildings. In some cases, photogrammetry was used to survey specific details or the outside of the buildings, where laser scanning proved too difficult. The aim of the research is to identify variants and invariants of the geometries governing the two case studies, with the intention of increasing their number in subsequent phases. Once the survey was carried out and the point clouds processed, a first analysis addressed the planimetric scheme, in order to identify the main features and dimensions of the plans. This phase aims to identify possible symmetries, both polar and axial, with which to define the plan underlying the synthesis diagrams.
The study of the curvatures of the arches was carried out by sectioning the point clouds with vertical planes; according to the theoretical rules found in the treatises, the profiles of the arches should in fact lie in a plane perpendicular to the springing plane. One section was made for each arch. Using CAD software, it was possible to draw best-fitting arcs corresponding to the intrados profiles of the ribs; the curves so defined were characterised by their radii and by the positions of their centres relative to the springing plane. All the resulting profiles are drawn alongside the plans, following the representation technique illustrated earlier. The final step, the comparison between the case studies, is carried out by means of a tabular summary of the main characteristics of the vaults: symmetries, dimensional ratios, curvature radii, and elevations of the centres.

### Analysis of the Chapel of San Miguel

For the geometric analysis of the vaults, a three-dimensional model of the interior space of the chapel was produced using a FARO Focus 150 laser scanner, with 9 scan stations on the ground floor, 4 additional stations for the choir at the top plus 2 for its access stairway, and 3 for the basement crypt plus 2 for the stairway leading down to it; to complete the work, the exterior of the building was also scanned, complemented by a series of low-level terrestrial and drone-based aerial photogrammetric captures (Figure 8). From the data obtained, the point clouds observed in the figures were generated; once imported into a CAD program, they were cut by the planes containing each arch, with the coordinate reference system located in those planes, and the traces were delineated to obtain the geometric characterisation of the Gothic vaults of the interior (Figures 9-10). The representation of the traces has been done by delineating the projection in plan, as well as the intrados of the arches folded down in true magnitude, a method that allows a quick reading and facilitates the comparison of the traces of different vaults (Palacios Gonzalo, 2009), already reported by various treatises of the sixteenth century (Figure 11).

These are two simple cross vaults covering a rectangular space with a span of 6.34 m and a distance between supports of 5.25 m; for their comparative analysis we will precisely define the ribs, because cross vaults are defined geometrically through the definition of their linear elements, the arches (Rabasa, 2013). The crossing is resolved by two pointed diagonal arches with their centres of curvature on the impost line and a radius of 5.2 m. The _perpiaño_ (transverse) arch also has its centre on the impost line and a curvature, of radius 5 m, practically equal to that of the diagonal arches, while the wall arch is stilted, with its centre 90 cm above the impost line and a radius of curvature of 4.73 m. The longitudinal ridge line is straight, with a height difference of 18 cm between its ends, and the transverse ridge line is straight and practically level, as the height difference between its ends is only 6 cm.

Figure 8: Point cloud digital model of the Chapel of San Miguel.

Figure 9: Definition of curvatures and centres of diagonal arches of one of the vaults of San Miguel.
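The best-fitting arcs described in the methodology above can also be computed directly from the sectioned points rather than drawn manually. The sketch below uses one standard approach, an algebraic (Kåsa) least-squares circle fit, assuming the intrados points of one rib have already been sectioned and projected to 2D coordinates in the vertical plane of the arch; it returns the radius and the centre position relative to the springing plane.

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares (Kasa) circle fit.
    points: (n, 2) array of 2D intrados points in the arch plane.
    Solves x^2 + y^2 + a*x + b*y + c = 0 in the least-squares sense
    and returns the centre (cx, cy) and radius r."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - c)
    return (cx, cy), r

# With y = 0 placed on the springing plane, cy gives the elevation of the
# centre (e.g. the ~0.90 m stilting measured for the wall arch above).
```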
The scans were made with the HDS 7000, a Leica phase-based scanner with a field of view of 360\({}^{\circ}\)x270\({}^{\circ}\) and a maximum scan rate of one million points/sec. The survey operations included 11 scan stations, for a total of 1.8 billion points (Figure 12); the resulting scans were processed using Leica Cyclone software to obtain structured data. The subsequent cleaning, segmentation and sectioning of the clouds was carried out using CloudCompare software, from which it was possible to extract plan and elevation views and the various sections required. The drawings thus obtained were analysed using CAD software, in which the geometric studies of the plans and their synthesis were carried out, as well as the analysis of the curvatures of the ribs. As already mentioned, the aim was to define, for each rib and ridge line, the intrados profile, the radius of curvature and its centre (Figures 13-14). The information obtained was finally synthesised according to the graphic representation described earlier (Figure 15).

Figure 10: Definition of curvatures and centres of perimetral arches of one of the vaults of San Miguel.
Figure 11: Schematisation of the geometries of one of the vaults of San Miguel.
Figure 12: Point cloud digital model of the Church of Santa Lucia.
Figure 13: Definition of curvatures and centres of perimetral arches of one of the vaults of Santa Lucia.

The last step is the compilation of a summary table of the main characteristics detected (Table 1), in order to have not only an overview of the properties of the vaults analysed, but also to facilitate the subsequent comparison phase.

## 5 Conclusions

Starting from the hypothesis of the existence of strong analogies between the complex vaulted systems of the so-called "Mediterranean Gothic" in Italy and Spain, the research begins with the analysis of two case studies of ribbed ogival cross vaults identified in two churches in the regions of Aragon and Sardinia. In the historical treatises, a fundamental assumption for the definition of ideal reference models, the articulation of the geometric characteristics that govern the different parts of the cross vaults can be defined on the basis of certain recurring parameters; in the workflow, these features become the initial parameters of a generative algorithm capable of modelling the geometry of all the different components of the ribbed cross vaults of the two churches (Bagnolo and Argiolas, 2021). In this first phase of the research, it was decided to identify two case studies that proposed solutions for ribbed cross vaults with ogival arches chronologically distant from each other and with geometries that could at first sight be traced back to two different models proposed by the historical treatises. In our study, we therefore considered an example of a cross-ribbed vault with straight ridge lines (Zaragoza) and a cross-ribbed vault with curvilinear ridge lines (Cagliari). This choice becomes fundamental when one not only wishes to carry out a comparative analysis of the geometries that characterise the two vaulted systems, but mainly when one wishes to verify the adequacy and effectiveness of the parameters identified at the basis of the process of analysis and procedural modelling.
Apart from the obvious differences mentioned above, the two specific cases have several similarities: in both case studies, the individual vaults present a precise axial symmetry, with differences in curvature between the two halves of the same arch of less than 3%; furthermore, with a few exceptions, which will merit future analysis, the radii of curvature within the same vaulted system present maximum variations of less than 10% (Table 1). These two factors would seem to confirm the desire for standardisation in Gothic constructions. Finally, all of the vaults analysed, although to varying degrees, have raised ridge lines. In conclusion, in spite of the differences existing between the two examples investigated, deriving primarily from the choices made during the identification of the two case studies, the results obtained allow us to conclude that even in the presence of ogival cross vaults with different geometric characteristics, the initial parameters that allow us to conduct the geometric analyses of the surfaces remain the same.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Vault} & Parroquieta 1\({}^{a}\) vault & Parroquieta 2\({}^{a}\) vault & S. Lucia 1\({}^{a}\) vault & S. Lucia 2\({}^{a}\) vault \\ \hline \multicolumn{2}{|c|}{H [m] (spring–boss height difference)} & 5.25 & 5.25 & 5.75 & 6 \\ \hline \multicolumn{2}{|c|}{W [m] (longitudinal)} & 5.25 & 5.25 & 6.7 & 7.5 \\ \hline \multicolumn{2}{|c|}{L [m] (transversal)} & 6.34 & 6.34 & 8.3 & 8.3 \\ \hline \multirow{12}{*}{Arches Radii [m]} & a & 5.2 & 5 & 4.47 & 4.46 \\ \cline{2-6} & b & 5.2 & 5 & 4.34 & 4.46 \\ \cline{2-6} & c & 4.73 & 4.73 & 5.23 & 5.17 \\ \cline{2-6} & d & 4.73 & 4.73 & 5.23 & 5.10 \\ \cline{2-6} & e & 5 & 5.2 & 4.46 & 4.53 \\ \cline{2-6} & f & 5 & 5.2 & 4.46 & 4.45 \\ \cline{2-6} & g & 4.73 & 4.73 & 5.14 & 4.72 \\ \cline{2-6} & h & 4.73 & 4.73 & 5.14 & 4.82 \\ \cline{2-6} & i & 5.19 & 5.19 & 5.44 & 4.77 \\ \cline{2-6} & j & 5.19 & 5.19 & 5.40 & 4.77 \\ \cline{2-6} & k & 5.19 & 5.19 & 5.55 & 5.13 \\ \cline{2-6} & l & 5.19 & 5.19 & 5.42 & 5.22 \\ \hline \end{tabular} \end{table} Table 1: Dimensional data of the vaults and arch radii; the coupled arches are highlighted in grey. All dimensions are in meters.

Figure 14: Definition of curvatures and centres of diagonal arches of one of the vaults of Santa Lucia.
Figure 15: Schematisation of the geometries of one of the vaults of Santa Lucia.

###### Acknowledgements.

The 3D laser scanning of the Church of Santa Lucia was carried out at LabMAST (Laboratory for historical and traditional materials and architectures), University of Cagliari, Department of Civil-Environmental Engineering and Architecture (DICAAR). LIDAR point cloud product by Andrea Dessi and Sergio Demontis.

## References

* Bagnolo, V., & Argiolas, R. (2021). Scan-to-BIM Process Versus 3D Procedural Modelling of Gothic Masonry Vaults. In _From Building Information Modelling to Mixed Reality_ (pp. 17-31). Springer.
* Bagnolo, V., et al. (in press). Communicating architecture. An AR application in Scan-to-BIM processes. _Representation Challenges: Augmented Reality and Artificial Intelligence in Cultural Heritage and Innovative Design Domain_.
* Borras Gualis, G. M., Pradinas, P. L., & Mogollon Cano-Cortes, P. (2019). _El Arte Mudejar: La estetica islamica en el arte cristiano_ (2nd ed.). Museum Ohne Grenzen (Museum With No Frontiers).
* Clagett, M. (1981). _La scienza della meccanica nel Medioevo_. Feltrinelli.
* Corns, A., Devlin, G., Devey, A., Shaw, R., & Shine, L. (2017). 3D-ICONS Ireland - Fulfilling the potential of a rich 3D resource. _Internet Archaeology, 43_(2), 1-7.
* De L'Orme, P. (1561). _Nouvelles inventions pour bien bastir et a petits fraiz_. Federic Morel.
* Di Mascio, D. (2015). Analytical drawings of architectural built heritage. In _Proceedings of the 12th Conference of the European Architectural Envisioning Association (EAEA), Envisioning Architecture: Imaging, Perception and Communication of Heritage_.
* Huerta, S. (2009). The debate about the structural behaviour of gothic vaults: From Viollet-le-Duc to Heyman. In _Proceedings of the Third International Congress on Construction History_.
* Huerta, S. (2016). Willis's sources on gothic vault construction.
* Lopez Guzman, R. (2016). _Arquitectura mudejar_ (3rd ed.). Catedra.
* Mereu, S. (1999). Per una storia del Tardogotico nella Sardegna meridionale: nuove acquisizioni e documenti d'archivio. _Studi Sardi, 31_ (1994-1998).
* Mitchell, W. (1992). _The Reconfigured Eye: Visual Truth in the Post-Photographic Era_. Cambridge, Massachusetts: MIT Press.
* Navascues Palacio, P. (1974). El libro de arquitectura de Hernan Ruiz, el Joven.
* Palacios Gonzalo, J. C. (2009). _La canteria medieval: la construccion de la boveda gotica espanola_. Munilla-Leria.
* Planat, P. (1887). _Pratique de la mecanique appliquee a la resistance des materiaux_ (Vol. 2). Aux Bureaux de La Construction Moderne.
* Rabasa, E. (2013). Estereotomia: teoria y practica, justificacion y alarde. _Inf. Constr., 63_(Extra-2), 5-20. https://doi.org/10.3989/ic.13.014
* Ribes, J. (1708). _Llibre de trasas de viax y muntea_.
* Richens, P., & Herdt, G. (2009). Modelling the Ionic capital.
* Rondelet, J. B. (1832). _Traite theorique et pratique de l'art de batir: Planches; 5_ (Vol. 5). Rondelet Fils. https://books.google.es/books?id=Gnk0AAAAAYA4&printsec=frontcover&hl=it&v=onepage&q&f=false
* Schirru, M. (2013). Il monastero di Santa Lucia a Cagliari e l'architettura di clausura nella prima epoca moderna.
* Schirru, M. (2014). I sistemi voltati nelle architetture religiose della Sardegna tra il Cinque e il Seicento: tecniche costruttive e varianti estetiche. _Lexicon. Storie e architetture in Sicilia e nel Mediterraneo, 18_, 81-87.
* Tallon, A. (2014). Divining Proportions in the Information Age. _Architectural Histories, 2_(1).
* Tellia, F. (2013). Las bovedas de cruceria en el Llibre de trasas de viax y muntea de Joseph Ribes. In _Actas del Octavo Congreso Nacional de Historia de la Construccion_, Madrid.
* Villard De Honnecourt (XIII sec.). _Livre de portraiture_. Paris, Bibliotheque nationale de France. http://gallica.bnf.fr/ark:/12148/btv1b10509412/f1.image
* Webb, N., Buchanan, A., & Peterson, J. R. (2016). Modelling medieval vaults: Comparing digital surveying techniques to enhance our understanding of gothic architecture. _eCAADe 2016: Complexity & Simplicity, Vol. 2_, 493-502.
* Willis, R. (1835).
_Remarks on the Architecture of the Middle Ages: Especially of Italy_. J. & J. J. Deighton.
* Willis, R. (1842). _On the construction of the vaults of the Middle Ages_. Royal Institute of British Architects.
* Wittmann, W. (1879). Zur Theorie der Gewolbe. _Zeitschrift fur Bauwesen, 29_, 61-74.
The contribution aims to explore the possibility of tracing the geometry of ribbed vaults from two different Mediterranean regions back to a single matrix, verifying the presence of possible local variations of the same rules. In particular, the analyses are being carried out in parallel on case studies from the regions of Sardinia in Italy and Aragon in Spain. The two case studies are the Iglesia Parroquial del Salvador (La Seo) in Zaragoza and the Church of Santa Lucia in Cagliari. Both constructions can be traced back to the style known as Late Mediterranean Gothic, which characterised the architecture of the countries bordering the Mediterranean basin between the 14th and 17th centuries. The two case studies were chosen to lie almost at the extreme ends of the Late Gothic period, in order to determine whether the invariants sought persist even across relatively distant periods. The analysis focused on the cross vaults covering the two naves and included a laser scanner survey, in order to obtain a point cloud of sufficient precision to carry out studies on the geometry of the vaulted systems, the identification of the intrados profiles of the ribs, and therefore the definition of the curvatures and centres of all the arches making up the vaults. Finally, the results are presented by means of summary diagrams and comparison tables.
# Elliptic Flow of Protons and Antiprotons in Au+Au Collisions at \(\sqrt{s_{NN}}\) = 7.7-62.4 GeV within Alternative Scenarios of Three-Fluid Dynamics

Yu. B. Ivanov ([email protected]), Kurchatov Institute, Moscow RU-123182, Russia

## I Introduction

The large azimuthal anisotropic flow at the Relativistic Heavy Ion Collider (RHIC) is believed to be conclusive evidence for the creation of dense partonic matter in ultra-relativistic nucleus-nucleus collisions. This anisotropy is described by flow parameters defined as the proper Fourier coefficients \(v_{n}\) of the particle distributions in azimuthal angle with respect to the reaction plane [1]. The major bulk of research, both theoretical and experimental, has been done on the elliptic flow (\(v_{2}\)). Recently, new data of the STAR Collaboration [2] on the transverse-momentum dependence of the elliptic flow of identified particles in the incident energy range of \(\sqrt{s_{NN}}\) = 7.7-62.4 GeV were reported. These data were taken within the Beam Energy Scan (BES) program proposed at RHIC. The BES program was proposed in order to study the possible onset of deconfinement, as well as possibly to identify the location of the critical end point at which the crossover transition at small quark-chemical potential turns into a first-order phase transition at higher quark-chemical potential [3]. This work aims to analyze the new STAR data [2] within different scenarios (with and without deconfinement) of heavy-ion collisions in order to draw conclusions on the matter produced in these collisions. The simulations were performed within a model of three-fluid dynamics (3FD) [4] employing three different equations of state (EoS): a purely hadronic EoS [5] (hadr. EoS) and two versions of the EoS involving the deconfinement transition [6]. These two versions are an EoS with a first-order phase transition (2-phase EoS) and one with a smooth crossover transition (crossover EoS). Details of these calculations are described in Ref. [7]. First results of the 3FD simulations within alternative scenarios of heavy-ion collisions are reported in Refs. [7; 8; 9], dedicated to the analysis of baryon stopping and particle production. Results on transverse flow in the energy range of the AGS (Alternating Gradient Synchrotron) and SPS (Super Proton Synchrotron), but only with the hadronic EoS, were presented in Refs. [10; 11].

The 3FD model [4] treats the process of a nuclear collision from the very beginning, i.e. from the stage of incident cold nuclei, to the final stage of freeze-out. Contrary to conventional hydrodynamics, where local instantaneous stopping of projectile and target matter is assumed, a specific feature of the 3FD is a finite stopping power resulting in a counter-streaming regime of the leading baryon-rich matter. The basic idea of a 3-fluid approximation to heavy-ion collisions [12; 13; 14] is that at each space-time point a generally nonequilibrium distribution of baryon-rich matter can be represented as a sum of two distinct contributions, initially associated with the constituent nucleons of the projectile (p) and target (t) nuclei. In addition, newly produced particles, populating the mid-rapidity region, are associated with a fireball (f) fluid. Therefore, the 3-fluid approximation is a minimal way to simulate the finite stopping power at high incident energies. The main observation of Ref. [2] is a strong difference in \(v_{2}(p_{t})\) between particles and their corresponding antiparticles.
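As a point of reference for the discussion that follows, the flow coefficients defined at the beginning of this section can be estimated from a sample of azimuthal angles as in the sketch below. The angles are toy data generated with a known input \(v_{2}\), and the reaction-plane angle is taken to be exactly known, as it is in a simulation; this illustrates the definition, not the STAR event-plane analysis.

```python
import numpy as np

def flow_coefficient(phi, psi_rp, n=2):
    """Fourier flow coefficient v_n = <cos n(phi - Psi_RP)>.

    phi    : array of particle azimuthal angles (radians)
    psi_rp : reaction-plane angle (radians); known exactly in a model
    """
    return np.mean(np.cos(n * (phi - psi_rp)))

# Toy sample drawn from dN/dphi ~ 1 + 2 v2 cos(2 phi) by accept-reject,
# with an input v2 of 0.05.
rng = np.random.default_rng(1)
v2_in = 0.05
cand = rng.uniform(0.0, 2.0 * np.pi, size=500_000)
u = rng.uniform(0.0, 1.0 + 2.0 * v2_in, size=cand.size)
phi = cand[u < 1.0 + 2.0 * v2_in * np.cos(2.0 * cand)]
print(flow_coefficient(phi, psi_rp=0.0))   # ~ 0.05, recovering the input
```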
The STAR analysis argues that this difference cannot be explained in a purely hydrodynamic approach, since particles and antiparticles have the same mass. The evolution of the elliptic flow in the transient BES energy range was addressed in a number of theoretical works [15; 16; 17; 18; 19; 20]. In Refs. [15; 16] the difference between particles and antiparticles is explained by the presence of a vector mean-field potential which is repulsive for particles and attractive for antiparticles. In [15] it is introduced at the quark level within the Nambu-Jona-Lasinio (NJL) model, while in [16] at the hadronic level. As a consequence of these potentials, antiparticles are attracted by the matter and are trapped in the system, whereas particles feel a repulsive force and have the tendency to leave the system along the participant plane. With the potentials included, a fair qualitative agreement was achieved [16]. In Ref. [17], a hybrid (hydrodynamical plus Ultrarelativistic Quantum Molecular Dynamics [21]) calculation was performed. The effect for protons and antiprotons primarily results from the annihilation process: antiprotons moving in the out-of-plane direction encounter more protons to annihilate with than those moving in the in-plane direction. Another effect discussed in that paper is related to the event-plane calculation. It was claimed that fluctuations in this calculation can bias the event plane to be rotated towards the most abundantly produced particles. This would, for example, increase the \(v_{2}\) values for protons and reduce them for antiprotons. The 3FD model proposes a more plausible (as compared to Refs. [15; 16]) explanation of the difference between particle and antiparticle elliptic flow in terms of three interacting fluids: the fireball, consisting of particles newly produced near the spatial center of the colliding system, and two baryon-rich fluids (\(p\) and \(t\)) associated with leading particles which traversed the whole system and are finally located in longitudinally peripheral regions. The particle-antiparticle difference is explained by the increase of nuclear stopping in heavy-ion collisions with decreasing energy. When the nuclear stopping becomes strong, the mid-rapidity quantities are determined not only by particles newly produced near the spatial center (the f-fluid) but also by contributions from leading particles (the p- and t-fluids). The central and peripheral regions contribute differently to the mid-rapidity elliptic flow of different species, because they have different contents of particles and antiparticles (quarks and antiquarks). This naturally results in different \(v_{2}\) of particles and antiparticles, because the central and peripheral regions have different \(v_{2}\) patterns. This explanation is similar in some features to those proposed in Refs. [18; 19]. Contrary to Refs. [18; 19], the 3FD model calculates the contribution from peripheral regions, rather than making assumptions about them, and does not employ quark coalescence.

## II Comparison with data

The elliptic flow is proportional to the spatial anisotropy [22; 23]. Usually, for this purpose one uses the eccentricity \(\varepsilon\) defined by
\[\varepsilon=\frac{\langle y^{2}\rangle-\langle x^{2}\rangle}{\langle y^{2}\rangle+\langle x^{2}\rangle}\,. \tag{1}\]
Mean values of the spatial transverse coordinates \(\langle x^{2}\rangle\) (in the reaction plane) and \(\langle y^{2}\rangle\) (out of the reaction plane) are usually calculated with either wounded-nucleon (WN) or binary-collision (BC) weights; for details see Ref. [24]. These calculations are based on the usual Woods-Saxon profile of the nuclear density
\[\rho(r)=\frac{\rho_{0}}{1+\exp[(r-R_{A})/d]}, \tag{2}\]
where \(\rho_{0}\) is the normal nuclear density, \(R_{A}=1.12A^{1/3}\) fm is the radius of a nucleus with mass number \(A\), and \(d\) is the diffuseness of the nuclear surface. As long as the eccentricity is small, the elliptic flow should be directly proportional to the eccentricity. For numerically large eccentricities the direct proportionality could in principle break down, but as was shown in the very first hydrodynamic calculation by Ollitrault [22], the proportionality holds well even for rather large values of \(\varepsilon\). Within the 3FD model the initial nuclei are represented by sharp-edged spheres, i.e. with zero diffuseness (\(d=0\)). This is done for stability of the incident nuclei before the collision. This circumstance essentially affects the eccentricity. The results obtained with \(d=0\) and with the realistic value of \(d=0.6\) fm, calculated with BC weights, are shown in Fig. 1. As seen, the (\(d=0\)) result noticeably exceeds the eccentricity for the physically realistic value of \(d=0.6\) fm. Moreover, the (\(d=0.6\) fm) result with BC weights practically coincides with the eccentricity calculated with WN weights. The latter is considered as a realistic eccentricity and is accepted in the experimental analysis of Ref. [24]. The overestimation of \(\varepsilon\) in the 3FD model naturally results in a respective overestimation of the elliptic flow. There are two ways to compensate for this overestimation: either the impact parameter of the collision should be reduced to get a reasonable value of \(\varepsilon\) for the considered centrality bin, or the calculated value of \(v_{2}\) should be rescaled by the factor \(\varepsilon_{BC}(d=0.6\) fm\()/\varepsilon_{BC}(d=0)\), because \(v_{2}\propto\varepsilon\), as mentioned above. The latter method is applied in this paper, i.e. all the \(v_{2}\) values displayed below are rescaled by this factor.

Figure 1: Spatial eccentricity \(\varepsilon\) as a function of impact parameter in Au+Au collisions for different surface diffuseness of the Au nucleus (\(d\)) and different weights of averaging: the wounded-nucleon (WN) and the binary-collision (BC) weights [24].
Figure 2: Elliptic flow of protons and antiprotons at mid-rapidity in central (0%-10%) Au+Au collisions at \(\sqrt{s_{NN}}\) = 7.7, 11.5, 19.6, 27, 39 and 62.4 GeV as a function of transverse momentum. The 3FD calculations are performed at \(b\) = 3 fm. Experimental data are from the STAR Collaboration [2].
Figure 3: The same as in Fig. 2 but for mid-central (10%-40%) Au+Au collisions. The 3FD calculations are performed at \(b=6\) fm.
Figure 4: The same as in Fig. 2 but for peripheral (40%-80%) Au+Au collisions. The 3FD calculations are performed at \(b=8\) fm.

As the calculations are performed at fixed impact parameters, only proton and antiproton elliptic flow is considered, for which data with comparatively narrow centrality selection are available [2].
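The eccentricity of Eqs. (1)-(2) and the rescaling factor just introduced can be reproduced with a simplified Monte Carlo Glauber sketch along the following lines. The inelastic NN cross-section and the event statistics are assumed inputs chosen for illustration; the actual curves in Fig. 1 follow Ref. [24].

```python
import numpy as np

def sample_nucleus(A, d, rng):
    """Nucleon positions drawn from the Woods-Saxon profile of Eq. (2);
    d = 0 reproduces the sharp-edged spheres of the 3FD initial nuclei."""
    R = 1.12 * A ** (1.0 / 3.0)          # fm
    box = R + 6.0 * d + 1.0
    pts = []
    while len(pts) < A:
        p = rng.uniform(-box, box, size=3)
        r = np.linalg.norm(p)
        w = float(r <= R) if d == 0.0 else 1.0 / (1.0 + np.exp((r - R) / d))
        if rng.uniform() < w:
            pts.append(p)
    return np.asarray(pts)

def eccentricity_bc(b, d, A=197, sigma_nn=3.6, events=100, seed=0):
    """Eccentricity of Eq. (1) with binary-collision (BC) weights.

    x is taken along the impact parameter (in the reaction plane) and y
    perpendicular to it, so that eps > 0 for the almond-shaped overlap.
    sigma_nn is an assumed inelastic NN cross-section in fm^2 (~36 mb)."""
    rng = np.random.default_rng(seed)
    d2max = sigma_nn / np.pi
    eps = []
    for _ in range(events):
        proj = sample_nucleus(A, d, rng); proj[:, 0] += 0.5 * b
        targ = sample_nucleus(A, d, rng); targ[:, 0] -= 0.5 * b
        dx = proj[:, None, 0] - targ[None, :, 0]
        dy = proj[:, None, 1] - targ[None, :, 1]
        ip, it = np.nonzero(dx**2 + dy**2 < d2max)
        if len(ip) == 0:
            continue
        col = 0.5 * (proj[ip, :2] + targ[it, :2])  # binary-collision points
        col -= col.mean(axis=0)
        x2, y2 = np.mean(col[:, 0] ** 2), np.mean(col[:, 1] ** 2)
        eps.append((y2 - x2) / (y2 + x2))
    return float(np.mean(eps))

b = 6.0  # fm, the value used for the mid-central bin
factor = eccentricity_bc(b, d=0.6) / eccentricity_bc(b, d=0.0)
print(factor)  # multiplies the computed v2, since v2 is proportional to eps
```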
Results of calculations of the proton and antiproton elliptic flow at mid-rapidity and their comparison with the STAR data [2] for three centrality bins are presented in Figs. 2, 3 and 4. The quality of the data reproduction is quite reasonable for calculations at fixed impact parameters that are representative of the centrality bins considered. It should be kept in mind that, in spite of the best reproduction being achieved at the top considered energy of 62.4 GeV, the results at this energy are still not quite accurate, since an accurate computation would require unreasonably large memory and CPU time. Precisely for this reason the calculations of peripheral (\(b=8\) fm) collisions at 62.4 GeV have not been done. All scenarios, including the purely hadronic one, quantitatively reproduce the proton data to approximately the same extent. This simply means that all scenarios represent strongly collective (fluid-like) behavior of the system, which results in a large elliptic flow. The value of \(v_{2}\) does not directly indicate either the hadronic or the partonic content of the matter. To distinguish between hadronic and partonic content we need more delicate properties of the elliptic flow. The calculated results as a rule overestimate the data. This makes room for viscosity and/or afterburner corrections to the calculated results. In the 3FD model neither viscosity nor a kinetic afterburner at the final stage of the expansion is incorporated. The elliptic flows of protons and antiprotons at mid-rapidity coincide only if they are formed by the single fireball fluid, i.e. if the baryon-rich fluids (the projectile and target ones) are well separated from the fireball fluid in rapidity. As seen, this is almost the case at 62.4 GeV. With decreasing incident energy this separation in rapidity space becomes smaller, and as a result the proton and antiproton \(v_{2}\) differ from each other more and more. This is a consequence of the difference in proton and antiproton content and in the \(v_{2}\) patterns of the baryon-rich and fireball fluids. Of course, exceptions from this rule are possible if the \(v_{2}\) patterns of different fluids turn out to be very similar for some reason. For instance, this is the case at the energy of 19.6 GeV within the deconfinement scenarios. Typically, the \(v_{2}\) of protons and antiprotons become closer to each other as the incident energy rises, similarly to what is observed in the experiment. This takes place for all considered scenarios because the baryon-rich and fireball fluids get more and more separated in rapidity. As for the more delicate properties of the elliptic flow mentioned above: contrary to the hadronic scenario, all deconfinement scenarios result in a proton elliptic flow exceeding, or being approximately equal to (within the accuracy of the calculation), the antiproton one (with the exception of the lowest considered incident energy of 7.7 GeV). The latter is in qualitative agreement with the data. The lowest energy of 7.7 GeV is already quite close to the onset of deconfinement within the 2-phase-EoS and crossover-EoS scenarios [7; 8]. Therefore, the model predicts a kind of irregular behavior in this energy region that does not agree with the data at present. Within the deconfinement scenarios the antiproton \(v_{2}\) data are, as a rule, better reproduced in the low-momentum region, which is the prime region of applicability of the fluid description. This indicates the preference for the deconfinement scenarios.
## III Conclusions

The elliptic flow of produced particles is one of the most sensitive observables that brings information about the degree of collectivity during the expansion stage of heavy-ion collisions. When the collectivity is strong, as in the case of ideal hydrodynamics, the elliptic flow takes the highest value (the so-called hydrodynamic limit). If the collectivity is weak, as in a dilute system of weakly interacting particles, it is close to zero. The value of \(v_{2}\) does not directly indicate either the hadronic or the partonic content of the matter. This is confirmed by the approximately equal extent of reproduction of the proton and antiproton elliptic flow achieved within all scenarios, including the purely hadronic one. To distinguish between hadronic and partonic content we need more delicate properties of the elliptic flow. An indication in favor of the deconfinement scenarios is the better reproduction of the antiproton elliptic flow in the low-momentum region within these scenarios, because this region is the main domain of applicability of the fluid description. In the present version of the 3FD model neither viscosity nor a kinetic afterburner at the final stage of the expansion is incorporated. Therefore, it is not surprising that the calculated elliptic flow, as a rule, overestimates the data. This makes room for viscosity and/or afterburner corrections to the calculated results. The calculated elliptic flows of protons and antiprotons become closer to each other as the incident energy rises, similarly to what is observed in the experiment. In fact, the elliptic flows of protons and antiprotons at mid-rapidity coincide only if they are formed by a single fluid expanding in the center of the colliding system, in other words, if this "center" fluid is well separated in rapidity from the projectile- and target-like leading particles, which dominantly occupy peripheral rapidity regions. If the nuclear stopping rises, as it does with decreasing incident energy, the rapidity gap between the "center" fireball and the leading-particle matter shrinks. Then the leading-particle matter starts to contribute to the mid-rapidity quantities. In turn, this results in a difference between the elliptic flows of protons and antiprotons, which is a consequence of the difference in proton and antiproton content and in the \(v_{2}\) patterns of the "center" fireball and the leading-particle matter. In the 3FD model this mechanism is realized in terms of three interacting fluids: the fireball one, consisting of particles newly produced near the spatial center of the colliding system, and two baryon-rich fluids (\(p\) and \(t\)) associated with leading particles which traversed the whole system and are finally located in longitudinally marginal regions.

**Acknowledgements**

I am grateful to A.S. Khvorostukhin, V.V. Skokov, and V.D. Toneev for providing me with the tabulated 2-phase and crossover EoS's. The calculations were performed at the computer cluster of GSI (Darmstadt). This work was supported by The Foundation for Internet Development (Moscow) and also partially supported by the Russian Ministry of Science and Education grant NS-215.2012.2.

## References

* (1) S. Voloshin and Y. Zhang, Z. Phys. C **70**, 665 (1996) [hep-ph/9407282].
* (2) L. Adamczyk _et al._ [STAR Collaboration], arXiv:1301.2348.
* (3) M. M. Aggarwal _et al._ [STAR Collaboration], arXiv:1007.2613 [nucl-ex].
* (4) Yu. B. Ivanov, V. N. Russkikh, and V. D. Toneev, Phys. Rev. C **73**, 044904 (2006) [nucl-th/0503088].
* (5) V. M. Galitsky and I. N. Mishustin, Sov. J. Nucl. Phys. **29**, 181 (1979).
* (6) A. S. Khvorostukhin, V. V. Skokov, K. Redlich, and V. D. Toneev, Eur. Phys. J. **C48**, 531 (2006) [nucl-th/0605069].
* (7) Yu. B. Ivanov, arXiv:1302.5766 [nucl-th], to be published in Phys. Rev. C.
* (8) Yu. B. Ivanov, Phys. Lett. **B721**, 123 (2013) [arXiv:1211.2579 [hep-ph]].
* (9) Yu. B. Ivanov, arXiv:1304.1638 [nucl-th].
* (10) V. N. Russkikh and Yu. B. Ivanov, Phys. Rev. C **74** (2006) 034904 [nucl-th/0606007].
* (11) Yu. B. Ivanov, I. N. Mishustin, V. N. Russkikh, and L. M. Satarov, Phys. Rev. C **80**, 064904 (2009) [arXiv:0907.4140 [nucl-th]].
* (12) Yu. B. Ivanov, Yad. Fiz. **46**, 100 (1987) [Sov. J. Nucl. Phys. **46**, 63 (1987)].
* (13) Yu. B. Ivanov, Nucl. Phys. **A474**, 669 (1987).
* (14) I. N. Mishustin, V. N. Russkikh, and L. M. Satarov, Yad. Fiz. **54**, 429 (1991) [Sov. J. Nucl. Phys. **54**, 260 (1991)].
* (15) T. Song, S. Plumari, V. Greco, C. M. Ko and F. Li, arXiv:1211.5511 [nucl-th].
* (16) J. Xu, L.-W. Chen, C. M. Ko and Z.-W. Lin, Phys. Rev. C **85**, 041901 (2012) [arXiv:1201.3391 [nucl-th]].
* (17) J. Steinheimer, V. Koch and M. Bleicher, Phys. Rev. C **86**, 044903 (2012) [arXiv:1207.2791 [nucl-th]].
* (18) J. C. Dunlop, M. A. Lisa and P. Sorensen, Phys. Rev. C **84**, 044914 (2011) [arXiv:1107.3078 [hep-ph]].
* (19) V. Greco, M. Mitrovski and G. Torrieri, Phys. Rev. C **86**, 044905 (2012) [arXiv:1201.4800 [nucl-th]].
* (20) V. P. Konchakovski, E. L. Bratkovskaya, W. Cassing, V. D. Toneev, S. A. Voloshin and V. Voronyuk, Phys. Rev. C **85**, 044922 (2012) [arXiv:1201.3320 [nucl-th]].
* (21) S. A. Bass, M. Belkacem, M. Bleicher, M. Brandstetter, L. Bravina, C. Ernst, L. Gerland and M. Hofmann _et al._, Prog. Part. Nucl. Phys. **41**, 255 (1998) [nucl-th/9803035].
* (22) J. Y. Ollitrault, Phys. Rev. D **46**, 229 (1992).
* (23) S. A. Voloshin, A. M. Poskanzer and R. Snellings, arXiv:0809.2949 [nucl-ex].
* (24) P. Jacobs and G. Cooper, arXiv:nucl-ex/0008015.
Analysis of the elliptic flow of protons and antiprotons in Au+Au collisions is performed in a wide range of incident energies \(\sqrt{s_{NN}}\) = 7.7-62.4 GeV. Simulations have been done within the three-fluid model employing a purely hadronic equation of state (EoS) and two versions of the EoS involving a deconfinement transition: an EoS with a first-order phase transition and one with a smooth crossover transition. It is found that the proton data are reproduced approximately to the same extent within all of the scenarios, including the hadronic one, while the deconfinement scenarios look clearly preferable for the antiproton elliptic flow. The fact that the difference between the elliptic flows of protons and antiprotons decreases with rising incident energy is a consequence of decreasing baryon stopping rather than of an onset of deconfinement.

Keywords: relativistic heavy-ion collisions, elliptic flow, hydrodynamics, deconfinement
PACS: 25.75.-q, 25.75.Nq, 24.10.Nz
# Large Scale GPU Accelerated PPMLR-MHD Simulations for Space Weather Forecast

Xiangyu Guo (Tsinghua National Laboratory for Information Science and Technology), Binbin Tang (State Key Laboratory of Space Weather, National Space Science Center, CAS), Jian Tao (Center for Computation & Technology, Louisiana State University, Baton Rouge, LA, USA), Zhaohui Huang (State Key Laboratory of Space Weather, National Space Science Center, CAS), Zhihui Du* (Tsinghua National Laboratory for Information Science and Technology)

## I Introduction

The magnetosphere, the outermost part of geospace, is formed when the solar wind interacts with the Earth's internal magnetic field [1]. Understanding the formation and development of the magnetosphere is crucial because it is the key element of the space weather cause-and-effect chain that runs from the Sun to the Earth. Similar to weather on Earth, space weather refers to the time-varying conditions in the space from the solar atmosphere to geospace. It is driven by the solar wind, which carries solar energy through interplanetary space from the near surface of the Sun and the Sun's atmosphere. Due to the effect of the Earth's magnetic field, the magnetosphere takes a shape similar to a bullet: the dayside is roughly a hemisphere with a radius of 15 \(R_{E}\) (Earth radii), while the nightside forms a cylinder with a radius of 20-25 \(R_{E}\). The tail region stretches well beyond 200 \(R_{E}\), and its exact length is not well known. Geospace, including the magnetosphere and ionosphere, is known to be nonlinear, multi-component, and time-dependent, which together make it very difficult to investigate using traditional analytical approaches alone. Therefore, numerical models have been developed to explore the properties of the solar wind-magnetosphere-ionosphere coupling, and it has been demonstrated that this kind of study is a natural match for MHD numerical simulations [1]. Over the years, different global MHD models have been developed to study the cause-and-effect of space weather: 1) the Lax-Wendroff model [2], 2) the FV-TVD model [3], 3) the OpenGGCM model [4], 4) the GUMICS-3 model [5], 5) the LFM model [6], 6) the BATS-R-US model [7], and 7) the PPMLR-MHD model [1]. The PPMLR-MHD model is a new global MHD model, which achieves high-order spatial accuracy and low numerical dissipation compared to other existing models. By applying the piecewise parabolic method, the PPMLR-MHD algorithm attains third-order accuracy in space, which enables the numerical model to produce physical solutions even on relatively coarse grids. However, the parabolic interpolation is more complex than other interpolations, e.g. linear interpolation, and therefore costs more computation time; it also requires more data to be transferred in communication. These problems together pose a big challenge to developing a highly efficient, fast implementation. The development of general-purpose GPU technology in recent years has brought big progress in the field of high performance computing. Due to the massively parallel nature of GPUs, researchers are now able to solve large scale problems at a much faster speed while using less power. As of now, NVIDIA has released its 4th-generation CUDA devices, capable of running thousands of parallel threads simultaneously.
However, GPU programming is very different from CPU programming: the GPU has a very specific architecture, and one needs to carefully design the software around it to maximize performance. Meanwhile, since the GPU has its own dedicated memory, care must also be taken to minimize the overhead of data transfer. In this work, we present a GPU implementation of the PPMLR-MHD model. Due to the computation- and data-transfer-intensive nature of this model, special attention has been paid to alleviating these overheads during simulation. We discuss the parallel problem partition, followed by the design scheme adopted to take advantage of the latest NVIDIA GPUs. We also demonstrate the techniques adopted to alleviate the data transfer overhead using GPU Direct techniques. We scale our implementation to hundreds of processes and solve the MHD simulation problem efficiently enough to meet the real-time requirements of space weather forecasting.

## II The PPMLR-MHD Method

The global numerical model for the Earth's space environment, especially for the magnetosphere, is based on the magnetohydrodynamic (MHD) description of plasma. The conservational form of the 3-dimensional MHD equations is listed as follows:

\[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{v})=0\]
\[\frac{\partial(\rho\mathbf{v})}{\partial t}+\nabla\cdot(\rho\mathbf{v}\mathbf{v}+p^{*}\mathbf{I}-\frac{1}{\mu_{0}}\mathbf{B}^{\prime}\mathbf{B}^{\prime})=(\nabla\times\mathbf{B}^{\prime})\times\mathbf{B_{d}}\]
\[\frac{\partial\mathbf{B}^{\prime}}{\partial t}+\nabla\cdot(\mathbf{v}\mathbf{B}^{\prime}-\mathbf{B}^{\prime}\mathbf{v})=\nabla\times(\mathbf{v}\times\mathbf{B_{d}})-\mathbf{v}\,\nabla\cdot\mathbf{B}^{\prime}\]
\[\frac{\partial E}{\partial t}+\nabla\cdot[(E+p^{*})\mathbf{v}-\frac{1}{\mu_{0}}(\mathbf{v}\cdot\mathbf{B}^{\prime})\mathbf{B}^{\prime}]=\mathbf{v}\cdot[(\nabla\times\mathbf{B}^{\prime})\times\mathbf{B_{d}}]+\mathbf{B}^{\prime}\cdot[\nabla\times(\mathbf{v}\times\mathbf{B_{d}})]\]

where

\[\mathbf{B}^{\prime}=\mathbf{B}-\mathbf{B_{d}},\qquad p^{*}=p+\frac{B^{\prime 2}}{2\mu_{0}},\qquad E=\frac{p}{\gamma-1}+\frac{1}{2}\rho v^{2}+\frac{B^{\prime 2}}{2\mu_{0}},\]

\(\rho\) is the density, \(p\) is the plasma pressure, \(\mathbf{v}\) is the flow velocity, \(\mathbf{B}\), \(\mathbf{B_{d}}\), \(\mathbf{B}^{\prime}\) are the magnetic field, the Earth's dipole field, and the difference between the two fields, respectively, \(\mu_{0}=4\pi\times 10^{-7}\,\mathrm{H}\cdot\mathrm{m}^{-1}\) is the permeability of vacuum, and \(\gamma=5/3\) is the adiabatic index. In order to improve the accuracy of the calculation of the magnetic field, \(\mathbf{B}^{\prime}\) is evaluated instead of \(\mathbf{B}\) during the simulations, since the Earth's dipole field can be very large close to Earth; note, however, that \(\mathbf{B}^{\prime}\) does not have to be small compared with \(\mathbf{B_{d}}\). A so-called piecewise parabolic method with a Lagrangian remap (PPMLR)-MHD algorithm, developed by Hu et al. [1], is applied to solve these equations. It is an extension to MHD of the Lagrangian version of the Piecewise Parabolic Method (PPM) developed by Colella & Woodward [8].
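As a small illustration of the definitions above, the auxiliary quantities \(p^{*}\) and \(E\) can be evaluated from the primitive variables as in the sketch below. This is a clarifying sketch, not part of the production code; the function name and array layout are assumptions.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi   # vacuum permeability, H/m
GAMMA = 5.0 / 3.0      # adiabatic index

def auxiliary_quantities(rho, v, p, b_prime):
    """Total pressure p* and total energy density E from the primitive
    variables, following the definitions below the MHD equations.

    rho     : mass density
    v       : flow velocity, shape (..., 3)
    p       : plasma pressure
    b_prime : deviation field B' = B - B_d, shape (..., 3)
    """
    b2 = np.sum(b_prime**2, axis=-1)
    v2 = np.sum(v**2, axis=-1)
    p_star = p + b2 / (2.0 * MU0)
    energy = p / (GAMMA - 1.0) + 0.5 * rho * v2 + b2 / (2.0 * MU0)
    return p_star, energy
```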
In this PPMLR-MHD algorithm, all variables (\(\rho\), \(\mathbf{v}\), \(\mathbf{B}^{\prime}\) and \(p\)) are defined at the zone centers as volume averages, and their spatial distributions are obtained by a parabolic interpolation which is piecewise continuous in each zone. A characteristic method is used to calculate the local values of the variables at the zone edges; the variables are first updated in Lagrangian coordinates to the next time step, and the results are then remapped onto the fixed Eulerian grid by solving the corresponding advection equations. For the closure of the field-aligned current, an ionospheric shell, set at r = 1.017 \(R_{E}\) (110 km altitude), is integrated into the simulation model under electrostatic assumptions; it is connected with the magnetospheric inner boundary, set at r = 3 \(R_{E}\), by the dipole field lines between them. An electrostatic potential equation is solved in the ionosphere, and the solved potential is mapped to the magnetospheric inner boundary as a boundary condition for magnetospheric flows. More numerical details of the simulation model can be found in [1]; a number of studies of the solar wind-magnetosphere-ionosphere interactions have been successfully carried out based on this model and are reviewed in [9].

## III The Hybrid Implementation

### _Task Partitioning_

The numerical domain of the PPMLR-MHD model is a stretched Cartesian coordinate system which takes the Earth as the origin and lets the x, y, and z axes point in the sunward, duskward, and northward directions, respectively. The domain extends from 30 \(R_{E}\) to -100 \(R_{E}\) along the Sun-Earth line and from -100 \(R_{E}\) to 100 \(R_{E}\) in the y and z directions. It is discretized into 156 × 150 × 150 grid points: the grid is rectangular and nonuniform, with the highest spatial resolution of 0.4 \(R_{E}\) near Earth. To accommodate the large simulation volume and long simulation time, we parallelize the model to be scalable from several to hundreds of processes, and partition the domain in all three directions. For the purpose of load balance, each subdomain usually contains the same number of grid points. Since the Earth must lie within a single subdomain, the number of processes in the y and z directions must be odd (1, 3, 5, etc.). In addition, one more process is required to solve the ionospheric potential equation. It is worth pointing out that the 3-dimensional MHD equations at every grid point in each subdomain are split into three 1-dimensional equations, each of which is independent of the other two directions when being solved. As 156 is divisible by 3, 4 and 6, we employ the process configurations 3 × 1 × 1, 3 × 3 × 3, 4 × 3 × 3, 6 × 3 × 3, 4 × 5 × 5 and 6 × 5 × 5; with the one additional process, the final numbers of processes are 4, 28, 37, 55, 101 and 151, respectively.

### _Optimisation Analysis and Design for GPU_

The GPU differs from the CPU in hardware architecture in that it concentrates on large-scale parallel computation. To maximize the computational power of the GPU, attention has to be paid to the design phase of the GPU code implementation.
This section first describes general optimisation methods in GPU programming and then discusses the high-performance design considerations adopted in our PPMLR-MHD implementation. In this work we employ NVIDIA's CUDA as our GPU programming tool; the optimisation terminology used is therefore that of CUDA.

#### III-B1 Overview of Optimisation Methods

In essence, the GPU is a massively parallel device capable of running tens of thousands of tasks simultaneously. Similar to the CPU, the basic GPU execution unit is termed a GPU thread. To achieve high efficiency on the GPU, the number of launched threads must be well beyond the number of simultaneously executable threads (tens of thousands), so that stalled and ready threads can be switched back and forth dynamically to hide load/store instruction latency. The general optimisation methods include (but are not limited to): a) increasing the occupancy ([10, 11]), defined as the ratio between the active executing threads and the maximal executable threads on a GPU streaming multiprocessor (SMX); b) employing a coalesced memory access pattern (in short, adjacent threads access adjacent memory addresses) for global memory reads/writes; c) shared memory tuning for reusable data access; d) atomic optimisation [12] for superposition involving small-scale memory conflicts (no more than 16 threads per memory address); e) warp shuffle optimisation for reductions; f) streaming for increasing GPU resource utilization across processes.

#### III-B2 High Efficiency Design

Our goal is to achieve high efficiency and high performance while maintaining high accuracy. In an effort to achieve this goal, we concentrate on the following four aspects of the design:

a) Adopt a non-uniform spacing strategy in choosing the numerical grid to reduce unnecessary computation. In our PPMLR-MHD model, a uniform mesh is laid out in the near-Earth domain within 10 \(R_{E}\), and the grid spacing outside increases according to a geometric series with common ratio 1.05 along each axis (a sketch of this construction is given after this list).

b) Reorganize the data so that access is GPU-memory efficient. The memory-bound nature of the PPMLR-MHD method makes efficient data access very important, as the whole simulation process consists of tens of thousands of time steps. In each step, the boundary input is obtained from neighbouring MPI processes and employed only once to solve the equations described in Section II. As a result, the GPU's shared memory is of no help for optimisation, which makes coalesced global memory access very important. In addition, each solve involves only basic arithmetic; no reduction or memory conflict is involved, so warp-shuffle and atomic optimisations are not helpful in our case.

c) Employ a streaming strategy to make the most of GPU resources. As kernels in different GPU streams can use the GPU in parallel, we make every grid execute in its own stream.

d) Make use of CUDA-aware MPI to efficiently transfer data between MPI ranks. In traditional MPI programming, data computed on the GPU has to be transferred back into the CPU's host memory and then sent to the corresponding MPI processes via MPI messages. This is a big overhead, as the redundant data movement simply wastes time. In our implementation, each subdomain needs to obtain boundary data from neighbouring processes in every step, and thousands of steps are performed, which makes the data transfer overhead even more severe.
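As an illustration of design point (a), a stretched axis of this kind can be generated as in the sketch below. The end points used here are illustrative assumptions; the authors' exact per-axis grid counts are not reproduced.

```python
import numpy as np

def stretched_axis(x0, x_uniform_end, x_max, dx0=0.4, ratio=1.05):
    """One-sided stretched coordinate axis (in R_E): uniform spacing dx0
    out to x_uniform_end, then spacings growing geometrically by `ratio`
    until x_max is passed, mirroring the grid design described above."""
    xs = list(np.arange(x0, x_uniform_end + 1e-9, dx0))
    dx = dx0
    while xs[-1] < x_max:
        dx *= ratio
        xs.append(xs[-1] + dx)
    return np.array(xs)

# e.g. the positive y axis: uniform within 10 R_E, stretched out to 100 R_E
y = stretched_axis(0.0, 10.0, 100.0)
print(len(y), y[:3], y[-1])
```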
To address the data transfer problem raised in point (d), we take advantage of CUDA-aware MPI. Besides accepting pointers to host memory, CUDA-aware MPI also handles pointers to GPU device memory, which significantly reduces unnecessary data movement by moving data directly between GPUs using the network adapter's RDMA capability. Figure 1 illustrates the difference in data movement between traditional MPI and CUDA-aware MPI: the long solid arrow in the upper part of the figure shows the data flow of CUDA-aware MPI, while the remaining arrows show the data flow of traditional MPI.

Figure 1: Comparison between traditional MPI and CUDA-aware MPI. CUDA-aware MPI directly transfers data via RDMA, avoiding the unnecessary data movement of traditional MPI.

## IV Performance and Scalability Study

This section discusses the performance and scalability of the GPU PPMLR-MHD implementation. To begin with, we study the scalability of the simulation, which gives a general performance picture of our MHD method. We then discuss more specific topics, including single-process performance and the transfer time of each process in different execution configurations.

### _Testing environment_

We test our implementation on the Titan supercomputer at the Oak Ridge National Laboratory (ORNL); the hardware and software configuration of each computing node is shown in Table I. Each node is equipped with two 10-core CPUs as well as one NVIDIA GPU. In the case of the PPMLR-MHD simulation, the input scale depends on the resolution of the numerical grid, which is predetermined in the design phase of the method; in other words, the input scale is fixed. Therefore, we focus on the strong scalability of the PPMLR-MHD simulation. We test
Since the overall problem for a given accuracy can be considered as fixed for a given accuracy, we discuss the strong scalability of the problem and omit the study of weak scalability. The strong scalability of the PPMLR-MHD method is determined by the size of the numerical grids. In our experiment configuration, the numerical grids is chosen to be \\(156\\times 150\\times 150\\). To facilitate implementation, we add extra constraint to our task partition strategy: the number of MPI ranks in the y and z axis must be the same and they are required to be odd numbers. PPMLR-MHD employs an iterative method to solve the MHD equations, in each iteration of the simulation, each grid (computed by a single MPI rank) needs to exchange its boundary data from all of its neighbours, the total amount of data required to be exchanged (TDE) is proportional to the partition choice applied. The amount of data exchanged in z direction can be represented as \\(c\\times n_{x}\\times n_{y}\\times(n_{z}-1)\\) where c represents a certain constants and \\(n_{x},n_{y},n_{z}\\) is the number of MPI ranks in the x, y, z dimensions, respectively, x and y direction can be calculated in the same way, there fore, TDE can be calculated as: \\[TDE \\propto n_{x}\\times n_{y}\\times(n_{z}-1) \\tag{1}\\] \\[+n_{x}\\times(n_{y}-1)\\times n_{z}\\] \\[+(n_{x}-1)\\times n_{y}\\times n_{z}\\] From equation 1, the total number of MPI ranks has positive relation to the total amount of exchanged data, involving more GPUs also brings more data transfer overhead. Besides, GPU requires problem to be big enough in order to take advantage of its massive parallel power. A compromise has to be made between the computation resources and the data exchange overhead. To demonstrate, we simulate our implementation for 100 iterations since the total amount of execution time required for several execution configuration is too long. Figure 2 shows the simulation results. As illustrated in the figure, in every chosen execution configuration, our GPU PPMLR-MHD implementation outperforms the CPU counterpart (2.5 to 3.5 times faster). The best performance comes from the execution configuration of 101 MPI ranks, this result meets our expectation, because more processes bring not only more computational resources but also more data exchange overhead, the execution configuration of 101 processes is the balance point between these two performance related factors. Figure 2: Execution time of simulations for 100 iterations in different scale configuration. ### _GPU performance study_ The computing capability of GPU plays a key role for us to achieve real time simulation. This section studies the performance contribution of GPU. To begin with, we compare our GPU implementation with a CPU counterpart which is highly optimised for spatial-temporal locality memory access. To accurately measure the computation performance, we randomly choose several single steps and record the computation time from each MPI rank, we then average the sum of all records and use the final result to perform comparison. Since the computing workload for each step is predetermined and thus fixed, each step is supposed to take approximately the same amount of time. Our results give a direct comparions of the code performance on GPU and CPU. Although parallelization is well considered in the design process of the PPMLR-MHD implementation, the memory-intensive nature of the algorithm makes it more memory bandwidth limited when considering the expected performance. 
In theory, the maximum achievable speedup (MAS) of the PPMLR-MHD GPU implementation over the CPU counterpart can be expressed as

\[MAS=\frac{GPU\ Memory\ Bandwidth}{Host\ Memory\ Bandwidth}\]

From the hardware configuration listed in Table I, the maximum expected speedup in our testing environment should be MAS = 250 GB/s / 52 GB/s ≈ 4.88. Figure 3 illustrates the execution time of a single step in different scale configurations. As can be seen in the figure, the employment of the GPU significantly improves the performance of the PPMLR simulation. The GPU version achieves up to a 3.57x speedup (compared to the CPU counterpart) in the configuration with 4 MPI ranks. This result is very close to the theoretical MAS peak of 4.88 (73.2% of the GPU bandwidth has been exploited). As the number of MPI ranks increases, the effect of GPU acceleration becomes less significant; this is because increasing the number of parallel processes decreases the workload of each MPI rank, and when the workload falls below a certain amount the GPU is underutilized and thus unable to achieve peak performance.

### _Data transfer time study_

Heterogeneous computing devices such as GPUs have their own dedicated memory. Input data are required to reside in the GPU's device memory, as host memory is not accessible from the GPU streaming multiprocessors. For many traditional MPI-based scientific applications such as PPMLR-MHD, this causes a major problem when using multiple GPUs, since the computed results must be transferred back and forth between GPU memory and the CPU's host memory in order to update the boundary data on the numerical grid. The arrows in the lower part of Figure 1 illustrate the data flow from one GPU to another using traditional MPI. As can be seen, apart from the necessary RDMA operation, six more memory copy operations are involved in a single data transfer between two MPI ranks. Obviously, these six redundant memory copies per rank pair cause a non-negligible bandwidth overhead when aiming for high performance. To reduce the data transfer overhead, we employ CUDA-aware MPI. With this technology, the GPU's device memory can be passed directly to the MPI API; combined with Remote Direct Memory Access (RDMA), buffers can be sent directly from GPU memory to the network adapter without staging through host memory, as shown in the upper part of Figure 1. Therefore, compared to a CPU counterpart using a traditional MPI implementation, our PPMLR-MHD GPU version is expected to take approximately the same amount of time transferring data. We predict that

\[CPU\ data\ transfer\ time\approx GPU\ data\ transfer\ time\]

To check this prediction, we randomly choose a single simulation step from both the PPMLR-MHD GPU implementation and the CPU counterpart. We record the total time taken for sending and receiving data over all MPI ranks involved, average the records, and use the result as the comparison input. Since the amount of data transferred is exactly the same in every single step, this experimental design accurately captures the difference in data transfer overhead between the CPU and GPU implementations. Figure 4 shows the comparison of the data transfer times of the PPMLR GPU implementation and the CPU counterpart in different execution configurations.
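Before looking at the measured numbers, the halo-exchange pattern being timed here can be illustrated with a minimal sketch. The paper does not show the original code; the sketch below uses mpi4py with CuPy, which can pass GPU buffers directly to MPI on a CUDA-aware build, and its ring topology and array shapes are illustrative assumptions.

```python
# halo_exchange.py -- minimal sketch of the CUDA-aware MPI pattern using
# mpi4py + CuPy (not the authors' code; requires an MPI build with CUDA
# support).
from mpi4py import MPI
import cupy as cp

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

field = cp.full((64, 64, 64), float(rank))   # one subdomain block on GPU
left, right = (rank - 1) % size, (rank + 1) % size

send_buf = cp.ascontiguousarray(field[0])    # boundary slab to send
recv_buf = cp.empty_like(send_buf)           # ghost slab to receive

cp.cuda.runtime.deviceSynchronize()          # make sure the slab is ready
# Device buffers are handed to MPI directly: with CUDA-aware MPI and an
# RDMA-capable adapter they travel GPU-to-GPU (upper path of Figure 1);
# otherwise explicit cp.asnumpy()/cp.asarray() staging copies would be
# needed around this call (lower path of Figure 1).
comm.Sendrecv(sendbuf=send_buf, dest=left, recvbuf=recv_buf, source=right)
```

Launched with, e.g., `mpiexec -n 4 python halo_exchange.py` on a CUDA-aware MPI installation, each rank receives its neighbour's boundary slab without any host staging.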
As presented in Figure 4, the PPMLR GPU implementation takes slightly less time transferring data than the CPU counterpart. We attribute this result to the employment of CUDA-aware MPI as well as to the GPU's high-speed memory bus. CUDA-aware MPI helps our implementation take advantage of RDMA when transferring data among MPI ranks; as the CPU version also uses this technology, we expect the two implementations to spend approximately the same amount of time transferring data. The reason the PPMLR GPU version takes less time is that the GPU has a higher-speed bus, so data from the GPU's device memory reaches the network adapter more quickly than data from host memory.

Figure 3: Average execution time of each step in different scale configurations. The number on the x axis stands for the number of processes used in launching MPI executables.

## V Conclusion

In this work, we present a GPU accelerated implementation of the PPMLR-MHD model for space weather forecast. We significantly improve the code performance by taking advantage of GPU technology as well as CUDA-aware MPI. By making careful choices in the design phase of the implementation, we are able to make the most of the GPU's powerful compute capability and achieve near-peak performance. The final implementation scales up to 151 processes, and the real-time performance requirement is met.

## Acknowledgment

This research is supported in part by National Natural Science Foundation of China (Nos. 61440057, 61272087, 61363019 and 61073008), Beijing Natural Science Foundation (Nos. 4082016 and 4122039), the Sci-Tech Interdisciplinary Innovation and Cooperation Team Program of the Chinese Academy of Sciences, and the Specialized Research Fund for State Key Laboratories. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

## References

* [1] Y. Hu, X. Guo, and C. Wang, "On the ionospheric and reconnection potentials of the earth: Results from global mhd simulations," _Journal of Geophysical Research: Space Physics_, vol. 112, no. A7, 2007.
* [2] T. Ogino, R. J. Walker, and M. Ashour-Abdalla, "A global magnetohydrodynamic simulation of the magnetosheath and magnetosphere when the interplanetary magnetic field is northward," _Plasma Science, IEEE Transactions on_, vol. 20, no. 6, pp. 817-828, 1992.
* [3] T. Tanaka, "Finite volume tvd scheme on an unstructured grid system for three-dimensional mhd simulation of inhomogeneous systems including strong background potential fields," _Journal of Computational Physics_, vol. 111, no. 2, pp. 381-389, 1994.
* [4] J. Raeder, R. Walker, and M. Ashour-Abdalla, "The structure of the distant geomagnetic tail during long periods of northward imf," _Geophysical Research Letters_, vol. 22, no. 4, pp. 349-352, 1995.
* [5] P. Janhunen, "Gumics-3 a global ionosphere-magnetosphere coupling simulation with high ionospheric resolution," in _Environment Modeling for Space-Based Applications_, vol. 392, 1996, p. 233.
* [6] J. Lyon, J. Fedder, and C. Mobarry, "The Lyon-Fedder-Mobarry (LFM) global MHD magnetospheric simulation code," _Journal of Atmospheric and Solar-Terrestrial Physics_, vol. 66, no. 15, pp. 1333-1350, 2004.
* [7] K. G. Powell, P. L. Roe, T. J. Linde, T. I. Gombosi, and D. L. De Zeeuw, "A solution-adaptive upwind scheme for ideal magnetohydrodynamics," _Journal of Computational Physics_, vol. 154, no. 2, pp. 284-309, 1999.
PPMLR-MHD is a new magnetohydrodynamics (MHD) model used to simulate the interactions of the solar wind with the magnetosphere, interactions that have been proved to be the key element of the space weather cause-and-effect chain from the Sun to the Earth. Compared to existing MHD methods, PPMLR-MHD achieves high-order spatial accuracy and low numerical dissipation. However, the accuracy comes at a cost: on one hand, the method requires more intensive computation; on the other hand, more boundary data must be transferred during the simulation. In this work, we present a parallel hybrid solution of the PPMLR-MHD model that exploits the computing capabilities of both CPUs and GPUs. We demonstrate that our optimized implementation alleviates the data transfer overhead by using GPU Direct technology, scales up to 151 processes, and achieves significant performance gains by distributing the workload among the CPUs and GPUs of Titan at Oak Ridge National Laboratory. The performance results show that our implementation is fast enough to carry out highly accurate MHD simulations in real time. Keywords: CUDA; Space Weather Forecast; PPMLR-MHD; CUDA-aware MPI
# On the formation of small marginal lakes on the Juneau Icefield, south-eastern Alaska, U.S.A.

Matti Seppälä (Maantieteen laitos, Turun yliopisto, SF-20500 Turku 50, Finland)

## Introduction

Small "moat lakes", sometimes empty, sometimes filled with water, are encountered on the edges of glacial firn areas of the Juneau Icefield in south-eastern Alaska. Earlier explanations for such minor marginal lakes are that they may develop alongside nunataks owing to reflected heat from the rock (e.g. Embleton and King, 1968, p. 424). In the summer of 1971 I was afforded the opportunity of making observations on two marginal lakes on the Juneau Icefield: (1) Lake Linda at the uppermost end of Lemon Creek Glacier (lat. 58\\({}^{\\circ}\\)21' N., long. 134\\({}^{\\circ}\\)22' W.; approximately 1 200 m a.s.l.), and (2) Salla Lake at Icy Basin, a cirque on the margin of Taku Glacier about 27 km above its terminus (lat. 58\\({}^{\\circ}\\)39' N., long. 134\\({}^{\\circ}\\)11' W.; approximately 1 150 m a.s.l.) (Fig. 1) and, in addition, a number of other lakes by means of air photographic interpretation. those of Lake Linda. The maximum inclination of the slopes is about 45\\({}^{\\circ}\\). The declivity of the surface of Icy Basin and crevasses in the ice indicate that the glacier flows from south, east and west towards the lake basin and tries to fill it (Fig. 5). In the bottom of the lake there can be found old blue ice composed of large crystals (5-10 cm in diameter). In July and August 1971, there was no longer any water in the lake. The formation of glaciers and the positions they occupy in mountainous tracts are largely dependent not only on the topography but also on the direction of the winds that bring snow to the area.

Figure 1: Map of the southern and central part of the Juneau Icefield with the location of the investigated small marginal lakes.

Figure 2: Position of Lake Linda. Ice-free areas are shown by dots. The lake deflation basin is marked in black. C17 is a permanent camp site. The dashed line is the subglacial outlet of the lake.

Figure 3: General view of Lake Linda looking south-east. Photograph by the author, 12 July 1971.

On the Juneau Icefield strong winds blow from the southern sector, principally from the south-east, and cause drifting (Miller, 1967, p. 208). This can be seen from the accumulation forms of the snow on the field and also from air photographs. Consequently, more snow tends to accumulate on the northern sides of the nunataks than on the southern sides, which face the wind. If the nunatak is in the form of a ridge following the direction of the wind, then a wind channel is formed on one or both sides. The snow does not pile up in such a channel to the same degree that it does elsewhere. This has happened in the case of Lake Linda. If there is a steep wall of rock at right-angles to the direction of the wind, this causes eddying currents of wind to be formed which blow upwards, downwards or to the sides. They prevent in part the snow from piling up (Fig. 6). Such has been the case with Salla Lake. In the winter of 1970-71 the snow that had accumulated in the bottom of the lake was between 1 and 2 m thick, judging from the melted snow that remained (Fig. 7). The thickness of the corresponding layer of snow on the even surface of nearby Taku Glacier was about 5 m. The lake basin continues to develop in such a way that melt water from the surface of the glacier and the slopes of the nunatak collects in the depression originally made by the wind.
Figure 4: Position of Salla Lake. Dotted areas are nunataks. Line a–b indicates the profile line of Figure 6. C10 is the main camp site on the Juneau Icefield.

Such a pond enlarges its basin by melting the walls and floor by the heat of the water (cf. Maag, 1969), which is carried to the periphery and the bottom by convection currents (cf. Sharp, 1947, p. 44). Melting lumps of ice and snow which have risen from the bottom or fallen from the sides of the lake float in the water. Melt water flowing down from the slopes of the nunatak carries with it considerable quantities of sediment, which make the water muddy and at the same time dirty the floor of the lake and the ice and snow floating in the water. The lake may be emptied of water, sometimes catastrophically rapidly (Asher, 1971), as in the case of many ice-dammed lakes of other types (cf. Glen, 1954; Aitkenhead, 1960; Marcus, 1960; Lindsay, 1966), when the ice floats to the surface and allows the water to drain out by way of a crevasse or tunnel in the bottom of the lake. A distinct dirty area is then left on the sides of the lake basin showing the extent of the water. Lumps of floating ice are deposited on the floor of the basin and, together with rocks which have rolled into the lake, form a chaotic topography (Fig. 7).

Figure 5: View of Salla Lake looking down the slope of the nunatak towards Icy Basin. Crevasses surrounding the lake can be seen on the surface of the glacier. 31 July 1971. (cf. Figs. 4 and 6.)

Figure 6: Profile line a–b (see Fig. 4) showing the snow-drifting winds and snow accumulation (dots) on the Icy Basin glacier. Key to symbols: 1. Bedrock; 2. Ice; 3. Snow deposits; 4. Crevasses; 5. Approximate bottom of glacier; 6. Wind direction.

Because of the sediments, the albedo decreases and ablation increases to such a degree that nearly all of the snow which has piled up in the winter melts in the course of the summer and drains away as water through the bottom of the basin. The running water also helps to melt the ice in the bottom of the basin. There is a considerable difference in the albedo and ablation values for clean and dirty snow. This is clearly shown in the blocks of snow which have been floating in the water. These assume a mushroom-like shape of which the top consists of clean snow, while the foot is formed of the snow which has been under the water and has consequently melted much more quickly (Fig. 7). The amount of heat reflected from the slopes of the nunatak is so small as not to be of any real significance as an ablation agent for the snow and ice lying in the lower parts of the lake basin. This is clearly seen from the position of Lake Linda (Figs. 2 and 3), situated as it is at the foot of a slope facing north. As far as Salla Lake is concerned, the slope does in fact face more or less south, but in spite of that on the cliff side the lake is bounded by ice and snow in the centre. On the rock slope and the ice there was still white snow right down to the very edge of the water as late as the beginning of August 1971 (Figs. 4 and 7). Farther up the slope the snow had melted. The following winter, snow piles up on the uneven surface of the bottom of the lake, but not in any great quantities. The snow accumulates for the most part in calm weather, even though drift snow blowing across the bed of the lake may collect to some degree because of the unevenness of the ground. However, for the most part the wind carries the snow to the edges of the basins.
In any case the depression still exists the following spring; possibly its drainage tunnel has frozen solid in the course of the winter and so the basin can once more fill with water. The process then continues in the same way as during the preceding summer.

## Conclusions

To confirm this theory of the origin of small marginal lakes, which is based principally on morphological and exposure factors, individual observations on the accumulation of snow, on ablation and on microclimatic conditions in the lake basins should be carried out in the future.

Figure 7: Mushroom-like blocks of snow at the bottom of the Salla Lake basin. Note the snow cover on the slope of the nunatak, 29 July 1971.

There are, in any case, a number of factors which provide evidence to the effect that too much importance has hitherto been attached to the influence exerted by heat reflected from nunataks.

## Acknowledgements

I am profoundly grateful to Professor Maynard M. Miller for giving me the chance of attending the 12th Summer Institute of Glaciological and Arctic Sciences, Juneau Icefield, Alaska, and also for many stimulating discussions as to the origin of moat lakes. To Mr Robert Asher I extend my thanks for his guidance in helping me to learn about Lake Linda. Mr Christopher Grapes kindly translated the manuscript into English. This journey was made possible as a result of grants received from the National Science Foundation, U.S.A., the Leo and Regina Wainstein Foundation, Finland, and the National Research Council for Natural Sciences, Finland. To all of these institutions I wish to extend my best thanks.

_MS. received 15 March 1972 and in revised form 13 November 1972_

## References

* Aitkenhead, N. 1960. Observations on the drainage of a glacier-dammed lake in Norway. _Journal of Glaciology_, Vol. 3, No. 27, p. 607-09.
* Asher, R. A. 1971. Lake Linda drainage. _Smithsonian Institution. Center for Short-lived Phenomena. Annual Report_, 1970, p. 85-89.
* Embleton, C., _and_ King, C. A. M. 1968. _Glacial and periglacial geomorphology_. [London], Edward Arnold (Publishers) Ltd.
* Glen, J. W. 1954. The stability of ice-dammed lakes and other water-filled holes in glaciers. _Journal of Glaciology_, Vol. 2, No. 15, p. 316-18.
* Lindsay, J. F. 1966. Observations on the level of a self-draining lake on the Casement Glacier, Alaska. _Journal of Glaciology_, Vol. 6, No. 45, p. 443-45.
* Maag, H. U. 1969. Ice dammed lakes and marginal glacial drainage on Axel Heiberg Island, Canadian Arctic Archipelago. _Axel Heiberg Island Research Reports, McGill University, Montreal. Jacobsen-McGill Arctic Research Expedition 1959-1962._
* Marcus, M. G. 1960. Periodic drainage of glacier-dammed Tulsequah Lake, British Columbia. _Geographical Review_, Vol. 50, No. 1, p. 89-106.
* Miller, M. M. 1967. Alaska's mighty rivers of ice. _National Geographic Magazine_, Vol. 131, No. 2, p. 194-217.
* Sharp, R. P. 1947. The Wolf Creek glaciers, St. Elias Range, Yukon Territory. _Geographical Review_, Vol. 37, No. 1, p. 26-52.
This paper puts forward an explanation of the origin of small marginal lakes (superglacial or moat lakes) occasionally found on the edges of valley glaciers. The explanation departs from earlier theories. On the basis of observations made on the Juneau Icefield in south-eastern Alaska, I have come to the conclusion that the lake basins are primarily blow-outs formed as a result of wind erosion.
# Satellite Image Scene Classification via ConvNet with Context Aggregation

Zhao Zhou 1Shanghai Advanced Research Institute, Chinese Academy of Sciences Yingbin Zheng Corresponding author.1Shanghai Advanced Research Institute, Chinese Academy of Sciences Hao Ye 1Shanghai Advanced Research Institute, Chinese Academy of Sciences Jian Pu 2East China Normal University, Shanghai, China Gufei Sun 3ZhongAn Technology, Shanghai, China [email protected]

## 1 Introduction

With the growing deployment of remote sensing instruments, satellite image scene classification, or scene classification from high-resolution remote sensing imagery, has drawn attention for its potential applications in various problems such as environmental monitoring and agriculture. Multiple challenges must be addressed to produce accurate scene classification results. Large intra-class variation within the same scene class is a common issue. Moreover, the semantic gap between the scene semantics and the image features can further increase the difficulty of robust classification. Thus, the design of suitable representations for satellite images to deal with these challenges is of fundamental importance. Great progress has been achieved in recent years with representations based on convolutional neural networks (ConvNets), which have led to breakthroughs in a number of computer vision problems such as image classification. Typical ConvNets, including AlexNet [2], SPP-net [3], VGG [4], and GoogleNet [5], have also been applied to the task of satellite image scene classification. As the number of images in satellite image datasets is orders of magnitude smaller than in image classification datasets (e.g., ImageNet [6]) and may not be sufficient to train robust deep models, these ConvNet based methods usually employ off-the-shelf pre-trained deep networks (e.g., in [7, 8, 9, 10, 11, 12, 13, 14, 15]). The activations of the layers or their fusion are taken as the visual representation and sent to the scene classifiers. Evaluations on the benchmarks show that the deep learning based features often outperform previous handcrafted features. The number of stacked layers in most current deep networks for satellite images is relatively small. For example, [11] design classification systems based on the 7-layer architecture of AlexNet [2] or its replication CaffeNet [16], and [12, 13] employ the 16-layer VGG architecture [4]. Recent evidence suggests that deeper convolutional networks are more flexible and powerful, with high modeling capacity for image classification [1, 17]. Some previous works (e.g., [14, 8]) employ the Residual Networks (ResNet) [1] as one of the basic models. However, the effectiveness of these deeper models, and how their performance depends on the number of layers, is still not fully explored for remote sensing images. In this work, we focus on the problem of a deeper ConvNet with context aggregation, and introduce an image representation built upon a novel architecture for satellite images, which adopts the ResNet [1] as backbone. The two-pathway ResNet (abbreviated ResNet-TP) is proposed, and Fig. 1 illustrates the pipeline. The proposed structure aims to aggregate the contextual information to enhance the feature discrimination.

Figure 1: Pipeline of the proposed framework with two-pathway ResNet (ResNet-TP). The network is pre-trained using the ImageNet database (Phase 1). Phase 2 produces the fine-tuned network with the satellite image dataset.
A given satellite image goes through the network and the representation is generated from the global average pooling on the last convolutional layers (Phase 3).

The input images go through two paths of convolutional operations after a few layers: one path follows the default building block settings, and the other incorporates dilation within the convolutional layers to expand the receptive field. Training a deeper ConvNet is usually more difficult and carries a higher risk of overfitting, especially when using a relatively small remote sensing dataset. Therefore, we also employ a transfer learning strategy to reuse the parameters learned from an image classification dataset. The idea of constructing contextual representations has been taken up in several previous remote sensing works, e.g., [11] and [7]. These approaches apply a single spatial pyramid pooling [3] on the feature maps of the last convolutional layer, which are usually tiny after the progressive resolution reduction of the preceding operations. ResNet-TP is designed with contextual pathways _before_ the last convolutional layers, and is thus able to alleviate the loss of spatial acuity caused by tiny feature maps. To evaluate our proposed framework, we report evaluations on the recent NWPU-RESISC45 dataset [12] and the UC Merced (UCM) Land Use dataset [18]. Our representation is compared with several recent approaches and achieves state-of-the-art performance.

## 2 Methodology

### ResNet Architecture

We begin with a brief review of ResNet and the residual learning that addresses the training issue of deeper neural networks, which was the foundation of the winning entries of the ILSVRC & COCO 2015 competitions for the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation [1]. First, downsampling is performed directly by one \\(7\\times 7\\) convolutional layer and one max-pooling (over a \\(2\\times 2\\) pixel window), each with stride 2. The main component used to construct the architecture is the stack of convolutional layers with shortcut connections. Such a building block is defined as \\[\\mathcal{H}(\\mathbf{x})=\\mathcal{F}(\\mathbf{x},\\{W_{i}\\})+W_{s}\\mathbf{x}, \\tag{1}\\] where \\(\\mathbf{x}\\) and \\(\\mathcal{H}(\\mathbf{x})\\) are the input and output of the building block, \\(\\mathcal{F}(\\cdot)\\) is the residual mapping function to be learned, \\(W_{i}\\) are the parameters of the convolutional layers, and \\(W_{s}\\) is a linear projection matrix that ensures the dimension matching of \\(\\mathbf{x}\\) and \\(\\mathcal{F}\\) (\\(W_{s}\\) is set to the identity matrix when they have the same dimension). The operation \\(\\mathcal{F}(\\cdot)+W_{s}\\mathbf{x}\\) is performed by a shortcut connection and element-wise addition. There are usually two or three layers within one building block; two typical building blocks are shown in Fig. 2, where the basic building block is used for the 18/34-layer networks and the bottleneck building block for the 50/101/152-layer networks in [1]. The convolution is performed with a stride of 2 after a few building blocks to reduce the resolution of the feature maps. Unlike previous architectures such as AlexNet [2] and VGG [4], ResNet has no hidden fully-connected (FC) layers; it ends with a global average pooling and then an \\(N\\)-way FC layer with softmax (\\(N\\) is the number of classes). We refer the reader to [1] for more details.

### Context Aggregation

We now elaborate on the construction of ResNet-TP. The architecture of the network is summarized in Table 1.
In general, the network contains six groups of layers or building blocks. Group conv1+pool1 consists of the \\(7\\times 7\\) convolutional layer and the max-pooling, and conv2_x to conv4_x are stacks of building blocks. All their configurations follow the generic design presented in Sect. 2.1, and differ only in the depth of the blocks. Consider an input image with \\(224\\times 224\\) pixels: group conv4_x has an output stride of 16 and thus its feature map size is \\(14\\times 14\\). We introduce a group with dilated convolutional layers, which have been shown to be effective in many tasks such as semantic segmentation [19, 20], video analysis [21, 22], RGB-D [23], and DNA modeling [24]. The two-pathway architecture is made of two streams: a pathway with normal building blocks (conv5_1_x) and another with larger receptive fields (conv5_2_x). The dilation is applied to the \\(3\\times 3\\) convolutional layer in the building block. Let \\(\\mathbf{x}\\) be the input feature map and \\(\\mathbf{w}\\) the filter weights associated with the dilated convolutional layer; the output \\(\\mathbf{y}\\) for position \\(\\mathbf{p}=(p_{1},p_{2})\\) is defined as: \\[\\mathbf{y}(\\mathbf{p})=\\sum_{\\mathbf{d}\\in\\mathcal{G}_{d}}\\mathbf{w}(\\mathbf{d})\\cdot\\mathbf{x}(\\mathbf{p}+\\mathbf{d}) \\tag{2}\\] where \\(\\mathcal{G}_{d}=\\{(-d,-d),(-d,0),\\ldots,(0,d),(d,d)\\}\\) is the grid for the \\(3\\times 3\\) filters and \\(d\\) is the dilation. We set the dilation \\(d=2\\) for conv5_2_x, and the layers in conv5_1_x can be considered a special case with \\(d=1\\). The motivation for this architectural design is that we would like the prediction to be influenced by two aspects: the visual details of the region around each pixel of the feature map as well as its larger context. In fact, ResNet-TP degenerates to the standard ResNet when conv5_2_x and its subsequent layers are removed. Finally, we connect the last convolutional hidden layers in both pathways with the global average pooling followed by the FC layer with softmax to perform the prediction of the labels.

\\begin{table} \\begin{tabular}{c|c|c|c} \\hline \\multirow{2}{*}{Group} & \\multicolumn{2}{c|}{Block} & Output size, \\\\ \\cline{2-3} & 18/34 layer & 50/101 layer & dilation \\\\ \\hline conv1+pool1 & \\multicolumn{2}{c|}{[7\\(\\times\\)7, 64]; Max Pooling} & 56\\(\\times\\)56, 1 \\\\ \\hline conv2\\_x & Basic(64,64)\\(\\times n_{2}\\) & Bottleneck(64,256)\\(\\times n_{2}\\) & 56\\(\\times\\)56, 1 \\\\ \\hline conv3\\_x & Basic(128,128)\\(\\times n_{3}\\) & Bottleneck(128,512)\\(\\times n_{3}\\) & 28\\(\\times\\)28, 1 \\\\ \\hline conv4\\_x & Basic(256,256)\\(\\times n_{4}\\) & Bottleneck(256,1024)\\(\\times n_{4}\\) & 14\\(\\times\\)14, 1 \\\\ \\hline conv5\\_2\\_x & Basic(512,512)\\(\\times n_{5}\\) & Bottleneck(512,2048)\\(\\times n_{5}\\) & 14\\(\\times\\)14, 2 \\\\ conv5\\_1\\_x & & & 7\\(\\times\\)7, 1 \\\\ \\hline \\end{tabular} \\end{table} Table 1: Configuration of the groups in the ResNet-TP architecture, for an input image of size \\(224\\times 224\\). Basic(\\(IN,OUT\\)) and Bottleneck(\\(IN,OUT\\)) denote the basic and bottleneck building block with in-plane count \\(IN\\) and out-plane count \\(OUT\\) (see Fig. 2). ‘\\(\\times n_{i}\\)’ indicates stacking \\(n_{i}\\) blocks, where \\([n_{2},n_{3},n_{4},n_{5}]\\)=[2,2,2,2] for 18 layers, [3,4,6,3] for 34/50 layers, and [3,4,23,3] for 101 layers.

Figure 2: The building block with the residual function \\(\\mathcal{F}\\). \\(IN\\) and \\(OUT\\) denote the number of in-planes and out-planes, respectively. Top: the basic building block. Bottom: the bottleneck building block.
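To make Eqs. (1)-(2) and the two-pathway stage concrete, the following is a minimal PyTorch sketch in the spirit of the paper's own PyTorch implementation; it uses basic blocks only, omits the block counts of Table 1, and assumes (the text does not state it explicitly) that the two pooled descriptors are concatenated.

```python
# Minimal sketch of a two-pathway stage: a conv5_1_x-style pathway (d=1,
# stride-2 entry, 7x7 maps) alongside a dilated conv5_2_x pathway (d=2,
# 14x14 maps). Simplified relative to Table 1 (one block per pathway).
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Eq. (1): H(x) = F(x) + W_s x, with optional dilation d (Eq. (2))."""
    def __init__(self, in_planes, out_planes, dilation=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_planes, out_planes, 3,
                               padding=dilation, dilation=dilation, bias=False)
        self.bn1 = nn.BatchNorm2d(out_planes)
        self.conv2 = nn.Conv2d(out_planes, out_planes, 3,
                               padding=dilation, dilation=dilation, bias=False)
        self.bn2 = nn.BatchNorm2d(out_planes)
        self.short = (nn.Identity() if in_planes == out_planes
                      else nn.Conv2d(in_planes, out_planes, 1, bias=False))

    def forward(self, x):
        y = self.bn2(self.conv2(torch.relu(self.bn1(self.conv1(x)))))
        return torch.relu(y + self.short(x))     # shortcut + element-wise add

class TwoPathwayStage(nn.Module):
    def __init__(self, in_planes=256, out_planes=512):
        super().__init__()
        self.local_path = nn.Sequential(          # conv5_1_x: d=1, stride 2
            nn.Conv2d(in_planes, out_planes, 3, stride=2, padding=1, bias=False),
            BasicBlock(out_planes, out_planes, dilation=1))
        self.context_path = BasicBlock(in_planes, out_planes, dilation=2)
        self.gap = nn.AdaptiveAvgPool2d(1)         # global average pooling

    def forward(self, x):                          # x: conv4_x map, 14x14
        f1 = self.gap(self.local_path(x))          # 7x7 pathway, pooled
        f2 = self.gap(self.context_path(x))        # dilated 14x14 pathway, pooled
        return torch.cat([f1, f2], dim=1).flatten(1)

feats = TwoPathwayStage()(torch.randn(2, 256, 14, 14))   # -> shape (2, 1024)
```

Feeding a conv4_x-shaped tensor of size (2, 256, 14, 14) through this stage yields a 1024-dimensional descriptor per image, playing the role of the Phase 3 representation in Fig. 1.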
Finally, we connect the last convolutional hidden layers in both pathways with the global average pooling followed by the FC layer with softmax to perform a prediction of the labels. ### Model Training and Implementation Details The ResNet-TP architecture is with a large amount of parameters to train. A traditional remote sensing dataset contains thousands of high-resolution satellite images, which is far less than the image classification datasets for training the state-of-the-art deep learning models. Following previous works [7, 8], training of ResNet-TP is based on the transfer learning strategy and Fig. 1 illustrates the overall framework of the proposed ResNet-TP based scene representation. The whole training procedure as well as the feature extraction are carried out via the open source PyTorch library and an Nvidia Titan X (Pascal) GPU. The first phase is to get a pre-trained model using the ImageNet database [6]. During this process, due to the network with only conv5_1_x pathway having the same structure with the original ResNet, we set the weights of conv1_x to conv5_1_x with the existing PyTorch ResNet models4. Directly updating model from this initialization lead to performance drop, as the parameters of conv5_2_x are randomly initialized. On the other hand, it is time-consuming if the model is trained from scratch, since ImageNet contains millions of images. Here we make a compromise by learning the weights of conv5_2_x and its subsequent layers from the network with only conv5_2_x pathway and by frozen of conv1_x to conv4_x5. We compare this pre-training strategy with the model trained from scratch under ResNet-TP-18, and find that they are with similar performance on ImageNet validation set, while its training is much faster. Footnote 4: The download link can be found from [https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py](https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py) Footnote 5: The stochastic gradient descent (SGD) is used with batch size of 64 and momentum of 0.9. The learning rate is initially set to be 0.01 and is divided by 10 every 30 epochs. For the fine-tuning phase, we only fine-tune the building block groups after conv3_x by using the training satellite images and their labels due to the limitation of the GPU memory. We take random rotated, mirrored, or scaled images for data augmentation during fine-tuning. Finally, the representation is obtained from the global average pooling in both pathways, and the linear SVM classifier with default setting \\(C=1\\) is carried out for a fair comparison with previous works [7, 12, 13, 14]. ## 3 Experiments To evaluate the effectiveness of the proposed method, we compare it with several state-of-the-art approaches on two remote sensing scene classification datasets, including the recent proposed 45-Class NWPU-RESISC45 dataset [12] and the widely used 21-Class UCM Land Use dataset [18]. ### NWPU-RESISC45 The NWPU-RESISC45 dataset contains 31500 remote sensing images extracted from Google Earth covering more than 100 countries and regions. Each scene class is composed of 700 images with the spatial resolution varied from about 30 to 0.2 m per pixel. Sample images and the scene categories are shown in Fig. 3(a), and we wrap the images into the size of \\(224\\times 224\\). We follow the official train/test split strategy with two training ratios, i.e., 10% (10% for training and 90% for testing) and 20% (20% for training and 80% for testing). 
## 3 Experiments

To evaluate the effectiveness of the proposed method, we compare it with several state-of-the-art approaches on two remote sensing scene classification datasets: the recently proposed 45-class NWPU-RESISC45 dataset [12] and the widely used 21-class UCM Land Use dataset [18].

### NWPU-RESISC45

The NWPU-RESISC45 dataset contains 31500 remote sensing images extracted from Google Earth, covering more than 100 countries and regions. Each scene class is composed of 700 images, with the spatial resolution varying from about 30 to 0.2 m per pixel. Sample images and the scene categories are shown in Fig. 3(a), and we warp the images to the size of \\(224\\times 224\\). We follow the official train/test split strategy with two training ratios, i.e., 10% (10% for training and 90% for testing) and 20% (20% for training and 80% for testing). We repeat the evaluations ten times under each training ratio by randomly splitting the dataset, and report the mean accuracy and standard deviation. We compare the ResNet-TP based representation with several baselines and state-of-the-art approaches. Among them, the first group contains several well-known baseline descriptors, including the pre-trained or fine-tuned AlexNet [2], GoogleNet [5], and VGG-16 [4]. Table 2 shows that the proposed representation outperforms all the baseline descriptors as well as the state-of-the-art approaches shown in the right part of Table 2, including the Bag of Convolutional Features (BoCF) [13] and the very recent triplet networks [15]. In Fig. 4, we report the confusion matrix and the detailed classification accuracy for each scene label using the training ratio of 20%.

\\begin{table} \\begin{tabular}{c|c c||c|c c} \\hline \\multirow{2}{*}{Network} & \\multicolumn{2}{c||}{Training ratios} & \\multirow{2}{*}{Network} & \\multicolumn{2}{c}{Training ratios} \\\\ \\cline{2-3} \\cline{5-6} & 10\\% & 20\\% & & 10\\% & 20\\% \\\\ \\hline PT-AlexNet & \\(76.69\\pm 0.21\\) & \\(79.85\\pm 0.13\\) & BoCF-AlexNet & \\(55.22\\pm 0.39\\) & \\(59.22\\pm 0.18\\) \\\\ PT-GoogleNet & \\(76.19\\pm 0.38\\) & \\(78.48\\pm 0.26\\) & BoCF-GoogleNet & \\(78.92\\pm 0.17\\) & \\(80.97\\pm 0.17\\) \\\\ PT-VGG-16 & \\(76.47\\pm 0.18\\) & \\(79.79\\pm 0.15\\) & BoCF-VGG-16 & \\(82.65\\pm 0.31\\) & \\(84.32\\pm 0.17\\) \\\\ \\hline FT-AlexNet & \\(81.22\\pm 0.19\\) & \\(85.16\\pm 0.18\\) & Triplet networks [15] & – & \\(92.33\\pm 0.20\\) \\\\ FT-GoogleNet & \\(82.57\\pm 0.12\\) & \\(86.02\\pm 0.18\\) & ResNet-TP-18 & \\(87.79\\pm 0.28\\) & \\(91.03\\pm 0.26\\) \\\\ FT-VGG-16 & \\(87.15\\pm 0.45\\) & \\(90.36\\pm 0.18\\) & ResNet-TP-101 & **90.70\\(\\pm\\)0.18** & **93.47\\(\\pm\\)0.26** \\\\ \\hline \\end{tabular} \\end{table} Table 2: Overall accuracies and standard deviations (%) of the proposed methods and the state of the art under different training ratios on the NWPU-RESISC45 dataset. The results of pre-trained (PT-*) and fine-tuned (FT-*) ConvNets are reported in [12], and the results of BoCF are from [13].

Figure 3: Scene categories from the datasets.

_Network and Training Ratio._ We also study the performance of different network settings and training ratios; the results are given in Fig. 5. Adding layers to the ResNet-TP architecture and aggregating context via the two pathways boost the classification accuracy. We conjecture that a single pathway alone may not be the one at which the network responds with optimal confidence, and context aggregation with multiple pathways increases the robustness. In addition, we observe that increasing the number of training images (70 to 140 per class) leads to significant performance gains, probably due to the scene variation and data diversity in the NWPU-RESISC45 dataset.

Figure 4: Confusion matrices under the training ratio of 20% by using ResNet-TP-101 on the NWPU-RESISC45 dataset.

Figure 5: Evaluation of the ResNet-TP parameters and components with different training ratios on the NWPU-RESISC45 dataset. conv5_1_x and conv5_2_x indicate the network with only one stream of ResNet-TP.

_Pre-Trained_ vs. _Fine-Tuned_. Our last experiment on NWPU-RESISC45 evaluates the alternative method for ResNet-TP model generation. While the fine-tuned method follows the pipeline of Fig. 1, the pre-trained approach is composed of only phases 1 and 3 in the figure, with the model parameters learned directly from ImageNet. The comparison between the curves in Fig. 6 verifies that, for both training ratios, using the fine-tuned network is important for obtaining a more discriminative representation.
We also find that the fine-tuned method outperforms the pre-trained method even when it is trained with only half as many training images.

### UCM Land Use

The UCM Land Use dataset contains 2100 aerial scene images extracted from United States Geological Survey (USGS) national maps. Each land use class is composed of 100 images with a spatial resolution of 1 ft and a size of \\(256\\times 256\\) pixels. Sample images are illustrated in Fig. 3(b). As the UCM Land Use dataset is relatively small and the results on it are already saturated, in this paper we focus on the performance with respect to the number of training images. Fig. 7 shows the effect of the number of training images on the representation. We observe significant performance gains when the number of training images increases from 10 to 50, after which the performance tends to saturate. Another observation is that the result of ResNet-TP-50 is similar to the accuracy of ResNet-TP-101 in most of the comparisons, indicating that computation could be saved by using ResNet-TP-50 with only a marginal performance drop. We also compare the results of the proposed representation with several state-of-the-art approaches. Table 3 summarizes the overall accuracy and standard deviation over all the classes. As can be seen from the table, the ResNet-TP based representation shows very competitive performance with different numbers of training images, and is significantly better than the other representations when the training images are limited. We also note that the previous approach ResNet152_EMR [14] is likewise a ResNet-152 based representation and reaches an accuracy of \\(98.90\\%\\) by combining information from multiple layers with a larger input image size (\\(320\\times 320\\)). When the input image size is set to \\(224\\times 224\\), its classification accuracy is \\(98.38\\%\\), which is inferior to ours even though we use fewer layers. We believe that the ResNet-TP based representation is also complementary to these mixed-resolution methods, since they focus on different levels of information; this will be examined in future work.

Figure 6: Comparison of the pre-trained and fine-tuned ResNet-TP models on the NWPU-RESISC45 dataset. TR indicates training ratio.
\\begin{table} \\begin{tabular}{c|c c c} \\hline Number of images & 5 & 50 & 80 \\\\ \\hline MKL [25] & \\(64.78\\pm 1.62\\) & \\(88.68\\pm 1.10\\) & \\(91.26\\pm 1.17\\) \\\\ \\hline SPP-net MKL [7] & \\(75.33\\pm 1.86\\) & \\(95.72\\pm 0.50\\) & \\(96.38\\pm 0.92\\) \\\\ \\hline AlexNet-SPP-SS [9] & - & - & \\(96.67\\pm 0.94\\) \\\\ \\hline VGG-16 [10] & - & \\(94.14\\pm 0.69\\) & \\(95.21\\pm 1.20\\) \\\\ \\hline ResNet50 [8] & - & - & \\(98.50\\pm 1.40\\) \\\\ \\hline ResNet-TP-50 & **77.07\\(\\pm\\)1.73** & **97.68\\(\\pm\\)0.26** & **98.56\\(\\pm\\)0.53** \\\\ \\hline \\end{tabular} \\end{table} Table 3: Overall accuracies and standard deviations (%) of the proposed methods and state-of-the-art approaches on the UCM dataset. ‘-’ indicates that the result is not available in the corresponding paper.

Figure 7: Evaluation of the ResNet-TP models with different numbers of training images on the UCM dataset.

## 4 Conclusion

In this work, we have introduced ResNet-TP, a two-pathway convolutional network with context aggregation that generates a discriminative representation for satellite image scene classification. Through empirical scene classification experiments, we have shown that the proposed ResNet-TP based representation is more effective than previous deep features, generating very competitive results on the UCM Land Use and NWPU-RESISC45 datasets. For future work, we plan to incorporate multiple scales and multiple layers into the ResNet-TP based representation, and also to explore the performance benefits of combining this representation with other features.

#### 4.0.1 Acknowledgments.

This work was supported in part by grants from the National Natural Science Foundation of China (No. 61602459) and the Science and Technology Commission of Shanghai Municipality (No. 17511101902 and No. 18511103103).

## References

* [1] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2016) 770-778
* [2] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Neural Information Processing Systems (NIPS). (2012) 1097-1105
* [3] He, K., Zhang, X., Ren, S., Sun, J.: Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence **37**(9) (2015) 1904-1916
* [4] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations (ICLR). (2015)
* [5] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2015)
* [6] Deng, J., Dong, W., Socher, R., Li, L., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2009) 248-255
* [7] Liu, Q., Hang, R., Song, H., Li, Z.: Learning multiscale deep features for high-resolution satellite image scene classification. IEEE Transactions on Geoscience and Remote Sensing **56**(1) (2018) 117-126
* [8] Scott, G.J., England, M.R., Starms, W.A., Marcum, R.A., Davis, C.H.: Training deep convolutional neural networks for land-cover classification of high-resolution imagery. IEEE Geoscience and Remote Sensing Letters **14**(4) (2017) 549-553
* [9] Han, X., Zhong, Y., Cao, L., Zhang, L.: Pre-trained alexnet architecture with pyramid pooling and supervision for high spatial resolution remote sensing image scene classification. Remote Sensing **9**(8) (2017) 848
* [10] Xia, G.S., Hu, J., Hu, F., Shi, B., Bai, X., Zhong, Y., Zhang, L., Lu, X.: Aid: A benchmark data set for performance evaluation of aerial scene classification. IEEE Transactions on Geoscience and Remote Sensing **55**(7) (2017) 3965-3981
* [11] Han, X., Zhong, Y., Cao, L., Zhang, L.: Pre-trained alexnet architecture with pyramid pooling and supervision for high spatial resolution remote sensing image scene classification. Remote Sensing **9**(8) (2017)
* [12] Cheng, G., Han, J., Lu, X.: Remote sensing image scene classification: Benchmark and state of the art. Proceedings of the IEEE **105**(10) (2017) 1865-1883
* [13] Cheng, G., Li, Z., Yao, X., Guo, L., Wei, Z.: Remote sensing image scene classification using bag of convolutional features.
IEEE Geoscience and Remote Sensing Letters **14**(10) (2017) 1735-1739
* [14] Wang, G., Fan, B., Xiang, S., Pan, C.: Aggregating rich hierarchical features for scene classification in remote sensing imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing **10**(9) (2017) 4104-4115
* [15] Liu, Y., Huang, C.: Scene classification via triplet networks. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing **11**(1) (2018) 220-237
* [16] Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding. In: ACM International Conference on Multimedia (MM). (2014) 675-678
* [17] Huang, G., Liu, Z., van der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2017)
* [18] Yang, Y., Newsam, S.D.: Bag-of-visual-words and spatial extensions for land-use classification. In: SIGSPATIAL International Conference on Advances in Geographic Information Systems. (2010) 270-279
* [19] Yu, F., Koltun, V.: Multi-scale context aggregation by dilated convolutions. In: International Conference on Learning Representations (ICLR). (2016)
* [20] Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Transactions on Pattern Analysis and Machine Intelligence **40**(4) (2018) 834-848
* [21] Lea, C., Flynn, M., Vidal, R., Reiter, A., Hager, G.: Temporal convolutional networks for action segmentation and detection. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2017)
* [22] Xu, B., Ye, H., Zheng, Y., Wang, H., Luwang, T., Jiang, Y.G.: Dense dilated network for few shot action recognition. In: ACM International Conference on Multimedia Retrieval (ICMR). (2018) 379-387
* [23] Zheng, Y., Ye, H., Wang, L., Pu, J.: Learning multiviewpoint context-aware representation for rgb-d scene classification. IEEE Signal Processing Letters **25**(1) (2018) 30-34
* [24] Gupta, A., Rush, A.M.: Dilated convolutions for modeling long-distance genomic dependencies. arXiv preprint arXiv:1710.01278 (2017)
* [25] Cusano, C., Napoletano, P., Schettini, R.: Remote sensing image classification exploiting multiple kernel learning. IEEE Geoscience and Remote Sensing Letters **12**(11) (2015) 2331-2335
Scene classification is a fundamental problem in understanding high-resolution remote sensing imagery. Recently, convolutional neural networks (ConvNets) have achieved remarkable performance in different tasks, and significant efforts have been made to develop various representations for satellite image scene classification. In this paper, we present a novel representation based on a ConvNet with context aggregation. The proposed two-pathway ResNet (ResNet-TP) architecture adopts the ResNet [1] as backbone, and the two pathways allow the network to model both local details and regional context. The ResNet-TP based representation is generated by global average pooling on the last convolutional layers from both pathways. Experiments on two scene classification datasets, UCM Land Use and NWPU-RESISC45, show that the proposed mechanism achieves promising improvements over state-of-the-art methods. Keywords: Scene classification, convolutional neural network, ConvNet, residual learning, context aggregation
# From Microscales to Macroscales in 3D: Selfconsistent Equation of State for Supernova and Neutron Star Models

W G Newton\\({}^{1}\\) J R Stone\\({}^{1,2,3}\\) A Mezzacappa\\({}^{2}\\) \\({}^{1}\\) Department of Physics, University of Oxford, Oxford OX1 3PU, United Kingdom \\({}^{2}\\) Physics Division, Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831, USA \\({}^{3}\\) Department of Chemistry and Biochemistry, University of Maryland, College Park, MD 20742, USA [email protected], [email protected], [email protected]

## 1 Introduction

Observational properties of neutron stars and supernovae serve as powerful constraints on the nuclear EoS. There is a large variety of EoS models in the literature, and it is imperative to investigate the connection between the physical processes expected in stars and the features of individual EoS. The models used to construct the nuclear EoS range from empirical ones to those based on non-relativistic effective and realistic nucleon-nucleon potentials and on relativistic field theories (for recent reviews see e.g. [1, 2]). It is unclear at present which of these EoS is closest to reality. All the EoS are required to reflect physics occurring in a wide region of particle number densities. In core-collapse supernovae these densities span from the subnuclear densities of about 4\\(\\times 10^{-8}\\) to \\(\\sim\\)0.1 fm\\({}^{-3}\\) (inhomogeneous matter) to the high density phase (uniform matter) between \\(\\sim\\)0.1 fm\\({}^{-3}\\) and 0.6 fm\\({}^{-3}\\). Neutron star models involve an even wider density range, starting from \\(\\sim\\)6\\(\\times 10^{-15}\\) fm\\({}^{-3}\\) (the estimated density at the surface of neutron stars) to about 0.6-1.0 fm\\({}^{-3}\\) (expected in the center of neutron stars). Most of the currently used EoS in both the subnuclear and supernuclear density regimes do not cover the whole range but are composed of several EoS reflecting the evolution of the character of matter with changing density in smaller intervals. One of the most interesting density regions covers the transition between uniform and inhomogeneous matter, known as the 'pasta' phase. In this region superheavy neutron rich nuclei beyond the neutron drip line gradually dissolve into nucleon + lepton matter of uniform density. The proton and neutron density distribution is determined by a delicate balance between the surface tension of nuclei and the Coulomb repulsion of protons. Previous models of the 'pasta' phase of matter, assuming spherical symmetry, predicted the existence of a series of exotic nuclear shapes - rods, slabs, tubes and bubbles, immersed in a free neutron and electron gas - corresponding to the minimal energy of the matter as a function of increasing density, until the uniform distribution becomes energetically favorable. The 'pasta' phase of stellar matter, although occurring in a relatively small region of density, has a significant influence on the neighboring higher and lower density regions due to the requirement of continuity and thermodynamical consistency of the energy per particle and related quantities throughout the whole density and temperature range. The focus of this work is on the EoS that serves as an input to core-collapse supernova models and to non-equilibrium young neutron stars. However, only a slight modification, i.e. the inclusion of chemical equilibrium at supernuclear densities, is required to use this EoS in old neutron stars.
The most widely used EoS in core-collapse supernova simulations so far have been the non-relativistic EoS by Lattimer-Swesty [3] and the relativistic mean-field model by Shen et al [4]. Both of these EoS describe hot stellar matter assuming spherical symmetry and use different models for matter in different density and temperature regions. It is the aim of this work to show that a fully self-consistent non-relativistic EoS in the Hartree-Fock (HF) approximation [5, 6] in three dimensions (removing the constraint of spherical symmetry) can be constructed in the whole density and temperature region of interest. In this way the matter is treated as an ensemble of nucleons that naturally configure themselves into a distribution corresponding to the minimal energy per particle at a given density and temperature. The computational method adopted here is an extension of the previous work of Bonche and Vautherin [7] and Hillebrandt and Wolff [8], who calculated self-consistent HF EoS at finite temperature but only in the spherically symmetrical case, and of Magierski and Heenen [9], who developed an HF EoS for the general case of three dimensions but considered only zero temperature.

## 2 Computational Procedure

The equation of state, determining the pressure of a system as a function of density and temperature, is constructed for stellar matter at the densities and temperatures found during the core collapse of a massive star pre- and post-bounce. Such matter is composed primarily of neutrons, protons and electrons, with a significant flux of photons, positrons and neutrinos also present during core collapse. There are three main bulk parameters of the matter: the baryon number density \\(n_{\\rm b}\\), the temperature T, and the proton fraction \\(y_{\\rm p}\\), defined as the ratio of the proton number density \\(n_{\\rm p}\\) to the total baryon number density \\(n_{\\rm b}\\). In the present work, the ranges of these parameters are \\(0.001<n_{b}<0.16fm^{-3}\\), \\(0<T<10MeV\\), and \\(0<y_{\\rm p}<0.5\\). Furthermore, the EoS depends on a number of microscopic parameters determining the strong force acting between nucleons in the matter. The phenomenological Skyrme SkM\\({}^{*}\\) force [10] is used here, but it is easy to modify the computer code for any other applicable model of the nucleon-nucleon interaction. Finally, the electric Coulomb force acting between the charged particles, protons and electrons, is included. Electrons are treated as forming a degenerate Fermi gas, which should be a valid approximation. Neutrinos are not considered at the present stage of the model. The fundamental assumption used here is that nuclear matter has a periodic character and can be modeled as an infinite sequence of cubic unit cells. This notion removes a serious limitation of all previous models based on spherical cells, which allow only a spherically symmetrical nucleon distribution in the cell and cannot fully express the periodic character of matter, as the cells make contact only at a limited number of points, leaving the space between them unaccounted for. Each unit cell contains a certain number of neutrons \\(N\\) and protons
However, if it is assumed that there exists an average single-particle potential, created by all nucleons, in which each nucleon moves independently of all the other nucleons present, then it is possible to use the Hartree-Fock approximation to the A-dimensional problem which reduces it to A one-dimensional problems.A spectrum of discrete energy states, the single-particle states, can be defined in the cell which the individual nucleons occupy (in analogy to a spectrum of standing waves in a box in classical physics). The single-particle wave functions \\(\\psi_{i}\\), associated with these states, are used to construct the total wavefunction \\(\\Psi\\) and to calculate the expectation value of total energy in the state \\(\\Psi\\). Obviously there are many ways the nucleons can be distributed over the available single-particle states, which always considerably outnumber, by a factor of two at least, the the total number of nucleons in the cell. Each of these nucleon configurations corresponds to an energy state and a particular spacial distribution of nucleon density in the cell. It turns out that it is possible to find a state \\(\\Psi_{\\rm min}\\), constructed of a set of single-particle states, of which the lowest A states are occupied, which corresponds to the minimum energy of the system and is the best approximation to the true A-particle ground state. Starting from a trial set of single-particle wave functions \\(\\psi_{i}\\), the expectation value of total energy is minimized using the variational principle \\[\\delta E[\\Psi]=0 \\tag{1}\\] This conditions leads to a system of A non-linear equations for \\(\\psi_{i}\\) that has to be solved iteratively. In this work, three forms of the trial wavefunction have been tested, Gaussian times polynomial functions, harmonic oscillator wave functions and plane waves. At the beginning the lowest A trial single-particle states are occupied. After each iteration, the resulting states are reordered according to increasing energy and re-occupied. This approach ensures that the final solution is fully independent from the initial choice of trial wavefunction and it is not predetermined by this choice. The evolution of the shape of neutron density distribution during the iteration process is illustrated for A=900 and \\(y_{\\rm p}\\)=0.3; in Figs. 1-2 the 3D density distribution is displayed for \\(n_{\\rm b}\\)=0.08 fm\\({}^{-3}\\), \\(T\\)=2.5 MeV and Figs. 3-4 for \\(n_{\\rm b}\\)=0.12 fm\\({}^{-3}\\), \\(T\\)=5.0 MeV. The change in the distribution after 500 and several thousand iterations is quite striking. We note that in these figures increase in density is color-coded from blue to red. Two iteration schemes have been employed to avoid instabilities in the iteration process - the Imaginary Time Step (ITS) and the Damped Gradient Step (DGS). The ITS is very robust and leads to initial rather rapid convergence even when the iteration process is started from trial functions not too similar to the true single-particle wavefunctions. However, when the minimum is approached, it slows down exponentially. The DGS method requires fairly good initial wavefunctions but converges much faster and leads to close to linear convergence for final iterations. In the present work both schemes have been used starting with the ITS and switching over after first few hundred iterations to DGS. After convergence is reached, the total energy density, entropy and pressure and other related observables are calculated and the EoS constructed in tabular form. 
It is important to realise that it is not known _a priori_ which number of particles in the cell at a given density corresponds to the physical size of the unit cell in nature. For each particle number density the volume of a cell is defined as \\(A/n_{\\rm b}\\), and the energy density and the spatial particle density distribution vary significantly with \\(A\\), as demonstrated in Figs. 5-8 for \\(n_{\\rm b}\\)=0.08 fm\\({}^{-3}\\), T=2.5 MeV and \\(y_{\\rm p}\\)=0.3. Each of these results is an example of a possible _excited_ state of the true unit cell (although each is a local ground state for the given set of parameters). These states are rather close in energy, and a series of careful calculations has to be performed to search for the value of \\(A\\) which gives the absolute minimum energy density for a given set of bulk parameters (i.e. the _minimum minimorum_).
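Schematically, this minimum-minimorum search is just a scan over candidate cell sizes; in the sketch below the energy function is a dummy stand-in for one full converged HF calculation, and the grid of \\(A\\) values is an illustrative assumption.

```python
# Schematic minimum-minimorum search over the cell baryon number A.
# hf_energy_density is a DUMMY stand-in (not physics): in the real code
# each call is one full converged 3-D HF calculation at fixed (A, n_b, T, y_p).
def hf_energy_density(A, n_b, T, y_p):
    return 1.0 + 1e-9 * (A - 900) ** 2           # placeholder shape only

candidate_A = range(100, 2401, 100)              # illustrative grid of cell sizes
energies = {A: hf_energy_density(A, n_b=0.08, T=2.5, y_p=0.3)
            for A in candidate_A}
best_A = min(energies, key=energies.get)         # cell size the matter "chooses"
```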
## 3 Results and discussion

One of the main results of the current work is the picture of how the properties of nuclear matter develop through extended density and temperature regions. At the lower density limit (\\(n_{\\rm b}<0.0001\\) fm\\({}^{-3}\\)), the nucleons are arranged as a roughly spherical (but very large) 'nucleus' at the centre of the cell. As the density increases, however, the shape deforms and the nucleon density distribution starts to spread out toward the cell boundaries, assuming a variety of exotic forms made of high and low density regions. At the extreme density, the nucleon density distribution becomes uniform. This behaviour is illustrated in Figs. 9-11, which show the neutron density distribution at three selected densities, T=5 MeV and \\(y_{\\rm p}\\)=0.3, and clearly demonstrate the transition between spherical and homogeneous density distributions. The entire nucleon configuration within each cell is treated self-consistently as one entity for each set of the macroscopic parameters and evolves naturally within the model as the macroscopic parameters are varied. This is in sharp contrast with previous models, where neutron-heavy nuclei at and beyond the particle drip line were considered as immersed in a sea of unbound free nucleons and the two systems were treated separately. In that approach the transition between the inhomogeneous and homogeneous phases of nuclear matter did not emerge naturally from the calculation but had to be imposed artificially, introducing uncertainty about the threshold density region. Furthermore, important phenomena discussed in more detail elsewhere [11], such as shell effects, the influence of the lattice structure on the Coulomb energy, and the scattering of weakly bound nucleons on inhomogeneities in the matter, are automatically included.

## 4 Summary

The present model provides the first fully self-consistent 3D picture of hot dense nuclear matter. It offers a new concept of hot nuclear matter in the inner crust of neutron stars and in the transitional density region between non-uniform and uniform matter in collapsing stars. Instead of the traditional notion of super-neutron-heavy nuclei immersed in a free neutron gas, it predicts a continuous medium with varying spatial concentration of neutrons and protons. The properties of this medium come out self-consistently from the model, as do the transitions to both the higher and lower density phases of the matter. These results may have profound consequences for macroscopic modelling of core-collapse supernovae and neutron stars. In particular, weak interaction processes (neutrino transport and beta-decay) in such a medium will have to be investigated.

## Acknowledgments

Special thanks go to R. J. Toede, Chao Li Wang, Amy Bonsor and Jonathan Edge for developing and performing the data visualisation. This work was conducted under the auspices of the TeraScale Supernova Initiative, funded by SciDAC grants from the DOE Office of Science High-Energy, Nuclear, and Advanced Scientific Computing Research Programs, and partly supported by US DOE grant DE-FG02-94ER40834. Resources of the Center for Computational Sciences at Oak Ridge National Laboratory were used. Oak Ridge National Laboratory is managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.

## References

* [1] Lattimer J. M and Prakash M 2000 _Phys. Rep._ **334** 121
* [2] Stone J. R 2006 _Open Issues in Understanding of Core-Collapse Supernova Theory_ ed. A Mezzacappa and G M Fuller (World Scientific) p 318
* [3] Lattimer J. M and Swesty F. D 1991 _Nucl.Phys._ A **535** 331
* [4] Shen H, Toki H, Oyamatsu K and Sumioshi K 1998 _Nucl.Phys._ A **637** 435
* [5] Hartree D R 1928 _Proc.Camb.Phil.Soc._ **24** 89
* [6] Fock V A 1930 _Z.Phys_ **61** 126
* [7] Bonche P and Vautherin D 1981 _Nucl.Phys._ A **372** 496
* [8] Hillebrandt W and Wolff R.-G 1985 _Nucleosynthesis: Challenges and New Developments_ ed. D Arnett and J W Truran (Univ. Chicago) p 131
* [9] Magierski P and Heenen P.-H 2002 _Phys. Rev._ C **65** 045804
* [10] Bartel J, Quentin P, Brack M, Guet C, and Hakansson H.-B 1982 _Nucl.Phys._ A **386** 79
* [11] Newton W G 2006 _DPhil Thesis_ Oxford University

Figure 1: 3D neutron density distribution after 500 iterations.
Figure 2: 3D neutron density distribution after 2800 iterations.
Figure 3: 3D neutron density distribution after 500 iterations.
Figure 4: 3D neutron density distribution after 6500 iterations.
Figure 5: Neutron density distribution for A=180.
Figure 6: The same as Fig. 5 but for A=460.
Figure 7: The same as Fig. 5 but for A=1400.
Figure 8: The same as Fig. 5 but for A=2200.
Figure 9: 3D neutron density distribution at \\(n_{\\rm b}\\)=0.04 fm\\({}^{-3}\\).
Figure 10: 3D neutron density distribution at \\(n_{\\rm b}\\)=0.08 fm\\({}^{-3}\\).
Figure 11: 3D neutron density distribution at \\(n_{\\rm b}\\)=0.12 fm\\({}^{-3}\\).
First results are presented from a fully self-consistent, temperature-dependent equation of state that spans the whole density range of neutron stars and supernova cores. The equation of state (EoS) is calculated using a mean-field Hartree-Fock method in three dimensions (3D). The nuclear interaction is represented by the phenomenological Skyrme model in this work, but the EoS can be obtained in our framework for any suitable form of the nucleon-nucleon effective interaction. The scheme we employ naturally allows for effects such as (i) neutron drip, which results in an external neutron gas, (ii) the variety of exotic nuclear shapes expected for extremely neutron-heavy nuclei, and (iii) the subsequent dissolution of these nuclei into nuclear matter. In this way, the equation of state is calculated across phase transitions without recourse to interpolation techniques between density regimes described by different physical models. EoS tables are calculated over a wide range of densities, temperatures and proton/neutron ratios on the ORNL NCCS XT3, using up to 2000 processors simultaneously.
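For concreteness, the _minimum minimorum_ search over the cell mass number \\(A\\) described in the Results section above reduces to a one-dimensional minimisation of the energy density at fixed bulk parameters. The sketch below illustrates only that outer loop; the function `hf_energy_density` is a hypothetical stand-in for one full 3D self-consistent Hartree-Fock run, which is the expensive part of the real calculation.

```python
def hf_energy_density(A, V, T, y_p):
    """Hypothetical stand-in for one self-consistent 3D Hartree-Fock run.
    Returns the energy density of the local ground state found for a cell
    of A nucleons and volume V; a dummy function here so the sketch runs."""
    return (A - 900.0) ** 2 * 1e-8 + 1.0  # arbitrary placeholder values


def find_unit_cell(n_b, T, y_p, candidate_A=(180, 460, 1400, 2200)):
    """Search for the A giving the absolute minimum energy density
    (the 'minimum minimorum') at fixed n_b, T and y_p."""
    best_A, best_e = None, float("inf")
    for A in candidate_A:
        V = A / n_b                           # cell volume follows from A and n_b
        e = hf_energy_density(A, V, T, y_p)   # local ground state for this A
        if e < best_e:
            best_A, best_e = A, e
    return best_A, best_e
```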
# SARS-CoV-2 orthologs of pathogenesis-involved small viral RNAs of SARS-CoV

Ali Ebrahimpour Boroojeny and Hamidreza Chitsaz, Department of Computer Science, Colorado State University, [http://chitsazlab.org](http://chitsazlab.org), [email protected]

## 1 Introduction

The world is now struggling with a pandemic known as COVID-19, which is caused by a novel coronavirus that was first identified in December 2019 in a local seafood market in Wuhan, China [2]. Due to the similarity of its genomic sequence to that of the Severe Acute Respiratory Syndrome coronavirus (SARS-CoV), a member of the subgenus Sarbecovirus, the novel coronavirus was named SARS-CoV-2. Phylogenetic studies have found a bat origin for this virus [3, 4]. As of April \\(29^{\\mbox{th}}\\), 2020, this virus has infected more than 45,360,000 people in 190 countries, has caused more than 1,185,000 deaths, and has become a global health concern leading to massive lockdowns and quarantine all around the world. Since the emergence of SARS-CoV in China in 2002, which infected around 8,000 people worldwide, multiple research efforts have tried to understand that virus and to suggest potential treatments. Despite the fact that no vaccines or antivirals have been approved to date for any of the coronaviruses, improvements in reducing the severity of the disease and the mortality rate have been reported. Because the recent fast-spreading coronavirus SARS-CoV-2 shares more than 79% of its genomic sequence with SARS-CoV, one plausible way to understand how it works and to suggest possible treatments is to port to SARS-CoV-2 what has previously been found for SARS-CoV. Non-coding RNAs (ncRNAs) are, as the name suggests, RNAs that are not translated into proteins. Although some of them likely do not play a major role in the cell [5, 6], others have crucial functions, such as transfer RNAs (tRNAs), ribosomal RNAs (rRNAs), and micro RNAs (miRNAs). Some of the ncRNAs, such as miRNAs, play a role in post-transcriptional regulation of gene expression. Through a process called gene silencing, they bind to the complementary parts of target RNAs and prevent the translation of those RNAs by cleaving their strand, shortening their poly-A tail, or reducing the efficiency of their translation by making some nucleotides unavailable to the ribosomes [7, 8]. The first viral ncRNA was identified by Reich _et al._ [9]. Since then, a plethora of virus-associated ncRNAs have been identified, a process accelerated by advances in technology [10]. In particular, deep sequencing has facilitated the detection of small virus-associated RNAs [11, 12]. Some of these ncRNAs are known to counteract the antiviral defense mechanisms present in host cells, mostly through inhibition of protein kinase R (PKR) [13]. They therefore aid the life cycle of the virus [14, 15]; examples are the svRNAs of influenza A virus, which are involved in the mechanisms this virus uses for switching between transcription and replication [12]. It has long been known that nuclear and DNA viruses encode miRNAs [16] that play a role in the persistence of the virus [17] as well as in changing the transcriptome of the host cell [18]. Using deep sequencing technologies, it has been revealed that cytoplasmic RNA viruses also express ncRNAs [11, 12, 1], and most of them induce various cytoplasmic pathways to express their ncRNAs [12].
Flaviviruses can be mentioned as examples of cytoplasmic RNA viruses; they are very sensitive to interferons and have evolved a variety of mechanisms to avoid their action [19]. It has been shown that ncRNAs in flaviviral RNA bind to genes responsible for regulating the antiviral state of the host cell and affect the interferon response against the virus [20]. A recent study reported three small viral RNAs (svRNAs) that are derived from the genomic regions of SARS-CoV [1]. Morales _et al._ have shown the presence of these positive-sense svRNAs, which "mapped to nsp3 at the \\(5^{\\prime}\\) end of the Replicase gene and the N gene (svRNA-N) at the \\(3^{\\prime}\\) end of the genomic RNA (gRNA)" [1], by using specific small RNA RT-qPCR assays. Their experiments on a mouse model of the infection [21, 22] show that these svRNAs contribute to SARS-CoV pathogenesis, and they also suggest a potential antiviral treatment using antagomir-mediated inhibition of these svRNAs. Small non-coding RNAs that play an important role have been shown to be highly conserved among genera and families. Given that SARS-CoV-2 is in the same subgenus as SARS-CoV and their genomic sequences are more than 79% similar, if orthologs of the svRNAs found in SARS-CoV are present in SARS-CoV-2, then (antagomir-mediated) inhibition of those svRNA orthologs is expected to reduce SARS-CoV-2 titers and help decrease the severity of COVID-19 in the majority of patients, as was shown to be the case for SARS-CoV [1]. What is needed is the sequence of the three aforementioned svRNAs in the SARS-CoV-2 genome, from which the corresponding antagomirs are simply designed through base pair complementarity. To target SARS-CoV-2 svRNAs, we first characterize the sequence of the three svRNAs in SARS-CoV-2. We achieve that goal by aligning the sequences of SARS-CoV svRNAs to the SARS-CoV-2 genome sequences. After investigating many of the available sequenced genomes of SARS-CoV-2 that have been reported in various locations around the globe, we discovered the presence of three svRNAs that are highly conserved orthologs of the svRNAs that play a role in the pathogenesis of SARS-CoV. This _in silico_ discovery still needs to be confirmed using _in vitro_ and _in vivo_ experiments, but our findings reported in the following sections lend strong support to our hypothesis.

## 2 Methods

In order to find the svRNAs in SARS-CoV-2 that are orthologs of those in SARS-CoV, we selected all of the complete genomic sequences of SARS-CoV-2 that were available on the NCBI portal as of March 27\\({}^{th}\\) (173 complete sequences), as well as 27 more randomly chosen among the more recent uploads on the NCBI portal. These genomic sequences are from different states and countries, including but not limited to: New York, Washington, California, Illinois, Utah, China, Japan, South Korea, Italy, India, Brazil, Germany, Australia, Turkey, and Greece. We used a variant of the algorithm introduced by Smith and Waterman, known in the community as _fit alignment_ [23], as the core of our search for the svRNAs of interest. The algorithm devised by Smith and Waterman is itself a variant of the _global alignment_ algorithm known as the Needleman-Wunsch algorithm [24], and can be used to find the regions of two genomic sequences that are similar. Fit alignment is a variant of this algorithm that searches a reference genome for a subsection that is highly similar to another, shorter sequence.
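As a concrete illustration, a minimal (unoptimised) dynamic-programming sketch of fit alignment is given below: the whole query (an svRNA) must be aligned against some substring of the reference, with free gaps on both flanks of the reference. The reduced penalty for A/G and C/T mismatches anticipates the scoring feature described in the next paragraph; the concrete score values here (+1 match, -0.5 mild mismatch, -1 otherwise, -1 gap) are illustrative assumptions, not the exact parameters of the authors' tool.

```python
# Minimal sketch of fit alignment with milder A/G and C/T mismatch penalties.
# Score values are illustrative assumptions, not the authors' exact settings.
MATCH, MILD_MIS, MIS, GAP = 1.0, -0.5, -1.0, -1.0
MILD = {frozenset("AG"), frozenset("CT")}  # these substitutions penalised less

def sub_score(a, b):
    if a == b:
        return MATCH
    return MILD_MIS if frozenset((a, b)) in MILD else MIS

def fit_align_score(query, ref):
    m, n = len(query), len(ref)
    # dp[i][j]: best score of aligning query[:i] to a ref substring ending at j
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = dp[i - 1][0] + GAP        # the query must be fully consumed
    for i in range(1, m + 1):                # dp[0][j] = 0: start anywhere in ref
        for j in range(1, n + 1):
            dp[i][j] = max(
                dp[i - 1][j - 1] + sub_score(query[i - 1], ref[j - 1]),
                dp[i - 1][j] + GAP,          # gap in the reference
                dp[i][j - 1] + GAP,          # gap in the query
            )
    end = max(range(n + 1), key=lambda j: dp[m][j])  # free suffix of reference
    return dp[m][end], end                   # best score and end locus in ref
```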
We used our own in-house alignment tool, which is freely available1 and implements a faster variant of the local and fit alignment algorithms, based on the idea of limiting the search to the subset of the alignment search space that represents high similarity and pruning the regions that fall below the desired threshold. Also, when aligning two sequences, there may be multiple alignments achieving the highest score. Therefore, our tool keeps track of all alignments whose score equals the highest score. For this specific problem, we also added the feature of choosing the best alignment that exists in all the other reference sequences. However, we should note that, for all the sequences considered (the ones in Table 1), the loci reported for the exact match to our suggested svRNAs also have the highest alignment score with the svRNAs reported by Morales _et al._ in their respective sequences.

Footnote 1: [https://github.com/Ali-E/WeightAligner](https://github.com/Ali-E/WeightAligner)

Another distinctive feature of our alignment tool is the ability to penalize A/G and C/T mismatches less than the other mismatches. The intuition behind this feature is that bases A and G can both bind to U, so when A mutates to G, GU bonds can replace AU bonds. The same idea holds for C and T bases, which can both bind to G. To further test this hypothesis, we used 1000 randomly chosen pairs of interacting RNAs from the RISE database. After extracting the reported binding sites, we made three other sets of sequence pairs by randomly mutating one of the A bases of each sequence to C, G, and T (the position of the mutated A is the same across the three sets); we repeated this random mutation 5 separate times (each time a potentially different A base in a sequence is chosen for mutation) to increase the number of samples in each set and obtain a less biased generalization. After making these three sets of mutated versions of the original data, we computed the energy of each pair using piRNA [25], a tool that computes the free energy of a pair of RNAs by considering both the enthalpy and the entropy of the interacting RNA structures. Finally, we compared the energy changes to see which mutations are less detrimental than others.

## 3 Results

Figures 1, 2, and 3 show the sequences that we have found for three svRNAs in SARS-CoV-2 that are orthologs of the three aforementioned svRNAs in SARS-CoV. To further test our hypothesis, we searched for all these svRNAs in 200 different complete reference sequences of the virus. Our three svRNAs are wholly present, without any mutation, in all the reference sequences. Table 1 shows the NCBI ID of each of these sequences, as well as the string loci of each of the svRNAs. As can be seen, 199 out of the 200 tested sequences are present in the table and contain an exact match of the proposed svRNAs. The missing entry is LR757997, which shows the presence of the third svRNA at locus 28604 but does not contain the other two because there is a gap in the sequence from locus 3001 to 3235 (filled with Ns), and this is the region where the first two svRNAs reside according to the loci values in the corresponding table columns. We identify a non-detrimental mismatch by ':' in Figures 1, 2, and 3. RNA-RNA binding energy is mainly governed by Watson-Crick base pairing, namely A-U, G-U, and C-G. In particular, U can pair with both A and G. Hence, an A vs. G mismatch (substitution) in an ortholog RNA is non-detrimental for binding to a target RNA.
Similarly, G can pair with both U and C. Hence, a U/T vs. C mismatch (substitution) in an ortholog RNA is non-detrimental for binding to a target RNA. The tables below show the results of the experiment designed to evaluate our hypothesis about non-detrimental mutations. They report the amount of decrease in free energy; the higher the decrease value, the more energetically favorable the resulting structure becomes after the mutation. The results in the first table show that A/G mutations are the least detrimental ones; they even lead to a more favorable interaction structure ensemble than the original ones. This can be explained by the analysis reported by Boroojeny et al. [26], in which the authors concluded that CG bonds are stronger than GU bonds, and both are stronger than AU bonds. Therefore, when A is mutated to G, a potentially existing AU interaction bond can at least be replaced by a stronger GU bond; the G may even find the opportunity to form a GC bond with a free C base, which would theoretically be stronger still. The second table reports the results for C mutations. Based on the table, we can see that C/T mutations are not as detrimental as C/A mutations, but still seem to be more detrimental than C/G mutations. However, the scatter plot in Figure 5 shows that for the more energetically favorable interaction structures (the ones toward the right side of the plot), the C/T mutations seem to be less detrimental than C/G mutations, which results in the intersection of the fitted lines around -8 kcal/mol. Based on these results, we penalize C/T mutations slightly more than A/G mutations when finding the best alignments.

Figure 3: Alignment of the third svRNA in SARS-CoV and its identified ortholog in SARS-CoV-2. Top: N svRNA sequence AGGAACTGCCAGAAGCTTC in SARS-CoV according to [1]. Bottom: ortholog of the N svRNA, sequence AGGAACTGGGCCAGAAGCTGAC, in SARS-CoV-2. \\(|\\) represents a match, \\(-\\) represents a gap (indel), an empty space represents a mismatch, and a dotted line represents a potential omission. It is possible that some or all of the last four (4) nucleotides GGAC of the bottom sequence are dropped (omitted).
Figure 4: The X axis shows the free energies from piRNA (-RT ln Z) for the original sequences and the Y axis shows the free energy when mutated. Blue shows A/G mutations and orange shows A/T mutations. The line fitted to the blue points is consistently above the line fitted to the orange points.
Figure 5: The X axis shows the free energies from piRNA (-RT ln Z) for the original sequences and the Y axis shows the energy when mutated. Blue shows C/T mutations and orange shows C/G mutations. For the more energetically favorable interaction sites (the ones toward the right side of the X axis, which have lower free energy), the C/T mutation is less detrimental on average, and the fitted blue line is above the fitted orange line after the intersection at around -8 kcal/mol.
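The construction of the mutated sequence sets used in the experiment above (Methods) can be sketched as follows. This is a schematic version under stated assumptions: it mutates one sequence per pair and assumes the sequence contains at least one A; the piRNA free-energy evaluation itself is an external tool and is not reproduced here.

```python
import random

def mutate_A_sets(seq, repeats=5, seed=0):
    """For a binding-site sequence, build the three mutated variants in which
    one randomly chosen A is changed to C, G, and T (the same position feeds
    all three sets), repeated `repeats` times to enlarge each set."""
    rng = random.Random(seed)
    a_positions = [i for i, b in enumerate(seq) if b == "A"]  # assumed non-empty
    variants = {"C": [], "G": [], "T": []}
    for _ in range(repeats):
        i = rng.choice(a_positions)          # same position for all three sets
        for base in variants:
            variants[base].append(seq[:i] + base + seq[i + 1:])
    return variants

# Each original/mutated pair would then be scored with the external piRNA
# tool and the free-energy changes compared across the C, G and T sets.
```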
\\begin{table} \\begin{tabular}{l l l l l} & NCBI ID & svRNA 1 & svRNA 2 & svRNA 3 \\\\ \\hline 184 & MT371019 & 2998 & 3124 & 28557 \\\\ 185 & MT371024 & 2981 & 3107 & 28540 \\\\ 186 & MT371034 & 3006 & 3132 & 28565 \\\\ 187 & MT371035 & 2998 & 3124 & 28557 \\\\ 188 & MT371036 & 2998 & 3124 & 28557 \\\\ 189 & MT371037 & 2989 & 3115 & 28548 \\\\ 190 & MT371048 & 3053 & 3179 & 28612 \\\\ 191 & MT371568 & 2929 & 3055 & 28488 \\\\ 192 & MT371572 & 2945 & 3071 & 28504 \\\\ 193 & MT372481 & 3048 & 3174 & 28607 \\\\ 194 & MT374112 & 3051 & 3177 & 28610 \\\\ 195 & MT375470 & 3021 & 3147 & 28580 \\\\ 196 & MT385448 & 3050 & 3176 & 28609 \\\\ 197 & MT394529 & 3042 & 3168 & 28601 \\\\ 198 & MT394864 & 2999 & 3125 & 28558 \\\\ 199 & MT396242 & 3028 & 3154 & 28587 \\\\ \\end{tabular} \\end{table} Table 1: Start loci of the three svRNAs in various strains of the SARS-CoV-2 genome (truncated version; entries 184-199 shown).

### Potential targets

We also analyzed the potential target genes of these svRNAs in the human body. To this end, we considered more than 32,000 known RNAs that are transcribed in the human body. We processed the sequence of each of these RNAs and removed its intron regions. Finally, we aligned the reverse complement of our svRNAs to all the exon sequences. A region highly similar to the reverse complement of an svRNA is highly complementary to the svRNA itself and hence suggests a high chance of interacting with that svRNA. Tables showing the results of that alignment for the regions with an alignment score higher than 0.7 with these three svRNAs are available in the Appendix (Tables 2, 3, and 4). To compute the score of the alignment, we used a reward of one (1) for a pair of matching nucleotides and a penalty of negative one (-1) for substitutions and insertions/deletions (indels). In the end, the total score was divided by the length of each svRNA to normalize the scores.

## 4 Analysis

Our proposed svRNAs for SARS-CoV-2 are highly conserved versions of the svRNAs of SARS-CoV. The presence of these svRNAs in all the 200 reference sequences that we used to test our hypothesis increases the likelihood that our claim is correct. Also, our proposed svRNAs occur at very similar loci in different reference sequences, and these loci are almost the same as the ones for the original svRNAs reported for SARS-CoV. The fact that the two viruses are in the same subgenus makes our hypothesis more plausible. However, this hypothesis still has to be verified experimentally. As mentioned earlier, the complete tables showing the possible target RNAs of our proposed svRNAs are available in the appendix. However, it is worth highlighting a few of them. The second-highest match to the reverse complement of the first svRNA is the HIF3A transcript, a transcriptional regulator in the adaptive response to low oxygen levels. Silencing this gene affects the body's reaction in response to hypoxia. The second-best match to the reverse complement of the second svRNA is the MEX3B transcript, a member of the MEX3 family of translational regulators. MEX3 proteins are evolutionarily conserved RNA-binding proteins whose _in vivo_ functions are yet to be fully characterized.

## 5 Conclusion

In this paper, we reported three potential svRNAs, orthologs of the SARS-CoV svRNAs, in the SARS-CoV-2 genome. To validate our results, we confirmed that the discovered orthologs are fully conserved in all the publicly available genomes of various strains of SARS-CoV-2; the loci at which the SARS-CoV-2 orthologs occur are close to the loci at which the SARS-CoV svRNAs occur.
Furthermore, our proposed svRNAs occur at very similar loci in different reference sequences, and these loci are almost the same as the ones reported for the original svRNAs in SARS-CoV. We also reported potential targets for these svRNAs. We hypothesize that the discovered orthologs play a role in the pathogenesis of SARS-CoV-2 and that, therefore, antagomir-mediated inhibition of these SARS-CoV-2 svRNAs inhibits COVID-19. This _in silico_ discovery still needs to be confirmed using _in vitro_ and _in vivo_ experiments.

## References

* [1] Lucia Morales, Juan Carlos Oliveros, Raul Fernandez-Delgado, Benjamin Robert tenOever, Luis Enjuanes, and Isabel Sola. SARS-CoV-encoded small RNAs contribute to infection-associated lung pathology. _Cell Host & Microbe_, 21(3):344-355, 2017.
* [2] Hongzhou Lu, Charles W Stratton, and Yi-Wei Tang. Outbreak of pneumonia of unknown etiology in Wuhan, China: the mystery and the miracle. _Journal of Medical Virology_.
* [3] Roujian Lu, Xiang Zhao, Juan Li, Peihua Niu, Bo Yang, Honglong Wu, Wenling Wang, Hao Song, Baoying Huang, Na Zhu, et al. Genomic characterisation and epidemiology of 2019 novel coronavirus: implications for virus origins and receptor binding. _The Lancet_, 395(10224):565-574, 2020.
* [4] Yushun Wan, Jian Shang, Rachel Graham, Ralph S Baric, and Fang Li. Receptor recognition by the novel coronavirus from Wuhan: an analysis based on decade-long structural studies of SARS coronavirus. _Journal of Virology_, 94(7), 2020.
* [5] Jurgen Brosius. Waste not, want not - transcript excess in multicellular eukaryotes. _Trends in Genetics_, 21(5):287-288, 2005.
* [6] Alexander F Palazzo and Eliza S Lee. Non-coding RNA: what is functional and what is junk? _Frontiers in Genetics_, 6:2, 2015.
* [7] Marc Robert Fabian, Nahum Sonenberg, and Witold Filipowicz. Regulation of mRNA translation and stability by microRNAs. _Annual Review of Biochemistry_, 79:351-379, 2010.
* [8] David P Bartel. MicroRNAs: target recognition and regulatory functions. _Cell_, 136(2):215-233, 2009.
* [9] Paul R Reich, Bernard G Forget, Sherman M Weissman, and James A Rose. RNA of low molecular weight in KB cells infected with adenovirus type 2. _Journal of Molecular Biology_, 17(2):428-439, 1966.
* [10] Kazimierz T Tycowski, Yang Eric Guo, Nara Lee, Walter N Moss, Tenaya K Vallery, Mingyi Xie, and Joan A Steitz. Viral noncoding RNAs: more surprises. _Genes & Development_, 29(6):567-584, 2015.
* [11] Poornima Parameswaran, Ella Sklan, Courtney Wilkins, Trever Burgon, Melanie A Samuel, Rui Lu, K Mark Ansel, Vigo Heissmeyer, Shirit Einav, William Jackson, et al. Six RNA viruses and forty-one hosts: viral small RNAs and modulation of small RNA repertoires in vertebrate and invertebrate systems. _PLoS Pathogens_, 6(2), 2010.
* [12] Jasmine T Perez, Andrew Varble, Ravi Sachidanandam, Ivan Zlatev, Muthiah Manoharan, Adolfo Garcia-Sastre, et al. Influenza A virus-generated small RNAs regulate the switch from transcription to replication. _Proceedings of the National Academy of Sciences_, 107(25):11525-11530, 2010.
* [13] Joan Steitz, Sumit Borah, Demian Cazalla, Victor Fok, Robin Lytle, Rachel Mitton-Fry, Kasandra Riley, and Tasleem Samji. Noncoding RNPs of viral origin. _Cold Spring Harbor Perspectives in Biology_, 3(3):a005165, 2011.
* [14] Amiya K Banerjee. Transcription and replication of rhabdoviruses. _Microbiological Reviews_, 51(1):66, 1987.
* [15] Bryan R Cullen. Viral and cellular messenger RNA targets of viral microRNAs. _Nature_, 457(7228):421-425, 2009.
* [16] Bryan R Cullen.
Viruses and microRNAs: RISCy interactions with serious consequences. _Genes & Development_, 25(18):1881-1894, 2011.
* [17] Benjamin R tenOever. RNA viruses and the host microRNA machinery. _Nature Reviews Microbiology_, 11(3), 2013.
* [18] David P Bartel. MicroRNAs: genomics, biogenesis, mechanism, and function. _Cell_, 116(2):281-297, 2004.
* [19] Michael S Diamond. Mechanisms of evasion of the type I interferon antiviral response by flaviviruses. _Journal of Interferon & Cytokine Research_, 29(9):521-530, 2009.
* [20] Katell Bidet, Dhivya Dadlani, and Mariano A Garcia-Blanco. G3BP1, G3BP2 and CAPRIN1 are required for translation of interferon stimulated mRNAs and are targeted by a dengue virus non-coding RNA. _PLoS Pathogens_, 10(7), 2014.
* [21] Anjeanette Roberts, Damon Deming, Christopher D Paddock, Aaron Cheng, Boyd Yount, Leatrice Vogel, Brian D Herman, Tim Sheahan, Mark Heise, Gillian L Genrich, et al. A mouse-adapted SARS-coronavirus causes disease and mortality in BALB/c mice. _PLoS Pathogens_, 3(1), 2007.
* [22] Marta L DeDiego, Enrique Alvarez, Fernando Almazan, Maria Teresa Rejas, Elaine Lamirande, Anjeanette Roberts, Wun-Ju Shieh, Sherif R Zaki, Kanta Subbarao, and Luis Enjuanes. A severe acute respiratory syndrome coronavirus that lacks the E gene is attenuated in vitro and in vivo. _Journal of Virology_, 81(4):1701-1713, 2007.
* [23] Temple F Smith and Michael S Waterman. Comparison of biosequences. _Advances in Applied Mathematics_, 2(4):482-489, 1981.
* [24] Saul B Needleman and Christian D Wunsch. A general method applicable to the search for similarities in the amino acid sequence of two proteins. _Journal of Molecular Biology_, 48(3):443-453, 1970.
* [25] Hamidreza Chitsaz, Raheleh Salari, S Cenk Sahinalp, and Rolf Backofen. A partition function algorithm for interacting nucleic acid strands. _Bioinformatics_, 25(12):i365-i373, 2009. Also ISMB/ECCB proceedings.
* [26] Ali Ebrahimpour Boroojeny, Sanjay Rajopadhye, and Hamidreza Chitsaz. BPPart and BPMax: RNA-RNA interaction partition function and structure prediction for the base pair counting model. _arXiv_, 1904.01235, 2020.

## Appendix

Tables 2, 3, and 4 show the matches with a score of at least 70% between more than 32,000 RNAs transcribed in the human body and the reverse complement of our proposed svRNAs for SARS-CoV-2. The entries with a high score are likely to be target genes of the corresponding svRNA. The entries of each table are sorted by the normalized alignment score. Tables 5, 6, and 7 show the matches with a score of at least 80% between our proposed svRNAs and the human reference genome. Patch release 13 of build 38 was used as the reference genome.
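As a small illustration of how the appendix tables were produced, the sketch below computes the reverse complement used as the alignment query and the length-normalised score behind the 70%/80% reporting thresholds. The helper names are ours, for illustration only; the raw score is assumed to come from an aligner that rewards matches with +1 and penalises substitutions and indels with -1, as described in the Potential targets section.

```python
# Reverse complement and score normalisation, as used to rank candidate
# targets; helper names are hypothetical, for illustration only.
COMP = str.maketrans("ACGTU", "TGCAA")

def reverse_complement(seq):
    """A region matching this string is complementary to the svRNA itself,
    and hence a candidate binding target."""
    return seq.upper().translate(COMP)[::-1]

def keep_hit(raw_score, svrna, threshold=0.7):
    # raw_score accumulates +1 per match and -1 per substitution/indel;
    # dividing by the svRNA length gives the 0-to-1 scale of Tables 2-7.
    return raw_score / len(svrna) >= threshold
```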
Table 2: Potential target RNAs for svRNA1 (truncated version). Columns: Gene ID, Score, Matched Sequence, Loci.
Table 3: Potential target RNAs for svRNA2 (truncated version). Columns: Gene ID, Score, Matched Sequence, Loci.
Table 4: Potential target RNAs for svRNA3 (truncated version). Columns: Gene ID, Score, Matched Sequence, Loci.
Table 5: Potential target regions in the human genome for svRNA1 (truncated version). Columns: Gene ID, Score, Matched Sequence, Loci.
Table 6: Potential target regions in the human genome for svRNA2 (truncated version). Columns: Gene ID, Score, Matched Sequence, Loci.
Table 7: Potential target regions in the human genome for svRNA3 (truncated version). Columns: Gene ID, Score, Matched Sequence, Loci.
**Background** The COVID-19 pandemic clock is ticking, and the survival of many of mankind's modern institutions, and of many individuals, is at stake. There is a need for treatments that significantly reduce the morbidity and mortality of COVID-19. Hence, we delved deep into the genome of SARS-CoV-2, the virus that causes COVID-19. SARS-CoV-2 is from the same family as SARS-CoV, in which three small viral RNAs (svRNAs) were recently identified [1]; those svRNAs play a significant role in the pathogenesis of the virus in mice. **Contribution** In this paper, we report potential orthologs of those three svRNAs in the SARS-CoV-2 genome. Instead of off-the-shelf search and alignment algorithms, which failed to discover the orthologs, we used a special alignment scoring that does not penalize C/T and A/G mismatches as much as the other mutations. RNA bases C and U can both bind to G; similarly, A and G can both bind to U; hence our scoring. We also validate this hypothesis using a novel, independent computational experiment. To validate our results, we confirmed that the discovered orthologs are fully conserved in all the tested publicly available genomes of various strains of SARS-CoV-2; the loci at which the SARS-CoV-2 orthologs occur are close to the loci at which the SARS-CoV svRNAs occur. We also report potential targets for these svRNAs. We hypothesize that the discovered orthologs play a role in the pathogenesis of SARS-CoV-2 and that, therefore, antagomir-mediated inhibition of these SARS-CoV-2 svRNAs inhibits COVID-19.
# Automatic Detection of Solar Photovoltaic Arrays in High Resolution Aerial Imagery

Jordan M. Malof, Department of Electrical & Computer Engineering, Duke University, Durham, NC 27708; Kyle Bradbury, Energy Initiative, Duke University, Durham, NC 27708; Leslie M. Collins, Department of Electrical & Computer Engineering, Duke University, Durham, NC 27708; Richard G. Newell, Nicholas School of the Environment, Duke University, Durham, NC 27708

## I Introduction

The quantity of solar photovoltaic (PV) arrays has grown rapidly in the United States in recent years [2, 3], with a large proportion of this growth due to small-scale, or distributed, PV arrays [4, 5]. These small-scale installations are often found on the roofs of commercial structures or private homes [4], and are therefore often referred to as rooftop PV. Distributed PV offers many benefits [6], but integrating it into existing power grids is challenging. To understand and evaluate the factors driving distributed PV, and to aid in its integration, there is growing interest among government agencies, utilities, and third-party decision makers in detailed information about distributed PV, including the locations, power capacities, and energy production of existing arrays. As a result, several organizations have begun collecting or publishing such information, including the Interstate Renewable Energy Council (IREC) [7], Greentech Media [8], and the US Energy Information Administration (EIA) [9, 10]. Although the available information on distributed PV is expanding, it is nonetheless difficult to obtain. Existing methods of obtaining this information, such as surveys and utility interconnection filings, are costly and time-consuming. They are also typically limited in spatial resolution to the state or national level [3, 6]. For example, the EIA began reporting state-level distributed PV data at the end of 2015 [9]. This work investigates a new approach for collecting distributed PV information that relies on using computer algorithms to automatically identify PV arrays in high resolution (\\(\\leq\\) 0.3 meters per pixel) color aerial imagery. Fig. 1 is an example of 0.3 meter resolution imagery in which the PV arrays have been annotated. At this resolution, it is possible to visually identify individual PV arrays, as well as their shape, size, and color. This permits the collection of distributed PV information at a very high geo-spatial resolution. Also, because the approach is automated, it is relatively inexpensive to apply (i.e., run a computer program), and to do so repeatedly as new imagery becomes available.

### _Two challenges of collecting PV information in aerial imagery_

There are (at least) two major technical challenges to employing the proposed approach in a practical application. The first challenge involves developing a computer algorithm that can reliably identify the locations, shapes, or sizes of PV installations. The second challenge involves using the identified distributed PV imagery to infer the characteristics of the arrays, particularly power capacity and energy production. This information can then be aggregated into statistics for reporting. In this work we address the first of these challenges, and present an algorithm for detecting PV arrays in aerial imagery, as well as estimating their shape and size. The main component of the algorithm is a supervised Random Forest classifier [11], which assigns a "confidence" to each pixel in an image indicating its likelihood of corresponding to a PV array.
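A minimal sketch of this pixel-wise confidence step is shown below using scikit-learn's RandomForestClassifier. The feature choice here (raw colour values in a small window around each pixel) is a deliberate simplification for illustration and not the feature set used in this work.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(img, half=2):
    """Stack the raw RGB values of a (2*half+1)^2 window around each pixel.
    A simple stand-in for the paper's image features (illustration only)."""
    pad = np.pad(img, ((half, half), (half, half), (0, 0)), mode="reflect")
    h, w, _ = img.shape
    cols = [pad[i:i + h, j:j + w].reshape(h * w, -1)
            for i in range(2 * half + 1) for j in range(2 * half + 1)]
    return np.hstack(cols)                     # shape: (h*w, n_features)

rf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
# Training: pixel labels come from the human-annotated PV polygons.
#   rf.fit(window_features(train_img), train_mask.ravel())
# Scoring: predict_proba gives each pixel's "confidence" of being PV.
#   conf = rf.predict_proba(window_features(test_img))[:, 1]
#   conf_map = conf.reshape(test_img.shape[:2])
```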
The performance of the algorithm is measured on a very large dataset of aerial imagery (135 km\({}^{2}\) area including more than 2,700 arrays) in which humans have annotated the PV array locations, shapes, and sizes. This dataset is part of a much larger dataset covering 900 km\({}^{2}\), including nearly 20,000 annotated PV arrays, which is publicly available for download for comparing results [1]. Algorithm performance is evaluated based on correctly identifying PV pixels as well as identifying individual panel objects (and their precise shape and size).

### _Related work: object detection in aerial imagery_

The automatic detection of objects in aerial imagery (e.g., ortho-rectified imagery) has been researched extensively [12, 13, 14, 15]. Some specific examples include the detection of roads [16, 17, 18, 19, 20, 21], buildings [22, 23, 24, 25, 26, 27, 28, 29], and vehicles [30, 31, 32, 33]. In this published body of work, a large variety of algorithms have been proposed, employing techniques such as image processing, statistical modeling, machine learning classifiers, heuristic rules, and more.

The main component of the PV array detection algorithm proposed here is a supervised machine learning classifier called a Random Forest (RF) [11]. Supervised classifiers have previously been used for object recognition, including the RF [34, 35], the support vector machine (SVM) [31], and various types of neural networks [16, 33, 36]. The RF in particular has been applied for land cover classification [34] and object detection [35]. In [35] it was used to classify individual pixels into one of four classes: building, street, trees, and grass. The RF takes a similar role in this work, where it is used to classify individual pixels as PV, or not PV.

An important resource in aerial imagery recognition is a labeled dataset. Such datasets consist of imagery where each instance of the target object is indicated by a bounding box, polygon, or similar annotation. Such datasets are used for (i) the development of effective detection algorithms, and (ii) an accurate assessment of their performance. Ideally a labeled dataset will cover a large surface area and have a large number of labeled objects. Large datasets better represent practical operating conditions, which involve large volumes of data collected across diverse environments and imaging conditions. Datasets used in recently published research typically include a few hundred labels, and a few hundred square meters of surface area [29, 35, 37, 38]. Some specific recent examples include the SZTAKI-INRIA dataset for building detection, which includes 665 labeled buildings [37], and the OIRDS dataset for vehicle detection, which consists of 1,800 labels [38]. In this work we utilize a dataset of color aerial imagery that encompasses 135 km\({}^{2}\) of area with more than 2,700 labels.

### _Contributions of this work_

The idea of automatically detecting PV arrays in aerial imagery was first investigated in a feasibility study [39] that employed a simple algorithm and a small dataset (<1 km\({}^{2}\) area, with 53 PV array annotations). This work builds on that initial investigation with several contributions:

* A more sophisticated rooftop PV detection algorithm is developed, employing pixel-wise classification with an RF classifier, and post-processing steps that improve performance.
* The proposed algorithm is tested on a substantially larger dataset, covering 135 km\({}^{2}\), and including more than 2,700 PV array annotations.
This dataset is also substantially larger than most datasets for similar object recognition tasks.
* The algorithm performance is measured at both a pixel level and an object level. Unlike the previous study, the algorithm's ability to accurately measure both the shape and size of the target objects is assessed.
* The results are the first of their kind for PV array detection. Since the ground truth data are now publicly available [1], it is our hope that these findings serve as a baseline for further work.

The remainder of the paper is organized as follows. Section II describes the aerial imagery data that is used for algorithm development. Section III presents the proposed solar PV detection algorithm. Section IV presents the algorithm performance evaluation and Section V presents experimental results on the dataset. Section VI presents our conclusions and suggestions for future work.

## II The Aerial Imagery Dataset

All experiments and algorithm development in this work utilize a large dataset of color (RGB) aerial imagery collected over the US city of Fresno, California. The imagery covers a total spatial area of 135 km\({}^{2}\). All of the imagery was collected in the same month in 2013, using aerial photography. The imagery has a spatial resolution of 0.3 meters per pixel, and all the imagery has been ortho-rectified. An example of the imagery is shown in Fig. 1, where the solar PV locations are annotated in red. Further details about the data can be found at [1], where the data is also publicly available for download.

Fig. 1: An example of a color orthographic image (top) from the ortho-imagery dataset, with human annotations shown in red outlines. Three of the annotations are enlarged in smaller images (bottom) so that the rooftop PV are more clearly visible.

The full imagery dataset is composed of 601 images that are each 5000 by 5000 pixels, across three cities, and with varying resolution. We chose to use imagery from Fresno, California because recently collected imagery was available (from 2013), with a high resolution (0.3 \(m\)), and because Fresno has a large number of solar PV installations. Over 100 images of Fresno are available in [1], from which we randomly sampled 60 of the available images for the analysis presented here. The identification tags of these images are provided in the appendix for future investigations.

### _Human annotations of true rooftop PV locations_

In order to develop an effective computer vision algorithm, as well as accurately assess its performance, it is necessary to have the precise locations where PV installations appear in the aerial imagery. To obtain this information, human observers visually scanned the imagery and annotated all of the (visible) PV arrays. For improved quality, two annotators scanned each part of the imagery, and their annotations were combined by taking the union of the two observers' annotations. There were a total of 2,794 individual solar PV regions in the imagery after the merging process. Note again that this is a subset of the 19,863 annotations available in [1]. Some examples of annotated regions are shown at the bottom of Fig. 1 and in Fig. 4a.

To avoid a positive bias in the performance evaluation of the proposed detection algorithm, we split the available imagery into two disjoint datasets: Fresno Training and Fresno Testing. This is a common approach for validating supervised machine learning algorithms, such as the RF model used in our detection algorithm.
A summary of the imagery in each dataset is presented below in Table 1. The data was split between training and testing at a ratio of 2:1, in order to provide enough solar array examples to effectively train the RF model (see Section III.C for details about the RF).

## III The proposed PV detection algorithm

In this section we present the details of the proposed solar PV detection algorithm. We begin with a brief overview of the primary processing steps, followed by individual sections providing more details about each step.

### _Algorithm overview_

The proposed rooftop PV algorithm takes RGB color aerial imagery as input and performs four major processing steps, as illustrated in Fig. 2.

1) Feature extraction. This step consists of extracting image statistics, or features, around each pixel that characterize the colors, textures, and other patterns surrounding the pixel. The feature extraction step effectively maps the 3-channel RGB image into an M-channel image, where \(M\) is the number of features extracted around each pixel location.

2) Random Forest Classifier. The image statistics computed in the feature extraction stage are the input to a trained RF classifier. The RF is a machine learning classification model that assigns a probability, or "confidence", to each pixel in the imagery. The confidence value indicates how likely the pixel is to correspond to a PV array. The output of this step is a single channel image, or spatial map, of where PV arrays are likely to be located. An example image and associated confidence map are shown in Fig. 4.

3) Post-processing. This step is designed to improve the accuracy of the confidence map that was generated in the RF classification step. This process consists of identifying high confidence individual pixels (i.e., local maxima locations) and then growing regions of pixels around them. All pixel confidence values outside of these grown regions are then set to zero.

4) Object detection. This step identifies groups of contiguous high confidence pixels that are likely to correspond to a single PV array. Each identified group of contiguous pixels is returned from this step as a detected object, and the confidence of that object is set to the value of the maximum pixel confidence value in that object. The output of this step is a list of objects and their confidence values, which is used to perform object-based scoring.

Fig. 2: A flowchart of the PV detection algorithm. Each gray block corresponds to a major processing step. Additionally, the input and output of each stage is also shown on the right or left of each block.

### _Feature extraction_

The feature extraction process takes a 3-channel RGB aerial image as input, and returns an M-channel "feature image". The feature image, as it is called here, can be thought of as a new representation of the original image, where each pixel in the original image is now represented by an \(M\)-dimensional vector of feature values that are computed using the original RGB image. An important consideration in this work is computational efficiency, so that the proposed algorithm can be applied at a national scale. It should therefore be possible to compute the features quickly, with few computations. One class of image features that achieves this objective consists of image statistics (e.g., means and variances) computed in rectangular windows. These features can be computed very efficiently using integral images [40, 41].
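To make the integral-image trick concrete, the following is a minimal NumPy sketch (ours, for illustration only — the function names and layout are not the authors' implementation). Once a channel's integral image is built, the sum over any rectangular window costs four array lookups, so window means and variances can be computed in constant time per window:

```python
import numpy as np

def integral_images(channel):
    """Integral images of one image channel and of its square,
    zero-padded so window sums need no boundary checks."""
    c = channel.astype(np.float64)
    ii = np.zeros((c.shape[0] + 1, c.shape[1] + 1))
    ii2 = np.zeros_like(ii)
    ii[1:, 1:] = c.cumsum(axis=0).cumsum(axis=1)
    ii2[1:, 1:] = (c ** 2).cumsum(axis=0).cumsum(axis=1)
    return ii, ii2

def window_sum(ii, r, c, size):
    """Sum of channel[r:r+size, c:c+size] via four lookups."""
    return ii[r + size, c + size] - ii[r, c + size] - ii[r + size, c] + ii[r, c]

def window_mean_var(ii, ii2, r, c, size=3):
    """Mean and variance of a size-by-size window with top-left corner (r, c)."""
    n = size * size
    mean = window_sum(ii, r, c, size) / n
    var = window_sum(ii2, r, c, size) / n - mean ** 2
    return mean, var

# Example: statistics of the 3x3 window at (10, 20) in one channel.
channel = np.random.default_rng(0).random((64, 64))
ii, ii2 = integral_images(channel)
mu, sigma2 = window_mean_var(ii, ii2, 10, 20)
```

The key design point is that the two cumulative sums are computed once per channel, after which every one of the many windows per pixel is O(1) regardless of window size.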
In the proposed feature extraction approach, each pixel is represented by the means, \(\mu\), and variances, \(\sigma^{2}\), computed in several 3x3 windows surrounding it, for each channel of the RGB image. A window size of 3x3 was chosen because it roughly corresponds to the size of the smallest PV array in the dataset. A larger choice of window size would risk mixing PV pixels with background pixels, and thereby obscuring the information useful for identifying individual PV panels. A smaller window size would be too small to compute any statistics. Note that each window results in 6 features total (2 features for each of the 3 RGB image channels).

The feature set consists of features extracted from several of these windows surrounding a center pixel, \(p_{0}\), which is illustrated in Fig. 3. The windows are organized into two rings around \(p_{0}\), and each ring consists of 9 windows. The goal of this structure is to capture image statistics in the area surrounding the pixel location. To precisely describe the extraction location of each window, we can characterize each window's location by its vertical offset, \(y\), and horizontal offset, \(x\), from \(p_{0}\). This is illustrated in Fig. 3. A set of 9 windows (one ring) is denoted here by \(S_{r}\), where the subscript \(r\) parameterizes the locations of the windows in the ring. The locations of the windows in \(S_{r}\) are then given by

\[S_{r}=\big{\{}(x,y)\colon x\in\{0,-r,r\},y\in\{0,-r,r\}\big{\}}. \tag{1}\]

There are 9 windows in each ring, and each window yields 6 features, resulting in 54 total features per ring. In this work, we extracted features in two rings, given by \(S_{2}\) and \(S_{4}\). Note that \(S_{2}\) and \(S_{4}\) share a window location at \((x,y)=(0,0)\). One of these duplicates is removed, leaving a total of 54+54-6=102 total features. These two rings, \(S_{2}\) and \(S_{4}\), were found to provide a good compromise between measuring useful local image statistics and increasing the computation time due to increases in the feature-space size.

### _The Random Forest Classifier_

Random Forests (RFs) [11] are a state-of-the-art supervised statistical classification method. They have been successfully applied to a variety of problems, such as image processing [42], medical diagnosis [43], pose recognition [44, 45], and remote sensing [34, 35]. In this work we use the RF to classify each pixel in the imagery, similar to the way it was used in some other contexts [35, 45]. The RF receives the feature vector for each pixel and assigns it a probability, or confidence, indicating the likelihood that it corresponds to a PV array.

Although the RF has many advantages, there are two primary reasons the RF was employed in this work. First is the ability of the RF to learn complex nonlinear relationships between input and output variables. This is important because the relationship between the image features and the pixel labels (i.e., PV or non-PV) is very complex. The second motivation for the RF is its known computational speed during training and testing [34, 35, 45]. It can also be implemented on graphics processing units for further speed improvements [41]. This computational efficiency is important for handling the massive amounts of data common in high resolution aerial imagery applications.
For example, our dataset consists of 1.5 billion pixels, yet this encompasses only 135 km\({}^{2}\) of the United States' nearly 9.857\(\times 10^{6}\) km\({}^{2}\) area.

The RF actually consists of an ensemble of \(T\) simpler supervised classifiers called decision trees [46]. An illustration of an RF is provided in Fig. 5. Each tree consists of a series of decision nodes which terminate (at the bottom of the tree) in a leaf node. To classify a new feature vector, \(x\), it is presented to the top decision node, and it is subsequently directed down the tree, based on the values in the feature vector, until it reaches a leaf node. At the leaf node a probability is assigned to the vector indicating to which class (e.g., PV or non-PV) it belongs.

Fig. 3: Illustration of pixel-based feature extraction at a single pixel location, \(p_{0}\). Features are extracted in several windows around \(p_{0}\), and each window is 3x3 pixels in size. Each window is described by its horizontal and vertical offset from \(p_{0}\), given by \(x\) and \(y\) respectively. A mean, \(\mu\), and variance, \(\sigma^{2}\), are computed in each window, and across each of the three channels of the image (so 6 total features for each window). All the features are combined into a single \(M\)-dimensional vector representing the pixel at \(p_{0}\).

Fig. 4: An example of an aerial image (top) and its corresponding confidence map (bottom). In both images the true solar PV locations have been annotated in red. The confidence map is the output of stage two in the detection algorithm (see Fig. 2).

During training, each tree is "grown" independently of the other trees, in a top-down manner, using a random bootstrap sample of pixels from the training data. The decision nodes are learned such that they best separate the training data according to some performance measurement (e.g., the Gini impurity index or information gain). In this work we use the Gini index. Each node of each tree considers only a random subset of the input features (of size \(m\)) when inferring how to split the data. The parameter \(m\) is often cited as the only major adjustable parameter of the RF, and a conventional setting that usually works well is \(m=\sqrt{M}\), where \(M\) is the number of feature dimensions [34]. Decision nodes are created until (i) splitting no longer improves the Gini index or (ii) it would result in fewer than 5 observations in a leaf node. The parameter settings for the RF, and other algorithms in this work, are presented in Table 3.

Fig. 5: Illustration of an RF classifier architecture. The RF consists of several decision trees, where each tree consists of decision nodes (blue circles) and leaf nodes (green nodes). To classify an input feature vector, \(x\), it is presented to each tree independently. For a given tree, \(x\) is passed down the decision nodes based on its values until it reaches a leaf node. At the leaf node it receives a probability indicating to which class it belongs. Here \(y_{t}\) indicates the probability of belonging to the PV class. The RF output is created by averaging the probabilities returned from each tree.
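This configuration maps almost directly onto off-the-shelf tooling. The sketch below uses scikit-learn's RandomForestClassifier with the settings described above; it is an illustration under stated assumptions, not the authors' implementation — in particular, the number of trees and the training arrays are placeholders (the paper's actual values live in Table 3 and Section IV.C):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# X: (n_pixels, 102) matrix of window statistics from Section III.B;
# y: 1 for PV pixels, 0 otherwise. Random placeholders stand in here.
rng = np.random.default_rng(0)
X = rng.random((10_000, 102))
y = (rng.random(10_000) < 0.01).astype(int)

rf = RandomForestClassifier(
    n_estimators=100,      # T trees (placeholder value)
    criterion="gini",      # Gini impurity, as in the text
    max_features="sqrt",   # m = sqrt(M) features considered at each node
    min_samples_leaf=5,    # stop splitting below 5 observations per leaf
    bootstrap=True,        # each tree is grown on a bootstrap sample
    n_jobs=-1,
)
rf.fit(X, y)

# Column 1 of predict_proba is the per-pixel PV "confidence".
confidence = rf.predict_proba(X)[:, 1]
```

Reshaping `confidence` back to the image dimensions yields the confidence map that the post-processing stage consumes.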
### _Post-processing_

The goal of the post-processing (PP) step is to improve the pixel-wise classification accuracy of the raw confidence maps, as well as to make them better suited for the object detection step. The algorithm for PP is outlined in Table 2, and an example of the input and output of this process is shown in Fig. 6. Broadly speaking, the PP algorithm identifies individual high confidence pixels (local maxima) and then grows a new smooth region (in terms of confidence values) around them. Local maxima are retained for region growing only if they (i) exceed some minimum threshold, \(c_{0}\), and (ii) are the largest maxima in a surrounding square window of length \(L_{s}\). This filtering criterion is designed to remove maxima locations that are likely to be false alarms.

Regions are grown around each remaining maximum location using Otsu's method [47], which automatically segments pixels into foreground (high confidence) and background (low confidence) regions. The region growing takes place in a square window of length \(L_{g}\) that is centered on each maximum location. The output of this PP operation, \(I^{\prime}\), consists of many small connected regions that all have the same confidence value. This makes object extraction easier. The final step of post-processing is to apply morphological closing and dilation [48], respectively, in order to smooth the grown regions. These operations are performed with disks of radius \(r_{1}\) and \(r_{2}\), respectively. The parameter values of the PP algorithm are provided in Table 3. These values were set by optimizing the performance of the algorithm on the Fresno Training data (see Section IV.C).

### _Object Detection_

The object detection phase identifies groups of adjacent, or neighboring, high confidence pixels and returns them as detected objects. This is achieved by first thresholding the confidence map: any pixel confidence greater than zero is set to one, and all others are set to 0. The result is a binary image, which is used for finding contiguous groups of pixels. The resulting connected regions are all taken as detected objects. The confidence of each region is given by the maximum confidence pixel in that region. An example output from this process is shown in Fig. 6.

Fig. 6: Example output of the rooftop PV detection algorithm after several of the major processing steps. Four different images are shown, (a)-(d), and each image shows the human PV annotations in red. (a) is the original RGB image. (b) is the confidence map output from the Random Forest classifier, without post-processing; brighter pixels indicate higher confidence. (c) is the confidence map after post-processing. (d) shows the objects detected after the object detection stage of processing. Given the detection rate and false alarm rate employed in this example, the detector correctly removes all of the false alarms, while losing one of the true panel regions.
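The thresholding and connected-component step just described can be sketched with scipy.ndimage as follows. This is our illustration, not the authors' code; the tiny synthetic confidence map at the end is purely for demonstration:

```python
import numpy as np
from scipy import ndimage

def detect_objects(conf):
    """Threshold a post-processed confidence map and extract connected
    regions; each region's score is its maximum pixel confidence."""
    binary = conf > 0                      # pixels kept by post-processing
    labels, n = ndimage.label(binary)      # connected-component labeling
    if n == 0:
        return labels, np.array([])
    scores = ndimage.maximum(conf, labels=labels, index=np.arange(1, n + 1))
    return labels, np.atleast_1d(scores)

# Example on a synthetic confidence map with two separated regions.
conf = np.zeros((8, 8))
conf[1:3, 1:3] = 0.9
conf[5:7, 4:7] = 0.4
labels, scores = detect_objects(conf)      # scores -> [0.9, 0.4]
```

The returned object scores are exactly the per-object confidences used for the object-based scoring described next.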
## IV Experimental design

This section begins with an overview of the experimental design used for the experiments in Section V, followed by a description of our performance evaluation methodology. We conducted two experiments, with the primary goal of measuring the performance of the proposed PV array detection algorithm. The first experiment measures how well the algorithm identifies individual PV pixels: pixel-based classification performance. The second experiment measures how well the algorithm can identify objects (groups of pixels) that correspond to PV array annotations, as well as their precise shape and size.

The experiments are conducted on two datasets of aerial imagery denoted as Fresno Training and Fresno Testing (see Table 1). As described in Section II, all PV arrays visible in the imagery were annotated by humans to provide ground truth pixels/objects for use in scoring the detector. The primary role of the Fresno Training dataset was to train the RF classifier, as well as optimize other parameters associated with the detection algorithm. The Fresno Testing dataset was used to obtain an unbiased performance estimate for the detector. This is a common approach for supervised machine learning algorithms [49].

The performance metric used to evaluate the algorithm is the precision-recall (PR) curve. The PR curve is a popular performance metric for object detection in aerial imagery [16, 22, 50, 51], and therefore it is adopted here. The next few sections describe PR curves, the Jaccard index (used to measure the degree of overlap between detected objects and human annotations), and how the algorithm parameters were optimized.

### _Performance metrics_

PR curves measure the performance tradeoff between making correct detections and false detections as the sensitivity of a detector, or classifier, is varied. An illustration of a PR curve is shown in Fig. 7. The x-axis of a PR curve is the recall, \(R\), which is the proportion of all true target objects (e.g., PV arrays) in the data that were returned by the algorithm as detections. The y-axis is the precision, \(P\), which is the proportion of all detected objects (i.e., both true and false) which are true targets. An effective detector will tend towards the top right corner of the PR curve, thereby maximizing both recall and precision. A detector that detects objects randomly (i.e., it is ineffective) will achieve a precision that is equal to the proportion of objects in the dataset that are targets. For example, in the pixel-based PV detection experiments, roughly 0.07% of the Fresno Testing pixels correspond to PV arrays. Therefore a random detector would achieve \(P=0.0007\) for all values of \(R\).

The sensitivity for a given detection algorithm can be varied by raising or lowering a threshold, \(t_{0}\), that is applied to the confidence values of the list of potential detections (e.g., pixels or objects). All potential detections above \(t_{0}\) are accepted as detections, and all potential detections below are rejected. \(P\) and \(R\) are then computed based on the group of accepted detections.

Fig. 7: Illustration of a PR curve. A good detector will obtain a curve that is closer to the top right corner of the graph. Random guessing results in a line that achieves constant precision for all values of \(R\), where the precision is equal to the proportion of total detections returned by the algorithm that are from the target class (e.g., PV arrays).

### _Linking detections to human annotations_

One issue that arises with object-based scoring is determining when a detected object should be considered a correct detection. A detected object (i.e., a region labeled as a PV array) may overlap with a PV annotation from the ground truth data, but how much overlap should be required to declare it detected correctly? This problem is apparent in Fig. 6d, where none of the detected objects match _perfectly_ with the human annotations, but they might reasonably be considered correct detections. To address this issue, a metric is needed to measure the shape/size similarity between two objects (i.e., groups of pixels). One metric that has been utilized for this purpose is the Jaccard index [52]. The Jaccard index, \(J\), for two image objects (groups of pixels), denoted \(A\) and \(B\) respectively, is given by

\[J(A,B)=\frac{|A\cap B|}{|A\cup B|}. \tag{2}\]

Fig. 8 below shows the Jaccard index for two objects as their level of overlap varies.

Fig. 8: Illustration of the Jaccard index, \(J\). The gray and white boxes represent two sets of pixels, such as a detected object from the solar PV detector and a true solar PV annotation. As the degree of overlap of the two sets increases, \(J\) increases from 0 toward 1.

The Jaccard index allows us to measure precisely how similar a detected object is to a human annotation. A threshold can then be applied where only detected objects above the threshold (with respect to a human annotation) can be considered a correct detection. Ultimately, the choice for this threshold should depend on the final application of the detector, and the corresponding level of shape/size accuracy that is needed. Therefore, we report object-based performance for multiple thresholds. A similar approach was recently adopted in [53] for building detection.

In some instances a detected object will overlap with multiple ground truth annotations (see the left-most three annotations in Fig. 6d for an example). In this case the multiple annotations are treated as one annotation composed of the union of the individual annotations. If the union of the annotations has a sufficiently high Jaccard index with respect to a detection, all of the constituent annotations are considered to be detected by the detector.
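For concreteness, Eq. (2) for two binary pixel masks is a few lines of NumPy (our sketch; the mask names are illustrative):

```python
import numpy as np

def jaccard_index(a, b):
    """Eq. (2): |A intersect B| / |A union B| for two boolean pixel masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum()) / union if union else 0.0

# A detection is credited only when J(detection, annotation) exceeds the
# chosen threshold (0.1, 0.5, or 0.7 in Section V.B).
```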
### _Algorithm training and optimization_

All training and parameter optimization was performed on the Fresno Training dataset. The final set of chosen parameters for the algorithm is shown in Table 3. These parameters are used in all experiments. The RF classifier itself was trained using five million pixels from the Fresno Training dataset. This subset of pixels was chosen by first selecting all of the available solar PV pixels (roughly 500,000), and then randomly sampling the remaining non-PV pixels from the training imagery. Using larger numbers of pixels improves performance, but at the cost of increased RF computation time. Five million was found to be a good tradeoff between performance and computation time on the training data.

The parameters were chosen in order to optimize performance on the training data. This parameter optimization was done by measuring the performance of the algorithm (on the Fresno Training dataset) as the parameters were varied over a coarse grid of potential values. Note that the parameter \(m\) was set to the conventional value of \(\sqrt{M}\), rather than being optimized.

## V Experimental Results

This section describes the experimental results obtained using the experimental design discussed in Section IV. First, pixel-based performance results are presented, followed by object-based performance.

### _Pixel-based performance_

The pixel-based performance for the PV detection algorithm, on both the training and testing data, is shown in Fig. 9. Results are shown for the RF, and for the RF after PP has been applied (RFPP). The primary goal of this experiment was to demonstrate that the RFPP algorithm can effectively detect PV array pixels. The results on the Fresno Testing dataset provide an unbiased estimate of the performance of the RF and RFPP algorithms.
The results indicate that the solar PV detector is very effective at discriminating non-panel pixels from panel pixels. This is made most clear by considering how well a random detector (i.e., a completely ineffective detector) would perform. Recall from Section IV.A that, because PV arrays constitute only 0.07% of the pixels in Fresno Testing, the random detector achieves \(P=0.0007\) for all values of \(R\). Both the RF and RFPP detectors achieve performance far above this.

Further insight can be obtained from the results in Fig. 9 by comparing the performance of the detectors on the training data and testing data, respectively. As is expected, the results indicate that there is an overall performance drop between the training data and the testing data. Quantitatively this means that, for each value of \(R\), the algorithm typically obtains a lower \(P\) on the testing data than it does on the training data. One exception to this occurs for the RFPP algorithm when \(R\) is below 0.6, where the testing and training performance is similar at these operating sensitivities.

The results also suggest that the main contributor to the performance loss incurred on the testing data is the RF classifier (as opposed to RFPP). This is because the RF performance drops between the training and testing datasets, whereas the RFPP step offers the same advantage on both datasets, relative to the performance of the RF alone. This suggests that the RF is overfitting to the training data; in other words, the RF learned to recognize patterns that are too specific to the training data, and as a consequence it identifies previously unseen PV arrays in the testing data less effectively. This can be addressed in many ways, and is an important consideration for future work.

### _Object-based performance_

The primary goal of this experiment is to demonstrate the effectiveness of the detector. Further, we want to examine how well the detector can identify the precise shape/size of individual PV arrays. As a result, we measure the object-based performance of the detector on the Fresno Testing dataset for varying settings of the Jaccard index during scoring. The resulting PR curves are shown in Fig. 10.

The results indicate that the object-based performance of the detector is once again well above that of the baseline random detector. Although this is true for all values of \(J\), the performance of the detector decreases rapidly as \(J\) increases. As a specific example, when \(J=0.1\) the detector achieves \(R=0.7\) with \(P=0.6\), while at \(J=0.5\), \(R=0.55\) at the same value of \(P\). When \(J=0.7\), the detector never reaches \(P=0.6\). This outcome is expected because, as \(J\) is increased, many of the objects detected near true PV locations are no longer considered correct detections. This also results in more PV annotations remaining undetected, even when the detector is operated with high sensitivity. This is why the maximum \(R\) obtained for each detector decreases as \(J\) increases.

Different values for \(J\) are likely to be appropriate depending on the intended purpose of the detector. For example, lower \(J\) values (e.g., \(J=0.1\)) are appropriate for applications where only the general location of target objects is important, and obtaining the precise shape/size is not.
In the context of solar PV array detection, this may be the case if the detector is used as a preprocessing step for further, and more sophisticated (but slower), detection algorithms. Note that when operated with \(J=0.1\) the detector is capable of detecting roughly 90% of the targets, with \(P\cong 0.1\). Since there are roughly 1,000 PV arrays in the testing data, this corresponds to roughly 10,000 total detections returned by the detector (900 true detections and 9,100 false detections) over the 45 km\({}^{2}\) testing area. This dramatically reduces the number of image locations that must be considered for further processing, facilitating the use of more sophisticated subsequent processing. The detector proposed here is designed to operate quickly on large datasets, and therefore could be used in this role.

In contrast to lower \(J\) values, a higher value (e.g., \(J=0.7\)) is appropriate for detection applications where it is important to accurately estimate the size and shape of target objects. In the context of solar PV array detection, this may be the case, for example, if the goal is to estimate the power capacity of individual solar PV arrays. Setting \(J\) to higher values will lead to a performance measure that better reflects the capability of a given detector to achieve that goal, which is a much more difficult task than simply detecting the likely presence of an object (using, e.g., \(J=0.1\)). This difficulty is reflected in the much poorer performance of the proposed detector on this task (e.g., see Fig. 10 with \(J=0.7\)). Looking forward, the performance reported here for \(J=0.7\) establishes a baseline for future improvement in achieving this type of goal.

Fig. 9: PR curves for the pixel-wise performance of the PV detector on the Fresno Training dataset (red), and the Fresno Testing dataset (black). For each dataset, the performance of the detector is shown before post-processing (solid lines), and after post-processing (dashed lines). The random detector for this problem achieves \(P=0.0007\) for all values of \(R\), but this is not shown due to its small magnitude.

## VI Conclusions and Future Work

We investigated a new approach for the problem of collecting information for small-scale solar PV arrays over large areas. The proposed approach employs a computer algorithm that automatically detects solar PV arrays in high resolution (\(\leq 0.3\;m\)) color (RGB) imagery data. A detection algorithm was developed and validated on a very large collection of aerial imagery (135 km\({}^{2}\)) collected over the city of Fresno, CA. Human annotators manually scanned and annotated solar PV locations to provide ground truth for evaluating the performance of the proposed algorithm. Performance was measured in a pixel-based and object-based manner, respectively, using PR curves. In the case of object-based scoring, the algorithm was also scored based on how well it can identify the shape and size of the true panel object.

The results demonstrate that the algorithm is highly effective on a per-pixel basis. The PR measures indicate it can detect most of the true PV pixels while removing the vast majority of the non-PV pixels. The object-based PR curves indicated that the algorithm was likewise effective at object detection; however, it was far less effective at estimating the precise shape/size of the PV arrays.
The results presented here are the first of their kind for distributed PV detection in aerial imagery, and demonstrate the feasibility of collecting distributed PV information over large areas using aerial or satellite imagery. This may ultimately yield a faster, cheaper, and more scalable solution for the large scale collection of distributed PV information, and potentially information for other aspects of energy production and consumption as well. While the results here demonstrate the promise of this approach to information collection, several challenges remain as opportunities for future work.

### _Future work_

#### VI-A1 Improved detection algorithms

Because the results here are the first of their kind for this problem they establish a baseline performance, or benchmark, for future algorithm development. To facilitate such efforts, the data used in this work is freely available for download [1], and the exact images used in our experiments are listed in the supplemental materials. It is our hope that others will build upon these results, and develop increasingly effective detection algorithms.

#### VI-A2 Inferring capacity and energy production

Another important line of future work is the inference of PV array capacity, energy production, and other characteristics from the imagery. Recall from Section I that this is the second major technical challenge for creating a complete system for extracting PV information from aerial imagery. This second challenge could be pursued using the imagery detected from the PV detector, or otherwise using the ground truth annotations in the aerial imagery dataset.

#### VI-A3 Establishing practical performance needs

While the results here demonstrate the ability of an algorithm to discriminate between PV and non-PV imagery (as compared to a random detector), it is unclear what levels of performance would be needed for different practical applications. Creating a complete system for inferring distributed PV information would help reveal what level of performance is needed from the detection stage in order to obtain practically useful energy information, with quality and reliability similar to, or better than, that of current estimation strategies (e.g., the EIA [9]).

#### VI-A4 Information collection for other energy resources

Finally, we hope this work also motivates the collection of other energy information from aerial imagery, in addition to distributed PV. Other examples might include inferring the energy consumption of individual households, and (from that information) counties, or cities. This could be pursued, for example, by estimating the volume of a household from aerial imagery. Other elements of the energy system could also potentially be detected, such as power lines, or power plants.

This work was supported in part by the Alfred P. Sloan Foundation and the Wells Fargo Foundation. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Alfred P. Sloan Foundation or the Wells Fargo Foundation.

Fig. 10: PR curves for the object-based performance of the rooftop PV detector on the Fresno Testing dataset. Each PR curve corresponds to a different setting of the Jaccard index, \(J\), during scoring. The left-most point of the curves represents the performance when counting every object with a confidence of one (i.e., the maximum RF output) as a detection. With object-based scoring, the detectors are not guaranteed to place objects over all true PV array locations, and indeed, none of the detectors reach \(R=1\).

## References

* [1] Bradbury K, Sahoo R, Malof J, Johnson T, Devarajan A, Zhang W, et al. Distributed Solar Photovoltaic Array Location and Extent Data Set for Remote Sensing Object Identification. Figshare 2016, [https://dx.doi.org/10.6084/m9.figshare.3385780.v1](https://dx.doi.org/10.6084/m9.figshare.3385780.v1) (accessed June 1, 2016).
* [2] Alam MJE, Muttaqi KM, Sutanto D. An approach for online assessment of rooftop solar PV impacts on low-voltage distribution networks. IEEE Trans Sustain Energy 2014;5:663-72. doi:10.1109/TSTE.2013.2280635.
* [3] Chernin A, Ongsakul W, Mitra J. Improving of Uncertain Power Generation of Rooftop Solar PV Using Battery Storage. Int. Conf. Util. Exhib. Green Energy Sustain. Dev., IEEE; 2014, p. 1-4.
* [4] Electric Power Monthly. US Energy Information Administration, 2016.
* [5] Net Generation from Renewable Sources: Total (All Sectors), 2006-February 2016. US Energy Information Administration, 2016.
* [6] Singh GK. Solar power generation by PV (photovoltaic) technology: A review. Energy 2013;53:1-13. doi:10.1016/j.energy.2013.02.057.
* [7] Sherwood L. U.S. Solar Market Trends 2013. Interstate Renewable Energy Council; 2013.
* [8] Solar Energy Industries Association. U.S. Solar Market Prepares for Biggest Quarter in History; 2015. [http://www.seia.org/news/us-solar-market-prepares-biggs-quarter-history](http://www.seia.org/news/us-solar-market-prepares-biggs-quarter-history).
* [9] EIA electricity data now include estimated small-scale solar PV capacity and generation. EIA (US Energy Inf Adm) 2015. [https://www.eia.gov/todayinenergy/detail.cfm?id=23972](https://www.eia.gov/todayinenergy/detail.cfm?id=23972).
* [10] Electric Power Monthly: with data for January 2015. 2015.
* [11] Breiman L. Random forests. Mach Learn 2001;45:5-32. doi:10.1023/A:1010933404324.
* [12] Mayer H. Object extraction in photogrammetric computer vision. ISPRS J Photogramm Remote Sens 2008;63:213-22. doi:10.1016/j.isprsjprs.2007.08.008.
* [13] Blaschke T. Object based image analysis for remote sensing. ISPRS J Photogramm Remote Sens 2010;65:2-16. doi:10.1016/j.isprsjprs.2009.06.004.
* [14] Toshev A, Taskar B, Daniilidis K. Shape-based object detection via boundary structure segmentation. Int J Comput Vis 2012;99:123-46.
* [15] Baltsavias EP. Object extraction and revision by image analysis using existing geodata and knowledge: current status and steps towards operational systems. ISPRS J Photogramm Remote Sens 2004;58:129-51. doi:10.1016/j.isprsjprs.2003.09.002.
* [16] Mnih V, Hinton GE. Learning to detect roads in high-resolution aerial images. Lect Notes Comput Sci (including Subser Lect Notes Bioinformatics) 2010;6316 LNCS:210-23. doi:10.1007/978-3-642-15567-3_16.
* [17] Mokhtarzade M, Valadan Zoej MJ. Road detection from high-resolution satellite images using artificial neural networks. Int J Appl Earth Obs Geoinf 2007;9:32-40.
* [18] Bhattacharya U, Parui SK. An improved backpropagation neural network for detection of road-like features in satellite imagery. Int J Remote Sens 1997;18:3379-94. doi:10.1080/014311697216937.
* [19] Laptev I, Mayer H, Lindeberg T, Eckstein W, Steger C, Baumgartner A. Automatic extraction of roads from aerial images based on scale space and snakes. Mach Vis Appl 2000;12:23-31. doi:10.1007/s001380050121.
* [20] Bajcsy R, Tavakoli M. Computer recognition of roads from satellite pictures. Syst Man Cybern IEEE Trans 1976;623-37.
* [21] Hu J, Razdan A, Femiani JC, Cui M, Wonka P.
Road Network Extraction and Intersection Detection From Aerial Images by Tracking Road Footprints. IEEE Trans Geosci Remote Sens 2007;45:4144-57. doi:10.1109/TGRS.2007.906107.
* [22] Chaudhuri D, Kushwaha NK, Samal A, Agarwal RC. Automatic Building Detection From High-Resolution Satellite Images Based on Morphology and Internal Gray Variance. Sel Top Appl Earth Obs Remote Sensing, IEEE J 2015;PP:1-13. doi:10.1109/JSTARS.2015.2425655.
* [23] Youssef MMS, Mallet C, Chehata N, Le Bris A, Gressin A. Combining top-down and bottom-up approaches for building detection in a single very high resolution satellite image. Geosci. Remote Sens. Symp. (IGARSS), 2014 IEEE Int., 2014, p. 4820-3. doi:10.1109/IGARSS.2014.6947573.
* [24] Jabari S, Zhang Y, Suliman A. Stereo-based building detection in very high resolution satellite imagery using IHS color system. Geosci. Remote Sens. Symp. (IGARSS), 2014 IEEE Int., 2014, p. 2301-4. doi:10.1109/IGARSS.2014.6946930.
* [25] Ghaffarian S, Ghaffarian S. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images. ISPRS J Photogramm Remote Sens 2014;97:152-9. doi:10.1016/j.isprsjprs.2014.08.017.
* [26] Wang M, Yuan S, Pan J. Building detection in high resolution satellite urban image using segmentation, corner detection combined with adaptive windowed Hough Transform. Geosci. Remote Sens. Symp. (IGARSS), 2013 IEEE Int., 2013, p. 508-11. doi:10.1109/IGARSS.2013.6721204.
* [27] Izadi M, Saeedi P. Automatic Building Detection in Aerial Images Using a Hierarchical Feature Based Image Segmentation. 2010 20th Int. Conf. Pattern Recognit., 2010, p. 472-5. doi:10.1109/ICPR.2010.123.
* [28] Nosrati MS, Saeedi P. A novel approach for polygonal rooftop detection in satellite/aerial imageries. 2009 16th IEEE Int. Conf. Image Process., 2009, p. 1709-12. doi:10.1109/ICIP.2009.5413641.
* [29] Khoshelham K, Nardinocchi C, Frontoni E, Mancini A, Zingaretti P. Performance evaluation of automated approaches to building detection in multi-source aerial data. ISPRS J Photogramm Remote Sens 2010;65:123-33. doi:10.1016/j.isprsjprs.2009.09.005.
* [30] Holt AC, Seto EYW, Rivard T, Gong P. Object-based Detection and Classification of Vehicles from High-resolution Aerial Photography. Photogrammetric Engineering and Remote Sensing 2009;871-80.
* [31] Madhogaria S, Baggenstoss P, Schikora M, Koch W, Cremers D. Car detection by fusion of HOG and causal MRF. IEEE Trans Aerosp Electron Syst 2015;51:575-90. doi:10.1109/TAES.2014.2141.
* [32] Kembhavi A, Harwood D, Davis LS. Vehicle detection using partial least squares. IEEE Trans Pattern Anal Mach Intell 2011;33:1250-65. doi:10.1109/TPAMI.2010.182.
* [33] Chen X, Xiang S, Liu C-L, Pan C-H. Vehicle Detection in Satellite Images by Parallel Deep Convolutional Neural Networks. 2013 2nd IAPR Asian Conf Pattern Recognit 2013:181-5. doi:10.1109/ACPR.2013.33.
* [34] Gislason PO, Benediktsson JA, Sveinsson JR. Random forests for land cover classification. Pattern Recognit Lett 2006;27:294-300. doi:10.1016/j.patrec.2005.08.011.
* [35] Tokarczyk P, Montoya J, Schindler K. An Evaluation of Feature Learning Methods for High Resolution Image Classification. ISPRS Ann Photogramm Remote Sens Spat Inf Sci 2012;I-3:389-94. doi:10.5194/isprsannals-I-3-389-2012.
* [36] Mokhtarzade M, Valadan Zoej MJ. Road detection from high-resolution satellite images using artificial neural networks. Int J Appl Earth Obs Geoinf 2007;9:32-40. doi:10.1016/j.jag.2006.05.001.
* [37] Benedek C, Descombes X, Zerubia J.
Building development monitoring in multitemporal remotely sensed image pairs with stochastic birth-death dynamics. Pattern Anal Mach Intell IEEE Trans 2012;34:33-50.
* [38] Overhead imagery research data set (OIRDS) - an annotated data library & tools to aid in the development of computer vision algorithms. Proc - Appl Imag Pattern Recognit Work 2009;3-10. doi:10.1109/AIPR.2009.5466304.
* [39] Malof JM, Hou R, Collins LM, Bradbury K, Newell R. Automatic solar photovoltaic panel detection in satellite imagery. Int. Conf. Renew. Energy Res. Appl., IEEE; 2015, p. 1428-31.
* [40] Viola P, Jones MJ. Robust real-time face detection. Int J Comput Vis 2004;57:137-54.
* [41] Sharp T. Implementing decision trees and forests on a GPU. Lect Notes Comput Sci (including Subser Lect Notes Artif Intell Lect Notes Bioinformatics) 2008;5305 LNCS:595-608. doi:10.1007/978-3-540-88693-8_44.
* [42] Lepetit V, Fua P. Keypoint recognition using randomized trees. IEEE Trans Pattern Anal Mach Intell 2006.
* [43] Lempitsky V, Verhoek M, Noble JA, Blake A. Random forest classification for automatic delineation of myocardium in real-time 3D echocardiography. Lect Notes Comput Sci (including Subser Lect Notes Bioinformatics) 2009;5528:447-56. doi:10.1007/978-3-642-01932-6_48.
* [44] Fanelli G, Gall J, Van Gool L. Real time head pose estimation with random regression forests. IEEE Conf Comput Vis Pattern Recognit 2011;617-24. doi:10.1109/CVPR.2011.5995458.
* [45] Shotton J, Fitzgibbon A, Cook M, Sharp T, Finocchio M, Moore R, et al. Real-time human pose recognition in parts from single depth images. CVPR 2011:1297-304. doi:10.1007/978-3-642-28661-2_5.
* [46] Breiman L, Friedman J, Stone CJ, Olshen RA. Classification and regression trees. CRC Press; 1984.
* [47] Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern 1979;9:62-6.
* [48] Forsyth D, Ponce J. Computer vision: a modern approach. 2nd ed. Pearson; 2012.
* [49] Bishop CM. Pattern recognition and machine learning. vol. 1. Springer New York; 2006.
* [50] Cheriyadat A. Unsupervised Feature Learning for Aerial Scene Classification. Geosci Remote Sensing, IEEE Trans 2014;52:439-51. doi:10.1109/TGRS.2013.2241444.
* [51] Manno-Kovacs A, Ok AO. Building Detection From Monocular VHR Images by Integrated Urban Area Knowledge. IEEE Geosci Remote Sens Lett 2015;12:2140-4. doi:10.1109/LGRS.2015.2452962.
* [52] Levandowsky M, Winter D. Distance between Sets. Nature 1971;234:34-5. doi:10.1038/234034a0.
* [53] Ok AO, Senaras C, Yuksel B. Automated detection of arbitrarily shaped buildings in complex environments from monocular VHR optical satellite imagery. IEEE Trans Geosci Remote Sens 2013;51:1701-17. doi:10.1109/TGRS.2012.2207123.

## Supplemental Materials

Training Image Tags: 11ska505665 11ska580710 11ska475635 11ska580860 11ska475875 11ska565845 11ska565905 11ska490860 11ska490860 11ska4910725 11ska490605 11ska490615 11ska4910740 11ska580875 11ska655725 11ska595860 11ska595860 11ska460890 11ska655695 11ska640605 11ska640605 11ska640605 11ska580605 11ska595665 11ska5595665 11ska5505755 11ska475650 11ska595755 11ska625755 11ska490740 11ska565755 11ska520725 11ska595785 11ska580755 11ska445785 11ska595800 11ska625710 11ska520830 11ska640800 11ska535785 11ska430905 11ska460755 11ska505695 11ska565770
_Abstract_—The quantity of small-scale solar photovoltaic (PV) arrays in the United States has grown rapidly in recent years. As a result, there is substantial interest in high quality information about the quantity, power capacity, and energy generated by such arrays, including at a high spatial resolution (e.g., counties, cities, or even smaller regions). Unfortunately, existing methods for obtaining this information, such as surveys and utility interconnection filings, are limited in their completeness and spatial resolution. This work presents a computer algorithm that automatically detects PV panels using very high resolution color satellite imagery. The approach potentially offers a fast, scalable method for obtaining accurate information on PV array location and size, and at much higher spatial resolutions than are currently available. The method is validated using a very large (135 km\({}^{2}\)) collection of publicly available [1] aerial imagery, with over 2,700 human annotated PV array locations. The results demonstrate the algorithm is highly effective on a per-pixel basis. It is likewise effective at object-level PV array detection, but with significant potential for improvement in estimating the precise shape/size of the PV arrays. These results are the first of their kind for the detection of solar PV in aerial imagery, demonstrating the feasibility of the approach and establishing a baseline performance for future investigations.

_Index terms_—solar energy, detection, object recognition, satellite imagery, photovoltaic, energy information.
# Toward an accurate equation of state and B1-B2 phase boundary for magnesium oxide to TPa pressures and eV temperatures

Shuai Zhang, [email protected], Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623, USA; Reetam Paul, Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623, USA, and Lawrence Livermore National Laboratory, Livermore, California 94550, USA; S. X. Hu, Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623, USA; Miguel A. Morales, [email protected], Lawrence Livermore National Laboratory, Livermore, California 94550, USA, and Center for Computational Quantum Physics, Flatiron Institute, New York, New York 10010, USA

November 3, 2021

## I Introduction

Materials structures and behaviors under very high pressure (\(\sim\)100 GPa to 1 TPa) are an important topic in high-energy-density sciences and earth and planetary sciences. At such conditions, materials are strongly compressed, which can lead to transitions into phases with different structures (by lowering the thermodynamic energies) and chemistry (by modifying the bonding). The past two decades have seen advances in computing and compression technologies that have added important knowledge to this subject by unveiling new structures (e.g., MgSiO\({}_{3}\) post-perovskite [1, 2, 3]) or chemical stoichiometry (such as H\({}_{4}\)O [4] and Xe-FeO\({}_{2}\) [5]) with notable changes to properties of chemical systems (particularly the insulator-metal transition [6, 7] and high-temperature superconductivity [8]).

However, accurate determination of phase transitions at such extreme conditions remains challenging. Experimentally, static compression experiments based on diamond-anvil cells (DACs) [9, 10] are limited by sample sizes and diagnostics, while dynamic compression experiments are limited by the time scale and regime of the thermodynamic paths that can be achieved [11, 12, 13]. Theoretically, state-of-the-art investigations often rely on calculations based on Kohn-Sham density functional theory (DFT) [14]. Despite the tremendous success of DFT in predicting many structures and properties to moderately high pressures, errors associated with the single-particle approximation and exchange-correlation (xc) functionals render DFT predictions dubious where precise experimental constraints do not exist.

Recent studies have shown quantum Monte Carlo (QMC) methods to be able to benchmark solid-state equation of state (EOS) and phase transitions [15; 16; 17] by directly solving the many-electron Schrödinger equation. Auxiliary-field quantum Monte Carlo (AFQMC) is one such QMC method that has shown great promise with flexibility and scalability for simulating both real and model many-body systems with high accuracy [18; 19; 20; 21; 22; 23; 24; 15]. In this work we apply the phaseless AFQMC [25; 26] method, in combination with optimized periodic Gaussian basis sets [15; 18; 27], to investigate the high-pressure EOS and phase transition in solid-state materials by using magnesium oxide (MgO) as an example. This provides theoretically accurate cold-curve results for MgO, against which we benchmark various predictions by DFT calculations. We then use DFT-based lattice dynamics and molecular dynamics with one of the best-performing xc functionals to calculate the thermal contributions to the EOS. Finally, we combine the thermal and cold-curve results to determine the finite-temperature EOS and B1-B2 phase boundary for MgO to eV temperatures.
MgO is a prototype rock-forming mineral in planets, a pressure calibrator in DAC experiments, and a window material in shock experiments. From ambient pressure up to about 500 GPa, MgO is stabilized in the sodium chloride (NaCl, or B1) structure. Beyond that, it transforms into the cesium chloride (CsCl, or B2) structure, which is characterized by higher coordination and lower viscosity that may be associated with mantle convection and layering in super-Earths different from those in the Earth. A benchmark of the EOS and phase transition of MgO would be important for modeling the interior dynamics and evolution of super-Earths and for testing the degree of validity of various theoretical EOS models at extreme conditions, as well as for elucidating materials physics at extreme conditions by answering such questions as: is thermodynamic equilibrium reached in the experiments, or to what degree are the phase transformations subject to kinetic effects, chemistry/composition changes, or a combination of them, leading to the various observations in experiments? The B1-B2 transition in MgO has been studied for over 40 years but remains experimentally uncertain [28; 12], and there is a discrepancy of \(\sim\)20% between state-of-the-art DFT calculations [29]. In addition to the debates over phase relations near the triple point near 500 GPa [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40], recent double-shock experiments also suggest an inconsistency between theoretical predictions and experiments for the melting curve at TPa pressures [41].

The main goal of this work is to provide an accurate EOS and phase diagram for MgO to TPa pressures and eV temperatures by jointly combining an accurate many-body electronic structure approach (AFQMC) and finite-temperature quantum molecular dynamics (QMD) based on DFT to fully address the various details of physics (electronic correlation, anharmonic vibration, EOS models, finite sizes of the simulation cell, and Born effective charge) that can affect the thermal EOS results. This paper is organized as follows: Section II outlines the methodologies used in this study, including those for zero-K and finite-temperature calculations; Sec. III presents the cold curve, thermal EOS, and phase boundary results for MgO, and discusses the errors and their sources; finally, Sec. IV concludes the paper.

## II Methods

In the following, we present descriptions and settings of the computational approaches used in this study, including AFQMC, Hartree-Fock (HF), and DFT for the zero-K internal-energy calculations, and quantum molecular dynamics (QMD) and thermodynamic integration (TDI) for the thermodynamic free energies at nonzero temperatures.

### Zero-K static lattice calculations

For the internal energy-volume \(E(V)\) relations at 0 K (often called the "cold curve"), we perform static lattice calculations for MgO in the B1 and B2 structures at a series of volumes by using a combination of AFQMC, HF, and DFT with various xc functionals.

AFQMC is a zero-temperature quantum Monte Carlo approach. It is based on the stochastic propagation of wavefunctions in imaginary time using an ensemble of walkers in the space of non-orthogonal Slater determinants. It uses the Hubbard-Stratonovich transformation [42] to rewrite the expensive two-body part of the propagator as an integral over auxiliary fields coupled to one-body propagators, which are then integrated with Monte Carlo techniques.
Like other QMC methods, AFQMC also faces an obstacle for fermionic systems, namely the "phase (or sign) problem," which arises here because the auxiliary fields introduced by the Coulomb interaction are complex. Control of the sign problem can be achieved using constraints based on trial wavefunctions, like the fixed-node approximation in diffusion MC (DMC) [43] or, in the case of AFQMC, the constrained-path [44] and phaseless [25] approximations. When combined with appropriate trial wavefunctions, these methods have been shown to provide benchmark-quality results across a range of electronic-structure problems (atoms, molecules, solids, and correlated model Hamiltonians), including cases with strong electronic correlations [20; 22; 45] that are known to be challenging for alternative approaches such as Kohn-Sham DFT. Recent advances in the development of accurate and flexible trial wavefunctions include the use of multi-determinant expansions [46; 47; 48] and generalized HF [49; 50]. In this work, we use the phaseless AFQMC (ph-AFQMC) method [25; 26] to calculate the ground-state properties of bulk MgO. In our ph-AFQMC calculations, the trial wavefunction is constructed from a Slater determinant of HF orbitals (the HF solution for each MgO structure at every density), which was found to yield accurate energies. We use QUANTUM ESPRESSO (QE) [51; 52] for the calculation of the trial wavefunction and for the generation of the one- and two-electron integrals. The modified Cholesky decomposition [53; 54; 55; 56] is used to avoid the \\(\\mathcal{O}(M^{4})\\) cost of storing the two-electron repulsion integrals. All QE simulations were performed using optimized norm-conserving Vanderbilt (ONCV) pseudopotentials [57], constructed with the Perdew-Burke-Ernzerhof (PBE) [58] xc functional. We used the recently developed optimized Gaussian basis sets [27] in all AFQMC calculations. The calculations were based on primitive unit cells and performed using \\(\\Gamma\\)-centered 2\\(\\times\\)2\\(\\times\\)2, 3\\(\\times\\)3\\(\\times\\)3, and 4\\(\\times\\)4\\(\\times\\)4 \\(k\\) grids to extrapolate to the thermodynamic limit at each density. Results from multiple basis sets were used, in combination with corrections based on periodic second-order Møller-Plesset perturbation theory (MP2) calculations, to obtain results extrapolated to the complete basis set (CBS) limit (see Appendix A for more details). This was shown to be a successful approach to removing basis-set and finite-size errors in previous studies, to which we refer readers for additional details [27]. All AFQMC calculations were performed using the open-source QMCPACK software package [59]. We used \\(\\sim\\)1000 walkers and a time step of 0.005 Ha\\({}^{-1}\\), which we found sufficient to control any potential population and finite time-step biases, respectively. Kohn-Sham DFT [14] follows the Hohenberg-Kohn theorem [60] and simplifies the many-body problem into a single-particle mean-field equation that can be solved self-consistently via iteration over the electron density. The complicated electron-electron interactions are absorbed into the xc functional term. Since the accurate QMC solution for the uniform electron gas [43], many forms of xc functionals have been developed for various applications; they form a "Jacob's ladder" whose successive rungs [local density approximation (LDA), generalized gradient approximation (GGA), meta-GGA, etc.] approach chemical accuracy at the expense of increasing computational cost.
Our DFT calculations of the cold curve are performed with the Vienna _Ab initio_ Simulation Package (VASP) [61]. In our VASP simulations, we use a two-atom unit cell, a \\(\\Gamma\\)-centered 16\\(\\times\\)16\\(\\times\\)16 Monkhorst-Pack \\(k\\) mesh, a plane-wave basis with a cutoff of 1200 eV, and a convergence criterion of \\(10^{-7}\\) eV/cell for the self-consistent iteration. The simulations use the projector augmented wave (PAW) [62] method with pseudopotentials labeled with "sv_GW" and "h", with 1.75- and 1.1-Bohr core radii, treating the outermost 10 and 6 electrons as valence for Mg and O, respectively. We consider five different xc functionals: LDA [63], PBE [58], PBEsol [64], the strongly constrained and appropriately normed meta-GGA (SCAN) [65], and the Heyd-Scuseria-Ernzerhof-type HF/DFT hybrid functional (HSE06) [66]. The DFT calculations also produce pressures, which are not directly available from our AFQMC calculations because of the difficulty of calculating forces and stresses in QMC. For consistency in data comparison between different approaches and for the determination of the B1-B2 transition, we fitted the \\(E(V)\\) data to EOS models that are widely used in high-pressure studies. It has long been known that high-order elastic moduli may be required to parameterize a material's EOS under extreme (e.g., near twofold) compression [67]. Therefore, we have considered multiple EOS models and cross-checked them with a numerical (spline-fitting) approach to ensure the accuracy of the EOS and phase-transition results. We have considered two different analytical EOS models: one is the Vinet model [68], which follows \\[E(V)=E(V_{0})+\\int_{V}^{V_{0}}P(V)\\mathrm{d}V, \\tag{1}\\] with \\[P(V)=3B_{0}\\frac{1-x}{x^{2}}e^{1.5(B_{0}^{\\prime}-1)(1-x)}, \\tag{2}\\] where \\(x=(\\frac{V}{V_{0}})^{1/3}\\), and \\(V_{0}\\), \\(B_{0}\\), and \\(B_{0}^{\\prime}\\) are, respectively, the volume, the bulk modulus, and the first pressure derivative of the bulk modulus at zero pressure; the other is the Birch-Murnaghan model [69] to the fourth order, which follows \\[E(V)=E_{0}+9B_{0}V_{0}(f^{2}/2+a_{1}f^{3}/3+a_{2}f^{4}/4), \\tag{3}\\] where \\(f=[(\\frac{V_{0}}{V})^{2/3}-1]/2\\) is the Eulerian finite strain, \\(a_{1}=1.5(B_{0}^{\\prime}-4)\\), and \\(a_{2}\\) is a fourth fitting parameter. We have also tested the third-order Birch-Murnaghan model, which does not include the \\(a_{2}\\) term in Eq. 3, for comparison with the other models in selected cases (see Appendix B). ### Finite-temperature thermodynamic calculations Thermodynamic calculations at nonzero temperatures are performed in two different ways: one is from lattice dynamics by using the quasiharmonic approximation (QHA), and the other is based on QMD. Within QHA, lattice vibrations are considered to be dependent on volume but independent of temperature. In practice, one can use the small-displacement approach or density functional perturbation theory (DFPT) to calculate phonons at 0 K and then compute the thermodynamic energies analytically from quantum statistics. Despite its wide usage and success in giving improved thermodynamic properties over the fully harmonic approximation for materials at relatively low temperatures, the applicability of QHA is questionable at high temperatures and for systems with light elements at low temperatures. In comparison, QMD simulations significantly improve the description of lattice dynamics by naturally including all anharmonic vibrations.
By employing a TDI approach, the free energies can also be accurately calculated, which makes it possible to chart the phase-transition boundaries at finite temperatures. We use the PHONOPY program [70; 71] and VASP to calculate the phonons of MgO at 0 K with DFPT and under QHA. We have tested the effects of including the Born effective charge (which is necessary to correctly account for the splitting between longitudinal and transverse optical modes) and of different xc functionals on the phonon band structures and vibrational energies (see Appendices C and D). The calculation is performed at a series of volumes \\(V\\). This allows the ion thermal contribution \\(F_{\\rm i-th}(V,T)\\) at any temperature \\(T\\) to be estimated and added to the free energy \\(F(V,T)\\); within QHA it is given by \\[F_{\\rm QHA}(V,T)=E_{\\rm QHA}(V,T)-TS_{\\rm QHA}(V,T)=k_{\\rm B}T\\sum_{\\mathbf{q},s}\\ln\\left[2\\sinh\\left(\\hbar\\omega_{\\mathbf{q},s}/2k_{\\rm B}T\\right)\\right], \\tag{4}\\] where \\(E_{\\rm QHA}(V,T)=\\sum_{\\mathbf{q},s}\\left(\\tilde{n}+1/2\\right)\\hbar\\omega_{\\mathbf{q},s}\\) is the vibrational internal energy, \\(\\tilde{n}=1/\\left(e^{\\hbar\\omega_{\\mathbf{q},s}/k_{\\rm B}T}-1\\right)\\) is the Bose-Einstein occupation of the phonon mode with wavevector \\(\\mathbf{q}\\), branch index \\(s\\), and frequency \\(\\omega_{\\mathbf{q},s}\\), and \\(S_{\\rm QHA}(V,T)=k_{\\rm B}\\sum_{\\mathbf{q},s}\\left[(\\tilde{n}+1)\\ln\\left(\\tilde{n}+1\\right)-\\tilde{n}\\ln\\tilde{n}\\right]\\) is the vibrational entropy. Each calculation employed a 54-atom supercell and was performed using a \\(\\Gamma\\)-centered 4\\(\\times\\)4\\(\\times\\)4 \\(k\\) mesh (for both the B1 and B2 phases). In the QMD calculations, we use the Mermin-Kohn-Sham DFT approach [72] with the PBEsol xc functional. Ion temperatures are controlled by using the Nosé-Hoover thermostat [73], while electron temperatures are defined by the Fermi-Dirac distribution via a smearing approach. \\(NVT\\) ensembles are generated that consist of 4000 to 10,000 MD steps with a time step of 0.5 fs. The Mg_sv_GW and O_h potentials are used, the same as in the 0-K DFT calculations. The plane-wave cutoff energy is 1000 eV. Sufficiently large cells in combination with fine \\(k\\) meshes are required to ensure the accuracy of the DFT calculations (see Appendix E). In our simulations, we use cubic cells with 64 and 54 atoms that are sampled by a special \\(k\\) point \\((1/4,1/4,1/4)\\) and a \\(\\Gamma\\)-centered \\(2\\times 2\\times 2\\) \\(k\\) mesh for the B1 and B2 phases, respectively, in order to obtain results reasonably close to the converged setting while keeping the computational cost relatively low. Structure snapshots have been uniformly sampled from each QMD trajectory and recalculated with denser \\(k\\) meshes of 2\\(\\times\\)2\\(\\times\\)2 (for B1) and 3\\(\\times\\)3\\(\\times\\)3 (for B2) to improve the accuracy of the thermal EOS and its volume dependence and to reduce the error in the calculation of the phase transition. The QMD calculations are performed at temperatures between 500 and 12,000 K, in steps of 500 to 1500 K, with more calculations at low to intermediate temperatures to improve the robustness of the TDI for the anharmonic free-energy calculations. A large number of electronic bands (360 for B1 and 320 for B2) is included to ensure that the highest-energy states remain unoccupied. The EOS obtained from the QMD or QHA calculations produces \\(E(V,T)\\) and \\(P(V,T)\\) data that allow the calculation of the Hugoniot.
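As a concrete illustration of Eq. 4, the following is a minimal Python sketch that evaluates the quasiharmonic vibrational free energy from a set of phonon frequencies (the frequencies and weights are placeholders, not MgO's actual spectrum as computed by PHONOPY):

```python
# Minimal sketch of Eq. 4: F_QHA(T) = kB*T * sum_{q,s} ln[2*sinh(hbar*w/(2*kB*T))].
# At T -> 0 this reduces to the zero-point energy sum_{q,s} hbar*w/2.
import numpy as np

HBAR = 1.054571817e-34  # J s
KB = 1.380649e-23       # J/K
EV = 1.602176634e-19    # J per eV

def f_qha(omega, T, weights=None):
    """Vibrational free energy (J) from angular frequencies omega (rad/s),
    optionally weighted by q-point weights."""
    omega = np.asarray(omega, dtype=float)
    w = np.ones_like(omega) if weights is None else np.asarray(weights, dtype=float)
    x = HBAR * omega / (2.0 * KB * T)
    return KB * T * np.sum(w * np.log(2.0 * np.sinh(x)))

# Placeholder spectrum: a few modes between 5 and 20 THz
omega = 2.0 * np.pi * np.array([5e12, 10e12, 15e12, 20e12])
for T in (300.0, 1000.0, 3000.0):
    print(f"T = {T:6.0f} K  F_QHA = {f_qha(omega, T) / EV:8.4f} eV")
```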
The analysis of the QMD EOS data follows the procedure that was introduced in detail in our recent paper on liquid SiO\\({}_{2}\\) [74]. The Hugoniot is calculated by solving the Rankine-Hugoniot equation using the numerically interpolated EOS. The different theoretical predictions are then compared to the experimentally measured Hugoniot to benchmark the performance of the computational approaches and the xc functionals in the corresponding thermodynamic regime. With the assistance of QHA and QMD, the entire ion thermal contribution to the free energy can be calculated by \\[F_{\\rm i-th}(V,T)=F_{\\rm QHA}(V,T)+F_{\\rm anharm}(V,T). \\tag{5}\\] In Eq. 5, \\[F_{\\rm anharm}(V,T)=-T\\int_{T_{\\rm ref}}^{T}\\frac{E_{\\rm anharm}(V,\\mathcal{T})}{\\mathcal{T}^{2}}\\mathrm{d}\\mathcal{T} \\tag{6}\\] denotes the anharmonic term as calculated by TDI, where \\(E_{\\rm anharm}=E_{\\rm QMD}-E_{\\rm cold+QHA}\\), \\(E_{\\rm QMD}\\) is the internal energy from the QMD simulations, and \\(T_{\\rm ref}\\) is a reference temperature. We note that QMD misses the quantum zero-point motion of the ions while QHA does not. This leads to an increasing discrepancy between the QMD and QHA internal energies as the temperature approaches zero: the heat capacity \\(C_{\\rm V}\\) of the real system decreases from \\(\\sim 3k_{\\rm B}\\)/atom toward zero as fewer lattice vibration modes can be excited (a behavior captured by QHA), whereas classical QMD gives \\(C_{\\rm V}=3k_{\\rm B}\\). In order to eliminate the resultant artificial exaggeration of the integrand, we have replaced \\(E_{\\rm cold+QHA}\\) with \\(E_{\\rm cold}+3k_{\\rm B}T\\) in our calculations of Eq. 6 (see Appendix F). This effectively treats the ions classically in the evaluation of \\(F_{\\rm anharm}\\) at temperatures higher than \\(T_{\\rm ref}\\), which we believe is a reasonable approximation for phonon interactions (the anharmonic term). Our calculated results for \\(E_{\\rm anharm}\\) as a function of temperature are then fitted to high-order polynomials (sixth order for B1 and eighth order for B2) to compute the numerical integration in Eq. 6. TDI also requires choosing a proper reference temperature \\(T_{\\rm ref}\\). In this work, we choose a low \\(T_{\\rm ref}\\), following the idea that QHA is valid, and that other anharmonic contributions (beyond the volume-dependent vibrational changes already included in QHA) are zero, for MgO at low temperatures. For consistency among different isochores, we make the choice of \\(T_{\\rm ref}\\) such that the heat capacity is 10% of \\(3k_{\\rm B}\\). The corresponding \\(T_{\\rm ref}\\) is 100 to 200 K. We have also tested other choices of \\(T_{\\rm ref}\\) and examined their effects on \\(F_{\\rm anharm}\\) and the B1-B2 phase boundary. The results are summarized in Appendix F. We note that when analyzing the QMD trajectories to calculate the EOS, we discarded the first 20% of each MD trajectory to ensure that the reported EOS represents thermodynamic equilibrium. Ion kinetic contributions to the EOS are manually included by following the ideal-gas formulas (i.e., internal energy \\(E_{\\rm ion\\,\\,kin.}=3Nk_{\\rm B}T/2\\) and pressure \\(P_{\\rm ion\\,\\,kin.}=Nk_{\\rm B}T/V\\), where \\(N\\) is the total number of atoms in the simulation cell and \\(k_{\\rm B}\\) is the Boltzmann constant).
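As a concrete illustration of Eqs. 5 and 6, the following minimal Python sketch fits a toy \\(E_{\\rm anharm}(T)\\) along a single isochore to a polynomial and performs the thermodynamic integration numerically (all values are placeholders, and a low-order polynomial is used for brevity in place of the sixth/eighth-order fits described above):

```python
# Minimal sketch of TDI (Eq. 6): F_anharm(T) = -T * int_{Tref}^{T} E_anharm/T'^2 dT',
# with E_anharm = E_QMD - E_cold - 3*kB*T (classical-crystal reference).
import numpy as np

KB = 8.617333262e-5  # eV/K

T_data = np.array([500., 1500., 3000., 4500., 6000., 7500., 9000.])  # K
E_qmd = 3 * KB * T_data + 1.0e-9 * T_data**2   # toy QMD internal energy, eV/atom
E_cold = 0.0                                   # toy cold-curve energy at this volume
E_anh = E_qmd - E_cold - 3 * KB * T_data       # anharmonic part only

poly = np.poly1d(np.polyfit(T_data, E_anh, deg=3))  # paper: 6th (B1) / 8th (B2) order

def f_anharm(T, T_ref=200.0, n=4000):
    Tgrid = np.linspace(T_ref, T, n)
    return -T * np.trapz(poly(Tgrid) / Tgrid**2, Tgrid)

for T in (3000.0, 6000.0, 9000.0):
    print(f"T = {T:6.0f} K  F_anharm = {f_anharm(T):+.4f} eV/atom")
```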
Although MgO is an insulating solid with a wide electronic band gap at all conditions considered in this study, we have still carefully included electron thermal effects in the free-energy calculations [see Fig. 14(b)]. Following the idea of Vinet [68], we consider the EOS at each temperature and fit the Helmholtz free energy-volume data \\(F(V)\\) to various EOS models, including the Vinet and fourth-order Birch-Murnaghan models introduced in Sec. II.1, as well as a numerical approach using cubic splines. The B1-B2 transition pressures and the volumes of the two phases upon transition can then be determined from the common tangents of the \\(F(V)\\) curves of the two phases (see Appendix B). ## III Results ### Cold-curve equation of state The cold-curve EOS of B1 and B2 MgO based on static-lattice HF and AFQMC calculations are listed in Table 1. The data for each phase at every volume are based on calculations using basis sets and simulation cells of finite sizes, which have then been extrapolated to the thermodynamic and CBS limits. The results show that, for both the B1 and B2 phases, the energy minimum is located at 17.0 to 18.7 Å\\({}^{3}\\) when only the exchange interactions of the electrons are taken into account (\\(E_{\\rm HF}\\)); the correlation energy is about \\(-0.60\\) Ha (1 Ha = 27.211386 eV) above 10.5 Å\\({}^{3}\\) and decreases to \\(-0.63\\) Ha as the cell volume shrinks to \\(\\sim\\)7 Å\\({}^{3}\\); and the standard errors of the AFQMC data are small (\\(\\sim\\)0.1 mHa). The energy-volume curves \\(E(V)\\) are obtained by fitting the AFQMC static lattice data to EOS models, which gives rise to the equilibrium volume \\(V_{0}\\) and bulk modulus \\(B_{0}\\) of each phase. The results are summarized in Table 2 and compared to those from HF and DFT simulations in order to investigate the importance of the xc functionals. We then calculated the B1-B2 transition pressure \\(P_{\\rm tr}\\) and the volumes of the two phases upon transition \\(V_{\\rm tr}\\) from the common tangent of the \\(E(V)\\) curves; a minimal numerical sketch of this construction is given below. This is equivalent to another common approach for determining the transition pressure using the enthalpy-pressure relation (see Appendix B). Our results show that DFT predictions vary by up to \\(\\sim\\)7% in \\(V_{0}\\), \\(\\sim\\)15% in \\(B_{0}\\), \\(\\sim\\)7% in \\(P_{\\rm tr}\\), and \\(\\sim\\)10% in the volume change upon the B1-B2 transition, due to the usage of different xc functionals. To directly compare the theoretical EOS with DAC experiments, corrections to the static-lattice results are needed to account for the differences due to lattice vibration and thermal contributions. We have added ion thermal contributions to the cold-curve EOS via lattice vibration calculations under QHA, which is generally considered a good approximation for MgO at room temperature. The cold-curve EOS is re-evaluated by fitting the corrected 300-K data for each phase to the EOS models. The equilibrium volume and the pressure-volume results from theoretical calculations (AFQMC, DFT, and HF) are shown in Fig. 1 and compared to experimental results. It shows remarkable agreement between AFQMC and experimental results for both the equilibrium volume and the compression curve. In contrast, the HF and DFT results scatter around the experimental values and vary significantly. Specifically, the DFT results exhibit a strong dependence on the choice of the xc functional, with HSE06, SCAN, and PBEsol performing better than PBE and LDA when compared with the experimental data.
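As a concrete numerical illustration of the common-tangent construction referenced above, the following minimal Python sketch solves \\(F_{1}^{\\prime}(V_{1})=F_{2}^{\\prime}(V_{2})=[F_{2}(V_{2})-F_{1}(V_{1})]/(V_{2}-V_{1})\\) for toy quadratic free-energy curves standing in for the fitted EOS models (all numbers are hypothetical):

```python
# Minimal sketch of the common-tangent determination of a transition:
# find V1, V2 such that the two F(V) curves share a tangent line; P_tr = -dF/dV.
import numpy as np
from scipy.optimize import fsolve

def F1(V):  # toy free energy of phase 1 (eV vs A^3 per formula unit)
    return 0.5 * (V - 9.5) ** 2
def F2(V):  # toy free energy of phase 2, shifted in volume and energy
    return 0.6 * (V - 8.8) ** 2 + 0.4

def dF(F, V, h=1e-6):
    """Central-difference derivative dF/dV."""
    return (F(V + h) - F(V - h)) / (2.0 * h)

def tangent_conditions(x):
    V1, V2 = x
    s1, s2 = dF(F1, V1), dF(F2, V2)
    chord = (F2(V2) - F1(V1)) / (V2 - V1)
    return [s1 - s2, s1 - chord]

V1, V2 = fsolve(tangent_conditions, x0=[9.0, 8.3])
P_tr = -dF(F1, V1)  # eV/A^3; multiply by 160.2177 to convert to GPa
print(f"V_tr(1) = {V1:.3f} A^3, V_tr(2) = {V2:.3f} A^3, P_tr = {P_tr:.3f} eV/A^3")
```

With the realistic fitted \\(F(V)\\) models in place of the toy parabolas, the same two conditions yield \\(P_{\\rm tr}\\) and the transition volumes of the kind reported in Table 2.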
\\begin{table} \\begin{tabular}{c c c c} \\(V\\) (Å\\({}^{3}\\)) & \\(E_{\\rm HF}\\) (Ha) & \\(E_{\\rm correlation}\\) (Ha) & \\(\\sigma\\) (Ha) \\\\ \\hline \\multicolumn{4}{c}{[B1 and B2 data rows not recovered from the source]} \\\\ \\end{tabular} \\end{table} Table 1: Hartree–Fock (\\(E_{\\rm HF}\\)) and correlation (\\(E_{\\rm correlation}=E_{\\rm AFQMC}-E_{\\rm HF}\\)) energies of MgO in the B1 and B2 phases at a series of volumes. The data are in the thermodynamic and CBS limits. \\(\\sigma\\) denotes the standard error of the AFQMC energy. Numbers are in units per Mg-O pair.

\\begin{table} \\begin{tabular}{l c c c c c c c} & \\(V_{0}^{\\rm B1}\\) (Å\\({}^{3}\\)) & \\(B_{0}^{\\rm B1}\\) (GPa) & \\(V_{0}^{\\rm B2}\\) (Å\\({}^{3}\\)) & \\(B_{0}^{\\rm B2}\\) (GPa) & \\(V_{\\rm tr}^{\\rm B1}\\) (Å\\({}^{3}\\)) & \\(V_{\\rm tr}^{\\rm B2}\\) (Å\\({}^{3}\\)) & \\(P_{\\rm tr}\\) (GPa) \\\\ \\hline LDA & 18.054 & 172.1 & 17.676 & 158.8 & 8.849 & 8.445 & 531.7 \\\\ PBE & 19.266 & 149.0 & 19.000 & 133.7 & 9.076 & 8.669 & 523.4 \\\\ PBEsol & 18.737 & 157.4 & 18.366 & 145.1 & 9.013 & 8.609 & 517.5 \\\\ SCAN & 18.469 & 166.2 & 19.720 & 75.3 & 8.914 & 8.483 & 546.5 \\\\ SCAN\\({}^{\\rm a}\\) & 18.474 & 150.8 & 16.910 & 177.9 & 8.918 & 8.470 & 549.8 \\\\ HSE06 & 18.564 & 166.7 & 18.130 & 153.9 & 8.983 & 8.568 & 530.6 \\\\ HF\\({}^{\\rm b}\\) & 18.565 & 179.1 & 17.795 & 176.4 & 9.141 & 8.669 & 535.1 \\\\ AFQMC\\({}^{\\rm b}\\) & 18.407 & 175.7 & 17.940 & 154.6 & 9.201 & 8.739 & 499.2 \\\\ DMC\\({}^{\\rm c}\\) & 18.788 \\(\\pm\\) 0.093 & 153.8 \\(\\pm\\) 4.5 & – & – & – & – & 493 \\(\\pm\\) 8 (503 \\(\\pm\\) 8\\({}^{\\rm d}\\)) \\\\ Expt.\\({}^{\\rm e}\\) & – & – & – & – & – & – & 429–562 (439–572\\({}^{\\rm f}\\)) \\\\ Expt.\\({}^{\\rm g}\\) & – & – & – & – & – & – & 410–600 \\\\ \\end{tabular} \\end{table} Table 2: Equilibrium volume (\\(V_{0}\\)), bulk modulus (\\(B_{0}\\)), and volumes of transition (\\(V_{\\rm tr}\\)) of MgO in the B1 and B2 phases, and the transition pressure (\\(P_{\\rm tr}\\)), determined from the fitting of the \\(E(V)\\) data from static lattice calculations using HF, DFT with different xc functionals, and AFQMC. Fittings are based on the fourth-order Birch–Murnaghan EOS model unless specified. Volumes are in units per Mg-O pair. Also listed for comparison are results from the latest DMC calculations, which agree with our AFQMC predictions, and experimental values of \\(P_{\\rm tr}\\), which are not precise enough to constrain theoretical predictions.

Figure 2 compares the AFQMC cold curve of MgO at higher densities (near the B1-B2 transition) with those calculated using HF and DFT with various xc functionals. It shows that HF, LDA, and PBE exhibit large deviations from the AFQMC cold curves (approximately \\(\\pm 0.5\\) to 1 eV/MgO in energy and 0 to 40 GPa in pressure, depending on the phase and the density), while PBEsol, SCAN, and HSE06 show significantly improved agreement. These findings are overall consistent with normal expectations based on Jacob's ladder (precision relation: hybrid \\(>\\) meta-GGA \\(>\\) GGA/LDA \\(>\\) HF). Figure 3 summarizes the B1-B2 phase transition pressures (red) and the volume changes upon the phase transition (black) of MgO calculated using HF and DFT with various xc functionals in comparison to AFQMC. Because the EOS errors for the B1 and B2 phases partially cancel in the calculation of the B1-B2 transition, the proximity of the HF and DFT results to the AFQMC results no longer follows the expectations of Jacob's ladder. The AFQMC-predicted transition pressure is lower, and the volumes upon transition are larger, than those of all other methods.
Figure 1: AFQMC, various DFT, and HF predictions of (a) the energy-volume relation and equilibrium volume \\(V_{0}\\) (colored triangles) and (b) the compression curve of B1 MgO at 300 K, benchmarked against experimental values (gray symbols) from Refs. [76; 77; 78]. In (a), all energy curves are plotted relative to their respective minimum values.

Figure 2: Comparison between AFQMC and various DFT or HF predictions of (a) internal energies and (b) pressures from static-lattice calculations of MgO around the B1-B2 transition. Top: direct comparisons; bottom: differences relative to AFQMC. Solid and dashed lines denote results for the B1 and B2 structures, respectively.

The PBEsol prediction of the transition pressure is closer to the AFQMC value than those of HF and the other DFT xc functionals, with a difference of 20 GPa. ### High-temperature EOS The high-temperature EOS of solid-state MgO is obtained from QMD and QHA calculations. The QHA results are based on a combination of phonon and cold-curve EOS data, where the cold curves are obtained by static DFT calculations using four different xc functionals (LDA, PBE, PBEsol, and SCAN), while the phonon calculations are performed using the DFPT approach, the PBEsol xc functional, and the Mg_sv_GW and O_h pseudopotentials, and include the Born effective charge. Tests show negligible differences in the vibrational energies if the phonon calculations are done using other xc functionals or ignoring the splitting between the longitudinal and transverse optical modes (see Appendices C and D). We first use the EOS to calculate the principal Hugoniot and compare it with experiments. Figure 4 shows comparisons of the Hugoniots in pressure-density and temperature-pressure spaces. Similar to previous QMD calculations that used the Armiento-Mattsson (AM05) xc functional [38], our present QMD results based on the PBEsol functional show excellent agreement with the experimental Hugoniots in the stability regimes of both B1 and B2 in the pressure-density relation, as well as for B1 in the temperature profile. In comparison, the QHA results show consistency with experiments at low pressures but give increasingly higher density at high pressures along the Hugoniot, more so for the B2 than the B1 phase. The breakdown of QHA as shown in the pressure-density results can be attributed to the anharmonic vibration effect that is naturally included in QMD but missing in QHA, and which becomes more significant at higher temperatures. By comparing the thermal EOS along an isotherm, we found similar energies but higher pressures given by QMD than by QHA; according to the Rankine-Hugoniot equation, this must be reconciled by less shrinkage in volume, which explains the relation between the QMD and QHA Hugoniot densities shown in Fig. 4(a). In the temperature-pressure space, the QMD and QHA results for the Hugoniot are less distinct from each other than in the pressure-density space. QHA results based on the LDA xc functional clearly lie below the range of the experimental data for the B1 phase; PBE significantly improves the agreement with experiments, while the PBEsol and SCAN functionals and the AFQMC data fall between LDA and PBE, near the lower bound of the experimental data. The QMD predictions of the temperature are higher than, and improve over, those by QHA using PBEsol.
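As a concrete illustration of how individual Hugoniot points are obtained from an interpolated EOS, the following minimal Python sketch solves the Rankine-Hugoniot energy condition \\(E-E_{0}=\\frac{1}{2}(P+P_{0})(V_{0}-V)\\) for the temperature at a given compressed volume (the closed-form toy EOS below stands in for the tabulated QMD data; all numbers and units are placeholders):

```python
# Minimal sketch: solve the Rankine-Hugoniot energy condition for T at fixed V,
# then read off the Hugoniot pressure. The toy E(V,T), P(V,T) below stand in
# for numerically interpolated EOS tables (arbitrary units).
import numpy as np
from scipy.optimize import brentq

def E(V, T):  # toy internal energy
    return 1.5 * T + 50.0 / V
def P(V, T):  # toy pressure
    return T / V + 200.0 / V**2

V0, T0 = 11.0, 300.0
E0, P0 = E(V0, T0), P(V0, T0)

def residual(T, V):
    return (E(V, T) - E0) - 0.5 * (P(V, T) + P0) * (V0 - V)

for V in (9.0, 8.0, 7.0):
    T_h = brentq(residual, T0, 1e6, args=(V,))  # bracket the sign change in T
    print(f"V = {V}: T_H = {T_h:9.1f}, P_H = {P(V, T_h):9.2f}")
```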
In addition, QMD predicts smaller differences between the Hugoniots of the B1 and B2 phases than QHA; the AFQMC predictions of the B2 Hugoniot show good agreement with SCAN and LDA under QHA, following the trend of the experimental Hugoniot after the turnover, while the QHA-PBEsol predictions are slightly higher. Our QMD results for the Hugoniot are overall consistent with previous calculations and align with experiments. More discussion will be given in the following and in Appendix G regarding the B1-B2 phase boundary and the comparison between our predictions and the experiments.

Figure 3: Comparison between AFQMC and various DFT or HF predictions of the volumes of the B1 and B2 phases upon transition and the transition pressure. Lighter colors denote 300-K results based on QHA; darker colors denote static-lattice results. The transition occurs at 429 to 562 GPa according to room-temperature experiments [28].

Figure 4: Comparison between QMD and QHA Hugoniots of MgO in (a) pressure-density and (b) temperature-pressure spaces. Different colors denote the different calculation methods. Solid and dashed curves represent results for the B1 and B2 phases, respectively. Also included (in gray) are the experimental Hugoniot data from Marsh _et al._ (squares) [79], Vassiliou _et al._ (up triangles) [80], Fratanduono _et al._ (down triangles) [81], Root _et al._ (circles) [38], Svendsen and Ahrens (diamonds) [82], McWilliams _et al._ (ovals) [11], and Bolis _et al._ (shaded areas, with a horizontal bar denoting the error in pressure and a cross denoting the condition interpreted as the melting point) [40], previous DFT-MD predictions based on the Armiento–Mattsson xc functional (line-crosses) [38], and a thermodynamic EOS model by de Koker and Stixrude (thick dashed lines) [31].

The agreement with experiments in both the thermal (along the Hugoniot) and the cold-curve EOS (as shown in the previous Sec. III.1) validates PBEsol as an optimal choice of xc functional for calculations of MgO at both the ground state and finite temperatures. In the following, we add the QHA and QMD-derived (using the TDI approach) thermal free energies based on DFT-PBEsol calculations to various cold curves (by AFQMC and DFT-PBEsol/SCAN) to estimate the total free energies of MgO in both the B1 and B2 phases [83]. Based on these results, we chart the B1-B2 transition and calculate the volumes of the two phases upon transition. The results provide a preliminary reference for the B1-B2 phase boundary and its uncertainty based on state-of-the-art theoretical computations. Figure 5 shows that the volume of MgO collapses by \\(\\sim 4.75(\\pm 0.25)\\%\\) at 0 K [from \\(\\sim\\)9.2 Å\\({}^{3}\\)/MgO for B1 to \\(\\sim\\)8.7 Å\\({}^{3}\\)/MgO for B2; the error associated with using the different methods (AFQMC, PBEsol, and SCAN) is \\(\\pm 0.1\\) Å\\({}^{3}\\)/MgO] and by \\(3.7(\\pm 0.2)\\%\\) at 10,500 K [from \\(\\sim\\)9.8 Å\\({}^{3}\\)/MgO for B1 to \\(\\sim\\)9.4 Å\\({}^{3}\\)/MgO for B2 (error: \\(\\pm 0.2\\) Å\\({}^{3}\\)/MgO)] for the B1\\(\\rightarrow\\)B2 transition, and that the transition pressure decreases from \\(\\sim 515(\\pm 25)\\) GPa to \\(\\sim 490(\\pm 25)\\) GPa as the temperature increases from 0 to 10,500 K. We found the \\(V_{\\rm tr}-T\\) curves are similar among the three sets of predictions based on the AFQMC and DFT-PBEsol/SCAN cold curves, with the AFQMC-predicted volumes and volume collapses larger (and transition pressures lower) than the DFT predictions.
The d\\(T\\)/d\\(P\\) Clapeyron slope of the B1-B2 phase boundary predicted by the AFQMC data set is similar to that by DFT-SCAN, both being steeper than that by DFT-PBEsol (see Table 4, which summarizes the Clapeyron-slope values). Figure 5 also shows that QHA predicts a much less steep boundary for the B1-B2 transition than QMD, reflecting the importance of anharmonic vibrational effects, similar to the reports by previous studies [29; 37]. Our results clearly quantify the changes in volume and the volume difference between the two phases of MgO upon transition, as well as the important role of the treatment of electronic interactions (many-body in nature versus single-particle approximations under different xc functionals) in the results.

Figure 5: (a) Volume collapse and (b) pressures of the B1\\(\\rightarrow\\)B2 transition at finite temperatures, with the cold curve calculated using AFQMC and DFT with two optimal xc functionals (PBEsol and SCAN) and the thermal effects (based on QMD and TDI) calculated within DFT using the PBEsol functional. Results based on QHA (including electron thermal but excluding anharmonic vibrational effects) are shown as lighter-colored curves (with circle symbols) for comparison.

The much less negative Clapeyron slope (\\(dP_{\\rm tr}/dT\\)) and the slightly larger volume collapse of the B1\\(\\rightarrow\\)B2 transition predicted by QMD may cause less-significant topography of the discontinuity and lateral variations in the deep-mantle mineralogy of super-Earths than previously expected based on the QHA results, changing expectations on the style of convection in these planets (see discussions in, e.g., [84]). Moreover, by predicting a steeper B1-B2 boundary than the latest theoretical studies [29; 37], our AFQMC (and PBEsol) results show excellent consistency with both the experiments by McWilliams _et al._ [11] and those by Bolis _et al._ [40] (see Appendix G). We note that Bolis _et al._ [40] interpreted the turnover in their experiments as the onset of melting of shocked MgO, largely based on comparisons with the theoretical studies available at the time, which underestimated \\(P_{\\rm tr}\\) along the Hugoniot. Our new results suggest that the turnovers in the experiments are associated with the B1-B2 transition. It is beyond the scope of this study, however, to decipher the nature of the subtle differences between the experiments by Bolis _et al._ [40] and McWilliams _et al._ [11], as that requires accurate knowledge of the triple point and the thermodynamic free energies of the liquid phase, as well as consideration of the kinetics of the transition, to fully understand the observations. We have performed additional tests and found that the error in the transition pressure (associated with the choices of different \\(T_{\\rm ref}\\) and fitting methods in TDI) increases to \\(\\sim\\)50 GPa at \\(T\\approx 10^{4}\\) K (see Appendix F; the corresponding changes in the Clapeyron slope are tabulated in Table 4), while the errors due to other sources (EOS models, data error bars, and the data grid) are relatively small (e.g., the statistical error of the AFQMC and QMD energies only leads to a difference in \\(P_{\\rm tr}\\) of 1.5 GPa at 6000 K). ## IV Conclusions This work presents the first application of the AFQMC approach to benchmarking the cold curve and phase transition of solid-state materials to very high pressures. Our AFQMC results reproduce the experimental cold curve (equilibrium volume and compressibility at room temperature) and provide a preliminary reference for the equations of state of MgO at up to 1 TPa.
In comparison, DFT predictions vary by up to 7% to 15% for the equilibrium properties (\\(V_{0}\\) and \\(B_{0}\\)) and the B1-B2 transition (\\(P_{\\rm tr}\\) and volume collapse upon the transition), depending on the xc functional, and the largest differences are observed between the cold curves by PBE and LDA. The HSE06, SCAN, and PBEsol functionals perform better than PBE, LDA, and HF in reproducing the \\(E(V)\\) cold curves by AFQMC. The cold-curve differences for B1 offset those for B2, leading to the sensitivity of the predicted transition pressure and volume change to the choice of the xc functional. Our Hugoniot results based on QMD calculations of the thermal EOS using PBEsol show excellent agreement with experiments for B1 and B2 in the pressure-density relation, as well as for B1 in the temperature-pressure profile. In comparison, the QHA results for the pressure-density Hugoniot show consistency with experiments at low pressures but increasing discrepancy at high pressures, because larger anharmonic effects are expected at higher temperatures. The good performance of PBEsol in reproducing both the thermal (along the Hugoniot) and the cold-curve EOS of MgO has motivated us to further calculate the anharmonic free energies and add them to the cold curves by AFQMC and DFT-PBEsol or SCAN to calculate the total free energies and evaluate the B1-B2 transition at various temperatures. Our results show that temperature lowers the transition pressure and expands the volumes upon the B1-B2 transition. Anharmonic vibration increases the transition pressure \\(P_{\\rm tr}\\) and hinders the expansion of the transition volumes \\(V_{\\rm tr}\\), relative to QHA. AFQMC predicts a steeper d\\(T\\)/d\\(P\\) phase boundary and a larger volume collapse upon the B1\\(\\rightarrow\\)B2 transition than DFT-PBEsol, similar to the effect of anharmonicity with respect to QHA. In addition to providing a preliminary reference for the B1-B2 phase boundary and its uncertainty based on state-of-the-art theoretical computations, our results will be useful for building an accurate multiphase EOS table for MgO for planetary-science and high-energy-density-science applications, as well as for elucidating the mechanisms of phase transformations (e.g., kinetic effects) in different experimental settings (e.g., compression rates). More work is needed to clarify the triple point and the melting curve at high temperatures and multi-TPa pressures. Looking ahead, finite-temperature AFQMC [85; 86], by better accounting for electron thermal effects, and back-propagation for force and stress estimators in AFQMC [87] can offer additional, more accurate options to benchmark the EOS and phase transitions of solid-state materials at high temperatures and pressures. ## Acknowledgements This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0003856, the University of Rochester, and the New York State Energy Research and Development Authority. The Flatiron Institute is a division of the Simons Foundation. Part of this work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract number DE-AC52-07NA27344. Part of the funding support was from the U.S. DOE, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program and Center for Predictive Simulation of Functional Materials (CPSFM).
Computer time was provided by the Oak Ridge Leadership Computing Facility, Livermore Computing Facilities, and UR/LLE HPC. S.Z. thanks R. S. McWilliams for sharing experimental data and J. Wu, R. Jeanloz, F. Soubiran, B. Militzer, T. Duffy, and K. Driver for discussions. This report was prepared as an account of work sponsored by an agency of the U.S. Government. Neither the U.S. Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the U.S. Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the U.S. Government or any agency thereof. ## Appendix A Finite-size and basis sets corrections to AFQMC energies Our AFQMC calculations were performed for both phases of MgO at all volumes with various cell sizes and optimized basis sets. These include: (i) 2\\(\\times\\)2\\(\\times\\)2 \\(k\\) points (8 MgO units) with pVDZ, pVTZ, pVQZ, and pV5Z; (ii) 3\\(\\times\\)3\\(\\times\\)3 \\(k\\) points (27 MgO units) with pVTZ and pVQZ; and (iii) 4\\(\\times\\)4\\(\\times\\)4 \\(k\\) points (64 MgO units) with pVTZ. We have then followed three steps to extrapolate the AFQMC results to the thermodynamic and the complete basis set (CBS) limits: 1. Use all the pVTZ results to calculate finite-size corrections for the 3\\(\\times\\)3\\(\\times\\)3 calculations; 2. Use all the 3\\(\\times\\)3\\(\\times\\)3 calculations to calculate the basis-set corrections, combining AFQMC calculations with MP2 calculations and the "scaled" correction described in Ref. [27]; 3. Use the 2\\(\\times\\)2\\(\\times\\)2 calculations to check the reliability of the basis-set corrections in step 2 and to ensure that they are robust. Our extrapolation procedure is demonstrated in Figure 6. The remarkable consistency between the pVTZ and pV5Z corrected values (to approximately 1-2 mHa/MgO from calculations with only 2\\(\\times\\)2\\(\\times\\)2 \\(k\\) points) suggests our corrections are reliable and robust. Figure 6: Examples of our AFQMC energy extrapolation: (a) finite-size correction [pVTZ (\\(N\\)=58)]; (b) AFQMC correlation energy measured with respect to the CBS limit; (c-d) uncorrected AFQMC correlation energies for various basis sets and, as a comparison, basis-set-corrected AFQMC energies with the pVTZ basis set; (e-f) the same red curves in panels (c-d) but now measured with respect to the corrected pV5Z basis-set results, showing the excellent agreement and efficiency of the basis-set correction. Panels (a-b): 18.65 Å\\({}^{3}\\)/MgO; (b-f): \\(N_{\\text{MgO}}=8\\); (a-c) and (e): B1; (d) and (f): B2. The inset of (a) shows the values of the finite-size corrections at different volumes. In (b), \\(\\Delta E_{\\text{corr}}\\) denotes uncorrected AFQMC energies, \\(\\Delta E_{\\text{MP2}}\\) represents the MP2 correction, and \\(\\Delta E_{\\text{MP2}}^{\\text{scaled}}\\) denotes the scaled MP2 correction. \\(N\\) denotes the number of basis functions.
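In practice, steps 1 and 2 amount to adding two additive shifts to a common reference energy; the following minimal Python sketch (with placeholder energies, not the values behind Fig. 6) illustrates the bookkeeping:

```python
# Minimal sketch of the composite extrapolation in Appendix A (placeholder
# numbers): a finite-size shift from the pVTZ series plus a basis-set (CBS)
# shift evaluated at the 3x3x3 cell, both added to the 3x3x3 pVTZ energy.
E_pvtz = {"2x2x2": -17.100, "3x3x3": -17.140, "4x4x4": -17.150}  # Ha/MgO
E_cbs_333 = -17.205   # 3x3x3 energy at the CBS limit (AFQMC + scaled MP2)

delta_fs = E_pvtz["4x4x4"] - E_pvtz["3x3x3"]   # step 1: finite-size correction
delta_cbs = E_cbs_333 - E_pvtz["3x3x3"]        # step 2: basis-set correction

E_tdl_cbs = E_pvtz["3x3x3"] + delta_fs + delta_cbs
print(f"E(TDL, CBS) ~ {E_tdl_cbs:.3f} Ha/MgO")
# Step 3: repeating the same bookkeeping on the 2x2x2 series (pVDZ..pV5Z)
# provides the consistency check described in the text.
```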
## Appendix B EOS fit and transition pressure determination Figure 7 compares the two different ways of calculating the transition pressure: using the internal energies \\(E(V)\\) and their common tangent (left) or using the enthalpies \\(H(P)\\) and their crossover point [88]. The two approaches are thermodynamically equivalent, as shown by the nearly identical transition pressures determined (the 1-GPa difference is due to the numerical fitting of the data). The common-tangent approach is our choice in this study because the internal energy (or Helmholtz free energy for \\(T\\neq 0\\) K) is readily calculated, while the pressure is not, except in the 0-K DFT cases.

Figure 7: Determination of the transition pressure (\\(P_{\\rm tr}\\)) by using two different approaches: (a) common tangent of the internal energy \\(E(V)\\) curves and (b) intersection of the enthalpy \\(H(P)\\) curves. In this example, the data are from 0-K DFT-PBEsol+QHA calculations; the \\(P_{\\rm tr}\\) values determined using the two approaches are 504 and 505 GPa, respectively.

The \\(E(V)\\) data are fitted to EOS models to determine the equilibrium volume \\(V_{0}\\) and bulk modulus \\(B_{0}\\). Typical errors of these parameters can be calculated using the Monte Carlo approach and are shown in Fig. 8. Table 3 summarizes the equilibrium volume \\(V_{0}\\) and bulk modulus \\(B_{0}\\) obtained by using different EOS models for the PBEsol data. We found that the third-order Eulerian EOS (Birch-Murnaghan) works surprisingly well for MgO up to TPa pressures as long as data at high-enough pressures are included.
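As a concrete illustration of this Monte Carlo error estimate, the following is a minimal Python sketch (with hypothetical \\(E(V)\\) data, not the values of Table 1) that resamples each energy point within its standard error, refits a third-order Birch-Murnaghan model each time, and takes the spread of the fitted parameters as their 1-\\(\\sigma\\) uncertainties:

```python
# Minimal sketch of the Fig. 8 procedure: resample E(V) within error bars,
# refit the EOS, and report the parameter scatter. Data are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def bm3(V, E0, V0, B0, B0p):
    """Third-order Birch-Murnaghan energy-volume model (B0 in eV/A^3)."""
    f = ((V0 / V) ** (2.0 / 3.0) - 1.0) / 2.0
    return E0 + 9.0 * B0 * V0 * (f**2 / 2.0 + 1.5 * (B0p - 4.0) * f**3 / 3.0)

V = np.linspace(14.0, 22.0, 9)                 # A^3 per MgO pair
E_mean = bm3(V, -12.0, 18.4, 1.0, 4.1)         # synthetic "data", eV
sigma = np.full_like(V, 3e-3)                  # ~0.1 mHa-scale errors, in eV

rng = np.random.default_rng(0)
fits = []
for _ in range(1000):                          # 1000 resampled datasets
    E_sample = rng.normal(E_mean, sigma)
    popt, _ = curve_fit(bm3, V, E_sample, p0=[-12.0, 18.0, 1.0, 4.0])
    fits.append(popt)
fits = np.array(fits)
print(f"V0 = {fits[:,1].mean():.3f} +/- {fits[:,1].std():.3f} A^3")
print(f"B0 = {fits[:,2].mean()*160.2177:.1f} +/- {fits[:,2].std()*160.2177:.1f} GPa")
```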
\\begin{table} \\begin{tabular}{l c c c c c c c c c} & \\(V_{0}^{\\rm B1}\\) (Å\\({}^{3}\\)) & \\(B_{0}^{\\rm B1}\\) (GPa) & \\(B_{0}^{\\prime\\,\\rm B1}\\) & \\(V_{0}^{\\rm B2}\\) (Å\\({}^{3}\\)) & \\(B_{0}^{\\rm B2}\\) (GPa) & \\(B_{0}^{\\prime\\,\\rm B2}\\) & \\(V_{\\rm tr}^{\\rm B1}\\) (Å\\({}^{3}\\)) & \\(V_{\\rm tr}^{\\rm B2}\\) (Å\\({}^{3}\\)) & \\(P_{\\rm tr}\\) (GPa) \\\\ \\hline BM3 & 18.720 & 161.4 & 4.01 & 17.97 & 170.2 & 3.94 & – & – & – \\\\ BM4 & 18.737 & 157.4 & 4.11 & 18.366 & 145.1 & 4.09 & 9.013 & 8.609 & 517.5 \\\\ Vinet & 18.786 & 137.7 & 4.91 & 20.552 & 68.5 & 5.56 & 9.012 & 8.583 & 522.7 \\\\ BM3\\({}^{1}\\) & 18.721 & 160.4 & 4.02 & 18.266 & 151.3 & 3.99 & – & – & – \\\\ BM4\\({}^{1}\\) & 18.737 & 157.3 & 4.11 & 18.289 & 147.6 & 4.09 & 9.014 & 8.610 & 517.4 \\\\ Vinet\\({}^{2}\\) & 18.791 & 141.1 & 4.81 & 18.364 & 129.2 & 4.86 & 9.037 & 8.639 & 515.3 \\\\ spline\\({}^{3}\\) & – & – & – & – & – & – & 9.001 & 8.605 & 518.5 \\\\ \\end{tabular} \\end{table} Table 3: Equilibrium volume (\\(V_{0}\\)), bulk modulus (\\(B_{0}\\)) and its pressure derivative (\\(B_{0}^{\\prime}\\)), and volumes of transition (\\(V_{\\rm tr}\\)) of MgO in the B1 and B2 phases, and the transition pressure (\\(P_{\\rm tr}\\)), determined using different fitting approaches for the \\(E(V)\\) data from static DFT calculations using the PBEsol xc functional. Volumes are in units per Mg-O pair.

Figure 8: A Monte Carlo approach is used to determine the 1-\\(\\sigma\\) errors of the EOS fitting parameters. In this example, the AFQMC data and their standard errors (as shown in Table 1) are used and the third-order Birch–Murnaghan EOS model is considered. Two Monte Carlo runs with 1000 or 10,000 randomly generated datasets give the same results for \\(V_{0}\\), \\(B_{0}\\), and their error bars.

Figure 9: Phonon band structures of B1 and B2 MgO at 6.75 g/cm\\({}^{3}\\) from DFPT vs DFPT+Born calculations.

## Appendix C Effect of LO-TO splitting It is well known that the frequencies of the optical modes parallel and perpendicular to the electric field split ("LO-TO splitting") in ionic materials such as MgO [89]. This mode splitting is missed in regular phonon calculations but can be correctly captured when the Born effective charges, piezoelectric constants, and the ionic contribution to the dielectric tensor are considered (by switching on LEPSILON in VASP). The effects on the phonon dispersion relations of B1- and B2-MgO are shown in Fig. 9. Figure 10 compares the resultant differences in the vibrational energy and entropy of MgO in the different phases and at different densities. The results show that LO-TO splitting only makes a small difference (\\(<\\)0.7%) at \\(T<500\\) K, which then quickly drops to zero at higher \\(T\\); the effect on the differences between B1 and B2 is also small and negligible.

Figure 10: The effects of including LO-TO splitting (BORN) in the phonon calculations on the thermodynamic properties of the B1 and B2 phases of MgO at various densities. (a) Entropy and internal energy and (b) internal energy changes due to excluding LO-TO splitting; (c) and (d) entropy differences between B1 and B2 at two densities near the phase transition with and without LO-TO splitting. The differences in (c) and (d) are relative to \\(E\\) or \\(TS\\), whichever is larger.

## Appendix D Effects of xc functional on phonon results Figure 11 shows that different xc functionals produce the same phonon band structure and vibrational free energies within QHA.

Figure 11: (a) Phonon band structures and (b) vibrational free energies of MgO from PBEsol vs PBE calculations.

## Appendix E Convergence test Figure 12 shows that large cell sizes in combination with sufficiently fine \\(k\\)-point meshes are needed to ensure convergence of the EOS. For example, a 250-atom cell with a single \\(k\\) point is not enough for B2 at 6.0 g/cm\\({}^{3}\\). In our QMD simulations, we use a 64-atom cell with the "Brec" special \\(k\\) point and a 54-atom cell with a \\(\\Gamma\\)-centered 2\\(\\times\\)2\\(\\times\\)2 \\(k\\) mesh, respectively, for the B1 and B2 calculations.

Figure 12: Finite-size effects on the pressures and internal energies of the (a) B1 and (b) B2 structures of MgO at different densities and electronic temperatures. All calculations are based on DFT-PBEsol and use 64-atom cells for B1 and 54-atom cells for B2, unless otherwise specified. "Bcar" and "Brec" denote using the special \\(k\\) point of \\((1/4,1/4,1/4)\\) in Cartesian (wrong "Baldereschi point" for a cubic cell) and reciprocal (correct "Baldereschi point" for a cubic cell) coordinates, respectively.

Our additional tests for the phonon calculations show that 8- and 16-atom cells, respectively, are needed for B1 and B2 to obtain converged \\(F_{\\rm vib}(T)\\) results, rather than the primitive 2-atom cells.
In this study, we choose 54-atom cells (with a 4\\(\\times\\)4\\(\\times\\)4 \\(k\\) mesh) for both the B1 and B2 phonon calculations for better accuracy. ## Appendix F Calculation of anharmonic energies and comparison between different terms Figure 13 shows the finite-temperature internal energies of MgO estimated from the cold curve plus lattice vibrations under QHA (\\(E_{\\rm cold+QHA}\\)) in comparison with the values from direct QMD simulations (\\(E_{\\rm QMD}\\)).

Figure 13: (a) Internal energies along selected isochores of MgO based on QMD (darker solid curves) or QHA (lighter dashed curves) calculations. (b) Differences of the ion thermal term of the internal energy (QMD: solid curves and symbols; QHA: dashed curves) from a classical crystal that assumes 3\\(k_{\\rm B}\\)/atom for the heat capacity (the Dulong–Petit law). The structures and densities are represented by different colors and symbols as denoted in the legend. The solid curves in (b) are polynomial fits to the data. In both panels, \\(E_{\\rm cold}\\) is taken as the value of \\(E_{\\rm QMD}\\) at 0 K.

Overall, \\(E_{\\rm cold+QHA}\\) and \\(E_{\\rm QMD}\\) are similar to each other, with noticeable differences near zero K, because of nuclear quantum effects, and at high temperatures, due to increased anharmonic vibration and electron excitation effects. The differences are more evident when the ion thermal energies (\\(E-E_{\\rm cold}\\)) are plotted with respect to the classical crystal value of \\(3k_{B}T\\). The mismatch between QMD and QHA near zero K and the proximity of \\(E_{\\rm QHA}\\) to \\(3k_{B}T\\) at high temperatures have motivated us to define \\(E_{\\rm anharm}=E_{\\rm QMD}-E_{\\rm cold+QHA}\\approx E_{\\rm QMD}-E_{\\rm cold}-3k_{B}T\\) in the TDI Eq. 6 to calculate the anharmonic free energy \\(F_{\\rm anharm}\\). Under this approximation, the total free energy of the system is \\(F(V,T)=E_{\\rm cold}(V)+F_{\\rm i-th,QHA}(V,T)+F_{\\rm i-th,anharm}(V,T)+F_{\\rm e-th}(V,T)\\approx E_{\\rm cold}(V)+F_{\\rm ind.ph.}^{\\rm quantum}(V,T)+F_{\\rm int.ph.}^{\\rm class.}(V,T)+F_{\\rm e-th}(V,T)\\), where the subindices "ind." and "int." denote independent and interacting, "ph." denotes phonon, "class." and "quantum" represent the nature of the ions as being classical and quantum, respectively, and "e-th" denotes the electron thermal term. The only difference from an entirely accurate ("quantum") description lies in the approximation of the anharmonic term by using classical ions (as in the QMD simulations and the classical-crystal reference for TDI), whose effect, we believe, is negligible for the purpose of this paper. We have performed extensive tests and found that \\(E_{\\rm anharm}(V,T)\\) can be isochorically fitted well using sixth- and eighth-order polynomials for the B1 and B2 phases, respectively. We also note that different choices of \\(T_{\\rm ref}\\), or fitting \\(E_{\\rm QMD}\\) by using cubic splines, can affect the value of \\(F_{\\rm anharm}\\) (see Fig. 14), while lower-order polynomial or exponential fits [2; 90], although found to work for certain materials at ambient densities or relatively low temperatures, break down for MgO at high densities and temperatures. In practice, QMD is inefficient and inappropriate for simulating near-zero temperatures. We therefore have to choose a finite value of \\(T_{\\rm ref}\\) in TDI and assume QHA is valid at any temperature below \\(T_{\\rm ref}\\). This technically limits the accuracy of the anharmonic free energies, as shown in Fig. 14(b) by the different values of \\(F_{\\rm anharm}\\) obtained when choosing different \\(T_{\\rm ref}\\) and fitting approaches.
Figure 14 also shows that the contributions from electron thermal excitation become increasingly significant when the temperature exceeds 8000 K, more so at lower densities. The anharmonic vibration and electron thermal terms are relatively small in comparison to the lattice-vibration term accounted for under QHA. However, because of the similarity between the energies of the B1 and B2 phases, the effects of anharmonic vibration can significantly affect the B1-B2 transition boundary, as shown in Fig. 5. Figure 15 summarizes the B1-B2 transition pressures based on free energies calculated using the different approaches. Despite the distinctions between the predictions by AFQMC and DFT-PBEsol or SCAN at zero K, all methods give similar trends of decreasing \\(P_{\\rm tr}\\) (by \\(\\sim\\)20 to 40 GPa at 9000 K, relative to the corresponding values at 0 K) and growing uncertainty (by \\(\\sim\\)40 to 50 GPa at 9000 K) as the temperature increases. The relations in \\(P_{\\rm tr}\\) among the different approaches (AFQMC, DFT-PBEsol, and DFT-SCAN) at high temperatures remain similar to those under QHA, whereas the anharmonic effects clearly steepen the d\\(T\\)/d\\(P\\) slope and push \\(P_{\\rm tr}\\) to higher values than the QHA predictions. With polynomial fits of \\(E_{\\rm anharm}\\), the differences between the phase boundaries based on QHA and anharmonic calculations are smaller if the values of \\(T_{\\rm ref}\\) are higher (light long-dashed line-squares); in comparison, cubic-spline fits of \\(E_{\\rm QMD}\\) using the same \\(T_{\\rm ref}\\) tend to produce larger differences than the polynomial fits of \\(E_{\\rm anharm}\\) (light short-dashed line-squares).

Figure 14: Comparison between different thermodynamic energy terms of MgO, including (a) internal energy (solid curves) and vibrational entropy (dashed curves) terms under QHA, and (b) anharmonic free energy (dark solid and light dashed curves) and electronic entropy (light solid curves) terms from TDI and QMD calculations. The anharmonic free energies shown in (b) are calculated using TDI with different methods: polynomial fit of the anharmonic internal energy (\\(E_{\\rm anharm}=E_{\\rm QMD}-E_{\\rm cold}-3k_{B}T\\)) with \\(C_{V}(T_{\\rm ref})\\)=10%\\(\\times\\)3\\(k_{\\rm B}\\)/atom (dark solid; corresponding \\(T_{\\rm ref}\\)=100–200 K) or 90%\\(\\times\\)3\\(k_{\\rm B}\\)/atom (light long dashed; corresponding \\(T_{\\rm ref}\\)=800–1550 K), or cubic-spline fit of the internal energy (\\(E_{\\rm QMD}\\)) with \\(C_{V}(T_{\\rm ref})\\)=50%\\(\\times\\)3\\(k_{\\rm B}\\)/atom (light short dashed; corresponding \\(T_{\\rm ref}\\)=250–550 K).

We have quantified the phase-boundary differences by calculating the Clapeyron slope; the results are summarized in Table 4, and a minimal regression sketch is given below.
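```python
# Minimal sketch of the Table 4 estimate: linear regression of P_tr against T
# over the 1500-6000 K window gives the Clapeyron slope dP_tr/dT.
# The P_tr(T) values below are placeholders, not our computed boundary.
import numpy as np

T = np.array([1500.0, 3000.0, 4500.0, 6000.0])    # K
P_tr = np.array([512.0, 507.0, 501.0, 496.0])     # GPa (placeholder values)

slope, intercept = np.polyfit(T, P_tr, 1)
print(f"dP_tr/dT = {slope * 1e3:.2f} MPa/K")      # Table 4 is quoted in MPa/K
```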
Furthermore, we have employed two different versions of the TDI/temperature-integration approach to cross-check the above results: (1) a more direct approach [93], \\[\\frac{F(V,T)}{T}-\\frac{F(V,T_{\\rm ref})}{T_{\\rm ref}}=\\int_{1/T_{\\rm ref}}^{1/T}E(V,\\mathcal{T})\\mathrm{d}\\frac{1}{\\mathcal{T}}, \\tag{11}\\] and (2) an indirect approach [obtained by taking the difference of Eq. 11 with respect to a reference system (e.g., the system under QHA) that also satisfies Eq. 11], \\[F(V,T)-F_{\\mathrm{ref}}(V,T)=T\\int_{1/T_{\\mathrm{ref}}}^{1/T}[E(V,\\mathcal{T})-E_{\\mathrm{ref}}(V,\\mathcal{T})]\\mathrm{d}\\frac{1}{\\mathcal{T}}. \\tag{12}\\] These tests were performed at \\(T=\\)3000, 6000, and 9000 K, with \\(T_{\\mathrm{ref}}\\) fixed to 500 K for simplicity. In approach (1), the free energy at \\(T_{\\mathrm{ref}}\\) is approximated by the corresponding value under QHA; in approach (2), the QHA system is taken as the reference, which defines \\(F_{\\mathrm{ref}}\\) and \\(E_{\\mathrm{ref}}\\). In both approaches, an additional term \\(E_{\\mathrm{QC}}(V,\\mathcal{T})=E_{\\mathrm{QHA}}(V,\\mathcal{T})-3k_{\\mathrm{B}}\\mathcal{T}\\) has been introduced as a quantum correction to the internal energy from QMD, similar to that in Ref. [94]. We note that the quantum correction is crucial for obtaining accurate free energies within the temperature-integration approach, which starts from a cold reference state where the important nuclear quantum effects are included by QHA but missed in QMD. We also note that, with the quantum correction and with \\(E_{\\mathrm{cold}}\\) subtracted from all energy terms, Eq. 12 is equivalent to our method introduced in detail above and in Sec. II.2 (Eq. 6).

Figure 15: B1-B2 phase boundary calculated using QHA (light solid line-circles) in comparison to those including the anharmonic effect calculated with three different methods in TDI: polynomial fit of \\(E_{\\rm anharm}\\) with \\(C_{V}\\)(\\(T_{\\rm ref}\\))=10%\\(\\times\\)3\\(k_{\\rm B}\\)/atom (dark solid line-squares; corresponding \\(T_{\\rm ref}\\)=100–200 K) or 90%\\(\\times\\)3\\(k_{\\rm B}\\)/atom (light long-dashed line-squares; corresponding \\(T_{\\rm ref}\\)=800–1550 K), or cubic-spline fit of the internal energy (\\(E_{\\rm QMD}\\)) with \\(C_{V}\\)(\\(T_{\\rm ref}\\))=50%\\(\\times\\)3\\(k_{\\rm B}\\)/atom (light short-dashed line-squares; corresponding \\(T_{\\rm ref}\\)=250–550 K). The maximum range of difference defined by the three methods is represented by the shaded area. Black, blue, and red colors denote the calculations that use the different cold curves.

Based on our PBEsol data (cold curve, QHA, and QMD), the free energies calculated using these two approaches are similar, both producing similar B1-B2 transition pressures: 486 GPa at 3000 K, 462 GPa at 6000 K, and 439 GPa at 9000 K. A minimal numerical sketch of the direct approach (Eq. 11) is given below.
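```python
# Minimal sketch of approach (1), Eq. 11:
#   F(T)/T - F(Tref)/Tref = int_{1/Tref}^{1/T} E d(1/T'),
# with the QMD energy quantum-corrected by E_QC = E_QHA - 3*kB*T'.
# All inputs below are toy placeholders along a single isochore.
import numpy as np

KB = 8.617333262e-5  # eV/K

Tgrid = np.linspace(500.0, 9000.0, 2000)       # Tref = 500 K, as in the tests
E_qmd = 3 * KB * Tgrid + 1.0e-9 * Tgrid**2     # toy QMD internal energy, eV/atom
E_qha = 3 * KB * Tgrid - 1.0e-3                # toy QHA energy (near-classical here)
E_int = E_qmd + (E_qha - 3 * KB * Tgrid)       # integrand with quantum correction

F_ref = -0.10                                  # toy F(Tref), taken from QHA (eV/atom)
# np.trapz over the descending 1/T grid reproduces the limits 1/Tref -> 1/T:
F_T = Tgrid[-1] * (F_ref / Tgrid[0] + np.trapz(E_int, 1.0 / Tgrid))
print(f"F({Tgrid[-1]:.0f} K) = {F_T:+.4f} eV/atom")
```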
The excellent consistency of the results from these tests with those shown in Fig. 15 (blue shaded area) reconfirms the methodology and findings of this study.

\\begin{table} \\begin{tabular}{c c c c c} & QHA & \\(C_{V}(T_{\\mathrm{ref}})\\)=10\\%\\(\\times\\)3\\(k_{\\mathrm{B}}\\)/atom & \\(C_{V}(T_{\\mathrm{ref}})\\)=90\\%\\(\\times\\)3\\(k_{\\mathrm{B}}\\)/atom & \\(C_{V}(T_{\\mathrm{ref}})\\)=50\\%\\(\\times\\)3\\(k_{\\mathrm{B}}\\)/atom \\\\ \\hline AFQMC & -13.6 \\(\\pm\\) 0.8 & **-3.4 \\(\\pm\\) 0.2** & -6.8 \\(\\pm\\) 0.1 & -2.3 \\(\\pm\\) 0.3 \\\\ SCAN & -13.5 \\(\\pm\\) 0.8 & **-3.6 \\(\\pm\\) 0.2** & -7.1 \\(\\pm\\) 0.1 & -2.7 \\(\\pm\\) 0.3 \\\\ PBEsol & -16.5 \\(\\pm\\) 1.1 & **-4.9 \\(\\pm\\) 0.2** & -8.9 \\(\\pm\\) 0.1 & -3.8 \\(\\pm\\) 0.3 \\\\ \\end{tabular} \\end{table} Table 4: Clapeyron slope \\(dP_{\\mathrm{tr}}/dT\\) (in units of MPa/K) of the MgO B1–B2 phase transition estimated by linear regression of the data calculated using the different approaches shown in Fig. 15. The calculations for QHA use 1500–4500 K data, while the other cases use 1500–6000 K data, for better linearity and relevance to super-Earth interior conditions. Numbers in boldface correspond to the solid line-squares in Figs. 5(b) and 15. We note that the values of the slope calculated here are much smaller in magnitude than previous estimates based on experiments (\\(-390\\pm 300\\) MPa/K) [11], which were associated with significant uncertainties, and also than predictions by an interatomic model of the B1-B2 transition (\\(\\sim-40\\) MPa/K if taking a volume collapse of 3% and an entropy increase of 7 J/K/mol) [91; 92], suggesting that the B1-B2 entropy difference and the thermodynamic properties of MgO are sensitive to pressure and differ from the underlying assumptions of the model.

## Appendix G Comparison with experiments Figure 16 compares our AFQMC results for the B1-B2 transition to the shock experiments [11; 40] and recent theoretical calculations [29; 37].

Figure 16: The B1–B2 phase boundary of MgO calculated in this study (same as the black/gray AFQMC results shown in Fig. 15) in comparison to shock experiments by McWilliams _et al._ [11] (green symbols) and Bolis _et al._ [40; 95] (yellow symbols) and recent theoretical calculations by Bouchet _et al._ [29] (temperature-dependent effective potential approach, LDA xc functional; data represented by blue line-circles) and Soubiran and Militzer [37] (TDI based on MD using effective potentials tuned between harmonic oscillators and Kohn–Sham DFT, PBE xc functional; data shown with red crosses). The dark-green circle denotes the condition attributed to the B1–B2 transition by McWilliams _et al._ [11].

The previous calculations were based on LDA/PBE, and their predicted \\(P_{\\rm tr}\\) at 0 K is larger than our AFQMC prediction, consistent with our findings shown in Fig. 3. The differences become smaller at higher temperatures up to approximately 8000 K, above which the previous calculations show a lower \\(P_{\\rm tr}\\) and thus a less steep Clapeyron slope than ours. Our estimate of the B1-B2 phase boundary, with its uncertainty, agrees with the wiggle regions in both the experiments by McWilliams _et al._ [11] and by Bolis _et al._ [40]. This suggests that the turnovers in both experiments can be associated with the B1-B2 transition.
Fully unveiling the origins of the subtle differences between the measurements, however, still requires improved experimental diagnostics and theoretical constraints on the structure, kinetics, and thermodynamic conditions of the samples under shock compression.

## References

* Murakami _et al._ [2004] M. Murakami, K. Hirose, K. Kawamura, N. Sata, and Y. Ohishi, Post-perovskite phase transition in MgSiO3, Science **304**, 855 (2004).
* Oganov and Ono [2004] A. R. Oganov and S. Ono, Theoretical and experimental evidence for a post-perovskite phase of MgSiO3 in Earth's D″ layer, Nature **430**, 445 (2004).
* Tsuchiya _et al._ [2004] T. Tsuchiya, J. Tsuchiya, K. Umemoto, and R. M. Wentzcovitch, Phase transition in MgSiO3 perovskite in the Earth's lower mantle, Earth Planet. Sci. Lett. **224**, 241 (2004).
* Zhang _et al._ [2013] S. Zhang, H. F. Wilson, K. P. Driver, and B. Militzer, H4O and other hydrogen-oxygen compounds at giant-planet core pressures, Phys. Rev. B **87**, 024112 (2013).
* Peng _et al._ [2020] F. Peng, X. Song, C. Liu, Q. Li, M. Miao, C. Chen, and Y. Ma, Xenon iron oxides predicted as potential Xe hosts in Earth's lower mantle, Nature Commun. **11**, 5227 (2020).
* Celliers _et al._ [2018] P. M. Celliers, M. Millot, S. Brygoo, R. S. McWilliams, D. E. Fratanduono, J. R. Rygg, A. F. Goncharov, P. Loubeyre, J. H. Eggert, J. L. Peterson, N. B. Meezan, S. Le Pape, G. W. Collins, R. Jeanloz, and R. J. Hemley, Insulator-metal transition in dense fluid deuterium, Science **361**, 677 (2018).
* Rillo _et al._ [2019] G. Rillo, M. A. Morales, D. M. Ceperley, and C. Pierleoni, Optical properties of high-pressure fluid hydrogen across molecular dissociation, Proc. Natl. Acad. Sci. USA **116**, 9770 (2019).
* Zurek and Bi [2019] E. Zurek and T. Bi, High-temperature superconductivity in alkaline and rare earth polyhydrides at high pressure: A theoretical perspective, J. Chem. Phys. **150**, 50901 (2019).
* Dubrovinskaia _et al._ [2016] N. Dubrovinskaia, L. Dubrovinsky, N. A. Solopova, A. Abakumov, S. Turner, M. Hanfland, E. Bykova, M. Bykov, C. Prescher, V. B. Prakapenka, S. Petitgirard, I. Chuvashova, B. Gasharova, Y.-L. Mathis, P. Ershov, I. Snigireva, and A. Snigirev, Terapascal static pressure generation with ultrahigh yield strength nanodiamond, Sci. Adv. **2**, e1600341 (2016).
* Jenei _et al._ [2018] Z. Jenei, E. F. O'Bannon, S. T. Weir, H. Cynn, M. J. Lipp, and W. J. Evans, Single crystal toroidal diamond anvils for high pressure experiments beyond 5 megabar, Nature Commun. **9**, 3563 (2018).
* McWilliams _et al._ [2012] R. S. McWilliams, D. K. Spaulding, J. H. Eggert, P. M. Celliers, D. G. Hicks, R. F. Smith, G. W. Collins, and R. Jeanloz, Phase transformations and metallization of magnesium oxide at high pressure and temperature, Science **338**, 1330 (2012).
* Coppari _et al._ [2013] F. Coppari, R. F. Smith, J. H. Eggert, J. Wang, J. R. Rygg, A. Lazicki, J. A. Hawreliak, G. W. Collins, and T. S. Duffy, Experimental evidence for a phase transition in magnesium oxide at exoplanet pressures, Nature Geosci. **6**, 926 (2013).
* Millot _et al._ [2015] M. Millot, N. Dubrovinskaia, A. Cernok, S. Blaha, L. Dubrovinsky, D. G. Braun, P. M. Celliers, G. W. Collins, J. H. Eggert, and R. Jeanloz, Shock compression of stishovite and melting of silica at planetary interior conditions, Science **347**, 418 (2015).
* Kohn and Sham [1965] W. Kohn and L. J. Sham, Self-consistent equations including exchange and correlation effects, Phys. Rev. **140**, A1133 (1965).
* Zhang _et al._ [2018]S. Zhang, F. D. Malone, and M. A. Morales, Auxiliary-field quantum Monte Carlo calculations of the structural properties of nickel oxide, J. Chem. Phys. **149**, 164102 (2018). * Driver _et al._ [2010]K. P. Driver, R. E. Cohen, Z. Wu, B. Militzer, P. L. Rios, M. D. Towler, R. J. Needs, and J. W. Wilkins, Quantum Monte Carlo computations of phase stability, equations of state, and elasticity of high-pressure silica., Proc. Natl. Acad. Sci. USA **107**, 9519 (2010). * Trail _et al._ [2017]J. Trail, B. Monserrat, P. Lopez Rios, R. Maezono, and R. J. Needs, Quantum monte carlo study of the energetics of the rutile, anatase, brookite, and columbite \\(\\mathrm{tio}_{2}\\) polymorphs, Phys. Rev. B **95**, 121108 (2017). * Malone _et al._ [2019]F. D. Malone, S. Zhang, and M. A. Morales, Overcoming the Memory Bottleneck in Auxiliary Field Quantum Monte Carlo Simulations with Interpolative Separable Density Fitting, J. Chem. Theory Comput. **15**, 256 (2019). * Malone _et al._ [2020]F. D. Malone, A. Benali, M. A. Morales, M. Caffarel, P. R. C. Kent, and L. Shulenburger, Systematic comparison and cross-validation of fixed-node diffusion monte carlo and phaseless auxiliary-field quantum monte carlo in solids, Phys. Rev. B **102**, 161104 (2020). * LeBlanc _et al._ [2015]J. P. F. LeBlanc, A. E. Antipov, F. Becca, I. W. Bulik, G. K.-L. Chan, C.-M. Chung, Y. Deng, M. Ferrero, T. M. Henderson, C. A. Jimenez-Hoyos, E. Kozik, X.-W. Liu, A. J. Millis, N. V. Prokof'ev, M. Qin, G. E. Scuseria, H. Shi, B. V. Svistunov, L. F. Tocchio, I. S. Tupitsyn, S. R. White, S. Zhang, B.-X. Zheng, Z. Zhu, and E. Gull (Simons Collaboration on the Many-Electron Problem), Solutions of the two-dimensional hubbard model: Benchmarks and results from a wide range of numerical algorithms, Phys. Rev. X **5**, 041041 (2015). * Motta _et al._ [2017]M. Motta, D. M. Ceperley, G. K.-L. Chan, J. A. Gomez, E. Gull, S. Guo, C. A. Jimenez-Hoyos, T. N. Lan, J. Li, F. Ma, A. J. Millis, N. V. Prokof'ev, U. Ray, G. E. Scuseria, S. Sorella, E. M. Stoudenmire, Q. Sun, I. S. Tupitsyn, S. R. White, D. Zgid, and S. Zhang (Simons Collaboration on the Many-Electron Problem), Towards the solution of the many-electron problem in real materials: Equation of state of the hydrogen chain with state-of-the-art many-body methods, Phys. Rev. X **7**, 031059 (2017). * Zheng _et al._ [2017]B.-X. Zheng, C.-M. Chung, P. Corboz, G. Ehlers, M.-P. Qin, R. M. Noack, H. Shi, S. R. White, S. Zhang, and G. K.-L. Chan, Stripe order in the underdoped region of the two-dimensional hubbard model, Science **358**, 1155 (2017). * Lee _et al._ [2020]J. Lee, F. D. Malone, and D. R. Reichman, The performance of phaseless auxiliary-field quantum Monte Carlo on the ground state electronic energy of benzene, J. Chem. Phys. **153**, 126101 (2020). * Shi and Zhang [2021]H. Shi and S. Zhang, Some recent developments in auxiliary-field quantum Monte Carlo for real materials, J. Chem. Phys. **154**, 24107 (2021). * Zhang and Krakauer [2003]S. Zhang and H. Krakauer, Quantum monte carlo method using phase-free random walks with slater determinants, Phys. Rev. Lett. **90**, 136401 (2003). * Motta and Zhang [2018]M. Motta and S. Zhang, Ab initio computations of molecular systems by the auxiliary-field quantum monte carlo method, WIREs Comput. Mol. Sci. **8**, e1364 (2018). * Morales and Malone [2020]M. A. Morales and F. D. Malone, Accelerating the convergence of auxiliary-field quantum Monte Carlo in solids with optimized Gaussian basis sets, J. Chem. Phys. 
**153**, 194111 (2020).
* Dubrovinskaia _et al._ [2019] N. Dubrovinskaia, S. Petitgirard, S. Chariton, R. Tucoulou, J. Garrevoet, K. Glazyrin, H.-P. Liermann, V. B. Prakapenka, and L. Dubrovinsky, B1-B2 phase transition in MgO at ultrahigh static pressure, arXiv preprint arXiv:1904.00476 (2019).
* Bouchet _et al._ [2019] J. Bouchet, F. Bottin, V. Recoules, F. Remus, G. Morard, R. M. Bolis, and A. Benuzzi-Mounaix, Ab initio calculations of the B1-B2 phase transition in MgO, Phys. Rev. B **99**, 094113 (2019).
* (30) D. Alfe, Melting curve of MgO from first-principles simulations, Phys. Rev. Lett. **94**, 235701 (2005).
* (31) N. De Koker and L. Stixrude, Self-consistent thermodynamic description of silicate liquids, with application to shock melting of MgO periclase and MgSiO3 perovskite, Geophys. J. Int. **178**, 162 (2009).
* (32) A. B. Belonoshko, S. Arapan, R. Martonak, and A. Rosengren, MgO phase diagram from first principles in a wide pressure-temperature range, Phys. Rev. B **81**, 054110 (2010).
* (33) B. Boates and S. A. Bonev, Demixing instability in dense molten MgSiO\({}_{3}\) and the phase diagram of MgO, Phys. Rev. Lett. **110**, 135504 (2013).
* (34) D. Cebulla and R. Redmer, Ab initio simulations of MgO under extreme conditions, Phys. Rev. B **89**, 134107 (2014).
* (35) T. Taniuchi and T. Tsuchiya, The melting points of MgO up to 4 TPa predicted based on _ab initio_ thermodynamic integration molecular dynamics, J. Phys.: Condens. Matter **30**, 114003 (2018).
* (36) R. Musella, S. Mazevet, and F. Guyot, Physical properties of MgO at deep planetary conditions, Phys. Rev. B **99**, 064110 (2019).
* (37) F. Soubiran and B. Militzer, Anharmonicity and phase diagram of magnesium oxide in the megabar regime, Phys. Rev. Lett. **125**, 175701 (2020).
* (38) S. Root, L. Shulenburger, R. W. Lemke, D. H. Dolan, T. R. Mattsson, and M. P. Desjarlais, Shock response and phase transitions of MgO at planetary impact conditions, Phys. Rev. Lett. **115**, 198501 (2015).
* (39) K. Miyanishi, Y. Tange, N. Ozaki, T. Kimura, T. Sano, Y. Sakawa, T. Tsuchiya, and R. Kodama, Laser-shock compression of magnesium oxide in the warm-dense-matter regime, Phys. Rev. E **92**, 23103 (2015).
* (40) R. M. Bolis, G. Morard, T. Vinci, A. Ravasio, E. Bambrink, M. Guarguaglini, M. Koenig, R. Musella, F. Remus, J. Bouchet, N. Ozaki, K. Miyanishi, T. Sekine, Y. Sakawa, T. Sano, R. Kodama, F. Guyot, and A. Benuzzi-Mounaix, Decaying shock studies of phase transitions in MgO-SiO2 systems: Implications for the super-Earths' interiors, Geophys. Res. Lett. **43**, 9475 (2016).
* (41) L. E. Hansen, D. E. Fratanduono, S. Zhang, D. G. Hicks, T. Suer, Z. K. Sprowal, M. F. Huff, X. Gong, B. J. Henderson, D. N. Polsin, M. Zaghoo, S. X. Hu, G. W. Collins, and J. R. Rygg, Melting of magnesium oxide up to two terapascals using double-shock compression, Phys. Rev. B **104**, 014106 (2021).
* Hubbard [1959] J. Hubbard, Calculation of partition functions, Phys. Rev. Lett. **3**, 77 (1959).
* Ceperley and Alder [1980] D. M. Ceperley and B. J. Alder, Ground state of the electron gas by a stochastic method, Phys. Rev. Lett. **45**, 566 (1980).
* Zhang _et al._ [1997] S. Zhang, J. Carlson, and J. E. Gubernatis, Constrained path Monte Carlo method for fermion ground states, Phys. Rev. B **55**, 7464 (1997).
* Shee _et al._ [2019] J. Shee, B. Rudshteyn, E. J. Arthur, S. Zhang, D. R. Reichman, and R. A.
Friesner, On achieving high accuracy in quantum chemical calculations of 3d transition metal-containing systems: A comparison of auxiliary-field quantum Monte Carlo with coupled cluster, density functional theory, and experiment for diatomic molecules, J. Chem. Theory Comput. **15**, 2346 (2019).
* Purwanto _et al._ [2009] W. Purwanto, S. Zhang, and H. Krakauer, Excited state calculations using phaseless auxiliary-field quantum Monte Carlo: Potential energy curves of low-lying C2 singlet states, J. Chem. Phys. **130**, 094107 (2009).
* Borda _et al._ [2018] E. J. L. Borda, J. A. Gomez, and M. A. Morales, Non-orthogonal multi-Slater determinant expansions in auxiliary field quantum Monte Carlo, arXiv preprint arXiv:1801.10307 (2018).
* Mahajan and Sharma [2021] A. Mahajan and S. Sharma, Taming the sign problem in auxiliary-field quantum Monte Carlo using accurate wave functions, J. Chem. Theory Comput. **17**, 4786 (2021).
* Qin _et al._ [2016] M. Qin, H. Shi, and S. Zhang, Benchmark study of the two-dimensional Hubbard model with auxiliary-field quantum Monte Carlo method, Phys. Rev. B **94**, 085103 (2016).
* Chang and Morales [2017] C.-C. Chang and M. A. Morales, Multi-determinant generalized Hartree-Fock wave functions in Monte Carlo calculations, arXiv preprint arXiv:1711.02154 (2017).
* Giannozzi _et al._ [2009] P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, A. D. Corso, S. de Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari, and R. M. Wentzcovitch, QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials, J. Phys.: Condens. Matter **21**, 395502 (2009).
* (52) P. Giannozzi, O. Andreussi, T. Brumme, O. Bunau, M. Buongiorno Nardelli, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, M. Cococcioni, N. Colonna, I. Carnimeo, A. Dal Corso, S. de Gironcoli, P. Delugas, R. A. DiStasio, Jr., A. Ferretti, A. Floris, G. Fratesi, G. Fugallo, R. Gebauer, U. Gerstmann, F. Giustino, T. Gorni, J. Jia, M. Kawamura, H.-Y. Ko, A. Kokalj, E. Kucukbenli, M. Lazzeri, M. Marsili, N. Marzari, F. Mauri, N. L. Nguyen, H.-V. Nguyen, A. Otero-de-la-Roza, L. Paulatto, S. Ponce, D. Rocca, R. Sabatini, B. Santra, M. Schlipf, A. P. Seitsonen, A. Smogunov, I. Timrov, T. Thonhauser, P. Umari, N. Vast, X. Wu, and S. Baroni, Advanced capabilities for materials modelling with Quantum ESPRESSO, J. Phys.: Condens. Matter **29**, 465901 (2017).
* (53) N. H. F. Beebe and J. Linderberg, Simplifications in the generation and transformation of two-electron integrals in molecular calculations, Int. J. Quantum Chem. **12**, 683 (1977).
* (54) H. Koch, A. S. de Meras, and T. B. Pedersen, Reduced scaling in electronic structure calculations using Cholesky decompositions, J. Chem. Phys. **118**, 9481 (2003).
* (55) F. Aquilante, L. De Vico, N. Ferre, G. Ghigo, P.-A. Malmqvist, P. Neogrady, T. B. Pedersen, M. Pitonak, M. Reiher, B. O. Roos, L. Serrano-Andres, M. Urban, V. Veryazov, and R. Lindh, MOLCAS 7: The next generation, J. Comput. Chem. **31**, 224 (2010).
* (56) W. Purwanto, H. Krakauer, Y. Virgus, and S. Zhang, Assessing weak hydrogen binding on Ca+ centers: An accurate many-body study with large basis sets, J. Chem. Phys. **135**, 164105 (2011).
* (57) D. R.
Hamann, Optimized norm-conserving Vanderbilt pseudopotentials, Phys. Rev. B **88**, 085117 (2013).
* (58) J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized gradient approximation made simple, Phys. Rev. Lett. **77**, 3865 (1996).
* (59) J. Kim, A. T. Baczewski, T. D. Beaudet, A. Benali, M. C. Bennett, M. A. Berrill, N. S. Blunt, E. J. L. Borda, M. Casula, D. M. Ceperley, S. Chiesa, B. K. Clark, R. C. Clay III, K. T. Delaney, M. Dewing, K. P. Esler, H. Hao, O. Heinonen, P. R. C. Kent, J. T. Krogel, I. Kylanpaa, Y. W. Li, M. G. Lopez, Y. Luo, F. D. Malone, R. M. Martin, A. Mathuriya, J. McMinis, C. A. Melton, L. Mitas, M. A. Morales, E. Neuscamman, W. D. Parker, S. D. P. Flores, N. A. Romero, B. M. Rubenstein, J. A. R. Shea, H. Shin, L. Shulenburger, A. F. Tillack, J. P. Townsend, N. M. Tubman, B. V. D. Goetz, J. E. Vincent, D. C. Yang, Y. Yang, S. Zhang, and L. Zhao, QMCPACK: an open source ab initio quantum Monte Carlo package for the electronic structure of atoms, molecules and solids, J. Phys.: Condens. Matter **30**, 195901 (2018).
* Hohenberg and Kohn [1964] P. Hohenberg and W. Kohn, Inhomogeneous electron gas, Phys. Rev. **136**, B864 (1964).
* Kresse and Furthmuller [1996] G. Kresse and J. Furthmuller, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Phys. Rev. B **54**, 11169 (1996).
* Blochl _et al._ [1994] P. E. Blochl, O. Jepsen, and O. K. Andersen, Improved tetrahedron method for Brillouin-zone integrations, Phys. Rev. B **49**, 16223 (1994).
* Perdew and Zunger [1981] J. P. Perdew and A. Zunger, Self-interaction correction to density-functional approximations for many-electron systems, Phys. Rev. B **23**, 5048 (1981).
* Perdew _et al._ [2008] J. P. Perdew, A. Ruzsinszky, G. I. Csonka, O. A. Vydrov, G. E. Scuseria, L. A. Constantin, X. Zhou, and K. Burke, Restoring the density-gradient expansion for exchange in solids and surfaces, Phys. Rev. Lett. **100**, 136406 (2008).
* Sun _et al._ [2015] J. Sun, A. Ruzsinszky, and J. P. Perdew, Strongly constrained and appropriately normed semilocal density functional, Phys. Rev. Lett. **115**, 036402 (2015).
* Heyd _et al._ [2006] J. Heyd, G. E. Scuseria, and M. Ernzerhof, Erratum: "Hybrid functionals based on a screened Coulomb potential" [J. Chem. Phys. 118, 8207 (2003)], J. Chem. Phys. **124**, 219906 (2006).
* Jeanloz [1988] R. Jeanloz, Universal equation of state, Phys. Rev. B **38**, 805 (1988).
* Vinet _et al._ [1987] P. Vinet, J. R. Smith, J. Ferrante, and J. H. Rose, Temperature effects on the universal equation of state of solids, Phys. Rev. B **35**, 1945 (1987).
* Birch [1978] F. Birch, Finite strain isotherm and velocities for single-crystal and polycrystalline NaCl at high pressures and 300\({}^{\circ}\)K, J. Geophys. Res.: Solid Earth **83**, 1257 (1978).
* Togo and Tanaka [2015] A. Togo and I. Tanaka, First principles phonon calculations in materials science, Scr. Mater. **108**, 1 (2015).
* Togo [2023] A. Togo, First-principles phonon calculations with phonopy and phono3py, J. Phys. Soc. Jpn. **92**, 012001 (2023).
* Mermin [1965] N. D. Mermin, Thermal properties of the inhomogeneous electron gas, Phys. Rev. **137**, A1441 (1965).
* Nose [1984] S. Nose, A unified formulation of the constant temperature molecular dynamics methods, J. Chem. Phys. **81**, 511 (1984).
* Zhang _et al._ [2022] S. Zhang, M. A. Morales, R. Jeanloz, M. Millot, S. X. Hu, and E. Zurek, Nature of the bonded-to-atomic transition in liquid silica to TPa pressures, J. Appl. Phys. **131**, 071101 (2022).
* Dubrovinskaia _et al._ [2019] N. Dubrovinskaia, S. Petitgirard, S. Chariton, R. Tucoulou, J. Garrevoet, K. Glazyrin, H.-P. Liermann, V. B. Prakapenka, and L. Dubrovinsky, B1-B2 phase transition in MgO at ultrahigh static pressure (2019), arXiv:1904.00476 [cond-mat.mtrl-sci].
* Fei [1999] Y. Fei, Effects of temperature and composition on the bulk modulus of (Mg,Fe)O, Am. Mineral. **84**, 272 (1999).
* Speziale _et al._ [2001] S. Speziale, C.-S. Zha, T. S. Duffy, R. J. Hemley, and H.-k. Mao, Quasi-hydrostatic compression of magnesium oxide to 52 GPa: Implications for the pressure-volume-temperature equation of state, J. Geophys. Res. **106**, 515 (2001).
* Jacobsen _et al._ [2008] S. D. Jacobsen, C. M. Holl, K. A. Adams, R. A. Fischer, E. S. Martin, C. R. Bina, J.-F. Lin, V. B. Prakapenka, A. Kubo, and P. Dera, Compression of single-crystal magnesium oxide to 118 GPa and a ruby pressure gauge for helium pressure media, Am. Mineral. **93**, 1823 (2008).
* Marsh [1980] S. P. Marsh, _LASL Shock Hugoniot Data_ (University of California Press, Berkeley, 1980).
* Vassiliou and Ahrens [1981] M. S. Vassiliou and T. J. Ahrens, Hugoniot equation of state of periclase to 200 GPa, Geophys. Res. Lett. **8**, 729 (1981).
* Fratanduono _et al._ [2013] D. E. Fratanduono, J. H. Eggert, M. C. Akin, R. Chau, and N. C. Holmes, A novel approach to Hugoniot measurements utilizing transparent crystals, J. Appl. Phys. **114**, 043518 (2013).
* Svendsen and Ahrens [1987] B. Svendsen and T. J. Ahrens, Shock-induced temperatures of MgO, Geophys. J. R. Astron. Soc. **91**, 667 (1987).
* [83] We encounter imaginary-mode problems when using SCAN for phonon calculations at some of the densities. Therefore, we only use PBEsol for the QHA and finite-temperature QMD calculations.
* Karato [2003] S.-I. Karato, _The Dynamic Structure of the Deep Earth: An Interdisciplinary Approach_ (Princeton Univ. Press, 2003), Chap. 4.
* Lee _et al._ [2021] J. Lee, M. A. Morales, and F. D. Malone, A phaseless auxiliary-field quantum Monte Carlo perspective on the uniform electron gas at finite temperatures: Issues, observations, and benchmark study, J. Chem. Phys. **154**, 64109 (2021).
* Shen _et al._ [2020] T. Shen, Y. Liu, Y. Yu, and B. M. Rubenstein, Finite temperature auxiliary field quantum Monte Carlo in the canonical ensemble, J. Chem. Phys. **153**, 204108 (2020).
* Chen and Zhang [2023] S. Chen and S. Zhang, Computation of forces and stresses in solids: towards accurate structural optimizations with auxiliary-field quantum Monte Carlo (2023), arXiv:2302.07460 [cond-mat.mtrl-sci].
* (88) This example is for \(T\)=0 K. At finite temperatures, we consider isotherms and use Helmholtz free energies \(F(V)\) for the common-tangent approach or Gibbs free energies \(G(P)\) for the crossover-point approach.
* (89) D. Alfe, PHON: A program to calculate phonons using the small displacement method, Comput. Phys. Commun. **180**, 2622 (2009).
* (90) T. Sun, D. B. Zhang, and R. M. Wentzcovitch, Dynamic stabilization of cubic CaSiO3 perovskite at high temperatures and pressures from ab initio molecular dynamics, Phys. Rev. B **89**, 094109 (2014).
* (91) R. Jeanloz, Effect of coordination change on thermodynamic properties, in _High-Pressure Research in Geophysics_, edited by S. Akimoto and M. Manghnani, pp. 479-498, Center for Academic Publishing, Tokyo, Japan, 1982.
* (92) R. Jeanloz and M. Roufosse, Anharmonic properties: ionic model of the effects of compression and coordination change, J. Geophys. Res. **87**, 10763 (1982).
* (93) W. F.
Van Gunsteren, X. Daura, and A. E. Mark, Computation of free energy, Helvetica Chimica Acta **85**, 3113 (2002).
* (94) J. H. Jung, P. Srinivasan, A. Forslund, and B. Grabowski, High-accuracy thermodynamic properties to the melting point from ab initio calculations aided by machine-learning potentials, npj Computational Materials **9**, 3 (2023).
* (95) Note that the experimental data by Bolis _et al._ shown here are scanned from Fig. 8 of Ref. [29] and lifted by 1300 K to approximately match the original publication (Fig. 2 of Ref. [40]). Unfortunately, the original data in Ref. [40] are not available and it is unclear where the mismatch came from.
By applying auxiliary-field quantum Monte Carlo, we calculate the equation of state (EOS) and B1-B2 phase transition of magnesium oxide (MgO) up to 1 TPa. The results agree with available experimental data at low pressures and are used to benchmark the performance of various exchange-correlation functionals in density functional theory calculations. We determine PBEsol is an optimal choice for the exchange-correlation functional and perform extensive phonon and quantum molecular-dynamics calculations to obtain the thermal EOS. Our results provide a preliminary reference for the EOS and B1-B2 phase boundary of MgO from zero up to 10,500 K.
# Towards a full Atmospheric Calibration system for the Cherenkov Telescope Array

M. Doro\({}^{1}\), M. Gaug\({}^{2,3}\), O. Blanch\({}^{4}\), L. Font\({}^{2,3}\), D. Garrido\({}^{2,3}\), A. Lopez-Oramas\({}^{4}\), M. Martinez\({}^{4}\)

\({}^{1}\) University and INFN Padova, via Marzolo 8, 35131 Padova, Italy
\({}^{2}\) Fisica de les Radiacions, Departament de Fisica, Universitat Autonoma de Barcelona, 08193 Bellaterra, Spain
\({}^{3}\) CERES, Universitat Autonoma de Barcelona-IEEC, 08193 Bellaterra, Spain
\({}^{4}\) Institut de Fisica d'Altes Energies, 08193 Bellaterra, Spain

E-mail: [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]

## 1 Introduction

Currently in its design stage, the Cherenkov Telescope Array (CTA) is an advanced facility for ground-based very-high-energy gamma-ray astronomy [1]. It is an international initiative to build the next-generation Cherenkov telescope array covering the energy range from a few tens of GeV to a few hundreds of TeV with an unprecedented sensitivity. The design of CTA is based on currently available technologies and builds upon the success of the present generation of ground-based Cherenkov telescope arrays (H.E.S.S., MAGIC and VERITAS [1]). Nowadays, the main contribution to the systematic uncertainties of imaging Cherenkov telescopes stems from the uncertainty in the height- and wavelength-dependent atmospheric transmission for a given run of data. MAGIC cites a contribution of 10% to the uncertainty of their energy scale [2] and 12% additional uncertainty on the flux due to run-by-run variations, while H.E.S.S. quotes 10% for the atmospheric profile, and 15% from run-by-run atmospheric variations [3].
Both estimates are based upon data recorded during clean atmospheric conditions and have to be considered lower limits for the general case of data taken under moderately acceptable atmospheric conditions. Atmospheric quality affects the measured Cherenkov yield in several ways: the air-shower development itself, the loss of photons due to scattering and absorption of Cherenkov light out of the camera field-of-view, resulting in dimmer images, and the scattering of photons into the camera, resulting in blurred images. Despite the fact that several supplementary instruments are currently used to measure the atmospheric transparency, their data are only used to retain good-quality observation time slots, and only a minor effort has been made to routinely correct data with atmospheric information [4, 5, 6]. It is envisaged that the world-wide community of scientists using CTA data will be serviced with high-level data. It is moreover foreseeable that CTA will observe many more spectral features than the current generation of Imaging Atmospheric Cherenkov Telescopes (IACTs), probably also resolving finer structures. To achieve this goal, the atmosphere must be monitored continuously and precisely such that observatory data can be corrected before dissemination. This requires the extensive use of remote-sensing instrumentation such as LIDARs, possibly complemented by additional atmospheric monitoring devices.

## 2 Effects of the atmosphere on data reconstruction

Although IACTs are normally placed at astronomical sites, characterized by extremely good atmospheric conditions, the local atmosphere is potentially influenced by phenomena occurring tens to thousands of kilometers away, and thus should be continuously monitored. Of the various atmospheric layers, only the troposphere (reaching up to \(\sim\)15 km), sometimes parts of the tropopause and, in the case of stratospheric eruptions, the lower stratosphere (15-20 km) are affected by variations of their chemical (and thus optical) properties. Air molecules can travel to the top of the troposphere (from 7 to 20 km depending on the latitude) and back down in a few days; this mixing makes the characteristics of the layer change quickly. While the molecular content of the atmosphere varies very slowly at a given location during the year, and slowly from place to place, aerosol concentrations can vary on timescales of minutes and travel large, inter-continental distances. Most aerosols are concentrated within the first 3 km of the troposphere, with the free troposphere above being orders of magnitude cleaner. Aerosol sizes range from molecular dimensions to millimeters, and the particles remain in the troposphere from 10 days to 3 weeks. The sizes are strongly dependent on relative humidity. Different types of aerosol show characteristic size distributions, and an astronomical site will always show a mixture of types, with one possibly dominant type at a given time and/or altitude. Light scattering and absorption by aerosols need to be described by Mie theory or further developments of it, including non-sphericity of the scatterer. Aerosols usually have larger refractive indices than water, typically with a small imaginary part. Contrary to the typical \(\lambda^{-4}\) wavelength dependency of Rayleigh scattering by molecules, aerosols show power-law indices (the so-called _Ångström_ coefficients) from 0 to 1.5, i.e. a much weaker dependency on wavelength.
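As a small numerical illustration of this difference, the following Python snippet compares how much stronger the extinction is at 355 nm than at 532 nm for molecules and for an aerosol; the Ångström coefficient of 1.0 is an assumed value chosen only for the example:

```python
# Extinction contrast between 355 nm and 532 nm: Rayleigh scattering
# scales as lambda^-4, aerosols as lambda^-a with Angstrom coefficient a.
ratio = 532.0 / 355.0
print(f"Rayleigh (lambda^-4): x{ratio**4:.1f} stronger at 355 nm")   # ~5.0
print(f"Aerosol (a = 1.0):    x{ratio**1.0:.1f} stronger at 355 nm")  # ~1.5
```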
In order to estimate the effect of different atmospheric conditions on the image analysis of IACTs, we have simulated different molecular and aerosol profiles for the MAGIC system, consisting of two telescopes. The results are presented elsewhere in this conference [7]. Several aerosol scenarios have been simulated: enhancements of the ground layer from a quasi aerosol-free case up to a thick layer which reduces optical transmission by 70%, a cloud layer at altitudes of 6 km, 10 km (cirrus) and 14 km (volcano debris) a.s.l., and a 6 km cloud layer with varying aerosol densities. We found, as expected, that the aerosol and cloud layer height, besides the density and type, affect the data differently, and that the position of this overdensity should be known precisely. In other words, the _total_ extinction (or the Aerosol Optical Depth) is not a good parameter for all cases, and using only the integral extinction may often lead to large systematic errors. For this reason, height-resolving instruments are required. We believe that the main findings of this study should also be valid for CTA, at least in the energy range from 50 GeV to 50 TeV, although efforts have started to repeat the same simulations for CTA. Previous studies have been made [8, 4, 5] for H.E.S.S. and for the MAGIC mono system, however only for an increase of low-altitude aerosol densities, and in [9] for a reference configuration of CTA, claiming a change in the spectral power-law index of gamma-ray fluxes when atmospheric aerosol layers are present. In our work, we found that different atmospheres affect the energy threshold, the energy resolution and the energy bias, which propagate into the computation of a target flux and spectral reconstruction. See [7] for further details.

## 3 Raman LIDARs for CTA

Atmospheric properties can be derived, to a certain extent, directly from IACT data. Several studies have been made by the H.E.S.S. and MAGIC collaborations to estimate the integral atmospheric transmission, using trigger rates, muon rates, combinations of both [6], or the anode currents of the photomultipliers and/or pedestal RMS. Up to now, these parameters have been used only to discard data taken under non-optimal conditions, but work is ongoing to use this information to correct the data themselves. However, as stated above, the use of integral transmission parameters is only valid in some of the possible atmospheric scenarios, namely those where the aerosol enhancement is found in the ground layer, or where clouds are low (below a few km a.g.l.), since the integral transmission parameters lack information about the layer height. For layers at higher altitudes, trigger rates with different cuts in image size and the stereo shower parameters themselves could eventually be used; however, studies on these possibilities are not yet conclusive. For this reason, we have investigated the possibilities of using remote sensing devices such as the LIDAR [16]. LIDAR is an acronym for Light Detection And Ranging. The methodology of the LIDAR technique requires the transmission of a laser-generated light pulse into the atmosphere (see Fig. 1). The amount of laser light backscattered into the field of view of an optical receiver on the ground is then recorded and analyzed. LIDARs have proven to be a powerful tool for environmental studies.
Successful characterization of the atmosphere has been made at night using these systems [10, 11, 12], and in other fields of astronomy, the LIDAR technique has proven to be useful for the determination of the atmospheric extinction of starlight [13]. Of the various kinds of LIDARs, the so-called elastic ones make use only of the elastically backscattered light from atmospheric constituents, while Raman LIDARs make use also of the backscattered light from roto-vibrational excitation of atmospheric molecules. Elastic LIDARs are the simplest class of LIDAR, but their backscatter power return depends on two unknown physical quantities (the total optical extinction and backscatter coefficients) which need to be inferred from a single measurement. As a result, various assumptions need to be made, or boundary calibrations introduced, limiting the precision of the height-dependent atmospheric extinction to worse than 20%. The introduction of additional elastic channels and/or Raman (inelastic-scattering) channels allows for simultaneous and independent measurement of the extinction and backscatter coefficients with no need for _a priori_ assumptions [11]. Raman LIDARs yield a precision of the atmospheric extinction of better than 5%.

Figure 1: Schematic view of a possible Raman LIDAR for the CTA. A laser is pointed towards the atmosphere, and the backscattered light collected by a telescope. At the focal plane, a light guide transports the light to a polychromator unit which is controlled and read out by an acquisition system and a data processor unit.

The LIDAR return signal can be fully described by the LIDAR equation:

\[P(R,\lambda_{rec})=K\,\frac{G(R)}{R^{2}}\,\beta(R,\lambda_{em})\,T^{\uparrow}(R,\lambda_{em})\,T^{\downarrow}(R,\lambda_{rec})\quad, \tag{1}\]

which contains a system factor \(K\) (emitted power, pulse duration, collection area of the telescope), a geometrical overlap factor (overlap of the telescope field-of-view with the laser light cone) \(G(R)\), the molecular and aerosol backscatter coefficient \(\beta(R,\lambda_{em})\), and the transmission terms \(T^{\uparrow}(R,\lambda_{em})\) for the outgoing and \(T^{\downarrow}(R,\lambda_{rec})\) for the returning light. \(R\) is the atmospheric range, i.e. the distance from the LIDAR optical receiver, and \(\lambda_{em,rec}\) are the emitted and received wavelengths. Using the elastic and Raman-scattered profiles, the atmospheric extinction coefficients \(\alpha^{m,p}\) (\(m\) stands for molecules and \(p\) stands for particles or aerosols) can be derived using formulas such as:

\[\alpha^{p}(R,\lambda_{0})=\frac{\frac{d}{dR}\ln\left(\frac{N_{N_{2}}(R)}{R^{2}\,P(R,\lambda_{N_{2}})}\right)-\alpha^{m}(R,\lambda_{0})-\alpha^{m}(R,\lambda_{N_{2}})}{1+\left(\frac{\lambda_{0}}{\lambda_{N_{2}}}\right)^{\text{å}}}, \tag{2}\]

where \(\lambda_{0}\) is the elastic wavelength (355 or 532 nm in our case), \(\lambda_{N_{2}}\) is the corresponding Raman-shifted N\({}_{2}\) backscattered wavelength (387 or 607 nm), \(N_{N_{2}}\) is the nitrogen number density, and å is the Ångström coefficient. Eq. (2) has only the Ångström index as a free parameter (if only one elastic-Raman wavelength pair is used), and this leads to a good precision on \(\alpha^{p}\), because over- or underestimating the Ångström index by 0.5 leads to only 5% relative error in the extinction factor.
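As a rough illustration of how Eq. (2) can be evaluated in practice, the Python sketch below differentiates the logarithm of the range-corrected Raman profile numerically. The function name, array contents, and the fixed Ångström value are assumptions for illustration only; real analyses smooth the derivative (e.g., with sliding-window fits) because numerical differentiation amplifies noise.

```python
import numpy as np

def aerosol_extinction(R, P_raman, N_n2, alpha_mol_0, alpha_mol_n2,
                       lam0=355.0, lam_n2=387.0, angstrom=1.0):
    """Evaluate Eq. (2): aerosol extinction at lam0 from the N2 Raman
    channel at lam_n2 (wavelengths in nm; profiles on the range grid R).
    Inputs are assumed background-subtracted; all names are illustrative."""
    ratio = N_n2 / (R**2 * P_raman)         # N_N2(R) / (R^2 P(R, lam_N2))
    dlog = np.gradient(np.log(ratio), R)    # d/dR of the log ratio
    return (dlog - alpha_mol_0 - alpha_mol_n2) / (1.0 + (lam0 / lam_n2) ** angstrom)
```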
Hence, apart from statistical uncertainties (which can be minimized by averaging many LIDAR return signals), results are typically precise to about 5-10% **in each altitude bin**, and probably even better in clear free tropospheres with only one aerosol layer. The uncertainty generally grows with increasing optical depth of the layer. By adding a **second Raman line**, e.g. the N\({}_{2}\) line at 607 nm, the last free Ångström parameter becomes fixed, and precisions of **better than 5%** can be achieved for the aerosol extinction coefficients. The molecular extinction part needs to be plugged in by hand using a convenient model. However, since the molecular densities change very little, and on large time scales, this can be achieved by standard tools. Precisions of typically better than 2% are rather easy to achieve. The experience of MAGIC with an elastic LIDAR system (i.e. analyzing only one backscatter wavelength, and no Raman lines) has shown that simplified reconstruction algorithms can be used to achieve good precision of the aerosol extinction coefficients, at least within the range of uncertainties inherent to an elastic LIDAR [14]. An analogous conclusion was reached with the H.E.S.S. LIDAR: a stable analysis algorithm was found, limited by the 30% uncertainties of the time- and range-dependent LIDAR ratio.

## 4 Raman LIDAR characterization

Several institutes in CTA are currently designing Raman LIDAR systems: the Institut de Fisica d'Altes Energies (IFAE) and the Universitat Autonoma de Barcelona (UAB), located in Barcelona (Spain), the LUPM (Laboratoire Univers et Particules de Montpellier) in Montpellier (France) and the CEILAP (Centro de Investigaciones Laser y sus Aplicaciones) group in Villa Martelli (Argentina) [15]. The groups are independently designing the LIDAR systems with different mechanical, optical and steering solutions. In order to assess the performance of Raman LIDARs for CTA, we use the current baseline design of the Barcelona LIDAR, which is also presented elsewhere in this conference [16]. It consists of a 1.8 m diameter parabolic mirror equipped with a powerful Nd:YAG laser. The outgoing laser beam at 355 and 532 nm is directed towards the telescope pointing axis in a co-axial configuration, ensuring full near-range overlap at little more than a hundred meters. In the design of the optical readout module, special care has been taken to minimize signal losses throughout the entire light collection scheme, especially for the two dim Raman lines at 387 and 607 nm. For the Barcelona LIDAR, a so-called _link-budget_ figure-of-merit calculation has been performed showing that the dimmest Raman line will be detected from a distance of 15 km (see Fig. 2) with a signal-to-noise ratio of 10 after only one minute. This short integration time (non-standard for typical LIDAR usage) is required for CTA because the LIDAR operation should not interfere with the experiment data taking. For example, it could be possible to perform LIDAR campaigns entirely during the telescope repositioning time.

Figure 2: Estimated return power from the link-budget simulation of the Barcelona Raman LIDAR. The horizontal red line is the background power calculated for a typical night-sky background at an astronomical site.

Figure 3: Scheme of a possible data analysis flow in case the atmospheric model is used at the data level.
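The scaling behind such a link-budget estimate can be illustrated in a few lines of Python: for shot-noise-limited detection, the signal-to-noise ratio grows with the square root of the number of averaged laser shots. All numbers below are illustrative assumptions, not the actual Barcelona link-budget parameters.

```python
import numpy as np

# Shot-noise-limited S/N after averaging n laser shots: the signal adds
# linearly while Poisson noise adds in quadrature.
signal_per_shot = 4.0        # photoelectrons from 15 km per shot (assumed)
background_per_shot = 50.0   # night-sky background photoelectrons (assumed)
rep_rate_hz = 20.0           # typical Nd:YAG repetition rate (assumed)

n_shots = rep_rate_hz * 60.0             # one minute of data
snr = (n_shots * signal_per_shot /
       np.sqrt(n_shots * (signal_per_shot + background_per_shot)))
print(f"{n_shots:.0f} shots -> S/N ~ {snr:.0f}")   # ~19 in this toy setup
```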
## 5 Strategies for data reconstruction

Once the atmospheric extinction profile is determined, the data taken with the CTA observatory need to be corrected accordingly. This can be achieved by re-calibrating the data themselves, either event-wise or bin-wise (Fig. 3), or by simulating adapted atmospheres (Fig. 4). We have seen from our simulations that data affected by enhancements of the ground layer can be scaled rather easily up to high levels of extinction, and probably no dedicated MC simulation is needed for each set of data. This is no longer the case for (cirrus) clouds at altitudes from 6 to 12 km a.s.l., which create strong energy-dependent effects on the scaling factors. Moreover, the images are distorted depending on the location of the shower maximum, which varies even for showers of the same energy. Very high altitude layers, in turn, produce effects only on the very low energy gamma-ray showers. Depending on the properties of the chosen site for CTA, still to be decided, it would probably make sense to create 10-20 typical atmospheric simulations within these possibilities and interpolate between them.

## 6 Complementing devices

Apart from the Raman LIDAR, complementary devices for atmospheric characterization and the understanding of the site climatology can be used. A first class of devices comprises those which provide at least some profiling of the atmosphere, such as radio sondes, profiling microwave or infrared radiometers, and differential optical absorption spectrometers. The operating wavelengths of these devices are very different from those of the Raman LIDAR, and precise conversion of their results to the spectral sensitivity window of the CTA is difficult. However, since aerosols are more visible at larger wavelengths, profiling devices may be used to determine cloud heights with high precision, and their results may be good seeds for the Raman LIDAR data inversion algorithm. A second class of complementary devices contains those which measure integral parameters, such as Sun, Lunar and stellar photometers, UV-scopes and starguiders. Integral optical depth measurements have become world-wide standards, organized in networks ensuring proper (cross-)calibration of all devices. Spectacular resolutions of better than 1% can be obtained during the day, about 2% with moon, and 3% under dark night conditions, at wavelength ranges starting from about 400 nm. Extrapolation to the wavelength range between 300 and 400 nm worsens the resolution again. The precise results from these devices can serve as important cross-checks of the integrated differential Raman LIDAR transmission tables. Finally, all major astronomical observatories operate cloud detection devices, mainly all-sky cameras, and/or take advantage of national weather radars. All-sky cameras have become standardized within the CONCAM or the TASCA networks; however, important differences in sensitivity to cirrus clouds are reported. The advantage of these devices is their big field-of-view, which allows clouds to be localized over the entire sky and makes online adapted scheduling of source observations possible. Relatively cheap cloud sensors based on pyrometers or thermopiles have been tested by the MAGIC collaboration and the SITE WP of CTA. The calibration of these devices is however complex, and measurements are easily disturbed by surrounding installations. Recent analyses can be found in [17, 18].

## 7 Conclusions

The CTA observatory will have two arrays of telescopes, one in the Southern and one in the Northern hemisphere, at sites not yet chosen.
Although the astronomical sites are expected to have extremely good atmospheric conditions for most of the year, with dry clean air, the experience with H.E.S.S., MAGIC and VERITAS has shown that non-monitored atmospheric variations introduce systematic effects on the data, which limit the energy and flux reconstruction. With the goal of producing high-quality data for CTA, various groups are currently developing instruments for atmospheric monitoring and calibration. In particular, our Monte Carlo studies have shown that integral atmospheric transmission parameters are not sufficient to fully characterize the atmosphere (related to the fact that gamma-ray showers are initiated at altitudes between the stratosphere and troposphere) and that range-resolved measurements are advisable. The natural solution is the use of (Raman) LIDARs, which were described in this contribution. In addition, the use of complementary instruments that measure integral or differential (in altitude) atmospheric parameters is possible and envisaged. Once the differential atmospheric transparency is retrieved, different strategies are foreseen to accurately and precisely reconstruct the data, ultimately reducing the reconstructed energy and flux uncertainties. In addition, it would be possible to increase the duty cycle of the telescopes by recovering those data taken during non-optimal atmospheric conditions which are normally discarded by standard clean-atmosphere analyses; this is especially important during e.g. multi-wavelength campaigns or target-of-opportunity observations. Finally, the use of atmospheric instruments will allow for continuous weather nowcasts and forecasts.

**Acknowledgment:** We gratefully acknowledge support from the agencies and organizations listed in this page: [http://www.cta-observatory.org/?q=node/22](http://www.cta-observatory.org/?q=node/22)

## References

* [1] M. Actis et al., Exper. Astron. 32 (2011) 193-316.
* [2] J. Aleksic et al., Astropart. Phys. 35 (2012) 435-448.
* [3] F. Aharonian et al., A&A 457 (2006) 899-915.
* [4] S. J. Nolan et al., Procs. 30th ICRC (2007).
* [5] D. Dorner et al., A&A 493 (2009) 721-725.
* [6] R. De Los Reyes et al., these proceedings, ID-0610.
* [7] D. Garrido et al., these proceedings, ID-0465.
* [8] K. Bernlohr, Astropart. Phys. 12 (2000) 255-268.
* [9] S. J. Nolan et al., Astropart. Phys. 34 (2010) 304-313.
* [10] H. Inaba, Springer-Verlag, New York (1976) 143.
* [11] A. Ansmann et al., Appl. Opt. 31 (1992) 7113.
* [12] A. Behrendt et al., Appl. Opt. 41 (2002) 7657.
* [13] P. C. Zimmer et al., Procs. AAS Meeting, 42 (2010) 401.
* [14] C. Fruck et al., these proceedings, ID-1054.
* [15] P. Ristori et al., these proceedings, ID-0346.
* [16] A. Lopez-Oramas et al., these proceedings, ID-0210.
* [17] M. Daniel and G. Vasileiadis, AIP Conf. Procs. 1505 (2012) 717-720.
* [18] J. Hahn et al., AIP Conf. Procs. 1505 (2012) 721-724.

Figure 4: Scheme of a possible data analysis flow where the atmospheric information is introduced in the Monte Carlo simulations.
The current generation of Cherenkov telescopes is mainly limited in their gamma-ray energy and flux reconstruction by uncertainties in the determination of atmospheric parameters. The Cherenkov Telescope Array (CTA) aims to provide high-precision data, extending the duty cycle as much as possible. To reach this goal, it is necessary to continuously and precisely monitor the atmosphere by means of remote-sensing devices, which are able to provide altitude-resolved and wavelength-dependent extinction factors, sensitive up to the tropopause and higher. Raman LIDARs are currently the best suited technology to achieve this goal with one single instrument. However, the synergy with other instruments like radiometers, solar and stellar photometers, all-sky cameras, and possibly radio-sondes is desirable in order to provide more precise and accurate results, and allows for weather forecasts and now-casts. In this contribution, we discuss the need for and features of such multifaceted atmospheric calibration systems.

Keywords: CTA, IACT, atmospheric calibration, remote sensing instruments
# Rapid Assessment of Damaged Homes in the Florida Keys after Hurricane Irma

Siyuan Xian\({}^{1,*}\), Kairui Feng\({}^{1}\), Ning Lin\({}^{1}\), Reza Marsooli\({}^{1}\), Dan Chavas\({}^{2}\), Jie Chen\({}^{2}\), Adam Hatzikyriakou\({}^{1}\)

Post-storm damage assessments have been conducted for past hurricanes (e.g., Lindt et al., 2007 for Hurricane Katrina; Wang et al., 2017 and Shao et al., 2017 for general cases). Here, we conduct a rapid damage survey and assessment for Hurricane Irma, and we use a statistical regression approach to quantify the contribution of specific vulnerability factors to the damage. Such rapid post-event assessments can provide crucial information for implementing post-storm response measures (Lin et al., 2014; Horner et al., 2011; Al-Kanj et al., 2016). The raw and analyzed data from this study appear on DesignSafe ([https://www.designsafe-ci.org/#research](https://www.designsafe-ci.org/#research)), a web-based research platform of the National Science Foundation's (NSF) Natural Hazards Engineering Research Infrastructure (NHERI).

#### Damage Survey and Analysis

NOAA's post-storm satellite imagery ([https://storms.ngs.noaa.gov/storms/irma/index.html#6/28.139/-81.547](https://storms.ngs.noaa.gov/storms/irma/index.html#6/28.139/-81.547)) provides an overview of Irma's impact. The two selected survey areas in the Florida Keys, Big Pine Key and Marathon, suffered the most severe damage according to the satellite imagery and experienced high water levels and wave heights, as indicated by hydrodynamic modeling.

Field surveys can provide detailed information for analyzing damage mechanisms. However, traditional on-site surveys require significant time and effort, as surveyors must walk through affected areas and photograph damaged properties. Thus, we applied a rapid survey method: rather than walking, we drove at a speed of 10 mph throughout the affected areas, taking GPS-informed pictures from the rear side windows. Over two days, the team took 3700+ pictures of 1600+ residential buildings comprising single-family and mobile homes (e.g., trailers). Using the collected photos and the satellite images, we categorized the damage state of each surveyed residential house. Satellite images were primarily used to assess roof damage. More detailed damage mechanisms were further evaluated from the photos. We adopted FEMA's damage state criteria used in the damage assessment study for Hurricane Sandy ([https://www.arcgis.com/home/item.html?id=307dd522499d4a44a33d7296a5da5ea0](https://www.arcgis.com/home/item.html?id=307dd522499d4a44a33d7296a5da5ea0)). The categories include: No/very limited damage; Minor damage; Major damage; and Destroyed.

We found that the destroyed and severely damaged buildings were caused largely by hydrodynamic forces induced by storm surge/waves. For example, Fig. 2a shows that storm surge/waves completely crushed the lower part of a building in Big Pine Key. Fig. 2b shows debris from damaged trailers floating in the water in a trailer community in Marathon. The observed storm surge damage is consistent with the high surge and wave heights estimated for the two sites (Figs. 3a and 3b). The assessed damage state of each house appears in Figs. 3c and 3d. The slightly and moderately damaged buildings are 72.7% and 75% of the total surveyed buildings in the assessed areas of Big Pine Key and Marathon, respectively. The percentages of destroyed buildings are 13.9% and 16.9%, respectively. In both areas, the destroyed buildings are clustered.
The destroyed buildings in Big Pine Key are near the coastline and narrow waterways, a strong indication that the damage was caused mainly by hydrodynamic forces. The completely destroyed buildings in Marathon cluster in the north and middle parts of the study area. The majority of those buildings are mobile homes. Statistical analysis confirms these general observations. We use an ordered logistic regression model to correlate the damage state with the following factors: distance from the coastline (m), building type, and building size (m\({}^{2}\)). Our analysis for Big Pine Key shows that the distance from the coastline is the single significant predictor of damage state (p-value \(<\) 0.001; Table 1a), as the damage is dominated by buildings located near narrow waterways connected to the ocean. For Marathon, although many damaged houses are near the coast, house type and house size are the two significant predictors (p-value \(<\) 0.001; Table 1b), highlighting the near-complete destruction of trailers (which are often small). Possible measures to reduce flood vulnerability in the study areas include elevating and strengthening the buildings (especially mobile homes) and relocating homeowners living near the coastline (and narrow waterways) further inland. However, potential financial challenges exist, especially for Marathon, where the median annual income is $50,976 vs. $63,716 for Big Pine Key. Some local homeowners in a destroyed trailer community in Marathon (indicated by the red rectangle in Fig. 3d) with whom we talked had lived in trailers as their primary homes for decades without flood insurance. Financial constraints may hinder their rebuilding or relocating somewhere safer. As low-income people living in mobile homes suffered most, natural hazards worsened economic inequality in this case. In contrast, discussions with local residents in Big Pine Key indicated that many structures there were second homes and, furthermore, were designed to withstand hurricane hazards (e.g., key assets were raised above the ground floor). These observations again raise issues of affordability and equity (Montgomery and Chakraborty, 2015). Policies relevant to hurricane damage recovery and rebuilding must address these issues.
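As a sketch of how such an ordered logistic regression can be fit, the Python snippet below uses statsmodels' OrderedModel on a fabricated data frame; the variable names, coefficients, and sample size are illustrative assumptions, not the study's survey data (which are available on DesignSafe).

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Fabricated survey-like data: damage state (0=none .. 3=destroyed) vs.
# candidate vulnerability factors, mimicking the model structure in Table 1.
rng = np.random.default_rng(0)
n = 300
X = pd.DataFrame({
    "dist_coast_m": rng.uniform(0, 500, n),   # distance from coastline (m)
    "mobile_home": rng.integers(0, 2, n),     # 1 = trailer / mobile home
    "size_m2": rng.uniform(40, 250, n),       # building size (m^2)
})
# Toy latent damage propensity: closer, mobile, smaller -> worse damage
latent = -0.005 * X.dist_coast_m + 1.5 * X.mobile_home - 0.004 * X.size_m2
y = pd.cut(latent + rng.logistic(0, 1, n), bins=4, labels=False)  # codes 0..3

res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())  # coefficients, std. errors, z, p-values as in Table 1
```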
**Acknowledgements:** This study is supported by NSF grant CMMI-1652448.

## References:

* Al-Kanj, L., Powell, W. B., & Bouzaiene-Ayari, B. (2016). The information-collecting vehicle routing problem: Stochastic optimization for emergency storm response. _arXiv preprint arXiv:1605.05711_.
* Dietrich, J. C., Tanaka, S., Westerink, J. J., Dawson, C. N., Luettich, R. A., Jr., Zijlema, M., Holthuijsen, L. H., Smith, J. M., Westerink, L. G., & Westerink, H. J. (2012). Performance of the unstructured-mesh, SWAN+ADCIRC model in computing hurricane waves and surge. _Journal of Scientific Computing_, 52(2), 468-497. doi:10.1007/s10915-011-9555-6.
* Hatzikyriakou, A., Lin, N., Gong, J., Xian, S., Hu, X., & Kennedy, A. (2015). Component-based vulnerability analysis for residential structures subjected to storm surge impact from Hurricane Sandy. _Natural Hazards Review_, 17(1), 05015005.
* Horner, M. W., & Widener, M. J. (2011). The effects of transportation network failure on people's accessibility to hurricane disaster relief goods: A modeling approach and application to a Florida case study. _Natural Hazards_, 59(3), 1619-1634.
* Lin, C. C., Siebeneck, L. K., Lindell, M. K., Prater, C. S., Wu, H. C., & Huang, S. K. (2014). Evacuees' information sources and reentry decision making in the aftermath of Hurricane Ike. _Natural Hazards_, 70(1), 865-882.
* Marsooli, R., & Lin, N. (2017). Numerical modeling of storm tides and waves induced by historical tropical cyclones along the East and Gulf Coasts of the United States. _Journal of Geophysical Research: Oceans_, submitted.
* Montgomery, M. C., & Chakraborty, J. (2015). Assessing the environmental justice consequences of flood risk: A case study in Miami, Florida. _Environmental Research Letters_, 10(9), 095010.
* Shao, W., Xian, S., Keim, B. D., Goidel, K., & Lin, N. (2017). Understanding perceptions of changing hurricane strength along the US Gulf coast. _International Journal of Climatology_, 37(4), 1716-1727.
* Wang, C., Zhang, H., Feng, K., & Li, Q. (2017). Assessing hurricane damage costs in the presence of vulnerability model uncertainty. _Natural Hazards_, 85(3), 1621-1635.
* Xian, S., Lin, N., & Hatzikyriakou, A. (2015). Storm surge damage to residential areas: A quantitative analysis for Hurricane Sandy in comparison with FEMA flood map. _Natural Hazards_, 79(3), 1867-1888.
* Xian, S., Lin, N., & Kunreuther, H. (2017). Optimal house elevation for reducing flood-related losses. _Journal of Hydrology_, 548, 63-74.

Figures & Tables:

Figure 2: Photos of damage in (a) Big Pine Key: storm surge damage beside a waterway (left side of building) and (b) Marathon: trailer community with house debris filling the waterway.

Figure 3: Spatial distribution of estimated hazards and damage states in the study areas. (a) and (b) show the simulated maximum total water level and significant wave height, respectively; (c) and (d) show the assessed damage state (none: green; minor: yellow; major: orange; destroyed: red) for residential buildings in Big Pine Key and Marathon, respectively.

Table 1: Ordered logistic regression models that correlate damage state with vulnerability factors (a) for 846 assessed buildings in Big Pine Key; (b) for 811 buildings in Marathon.

\begin{table}
\begin{tabular}{l|c|c|c|c|c}
(a) Factors in damage state & Coef. & Std. Err. & z & p-value & 95\% conf. interval \\ \hline
House Type & 0.0233 & 0.1987 & 0.12 & 0.906 & (-0.366, 0.413) \\
House Size (square meters) & -0.00081 & 0.00059 & -1.36 & & (-0.00198, 0.00035) \\
\end{tabular}
\end{table}
To understand the hazard and inform the field survey, we first use the coupled hydrodynamic and wave model ADCIRC+SWAN (Dietrich et al., 2012; Marsooli and Lin, 2017) to simulate the storm tide (i.e., water level) and wave height for Hurricane Irma. To simulate Irma's storm tide and waves (Figure 1), we apply the surface wind (at 10 m) and sea-level pressure fields from the National Centers for Environmental Prediction Final (NCEP FNL) operational global analysis data (0.25\\({}^{\\circ}\\) \\(\\times\\) 0.25\\({}^{\\circ}\\), 6-hourly). The model results, e.g., the time series in Figure 1, indicate that the model satisfactorily captures the temporal evolution and the peak values of the water levels and wave heights induced by Hurricane Irma. The model results show that the highest water levels, between 2 and 2.5 m, occurred in South/Southwest Florida. However, coastal zones in this region are predominantly uninhabited and covered by wetlands, so little loss of life or property is expected. High water levels are also estimated for the Florida Keys, especially islands located on the right side of the storm track. For example, the peak storm tide in Big Pine Key and Marathon reaches up to 2 m. The model results also show that large waves with a significant wave height of about 14 m reached a few kilometers off the Florida Keys. In contrast, wave heights off the southern and southwestern coasts of Florida were small (\\(<2\\) m).
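As an illustration of this workflow, the sketch below reads gridded forcing and extracts peak simulated hazards per mesh node. The file names and variable names (`u10`, `v10`, `prmsl`, `zeta`, `hs`) are hypothetical stand-ins for the NCEP FNL forcing and ADCIRC+SWAN output conventions, which vary by setup.

```python
# Sketch: inspect NCEP FNL surface forcing and extract peak simulated
# hazards. All names are assumptions, not the authors' actual files.
import numpy as np
import xarray as xr

# 10-m winds and sea-level pressure on the 0.25-degree, 6-hourly FNL grid.
fnl = xr.open_dataset("fnl_irma_201709.nc")
u10, v10, slp = fnl["u10"], fnl["v10"], fnl["prmsl"]
wind_speed = np.sqrt(u10**2 + v10**2)
print(float(wind_speed.max()), "m/s peak 10-m wind in the forcing")

# Model output: water level (zeta) and significant wave height (hs)
# as time series on the unstructured mesh nodes.
out = xr.open_dataset("adcirc_swan_output.nc")
peak_tide = out["zeta"].max(dim="time")  # peak storm tide per node (m)
peak_hs = out["hs"].max(dim="time")      # peak significant wave height (m)
print(float(peak_tide.max()), "m maximum storm tide in the domain")
```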
# LSENet: Location and Seasonality Enhanced Network for Multi-Class Ocean Front Detection

Cui Xie, Hao Guo, and Junyu Dong

This work was supported in part by the National Key Research and Development Program of China under Grant 2020YFE0201200, the National Natural Science Foundation of China (NSFC) under Grants 41706010, U1706218, and 41927805, and the Fundamental Research Funds for the Central Universities under Grant 201964022. (Corresponding author: Junyu Dong.) C. Xie, H. Guo and J. Dong are with the School of Computer Science and Technology, Ocean University of China, Qingdao 266100, China (e-mail: [email protected] (C. Xie); [email protected] (J. Dong)).

## I Introduction

Ocean fronts refer to the narrow transition zone between two or more water masses of different physical properties, which can be described by the horizontal gradient of sea temperature, salinity, density, chlorophyll and other marine hydrological elements. In ocean front zones, strong local horizontal and vertical flows lead to the accumulation of nutrients and plankton. Therefore, fronts have a great influence on maritime and pelagic fishing planning. In addition, ocean fronts have an obvious reflection and refraction effect on the propagation of underwater sound, which affects marine military and other underwater operations. Thus, it is of great significance and value to identify ocean fronts. Traditional ocean front detection methods are mainly based on gradient algorithms [1]-[3]. These methods depend heavily on the experience of ocean experts to determine the gradient threshold, which is unstable, time-consuming and not automatic. Moreover, these methods are seriously affected by noisy data, so their detection accuracy is low. In recent years, with the development of deep learning, some ocean front detection methods based on neural networks have been proposed. Sun _et al._ [4] proposed an ocean front detection method based on a convolutional neural network, but the algorithm could only detect whether an ocean front occurs (binary classification) at patch-level accuracy. Li _et al._ [5] regarded ocean front detection as an edge detection problem and proposed the weak edge identification network (WEIN), which can identify the existence of an ocean front at the pixel level. However, all the above ocean front detection methods ignore the detection of different classes of ocean fronts in different sea areas. In practical research, researchers often focus on a specific class of ocean front in a specific sea area, or compare the differences between different classes of ocean fronts. Therefore, the detection of different classes of ocean fronts is urgently needed for more comprehensive analysis. Although Cao _et al._ [6] have studied the detection of multi-class ocean fronts, their proposed method is still based on a gradient algorithm, and its detection accuracy and generalization ability are not satisfactory. In order to solve the above problems, we propose a new semantic segmentation network model called the location and seasonality enhanced network (LSENet). In addition, a channel supervision unit structure and a location attention mechanism are designed to integrate the seasonality and spatial dependence of ocean fronts, so as to improve the detection accuracy of the model. The main contributions of this paper are as follows: 1) We model multi-class detection of ocean fronts as a semantic segmentation problem and propose LSENet to address it.
2) We propose a channel supervision unit structure, which integrates the seasonality of ocean fronts into the feature learning process of the model. 3) We propose a new location attention mechanism, which enables the model to adaptively allocate weights to ocean fronts in different sea areas, thus improving the multi-class detection accuracy of the model. The remainder of this paper is organized as follows. Section II provides an overview of the related work. Section III describes the datasets we used. Section IV describes the details of the LSENet model. Experiments and some visualization results are presented in Section V. Section VI concludes the paper.

## II Related Work

### _Semantic Segmentation_

Currently, semantic segmentation is one of the hot research topics in the field of computer vision. Its goal is to classify every pixel in the image [7]. It has been widely used in biomedical image segmentation [8], automatic driving [9] and other fields. Long _et al._ [10] proposed FCN (Fully Convolutional Networks) in 2015, which is considered a pioneering work in semantic segmentation. Only convolutional layers are used in this network model, and information fusion across different scales is added. Ronneberger _et al._ [11] proposed UNet, which performs well in the case of weakly supervised learning and is still a common baseline in the field of biomedical image segmentation. Zhao _et al._ [12] proposed PSPNet on the basis of FCN and achieved good performance on some semantic segmentation datasets (such as the Cityscapes dataset [13]) by introducing a pyramid pooling module. In addition, the Deeplab series of models proposed by Chen _et al._ [14]-[17] performs well in semantic segmentation tasks, with the Xception structure significantly reducing the number of parameters and improving the speed of model operation. This method is also one of the most frequently chosen baselines in the field of semantic segmentation. Based on the successful application of semantic segmentation in many fields, we introduce the idea of semantic segmentation into the ocean front detection problem to accomplish the task of multi-class ocean front detection at the pixel level.

### _Attention Mechanisms_

In recent years, attention mechanisms have been widely used in the field of semantic segmentation. Hu _et al._ [18] proposed a kind of channel attention in their SENet, which can help models capture global semantic information at the channel level. Fu _et al._ [19] proposed channel correlation attention and position correlation attention in order to capture the correlation between pixels. Choi _et al._ [20] found that in urban-scene images, the sky basically appears in the upper part of the image, while the road usually exists in the lower part of the image. Therefore, they proposed a height attention, which enabled the model to capture the relationship between height and classes and improved the segmentation accuracy. Inspired by the work of Choi _et al._ [20], we propose a location attention that aims to ensure that the model learns the spatial dependence between the ocean front classes and the corresponding sea areas. In other words, it makes sure the model knows that a front occurring near the Bohai Sea in China is more likely to be a Bohai Coastal Front than a Korean Strait Front.

## III Datasets

It is worth mentioning that there is no multi-class labeled data in current ocean front research. So, we built a dataset of ocean fronts to train the model.
Firstly, we collected the Advanced Very High Resolution Radiometer (AVHRR) sea surface temperature (SST) daily data (2015 to 2018) from the National Oceanic and Atmospheric Administration (NOAA). Then, we intercepted a piece of data near the sea area of China, with longitude ranging from 117.975°E to 130.975°E and latitude ranging from 23.025°N to 40.025°N. Next, we computed the gradient of the daily SST data and generated 1461 SST gradient maps of size 340 \\(\\times\\) 260. Finally, we invited experts in ocean front research to help label these SST gradient maps; the labeled results are shown in Fig. 1. According to the experts' experience and existing research [21]-[24], the ocean fronts near the sea area of China were divided into 11 classes, namely: (1) Zhejiang-Fujian Front, (2) Yellow Sea Warm Current Front, (3) Kuroshio Front, (4) Northern East China Sea Shelf Front, (5) Jiangsu Front, (6) Shandong Peninsula Front, (7) Bohai Sea Coastal Front, (8) Bohai Sea Strait Front, (9) Yangtze Estuary Front, (10) Western Korea Coastal Front, and (11) Korea Strait Front. In order to evaluate the performance of the model comprehensively and accurately, we selected the data of the whole year of 2018 (365 samples in total) as the test set, to ensure that the test set contained ocean fronts from all seasons of the year. We used the remaining data from 2015 to 2017 (a total of 1096 samples) as the training set. In order to prevent overfitting and increase the amount of data at the same time, we used random photometric distortion, random cropping and random flipping to augment the data. Moreover, in order to facilitate data augmentation and model downsampling, we used zero padding to reshape the original input images to \\(352\\times 352\\times 3\\).

Fig. 1: Labeled data. (a) original SST gradient map. (b) labeled result.
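A minimal sketch of the gradient-map computation described above is given below. The exact gradient operator used by the authors is not specified, so numpy's central differences serve as a stand-in, and data loading is omitted.

```python
# Sketch: build one SST gradient-magnitude map from a day of AVHRR SST.
# `sst` is assumed to be a 340 x 260 float array with NaN over land.
import numpy as np

def sst_gradient_map(sst: np.ndarray) -> np.ndarray:
    dy, dx = np.gradient(sst)    # per-pixel finite differences
    grad = np.hypot(dx, dy)      # gradient magnitude
    return np.nan_to_num(grad)   # zero out land/missing pixels

# One map per day for 2015-2018 yields the 1461 maps described above.
```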
## IV Model Description

### _Overview_

Fig. 2 shows the overview of our model. It consists of two parts: (1) the encoder-decoder part, which is responsible for feature extraction and feature processing; (2) the head part, which is responsible for outputting the detection results. The input image first goes through 5 encoders and 4 decoders. The role of the encoder is to learn the features of the image and downsample it, so as to increase the receptive field of the convolution kernels and reduce the computation of the network. The function of the decoder is to analyze the feature map generated by the encoder, and to upsample the feature map step by step back to the original input size. Moreover, in order to reduce the loss of image information caused by downsampling, concatenate operations are added between the encoders and decoders. Finally, the multi-class detection results are generated through the head part.

Fig. 2: An overview of the LSENet.

### _Encoder-Decoder Part_

Fig. 3 shows the structure of an encoder. The encoder contains two branches. The branch at the bottom is the image feature processing branch, which is composed of two convolution blocks. Its main function is to extract and analyze the features of the image. In this branch, the feature map \\(\\mathbf{A}\\in\\mathbb{R}^{H\\times W\\times C_{0}}\\) is fed into two convolution blocks to generate a new feature map \\(\\mathbf{B1}\\in\\mathbb{R}^{H\\times W\\times C}\\). The upper branch is the channel supervision unit proposed in this paper. The design idea of the channel supervision unit is to let the model comprehensively analyze the input feature map at the channel level, judge the class of fronts contained in the feature map, and introduce the seasonality of ocean fronts into the model in the form of a seasonal feature encoding (a minimal sketch of this unit appears at the end of this section). In this branch, the feature map \\(\\mathbf{A}\\in\\mathbb{R}^{H\\times W\\times C_{0}}\\) first goes through a global average pooling layer, and its dimension is compressed to \\(\\mathbb{R}^{1\\times 1\\times C_{0}}\\). Then a fully connected layer (ReLU activation) is used to reduce the number of channels to \\(C/2\\) for seasonal feature insertion. After concatenating the monthly one-hot encoded seasonal features, the number of channels becomes \\(12+C/2\\), and then a fully connected layer (ReLU activation) is used to generate the feature map \\(\\mathbf{B2}\\in\\mathbb{R}^{1\\times 1\\times C}\\). After summing the feature map \\(\\mathbf{B2}\\) with each pixel of \\(\\mathbf{B1}\\), the output feature map \\(\\mathbf{B}\\in\\mathbb{R}^{\\frac{H}{2}\\times\\frac{W}{2}\\times C}\\) of the encoder is obtained by downsampling. The structure of the decoder and encoder is basically the same; the only difference is that the last sampling layer of the decoder uses upsampling. All encoder and decoder parameters used in our model are shown in Table I.

Fig. 3: The structure of an encoder.

### _Head Part_

The head part consists of a detection branch and an attention branch (as shown in Fig. 4). The detection branch at the bottom is relatively simple: it only contains a convolutional layer with a 1\\(\\times\\)1 kernel to map the number of channels to the desired number of classes \\(N\\) (\\(N=12\\)), and the model detection result is then obtained through a Softmax function. For the upper location attention branch, a given feature map \\(\\mathbf{A}\\in\\mathbb{R}^{H\\times W\\times C}\\) is first fed into an average pooling layer to obtain the feature map \\(\\mathbf{C1}\\in\\mathbb{R}^{\\frac{H}{r}\\times\\frac{W}{r}\\times C}\\), where \\(r\\) is the downsampling coefficient, which is empirically set to 11 in this paper. Under this value, the spatial size of feature map \\(\\mathbf{C1}\\) is \\(32\\times 32\\). Then, the feature map \\(\\mathbf{C1}\\) is fed into a convolution block for feature learning, and the size of the feature map is reshaped to \\(\\frac{H}{r}\\times\\frac{W}{r}\\times C\\). Each pixel in this feature map can be considered a high abstraction of the features of a region of size \\(r\\times r\\). At the same time, in order to enhance the model's sensitivity to geographical location, a location encoding is added into the attention branch. Next, the location-encoded feature map is input into several convolution blocks for feature learning, and the result is converted into values between 0 and 1 through a convolution block with a Sigmoid activation function. These values represent the attention weights of the model for the different channels of each region. The resulting attention weight map \\(\\mathbf{C2}\\in\\mathbb{R}^{\\frac{H}{r}\\times\\frac{W}{r}\\times N}\\) is upsampled by bilinear interpolation and element-wise multiplied with the result of the detection branch. Finally, an element-wise sum of the multiplication result and the detection branch result is performed to further optimize the detection result.

Fig. 4: The structure of the head part.
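Before detailing the location encoding, here is the minimal Keras sketch of the channel supervision unit promised above. The paper's experiments use Keras/TensorFlow, but the layer choices and names here are illustrative, not the authors' code; the encoder's final downsampling is omitted.

```python
# Sketch of the channel supervision unit (Sec. IV): GAP -> Dense(C/2)
# -> concat 12-dim month one-hot -> Dense(C) -> broadcast add onto B1.
import tensorflow as tf
from tensorflow.keras import layers

def channel_supervision_unit(feat_a, b1, month_onehot, channels):
    """feat_a: (H, W, C0) input A; b1: (H, W, C) conv-branch output B1;
    month_onehot: (12,) seasonal encoding; channels: C."""
    x = layers.GlobalAveragePooling2D()(feat_a)            # -> (C0,)
    x = layers.Dense(channels // 2, activation="relu")(x)  # -> (C/2,)
    x = layers.Concatenate()([x, month_onehot])            # -> (12 + C/2,)
    b2 = layers.Dense(channels, activation="relu")(x)      # -> (C,)
    b2 = layers.Reshape((1, 1, channels))(b2)
    return b1 + b2   # broadcast sum of B2 with every pixel of B1
```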
The location encoding uses the positional encoding 2D method of Wang _et al._ [25], which can encode location order not only from left to right, but also from top to bottom. The formulas are as follows:

\\[PE(x,y,2i)=\\sin\\big{(}x/10000^{4i/D}\\big{)} \\tag{1}\\]
\\[PE(x,y,2i+1)=\\cos\\big{(}x/10000^{4i/D}\\big{)} \\tag{2}\\]
\\[PE(x,y,2j+D/2)=\\sin\\big{(}y/10000^{4j/D}\\big{)} \\tag{3}\\]
\\[PE(x,y,2j+1+D/2)=\\cos\\big{(}y/10000^{4j/D}\\big{)} \\tag{4}\\]

where \\(x\\) and \\(y\\) represent the values of the horizontal and vertical coordinates, with value ranges \\([0,H]\\) and \\([0,W]\\) respectively; \\(i\\) and \\(j\\) index the channels, with values in \\([0,D/2]\\); and \\(D\\) represents the total number of channels in the feature map. The location encoding is completed by element-wise summing the result of positional encoding 2D with the feature map to be encoded.

## V Experiments

### _Parameter Setting and Evaluation Metrics_

Based on the datasets generated in Section III, all experiments in this paper are performed on an NVIDIA RTX 3080 GPU and implemented using the open source deep learning libraries Keras [26] and TensorFlow [27]. In order to compare with other semantic segmentation models, we set the same parameters for our model and the compared models: the batch size is set to 4, the number of training epochs to 80, cross entropy is used as the loss function, Adam [28] as the optimizer, and 1e-3 as the initial learning rate. A learning rate decay strategy is used to halve the learning rate when the loss on the validation set does not drop for 3 consecutive epochs. We use mean intersection over union (mIoU) as the evaluation metric. The mIoU is a commonly used evaluation method in the field of semantic segmentation. The essence of intersection over union (IoU) is to calculate the ratio of the intersection and the union of the ground truth and the predicted segmentation sets. This ratio can be expressed as true positives (TP) divided by the sum of false positives (FP), false negatives (FN), and true positives (TP):

\\[IoU=TP/(FP+FN+TP) \\tag{5}\\]

The mIoU is the average of the IoU over classes:

\\[mIoU=\\frac{1}{k+1}\\sum_{i=0}^{k}\\frac{p_{ii}}{\\sum_{j=0}^{k}p_{ij}+\\sum_{j=0}^{k}p_{ji}-p_{ii}} \\tag{6}\\]

where \\(p_{ij}\\) represents the number of pixels that belong to class \\(i\\) but are detected as class \\(j\\) (\\(p_{ij}\\) corresponds to TP when \\(i=j\\) and to FN when \\(i\\neq j\\)). Similarly, \\(p_{ji}\\) with \\(i\\neq j\\) corresponds to FP. And \\(k\\) represents the total number of classes.
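As a concrete reference, the following numpy sketch implements the 2-D positional encoding of Eqs. (1)-(4) and the mIoU of Eq. (6). The handling of classes absent from both prediction and ground truth is a convention choice not specified in the text.

```python
# Sketch: 2-D positional encoding (Eqs. 1-4) and mIoU (Eq. 6).
# Array layout is (H, W, D); D is assumed divisible by 4.
import numpy as np

def positional_encoding_2d(h, w, d):
    pe = np.zeros((h, w, d))
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    for i in range(d // 4):
        div = 10000 ** (4 * i / d)
        pe[..., 2 * i] = np.sin(x / div)              # Eq. (1)
        pe[..., 2 * i + 1] = np.cos(x / div)          # Eq. (2)
        pe[..., 2 * i + d // 2] = np.sin(y / div)     # Eq. (3)
        pe[..., 2 * i + 1 + d // 2] = np.cos(y / div) # Eq. (4)
    return pe  # added element-wise to the feature map being encoded

def miou(pred, gt, k):
    """pred, gt: integer label maps with classes 0..k (incl. background)."""
    ious = []
    for c in range(k + 1):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        denom = tp + fp + fn
        if denom:                  # skip classes absent on both sides
            ious.append(tp / denom)
    return float(np.mean(ious))
```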
### _Comparison with Other Semantic Segmentation Models_

In order to evaluate the detection results of our model, several semantic segmentation models are used for comparison:

1) **FCN-8s** [10] is a fully convolutional network without fully connected layers, with deconvolutional layers introduced to increase the data size, which can output fine results.

2) **UNet** [11] uses an encoder-decoder structure; it performs well in weakly supervised learning and is widely used in the field of biomedical image segmentation.

3) **SegNet** [29] uses max-pooling indices that record the location of the maximum pixels, which alleviates the problem of information loss.

4) **Deeplabv3+** [16] uses dilated convolutions and the Xception structure to improve the speed of the algorithm, and performs well in the field of semantic segmentation.

5) **PSPNet** [12] uses a pyramid pooling module to aggregate background information and is widely used in the field of semantic segmentation.

6) **DANet** [19] is able to capture the correlation between pixels by using the two attention mechanisms of channel correlation and position correlation.

7) **HANet** [20] adds height attention on top of Deeplabv3+, which enables the model to capture the relationship between height and classes.

In addition, PSPNet and DANet are each evaluated with ResNet50 [30] and ResNet101 [30] backbones to process and learn features. Table II shows the comparison results, from which we can see that our model performs well, especially for the detection of front4, which has a large variety of morphology. Although FCN and PSPNet perform well in some applications, they do not perform well for front detection, because both methods contain upsampling layers with factors larger than 2\\(\\times\\), which causes information loss when detecting strip-shaped fronts. In addition, the performance of DANet (ResNet101) is not as good as that of DANet (ResNet50); this may be because the number of DANet (ResNet101) parameters is too large for this dataset, leading to slight overfitting and thus affecting the detection results of the model. We also visualized the detection results (Fig. 5). Compared with other models, we can clearly see the superiority of our model's detection results, especially for front3 (at the bottom of the image), front4 (in the middle of the image) and front7 (at the top left corner of the image).

Fig. 5: Comparison with other semantic segmentation models. (a) FCN-8s. (b) UNet. (c) SegNet. (d) Deeplabv3+. (e) PSPNet (ResNet50). (f) PSPNet (ResNet101). (g) DANet (ResNet50). (h) DANet (ResNet101). (i) HANet. (j) our proposed model. (k) ground truth.

### _Ablation Experiment_

Ablation experiments were carried out to demonstrate the validity of the channel supervision unit (CSU) and the location attention mechanism (LA). In this experiment, we define Basenet as our model with the CSU and LA removed. For the CSU test, two seasonal feature encoding methods are designed: one-hot encoding by month (a feature vector of length 12) and one-hot encoding by season (a feature vector of length 4). For the LA test, besides the positional encoding 2D, CoordConv [31] is also used for location encoding. Compared with positional encoding 2D, the implementation of CoordConv is relatively simple and fast: it only needs to concatenate the horizontal and vertical pixel coordinates channel-wise to the feature map and then carry out a convolution operation. Table III shows the results of the ablation experiment. On the whole, both the CSU and LA improve the detection accuracy of the model, and the improvement from the CSU is greater than that from LA. Moreover, adding the seasonal and location encodings is better than the method without them.

Fig. 6: Attention maps of one day in June; the green curve in the figure shows the edge of the ocean front. (a) ground truth. (b) background. (c) front4. (d) front5. (e) front6. (f) front7. (g) front8. (h) front9. (i) front10. (j) front11.

The generation, evolution and extinction of ocean fronts usually last for many days. As shown in Fig. 8, the ocean fronts of a random week in 2018 are visualized, clearly showing the similarity of shapes during the evolution of ocean fronts (for example, the westward convergence of front3 and the southward extension of front5).
Therefore, in the future, temporal continuity could be introduced through a specific network structure to further improve the detection accuracy of our model.

## VI Conclusion

In this paper, we propose a semantic segmentation model called LSENet for multi-class detection of ocean fronts at the pixel level. In this model, we consider the different physical characteristics of ocean fronts and introduce seasonality and spatial dependence through the channel supervision unit structure and the location attention mechanism, respectively, to improve detection accuracy. We carry out extensive comparison experiments on our own dataset and demonstrate the superiority of our model and its specially designed structures. In the future, we will integrate the temporal continuity of ocean fronts into our model through a new network structure.

Fig. 9 gives the implementation details of the location attention mechanism. Fig. 10 shows the attention maps in winter, which include all 11 fronts. In the attention map of each front, the sea area where the front frequently occurs is basically given a higher weight. Additionally, front10 in the background attention map (Fig. 10(b)) is misjudged, as in Fig. 6. However, the mechanism behind this phenomenon is still unclear and needs in-depth study.

## References

* [1] I. M. Belkin and J. E. O'Reilly, "An algorithm for oceanic front detection in chlorophyll and SST satellite imagery," _J. Mar. Syst._, vol. 78, no. 3, pp. 319-326, Oct. 2009.
* [2] J. Hopkins, P. Challenor, and A. G. P. Shaw, "A new statistical modeling approach to ocean front detection from SST satellite images," _J. Atmos. Ocean. Technol._, vol. 27, no. 1, pp. 173-191, Jan. 2010.
* [3] G. Kirches _et al._, "GRADHIST--A method for detection and analysis of oceanic fronts from remote sensing data," _Remote Sens. Environ._, vol. 181, pp. 264-280, Aug. 2016.
* [4] X. Sun _et al._, "A Multiscale Deep Framework for Ocean Fronts Detection and Fine-Grained Location," _IEEE Geosci. Remote Sens. Lett._, vol. 16, no. 2, pp. 178-182, Feb. 2019.
* [5] Q. Li _et al._, "Weak Edge Identification Network for Ocean Front Detection," _IEEE Geosci. Remote Sens. Lett._, to be published. DOI: 10.1109/LGRS.2021.3051203.
* [6] W. Cao _et al._, "Automatic Ocean Front Fine Recognition Based on Deep Learning (in Chinese)," _Comput. Eng._, vol. 46, no. 10, pp. 266-274, Dec. 2020.
* [7] A. Garcia-Garcia _et al._ (2017). "A Review on Deep Learning Techniques Applied to Semantic Segmentation." [Online]. Available: https://arxiv.org/abs/1704.06857
* [8] Z. Zhang _et al._, "ET-Net: A Generic Edge-aTtention Guidance Network for Medical Image Segmentation," in _Proc. 10th Int. Workshop Mach. Learn. Med. Imag._, Shenzhen, China, Oct. 13-17, 2019, pp. 442-450.
* [9] H. Zhao _et al._, "ICNet for Real-Time Semantic Segmentation on High-Resolution Images," in _Proc. 15th Eur. Conf. Comput. Vision (ECCV)_, Munich, Germany, Sep. 8-14, 2018, pp. 418-434.
* [10] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in _Proc. IEEE Conf. Comput. Vision and Pattern Recognit. (CVPR)_, 2015, pp. 3431-3440.
* [11] O. Ronneberger, P. Fischer, and T. Brox. (2015). "U-Net: Convolutional Networks for Biomedical Image Segmentation." [Online]. Available: https://arxiv.org/abs/1505.04597v1
* [12] H. Zhao _et al._, "Pyramid Scene Parsing Network," in _Proc. IEEE Conf. Comput. Vision and Pattern Recognit.
(CVPR)_, 2017, pp. 6230-6239, DOI: 10.1109/CVPR.2017.660.
* [13] M. Cordts _et al._, "The Cityscapes Dataset for Semantic Urban Scene Understanding," in _Proc. IEEE Conf. Comput. Vision and Pattern Recognit. (CVPR)_, 2016, pp. 3213-3223, DOI: 10.1109/CVPR.2016.350.
* [14] L. Chen _et al._, "Semantic image segmentation with task-specific edge detection using CNNs and a discriminatively trained domain transform," in _Proc. IEEE Conf. Comput. Vision and Pattern Recognit. (CVPR)_, 2016, pp. 4545-4554, DOI: 10.1109/CVPR.2016.492.
* [15] L. Chen _et al._, "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs," _IEEE Trans. Pattern Anal. and Mach. Intell._, vol. 40, no. 4, pp. 834-848, Apr. 2018.
* [16] L. Chen, G. Papandreou, F. Schroff, and H. Adam. (2017). "Rethinking atrous convolution for semantic image segmentation." [Online]. Available: https://arxiv.org/abs/1706.05587
* [17] L. Chen _et al._, "Encoder-decoder with atrous separable convolution for semantic image segmentation," in _Proc. Eur. Conf. Comput. Vision (ECCV)_, 2018, pp. 801-818.
* [18] J. Hu, L. Shen, and G. Sun, "Squeeze-and-Excitation Networks," in _Proc. IEEE/CVF Conf. Comput. Vision and Pattern Recognit._, 2018, pp. 7132-7141, DOI: 10.1109/CVPR.2018.00745.
* [19] J. Fu _et al._, "Dual Attention Network for Scene Segmentation," in _Proc. IEEE/CVF Conf. Comput. Vision and Pattern Recognit. (CVPR)_, 2019, pp. 3141-3149, DOI: 10.1109/CVPR.2019.00326.
* [20] S. Choi, J. T. Kim, and J. Choo, "Cars Can't Fly Up in the Sky: Improving Urban-Scene Segmentation via Height-Driven Attention Networks," in _Proc. IEEE/CVF Conf. Comput. Vision and Pattern Recognit. (CVPR)_, 2020, pp. 9370-9380.
* [21] R. Hickox _et al._, "Climatology and seasonal variability of ocean fronts in the East China, Yellow and Bohai Seas from satellite SST data," _Geophysical Res. Lett._, vol. 27, no. 18, pp. 2945-2948, 2000.
* [22] C. Chen, "Chemical and physical fronts in the Bohai, Yellow and East China seas," _J. Mar. Syst._, vol. 78, no. 3, pp. 394-410, Oct. 2009.
* [23] S. Ren, W. Hui, and L. Na, "Review of Ocean Front in Chinese Marginal Seas and Frontal Forecasting," _Advances in Earth Sci._, vol. 30, no. 5, pp. 552-563, May 2015.
* [24] B. Chen _et al._, "Ocean Front Sea Area Analysis Method Based on Satellite Remote Sensing Sea Surface Temperature Data (in Chinese)," _Ocean Eng._, vol. 36, no. 2, pp. 108-118, Mar. 2018.
* [25] Z. Wang and J. Liu, "Translating math formula images to LaTeX sequences using deep neural networks with sequence-level training," _Int. J. Document Anal. and Recognit._, vol. 24, no. 1, pp. 63-75, Nov. 2020.
* [26] F. Chollet _et al._ (2015). Keras. [Online]. Available: https://github.com/fchollet/keras
* [27] M. Abadi _et al._ (2015). TensorFlow: Large-scale machine learning on heterogeneous systems. [Online]. Available: http://tensorflow.org/
* [28] D. P. Kingma and J. Ba. (2014). "Adam: A Method for Stochastic Optimization." [Online]. Available: https://arxiv.org/abs/1412.6980
* [29] V. Badrinarayanan, A. Kendall, and R. Cipolla, "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation," _IEEE Trans. Pattern Anal. and Mach. Intell._, vol. 39, no. 12, pp. 2481-2495, Dec. 2017, DOI: 10.1109/TPAMI.2016.2644615.
* [30] K. He _et al._, "Deep Residual Learning for Image Recognition," in _Proc.
IEEE Conf. Comput. Vision and Pattern Recognit. (CVPR)_, 2016, pp. 770-778, DOI: 10.1109/CVPR.2016.90.
* [31] R. Liu _et al._, "An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution," in _Proc. Advances in Neural Inf. Process. Syst. (NeurIPS)_, 2018, pp. 9605-9616.

Figure 10: Attention maps of one day in January; the green curve in the figure shows the edge of the ocean front. (a) ground truth. (b) background. (c) front1. (d) front2. (e) front3. (f) front4. (g) front5. (h) front6. (i) front7. (j) front8. (k) front9. (l) front10. (m) front11.

Figure 9: Implementation details of the location attention mechanism.
Ocean fronts can cause the accumulation of nutrients and affect the propagation of underwater sound, so high-precision ocean front detection is of great significance to marine fishery and national defense. However, current ocean front detection methods either have low detection accuracy or can mostly only detect the occurrence of an ocean front by binary classification, rarely considering the differences in the characteristics of multiple ocean fronts in different sea areas. To solve these problems, we propose a semantic segmentation network called the location and seasonality enhanced network (LSENet) for multi-class ocean front detection at the pixel level. In this network, we first design a channel supervision unit structure, which integrates the seasonal characteristics of the ocean front itself and contextual information to improve detection accuracy. We also introduce a location attention mechanism to adaptively assign attention weights to the fronts according to the sea areas where they frequently occur, which further improves the accuracy of multi-class ocean front detection. Compared with other semantic segmentation methods and a representative ocean front detection method, the experimental results convincingly demonstrate that our method is more effective.

Deep learning, semantic segmentation, ocean front
# Hyperspectral pan-sharpening: a variational convex constrained formulation to impose parallel level lines, solved with ADMM

Alexis Huck, Francois de Vieilleville, Pierre Weiss and Manuel Grizonnet

A. Huck and F. de Vieilleville are with Magellium, Toulouse, France. P. Weiss is with ITAV-USR3505, Université de Toulouse, France. M. Grizonnet is with CNES, Toulouse, France.

## I Introduction

The high spectral resolution of hyperspectral imaging sensors generally implies concessions on spatial resolution due to optics/photonics and cost considerations. If high spatial resolution panchromatic data are available, hyperspectral pan-sharpening can significantly help improve the spatial resolution of sensed hyperspectral images. In the last three decades, pan-sharpening approaches were dedicated to multispectral data. The earliest methods were based on specific _spectral-space transforms_ such as the Hue-Intensity-Saturation (HIS) transform or the Principal Component Analysis (PCA) transform. More recently, spatial frequency based approaches such as the High Pass Filter (HPF) method exploiting multiscale spatial analysis [1] provided improved results. The multiscale spatial analysis framework generally offers very time-efficient performance but lacks the flexibility to consider prior knowledge about the "physics of scene and sensor" (the sensor's Modulation Transfer Function (MTF), sensor noise or any prior information). This aspect has been a limitation for application to hyperspectral pan-sharpening. Thus, recent methods are generally based on variational [2] or Bayesian [3] formulations. In particular, in [2], the authors proposed to consider a term based on the topographic properties of the panchromatic image. This idea stems from [4], where the authors show that most geometrical information of an optical image lies in the set of its gray level-set lines. The proposed algorithm includes three novelties: 1. We propose a constrained convex formulation where the constraints are the fit-to-data terms. This enables easy tuning of the related parameters, which are the (supposedly) known noise variances of the sensors. 2. The proposed minimization algorithm is based on the ADMM. It handles the non-differentiabilities, constraints and special structures of the linear transforms in an efficient way. 3. The formulation takes the MTF (Modulation Transfer Function) into account, which helps refine the fit-to-hyperspectral-data constraint. This is favorable to high spectral fidelity in the pan-sharpened hyperspectral data.

## II Problem formulation

In this paper, we rearrange (hyperspectral) images into vectors in order to allow writing matrix-vector products. Let \\(x=\\begin{pmatrix}x_{1}\\\\ \\vdots\\\\ x_{L}\\end{pmatrix}\\in\\mathds{R}^{LM}\\) and \\(u=\\begin{pmatrix}u_{1}\\\\ \\vdots\\\\ u_{L}\\end{pmatrix}\\in\\mathds{R}^{LN}\\) denote the low spatial resolution (LR) measured hyperspectral image and the (unknown) high spatial resolution hyperspectral image, respectively. The integers \\(L\\) and \\(M\\) represent the number of spectral bands and the number of pixels per band in the low resolution image, respectively. We let \\(p\\in\\mathds{R}^{N}\\) denote the rearranged panchromatic measured image, where \\(N=q^{2}\\times M\\) and \\(q\\geq 1\\) denotes the resolution factor between the low and high resolution images. The linear projection operator which returns the \\(l^{th}\\) spectral band is denoted \\(\\pi_{l}\\).
Then, \\(x_{l}=\\pi_{l}x\\in\\mathds{R}^{M}\\) and \\(u_{l}=\\pi_{l}u\\in\\mathds{R}^{N}\\) are the \\(l^{th}\\) spectral bands of \\(x\\) and \\(u\\) respectively. A model formulation for any spectral band \\(l\\) of the hyperspectral measurements is given by

\\[x_{l}=\\mathbf{D}_{s}\\mathbf{H}_{s}u_{l}+n_{x_{l}}. \\tag{1}\\]

The linear operator \\(\\mathbf{H}_{s}\\in\\mathds{R}^{N\\times N}\\) represents the spatial convolution with the spatial Point Spread Function of the hyperspectral sensor. The linear operator \\(\\mathbf{D}_{s}\\in\\mathds{R}^{M\\times N}\\) is a downsampling operator that keeps one pixel out of every \\(q\\) in the horizontal and vertical directions. Additive sensor noise is considered in the vector \\(n_{x_{l}}\\). We assume that \\(n_{x_{l}}\\sim\\mathcal{N}(0,\\sigma_{x_{l}}^{2})\\), where \\(\\sigma_{x_{l}}^{2}\\) is the noise variance of the \\(l^{th}\\) measured hyperspectral band. A model formulation for the panchromatic image acquisition process is given by

\\[p=\\mathbf{G}u+n_{p} \\tag{2}\\]

where \\(\\mathbf{G}\\in\\mathds{R}^{N\\times LN}\\) is a linear operator which linearly and positively combines the spectral bands with weights equal to the samples of the sensitivity spectral pattern of the panchromatic sensor. The noise of the measured panchromatic image is denoted \\(n_{p}\\sim\\mathcal{N}(0,\\sigma_{p}^{2})\\). Analogously to [2], we exploit the fact that the different spectral bands of hyperspectral images approximately share the same level lines. Such knowledge can be integrated by comparing the gradient of the panchromatic data with the gradient of each hyperspectral image channel. A simple way to measure the discrepancy between the normal fields consists of using the function \\(f\\) below:

\\[f(u)=\\sum_{l=1}^{L}\\sum_{i=1}^{N}\\left|\\left\\langle\\nabla u_{l}(i),\\frac{\\nabla^{\\perp}p(i)}{\\left\\|\\nabla p(i)\\right\\|_{2}}\\right\\rangle_{\\mathds{R}^{2}}\\right| \\tag{3}\\]

where \\(\\nabla=\\left[\\partial_{h}^{\\intercal},\\partial_{v}^{\\intercal}\\right]^{\\intercal}:\\mathds{R}^{N}\\rightarrow\\mathds{R}^{N}\\times\\mathds{R}^{N}\\) is the standard discrete gradient operator, \\(\\partial_{h}\\) and \\(\\partial_{v}\\) are the horizontal and vertical gradient operators respectively, \\(\\left\\langle\\cdot,\\cdot\\right\\rangle_{\\mathds{R}^{2}}\\) is the standard Euclidean dot product in \\(\\mathds{R}^{2}\\) and \\(\\left\\|\\cdot\\right\\|_{2}\\) the associated \\(L_{2}\\) norm. The operator \\(\\nabla^{\\perp}=\\left[-\\partial_{v}^{\\intercal},\\partial_{h}^{\\intercal}\\right]^{\\intercal}:\\mathds{R}^{N}\\rightarrow\\mathds{R}^{N}\\times\\mathds{R}^{N}\\) returns for each pixel a vector orthogonal to the gradient. The functional \\(f\\) has many attractive properties: it is convex in \\(u\\) and can be shown to be meaningful in the continuous setting for functions of bounded variation. In natural scenes, the gradient can be very low in image areas corresponding to homogeneous radiometry of the scene. In such a case, \\(f\\) does not provide much information and an additional regularizing term should be added in the variational formulation. In this work, we use a standard total variation regularizer [5], commonly used for such purposes and adapted in Eq. 4 to multiband images:

\\[TV(u)=\\sum_{l=1}^{L}\\sum_{i=1}^{N}\\left\\|(\\nabla u_{l})(i)\\right\\|_{2} \\tag{4}\\]

where \\(\\left\\|\\cdot\\right\\|_{2}\\) is the \\(L_{2}\\)-norm in \\(\\mathds{R}^{2}\\).
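To fix ideas, the following numpy sketch evaluates \\(f(u)\\) (Eq. 3) and \\(TV(u)\\) (Eq. 4) for a multiband image. Forward differences stand in for the discrete gradient operator, and a small epsilon regularizes the division by \\(\\left\\|\\nabla p\\right\\|_{2}\\); both are implementation choices not fixed by the text.

```python
# Sketch: the geometry term f (Eq. 3) and multiband TV (Eq. 4).
# `u` has shape (L, H, W); `p` has shape (H, W).
import numpy as np

def grad(img):
    gh = np.zeros_like(img); gv = np.zeros_like(img)
    gh[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal differences
    gv[:-1, :] = img[1:, :] - img[:-1, :]   # vertical differences
    return gh, gv

def f_and_tv(u, p, eps=1e-8):
    ph, pv = grad(p)
    norm = np.sqrt(ph**2 + pv**2) + eps
    # unit vector orthogonal to the panchromatic gradient: (-pv, ph)/|grad p|
    nh, nv = -pv / norm, ph / norm
    f_val, tv_val = 0.0, 0.0
    for band in u:
        gh, gv = grad(band)
        f_val += np.abs(gh * nh + gv * nv).sum()   # Eq. (3)
        tv_val += np.sqrt(gh**2 + gv**2).sum()     # Eq. (4)
    return f_val, tv_val
```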
The proposed variational formulation for the hyperspectral pan-sharpening problem is as follows:

\\[\\begin{split}\\hat{u}=\\operatorname*{argmin}_{u}\\ &\\gamma f(u)+(1-\\gamma)TV(u)\\\\ \\text{s.t.}\\ &\\left\\|x_{l}-\\mathbf{D}_{s}\\mathbf{H}_{s}u_{l}\\right\\|_{2}^{2}\\leq M\\sigma_{x_{l}}^{2},\\ \\forall l\\in[1,\\dots,L]\\\\ &\\left\\|p-\\mathbf{G}u\\right\\|_{2}^{2}\\leq N\\sigma_{p}^{2}\\end{split}\\tag{5}\\]

where \\(\\left\\|\\cdot\\right\\|_{2}\\) denotes the \\(L_{2}\\) norm in \\(\\mathds{R}^{M}\\) or \\(\\mathds{R}^{N}\\). In this formulation, \\(\\gamma\\in[0,1]\\) fixes a balance between the two terms \\(f\\) and \\(TV\\). The fit-to-data terms are constraints deriving from the physical models (Eq. 1 and 2). The parameters \\(\\{\\sigma_{x_{l}}\\}_{l\\in\\{1,\\dots,L\\}}\\) and \\(\\sigma_{p}\\) can be _a priori_ given or estimated, which is a strong asset of the variational constrained formulation.

## III ADMM based optimization

The proposed algorithm is called TVLCSP (for Total Variation iso-gray Level-set Curves Spectral Pattern) and its pseudo-code is given in the procedure TVLCSP below. A variant (called TVLC), which does not consider the sensitivity spectral pattern and the fit-to-panchromatic-data constraint, has been developed but is not presented here.

```
 1: procedure TVLCSP(x, p, σ_x, σ_p, α, β)
 2: # Initialization
 3:     λ ← 0                                ▷ a vector of zeroes
 4:     y1 ← ∇(↑x),  y2 ← ∇(↑x),
 5:     y3 ← ↑x,  y4 ← p
 6: # Iterative scheme
 7:     while stop condition not met do
 8:         for all l ∈ {1, …, L} do
 9:             z1^l ← ∇u^l − λ1^l/β
10:             y1^l ← (z1^l / ‖z1^l‖_{2,R²}) · max(‖z1^l‖_{2,R²} − γ/β, 0)
11:             ε ← ⟨y2^l, η⟩_{R²}           ▷ where η = ∇⊥p / ‖∇p‖₂
12:             if ε ≠ 0 then
13:                 y2^l ← ∇u^l − (1/β)(λ2^l + (1−γ)·sign(ε)·η)
                ⋮
34:         u ← (MᵀM)⁻¹ Mᵀ(λ/β + y)
```

In the procedure, the spectral decimation operator maps \\(\\mathds{R}^{LN}\\) to \\(\\mathds{R}^{N}\\). Finally, we define the matrix

\\[\\mathbf{M}=[(\\nabla\\pi_{1})^{\\intercal},\\ldots,(\\nabla\\pi_{L})^{\\intercal},(\\nabla\\pi_{1})^{\\intercal},\\ldots,(\\nabla\\pi_{L})^{\\intercal},\\mathbf{H}_{s}^{\\intercal},\\mathbf{H}_{\\lambda}^{\\intercal}]^{\\intercal} \\tag{6}\\]

The up-sampling operator \\(\\uparrow:\\mathds{R}^{LM}\\rightarrow\\mathds{R}^{LN}\\) spatially up-samples a vectorized hyperspectral image by a factor \\(q\\), and the vector \\(\\mathbb{1}\\) is a vector of ones of suitable dimension. The multiplications and divisions are element-wise. The ADMM procedure introduces an internal parameter, denoted \\(\\beta\\), whose value impacts the convergence speed. Note that the update rule of \\(u\\) (line 34 in the procedure) can be computed in the Fourier domain.
More precisely, \\(\\mathbf{M}\\) is a concatenation of circulant matrices (Eq. 6), each associated with a convolution-type operation either in the two spatial dimensions (all matrices in Eq. 6 but \\(\\mathbf{H}_{\\lambda}^{\\intercal}\\)) or in the spectral dimension (\\(\\mathbf{H}_{\\lambda}^{\\intercal}\\)), so the left product by \\(\\mathbf{M}^{\\intercal}\\) can be performed in the Fourier domain. Additionally, \\(\\mathbf{M}^{\\intercal}\\mathbf{M}\\) is a summation of circulant matrices, hence itself circulant, so the left product by its inverse can also be computed in the Fourier domain. Thus, line 34 of the procedure should be replaced by:

\\[u=\\mathcal{F}^{-1}\\left(\\frac{\\mathcal{F}\\left(\\mathbf{M}^{\\intercal}\\right)\\cdot\\mathcal{F}\\left(\\frac{\\lambda}{\\beta}+y\\right)}{\\mathcal{F}\\left(\\mathbf{M}^{\\intercal}\\mathbf{M}\\right)}\\right) \\tag{7}\\]

where \\(\\mathcal{F}\\) and \\(\\mathcal{F}^{-1}\\) represent the Fourier transform and its inverse, respectively. Currently, the stop condition is a fixed number of iterations, but another approach could be based on the stationarity of \\(y\\), \\(\\lambda\\) and \\(u\\).
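As an illustration of the Fourier-domain solve, the sketch below applies the idea behind Eq. (7) in a deliberately simplified setting: a single band, \\(\\mathbf{M}\\) built from the two gradient filters only, and periodic boundary conditions (the assumption that makes the operators circulant). The full method additionally stacks the MTF blur and the spectral operator.

```python
# Sketch: solve (M^T M) u = M^T r via FFT for M = [grad_h; grad_v],
# assuming periodic boundaries so the operators are circulant.
import numpy as np

h, w = 64, 64
kh = np.zeros((h, w)); kh[0, 0], kh[0, 1] = -1.0, 1.0   # horizontal diff kernel
kv = np.zeros((h, w)); kv[0, 0], kv[1, 0] = -1.0, 1.0   # vertical diff kernel
Kh, Kv = np.fft.fft2(kh), np.fft.fft2(kv)

def solve_u(rhs_h, rhs_v):
    """rhs_h, rhs_v: the gradient-shaped right-hand side (lambda/beta + y)."""
    num = np.conj(Kh) * np.fft.fft2(rhs_h) + np.conj(Kv) * np.fft.fft2(rhs_v)
    den = np.abs(Kh) ** 2 + np.abs(Kv) ** 2
    den[0, 0] = 1.0   # the gradient kills the DC term; this fixes the gauge
    return np.real(np.fft.ifft2(num / den))
```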
## IV Experimental results

We present here results of TVLCSP on AVIRIS [6] and simulated HypXim [7] data. We first extracted a selection of the Cuprite scene (AVIRIS), which represents a mineral area. The 224-spectral-band data has been preprocessed and simulated as follows. 1 - Absorption spectral bands have been removed (bands: 1-6, 106-114, 152-170 and 215-224) to get a reference high resolution hyperspectral image \\(u_{\\text{ref}}\\). 2 - A convex combination of the spectral bands of \\(u_{\\text{ref}}\\) gives the simulated panchromatic data \\(p\\); the weights are the coefficients of the vector \\(\\mathbf{g}=\\left[\\frac{1}{80}\\cdots\\frac{1}{80}\\right]\\). 3 - The low resolution hyperspectral image \\(x\\) has been obtained from Eq. 1 without noise and with \\(\\mathbf{H}_{s}\\) representing an average filter. The chosen algorithm parameters are given in Table I. Visual results are presented in Fig. 1.

Fig. 1: Cuprite scene and processing with wavelet and TVLCSP, for a resolution ratio \\(q=4\\).

In Table II we present a quantitative evaluation and a comparison with a wavelet-based pan-sharpening method [1] using the usual performance metrics: 1 - global quality metrics RMSE and ERGAS; 2 - spectral quality metrics SAM and the spectral dispersion \\(D_{\\lambda}\\) [8]; and 3 - spatial quality metrics FCC [9] and the spatial dispersion \\(D_{s}\\) [8]. Note that \\(D_{\\lambda}\\) and \\(D_{s}\\) are metrics without a reference (ground truth high resolution hyperspectral image) requirement, which is relevant when no reference is available or when the reference is likely to introduce errors in the comparison (the case of our HypXim data) due to noise. Additionally, TVLCSP has been tested on simulated HypXim data, simulated from data acquired in the framework of the Pleiades program. The scene is located in Namibia and a sub-scene has been extracted. Some characteristics of the considered data are given in Table III. The considered sensitivity spectral pattern is shown in Fig. 2(a). Only some (20) of the spectral bands contribute to the panchromatic data, hence only 20 non-zero coefficients in \\(\\mathbf{g}\\), and the presented results only concern these bands. Note that the hyperspectral sensor spatial Point Spread Function (PSF) has been supposed Gaussian, spectrally and spatially invariant, with a parameter tuned experimentally. The visual results are presented in Fig. 2 and the corresponding performance metrics are given in Table IV. We see that the method works well for many resolution ratios. However, the results on HypXim are not as good as those on AVIRIS, probably due to our approximation hypotheses on the sensor parameters and to the presence of noise. Note that the simulated reference image is corrupted by sensor noise whereas TVLCSP provides relatively denoised data estimations, which reduces confidence in the performance metric values.

## V Conclusion

We have tackled the pan-sharpening problem using a variational convex constrained approach with an objective based on the conservation of the set of iso-gray-level lines among spectral bands and on total variation. The fit-to-data constraints have been considered as such mathematically and are based on the signal model and the sensor parameters, including noise statistics. An ADMM scheme, called TVLCSP, has been developed and evaluated on AVIRIS and simulated HypXim data.

## Acknowledgment

To cite this work, please use the reference [10]. The authors would like to thank the CNES for initializing and funding the study and providing the HypXim simulated data.

## References

* [1] T. Ranchin and L. Wald, "Fusion of high spatial and spectral resolution images: the ARSIS concept and its implementation," _PE&RS_, vol. 66, no. 1, pp. 49-61, 2000.
* [2] C. Ballester, V. Caselles, L. Igual, J. Verdera, and B. Rougé, "A variational model for P+XS image fusion," _IJCV_, vol. 69, no. 1, pp. 43-58, 2006.
* [3] M. Joshi and A. Jalobeanu, "MAP estimation for multiresolution fusion in remotely sensed images using an IGMRF prior model," _IEEE TGRS_, vol. 48, no. 3, pp. 1245-1255, 2010.
* [4] V. Caselles, B. Coll, and J.-M. Morel, "Geometry and color in natural images," _JMIV_, vol. 16, no. 2, pp. 89-105, 2002.
* [5] L. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," _Physica D_, vol. 60, no. 1-4, pp. 259-268, 1992.
* [6] "AVIRIS," http://aviris.jpl.nasa.gov/.
* [7] R. Marion, V. Carrère, S. Jacquemoud, S. Chevrel, P. Prasualt, M. D'oria, P. Giloupe, S. Hosford, B. Luke, and A. Bourguignon, "HYPXIM: A new hyperspectral sensor combining science/defence applications," in _WHISPERS_, 2011.
* [8] L. Alparone, B. Aiazzi, S. Baronti, A. Garzelli, F. Nencini, and M. Selva, "Multispectral and panchromatic data fusion assessment without reference," _PE&RS_, vol. 74, no. 2, pp. 193-200, 2008.
* [9] J. Zhou, D. L. Civco, and J. A. Silander, "A wavelet transform method to merge Landsat TM and SPOT panchromatic data," _IJRS_, vol. 19, no. 4, 1998.
* [10] A. Huck, F. de Vieilleville, P. Weiss, and M. Grizonnet, "Hyperspectral pan-sharpening: a variational convex constrained formulation to impose parallel level lines, solved with ADMM," in _WHISPERS_, 2014.

Fig. 2: HypXim scene and processing with TVLCSP, for resolution ratios \\(q=2\\) and \\(q=6\\).
In this paper, we address the issue of hyperspectral _pan-sharpening_, which consists in fusing a (low spatial resolution) hyperspectral image HX and a (high spatial resolution) panchromatic image P to obtain a high spatial resolution hyperspectral image. The problem is addressed under a variational convex constrained formulation. The objective favors high resolution spectral bands with level lines parallel to those of the panchromatic image. This term is balanced with a total variation term as a regularizer. The fit-to-P-data and fit-to-HX-data terms are effectively considered as mathematical constraints, which depend on the statistics of the measurement noise. The developed Alternating Direction Method of Multipliers (ADMM) optimization scheme enables us to solve this problem efficiently despite the non-differentiabilities and the huge number of unknowns.

hyperspectral, fusion, pan-sharpening, ADMM
# LiDAR-LLM: Exploring the Potential of Large Language Models for 3D LiDAR Understanding

Senqiao Yang\\({}^{1*}\\), Jiaming Liu\\({}^{1,2*}\\), Ray Zhang\\({}^{3}\\), Mingjie Pan\\({}^{1}\\), Zoey Guo\\({}^{3}\\), Xiaoqi Li\\({}^{1}\\), Zehui Chen\\({}^{1}\\), Peng Gao\\({}^{3}\\), Yandong Guo\\({}^{2}\\), Shanghang Zhang\\({}^{1\\dagger}\\)

\\({}^{*}\\)Equal contribution: {yangsenqiao.ai,liujaming.pku}@gmail.com. Jiaming Liu is the project leader. \\({}^{\\dagger}\\)Corresponding author: [email protected].

## 1 Introduction

Recently, large language models (LLMs) [6, 34, 42] have demonstrated significant capabilities in complex reasoning and robust conversational abilities in the field of natural language processing. Building upon LLMs, Multimodal Large Language Models (MLLMs), such as BLIP-2 [28] and Flamingo [17], have been introduced. These models take more modalities (e.g., 2D images) as input, enabling LLMs to discuss and comprehend the visual scene.

Figure 1: **Characteristics of LiDAR-LLM.** Our proposed LiDAR-LLM takes 3D LiDAR data as input and aligns the 3D modality with the language embedding space, leveraging the exceptional reasoning capabilities of LLMs to understand outdoor 3D scenes. To enhance the spatial orientation representation of LiDAR features, we propose a View-Aware Transformer between the LiDAR Encoder and LLMs. Simultaneously, the bottom part showcases examples derived from our generated or employed LiDAR-text data, covering a spectrum of 3D-related tasks.

Despite MLLMs excelling at processing 2D image content, their comprehension of the more challenging 3D real-world scenes remains an open question. Understanding 3D scenes holds importance for various applications, including autonomous driving [4, 12, 14, 35] and robotics [16, 30, 49, 43], due to the wealth of spatial information in 3D data. Existing 3D understanding methods [5, 27, 33, 37, 48] often fail to demonstrate sufficient generalization capabilities when faced with unseen scenarios. They are also limited in expressing specific downstream tasks in a manner comprehensible to humans, such as generating scene captions and answering questions. Therefore, recent works [22, 45, 24] take indoor 3D point clouds as input and leverage the powerful capabilities of LLMs to analyze them, aligning the 3D characteristics with the textual features of LLMs. However, they still encounter challenges when dealing with 3D outdoor LiDAR data, primarily due to its sparsity and complex geometric relationships, which make multimodal alignment and reasoning difficult. In this paper, as shown in Figure 1, we introduce LiDAR-LLM, a novel approach that harnesses the reasoning capabilities of LLMs to comprehensively understand outdoor 3D scenes. The LiDAR-LLM architecture comprises a 3D LiDAR encoder, an intermediate alignment transformer, and an LLM, _e.g._, LLaMA [42]. The key insight of LiDAR-LLM lies in redefining the problem of 3D scene cognition through interpretative language modeling. However, the introduction of LLMs for perceiving outdoor 3D scenes faces two challenges: (1) In contrast to the abundant availability of image-text paired data [9, 40, 41], 3D LiDAR-text paired data is exceedingly rare, and readily accessible multimodal models (e.g., CLIP [39]) are lacking. (2) 3D LiDAR data encompasses a variety of objects and intricate geometric relationships among them.
Take outdoor autonomous driving, for example, where the ego vehicle is surrounded by a diverse array of moving and stationary objects, which both occlude and influence each other. To tackle these challenges, for LiDAR-LLM, we introduce a three-stage training strategy and generate relevant datasets, gradually transferring 3D representations into the text feature space and unleashing LLMs' reasoning capabilities for 3D scenes. Specifically, in the first stage, we employ MLLMs [28, 50] and GPT-4 [34] for communication between multi-view images and language within the nuScenes dataset [7], where each scene is accompanied by paired 3D LiDAR data. In this way, we generate a dataset of 420K LiDAR-text pairs and **cross-modally align** the 3D LiDAR features with the word embeddings of LLMs. During the second stage, as **perception** forms the foundation of 3D scene understanding, we incorporate the 3D bounding boxes into the question-answer text and generate a 280K LiDAR grounding dataset. This enhances LiDAR-LLM's sensitivity to object locations and relationships. In the final stage, we perform efficient fine-tuning of our model on **high-level instruction** datasets [15, 38], comprehensively expanding its capabilities for 3D downstream tasks. To more effectively bridge the modality gap between 3D LiDAR and text, we design a View-Aware Transformer (VAT) that connects the 3D LiDAR encoder with the LLM, injecting six view position embeddings into the 3D feature. Combined with the three-stage training strategy, the VAT enhances the LLM's comprehension of the spatial orientation of visual features. In summary, our contributions are as follows:

* We propose LiDAR-LLM, which takes 3D LiDAR data and language prompts as input, harnessing the reasoning capabilities of LLMs to understand outdoor 3D scenes. LiDAR-LLM can perform tasks such as 3D captioning, 3D grounding, 3D question answering, and more.
* We introduce a three-stage training strategy for gradually transferring 3D representations into the text feature space, which involves cross-modal alignment, perception, and high-level instruction. In parallel, we collect a set of LiDAR-text paired datasets, including 420K 3D captioning and 280K 3D grounding data, which will be released.
* We specially design a View-Aware Transformer (VAT) that connects the 3D LiDAR encoder with the LLM, bridging the modality gap between 3D LiDAR and text and enhancing the LLM's comprehension of the spatial orientation of visual features.
* On our proposed LiDAR-text datasets, LiDAR-LLM exhibits superior performance, achieving a 40.9 BLEU-1 score on the 3D captioning dataset, and securing a classification accuracy of 63.1% and a BEV mIoU of 14.3% on the 3D grounding dataset.

## 2 Related Work

### Multi-modal Large Language Models

Extensive language models, such as LLaMA [42] and GPT-3 [19], demonstrate proficiency in managing a variety of language tasks, leveraging their powerful reasoning and generalization abilities. Building upon these achievements, 2D Multi-modal Large Language Models (2D MLLMs) [17, 28, 50] have been introduced to bridge RGB visual images and text. These models leverage the capabilities of Large Language Models (LLMs) [42] and, by conditioning on 2D inputs, aim to address 2D downstream tasks such as visual question answering [3] and captioning [2]. The representative model, BLIP [28], pre-trains a multi-modal mixture of encoder-decoder models using a dataset bootstrapped from large-scale noisy image-text pairs.
It injects diverse synthetic captions and removes noisy ones to achieve unified vision-language understanding and generation. Meanwhile, VisionLLM [44] aligns vision-centric tasks with language tasks, allowing for flexible definition and management through language instructions. Furthermore, 3D Multi-modal Large Language Models (3D MLLMs) [22, 24, 45, 46] have been introduced to broaden the scope of knowledge, reasoning, and conversational capabilities obtained from LLMs to the 3D modality. For instance, several projects leverage GPT-3 [19] or LLaMA [42] to improve the language-based comprehension of 3D spatial geometry, as demonstrated in works like PointCLIP V2 [54] and ViewRefer [21]. They focus on 3D point clouds of single objects or indoor scenes. In contrast to these approaches, we are the first to exploit the reasoning capabilities of LLMs for understanding outdoor 3D scenes and completing tasks such as captioning, 3D grounding, and 3D question answering. The unique challenges posed by 3D LiDAR point cloud data, including the lack of LiDAR-text paired data and the variety of objects and relationships it encompasses, present difficulties in multimodal alignment and reasoning.

### 3D-Language Tasks

The combination of 3D point clouds and natural language has diverse applications and has recently attracted growing attention [1, 10, 11, 18, 23, 26]. Specifically, 3D captioning [11, 13] is required to describe a particular object within a 3D scene. 3D visual grounding [10, 48] focuses on generating the location of the object that a text expression refers to. Meanwhile, in 3D visual question answering [5], the model needs to answer language questions given the visual content of the 3D scene. However, 3D approaches for the aforementioned tasks are designed to address individual task-specific challenges without exploring their commonalities and providing a unified solution. Moreover, these methods are tailored for indoor point cloud tasks and may not directly transfer to outdoor LiDAR, since LiDAR is much sparser and more diverse in geometric relationships. To reconcile this, we propose LiDAR-LLM, a LiDAR-oriented approach, to uniformly perform 3D tasks in outdoor scenes.

## 3 Method

### 3.1 Overview

The overall framework of LiDAR-LLM is presented in Fig. 2. The core concept involves transforming the highly sparse and intricately geometric LiDAR data into the representation space understandable by Large Language Models (LLMs). This transformation is facilitated by our proposed View-Aware Transformer (VAT), which incorporates view position embeddings to enhance the spatial orientation understanding of the LLM.

Figure 2: **Overview of our LiDAR-LLM framework.** The initial column showcases our 3D feature extractor, which processes the LiDAR point cloud input to derive a 3D voxel feature. Subsequently, the feature is flattened along the z-axis to produce the bird's-eye view (BEV) feature. The View-Aware Transformer (VAT) accepts BEV embeddings and learnable queries as input, with the output queries serving as soft prompt input to the frozen LLM. In the VAT, we introduce six view position embeddings into the BEV feature along with the corresponding queries to enhance the capability of spatial orientation representation. This framework aligns the LiDAR modality with the language embedding space, enabling us to leverage the LLM for a comprehensive understanding of outdoor 3D scenes.
However, integrating LLMs to comprehend outdoor 3D scenes faces two challenges: (1) unlike the abundance of available image-text paired data, 3D LiDAR-text paired data is exceptionally scarce; and (2) 3D LiDAR data involves diverse objects and intricate geometric relationships among them. Therefore, we implement a three-stage training strategy and generate LiDAR-text paired training data to collaboratively align 3D representations with the feature space of LLMs. Through this process, LiDAR-LLM undergoes diverse tasks across modalities and handles complex cross-modal scenarios at both the scene and instance levels, unleashing the LLMs' common-sense reasoning and localization capabilities on 3D LiDAR data. ### Model Architecture Given a LiDAR input \\(L\\in\\mathbb{R}^{n\\times 3}\\), where \\(n\\) is the number of points, VoxelNet [52] is employed to extract its 3D voxel feature. Subsequently, considering the computational cost, we flatten the feature along the z-axis to generate the bird's-eye view (BEV) feature. Simultaneously, for the text input \\(T\\) with a maximum of \\(m\\) characters, LLaMA [42] is utilized to extract text features. With the BEV feature \\(\\mathcal{F}_{v}\\in\\mathbb{R}^{c\\times h\\times w}\\) and the text feature \\(\\mathcal{F}_{t}\\in\\mathbb{R}^{m\\times d}\\) (where \\(d\\) is the feature dimension), our objective is to project these LiDAR BEV features into the word embedding space of a pre-trained LLaMA through our proposed View-Aware Transformer (VAT). This alignment is crucial for conducting multi-modal understanding and generating accurate answers in 3D downstream tasks. During training, we only fine-tune the injected adapters [25] in LLaMA and the VAT module while freezing the major parameters. This preserves the powerful feature extraction and reasoning abilities of the existing modules while further equipping the model with the capability to understand 3D LiDAR scenes. **VAT design.** In the right part of Fig. 2, the input to the VAT includes a set of \\(K\\) learnable query embeddings, with \\(K\\) set to 576 for convenient projection into the word embedding space of the LLM. These queries interact with the BEV feature through a cross-attention mechanism. The VAT produces an output comprising \\(K\\) encoded visual vectors, one for each query embedding. These vectors then undergo processing through a multi-layer perceptron (MLP) and are subsequently fed into the frozen LLM. However, outdoor LiDAR data, such as nuScenes [7], demands a comprehensive understanding of the orientation relationships between diverse objects and the ego car, and it encompasses intricate relationships among objects. Hence, we introduce view position embeddings for the BEV feature, with the aim of promoting the model's capacity to learn orientation and geometric relationships. Specifically, we first construct the view position embedding \\(\\mathcal{V}_{p}\\in\\mathbb{R}^{c\\times 6}\\) with zero initial parameters. Then, we split the BEV feature according to six views: front, front right, front left, back, back right, and back left. During training, when dealing with a question related to a specific view, we inject the corresponding position embedding into both the BEV feature and the queries, as sketched in the code below.
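A minimal PyTorch sketch of this computation follows. It keeps the quantities the paper fixes (\\(K=576\\) queries with a 768-dimensional token width, six zero-initialised view position embeddings, cross-attention from queries to BEV tokens, an MLP into the LLM word space); the sector assignment by azimuth, the attention hyper-parameters, and all names are our illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

VIEWS = ["front", "front_right", "front_left", "back", "back_right", "back_left"]

class ViewAwareTransformer(nn.Module):
    """Sketch of the VAT: K learnable queries cross-attend to BEV tokens,
    and the six zero-initialised view position embeddings V_p are injected
    into the BEV sectors (and queries) the question refers to."""

    def __init__(self, c=768, k=576, llm_dim=4096, heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(k, c) * 0.02)     # K learnable queries
        self.view_pos = nn.Parameter(torch.zeros(len(VIEWS), c))  # V_p, zero-initialised
        self.attn = nn.MultiheadAttention(c, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(c, llm_dim), nn.GELU(),
                                 nn.Linear(llm_dim, llm_dim))     # into the LLM word space

    def forward(self, bev, active_views):
        # bev: (B, c, h, w); active_views: indices of the views the question mentions.
        B, c, h, w = bev.shape
        # Assign each BEV cell to one of six angular sectors around the ego car.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                torch.linspace(-1, 1, w), indexing="ij")
        sector = ((torch.atan2(xs, ys) + torch.pi) /
                  (2 * torch.pi) * len(VIEWS)).long().clamp(max=len(VIEWS) - 1)
        sector = sector.to(bev.device)
        tokens = bev.flatten(2).transpose(1, 2)                   # (B, h*w, c)
        q = self.queries.unsqueeze(0).expand(B, -1, -1).clone()
        for v in active_views:                                    # inject V_p per view
            mask = (sector.flatten() == v).unsqueeze(-1)          # (h*w, 1)
            tokens = tokens + mask * self.view_pos[v]
            q = q + self.view_pos[v]
        out, _ = self.attn(q, tokens, tokens)                     # queries attend to BEV
        return self.mlp(out)                                      # (B, K, llm_dim) soft prompts
```

The returned tensors act as soft prompts that the frozen LLaMA consumes alongside the ordinary word embeddings of the question.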
For instance, when training on a caption sample related to the front left view, we inject only the front left position embedding \\(\\mathcal{V}_{p}\\in\\mathbb{R}^{c\\times 1}\\) into the front left portion of the BEV feature and the queries. If the training sample involves a question regarding the entire panoramic scene, we inject all six view position embeddings during training. ### Three-stage training strategy In this section, we demonstrate how we empower LLMs with the capabilities to comprehend 3D LiDAR data and uniformly complete extensive 3D tasks. We introduce a three-stage training strategy and generate the relevant datasets, gradually transferring 3D representations into the text feature space. The three stages comprise cross-modal alignment, perception, and high-level instruction. **Cross-Modal Alignment (3D Captioning):** To effectively address the abundant 3D downstream tasks, the model requires a thorough understanding of the LiDAR scene. Scene captioning is a natural way to make the model capture essential information and details in the LiDAR data by integrating the entire 3D scene into the LLM. However, the absence of direct LiDAR-text description pairs for caption training motivates us to leverage the existing multi-view images aligned with LiDAR data in nuScenes [7] to create text descriptions. Employing powerful off-the-shelf 2D Multi-Modal LLMs (MLLMs) [28, 50], we generate captions for each view, creating textual descriptions corresponding to the LiDAR scene. Nevertheless, captions for LiDAR data and for 2D multi-view images are not perfectly aligned: a 2D MLLM may describe the weather or colors in the images, which are not applicable to LiDAR data. To address this inconsistency, we further employ GPT-4 [34] to filter the captions, retaining those that are relevant and suitable for LiDAR data. With the collected LiDAR-caption pairs, our goal is to enable LLaMA to generate descriptive text conditioned on LiDAR input. We observe that textual captions for LiDAR data tend to be excessively detailed and lengthy due to the intricate geometric structures involved, and jointly learning the overall captions could entangle the LLM's reasoning. To mitigate this, we initially train the model to caption a single view, reducing complexity. The output caption is supervised by the ground-truth answer of the corresponding view using cross-entropy loss. After the model acquires captioning skills for individual views, we instruct it to understand the entire panoramic scene and generate a global description. By doing so, we align the 3D feature representation with the text feature space of the LLM, enabling the model to comprehend the context of the LiDAR data. **Perception:** After equipping the model with a global scene understanding, this stage focuses on endowing it with instance-level perception abilities, as these form the foundation for high-level instructional tasks such as planning. To achieve this, we employ an object-centric learning strategy, ensuring the model is cognizant of object details such as quantity, localization, and spatial relations. The model learns the alignment between the representation of an individual 3D object and the corresponding LLM text embedding associated with that object. Two tasks, visual grounding and grounded captioning, are designed for this purpose. Objects are first represented as a sequence of discrete tokens, where each object's label and bounding box are extracted, as sketched below.
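The paper fixes the fields of this discrete representation (the category name plus the box \\((x_{1},y_{1},z_{1},x_{2},y_{2},z_{2},\\theta)\\)) but not the exact textual template, so the template in this minimal sketch is our assumption:

```python
def box_to_tokens(category: str, box: tuple) -> str:
    """Serialise one annotated object as text for the LLM tokenizer:
    category name plus (x1, y1, z1, x2, y2, z2, theta). The bracketed
    template is illustrative; only the fields are specified by the paper."""
    x1, y1, z1, x2, y2, z2, theta = box
    return (f"{category}: [{x1:.1f}, {y1:.1f}, {z1:.1f}, "
            f"{x2:.1f}, {y2:.1f}, {z2:.1f}, {theta:.2f}]")

# Example grounding answer for a car ahead and to the right of the ego vehicle:
print(box_to_tokens("car", (10.2, -3.5, -1.0, 14.6, -1.3, 0.8, 1.57)))
# car: [10.2, -3.5, -1.0, 14.6, -1.3, 0.8, 1.57]
```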
Given a 3D object with its annotations, the category name and locations are encoded into word embeddings using the tokenizer of the pre-trained LLM. Unlike the previous indoor 3D MLLM [45], there is no need to extract each object from the point cloud individually; instead, we achieve object perception across the entire 3D scene. For visual grounding, the model learns to generate location tokens specifying the region position \\((x_{1},y_{1},z_{1},x_{2},y_{2},z_{2},\\theta)\\) based on the LiDAR input and the instruction, where \\(\\theta\\) is the box angle. Grounded captioning is the inverse counterpart of visual grounding: the model is trained to generate descriptive text given the input LiDAR data and text with location information. The outputs of both tasks are supervised through cross-entropy loss. The formulations of the instructions are depicted in Fig. 3. This process aligns the 3D visual object embedding with the text embedding space, unlocking the LLM's 3D perception ability. **High-level Instruction:** In this stage, with the model now possessing a comprehensive understanding of the LiDAR scene and basic 3D perception capabilities, we leverage a high-level instruction dataset (e.g., nuScenes-QA [38]) to further enhance its reasoning skills in 3D space. Fine-tuning LiDAR-LLM on this dataset not only improves its proficiency in comprehending a diverse array of instructions but also empowers it to generate responses that are both creative and contextually appropriate. Moreover, this refinement equips LiDAR-LLM with the capability to engage in intricate spatial reasoning and to integrate external knowledge into its responses. These tasks are likewise supervised through cross-entropy loss, ensuring the effective alignment of the model's output with the desired high-level instructions. We also explore the autonomous driving planning capabilities of LiDAR-LLM on the nuScenes dataset [7]: instead of generating any planning QA data, we directly use our trained model to infer questions related to planning. We find that, through our proposed three-stage training strategy, LiDAR-LLM develops preliminary planning capabilities, as illustrated in Fig. 3. These results also demonstrate that our training pipeline can stimulate the model's reasoning ability on 3D LiDAR data. ### Training and Task Inference LiDAR-LLM undergoes joint fine-tuning over a variety of tasks and datasets, equipping it with a versatile skill set for handling diverse tasks within complex cross-modal scenarios. In the fine-tuning phase, we train on a dataset consisting of 700K LiDAR-text pairs generated by us and 460K QA pairs from a publicly available dataset [38]. Throughout the training process, the aforementioned tasks are trained systematically in a step-by-step manner. During inference, the input still consists of LiDAR data and question text; we can infer each question individually or infer multiple questions consecutively.
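Since the backbone LLM stays frozen and only the injected adapters [25] and the VAT are updated, the fine-tuning setup can be sketched as follows. The LoRA rank, target modules, and checkpoint path below are illustrative assumptions; the paper does not report them.

```python
from peft import LoraConfig, get_peft_model
from transformers import LlamaForCausalLM

# Load the LLaMA-7B backbone (placeholder checkpoint path).
llm = LlamaForCausalLM.from_pretrained("path/to/llama-7b")

# Inject LoRA adapters [25]; only these (and the VAT) receive gradients.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
llm = get_peft_model(llm, lora)    # base weights frozen, adapters trainable
llm.print_trainable_parameters()   # sanity check: a small fraction of 7B
```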
## 4 Experiment In this section, we conduct extensive experiments on nuScenes [7], nuScenes-QA [38], and our generated datasets. We first introduce the selected baselines and evaluation metrics (§4.1), as well as the implementation details (§4.2). Our main experiments evaluate the model on three essential capabilities: 3D captioning (§4.3), 3D grounding (§4.4), and holistic high-level instruction following (§4.5). Finally, we present detailed ablation studies (§4.6) and qualitative analysis (§4.7) to provide a deeper understanding of our approach. Additional quantitative and qualitative analyses are available in Appendices A and B, respectively. ### Baselines & Evaluation Metrics **Baselines.** To the best of our knowledge, ours is the first multi-modal LLM (MLLM) to take LiDAR data with textual instructions as input and carry out a series of outdoor 3D tasks. Since there are no 3D MLLMs that can directly process LiDAR data, we project the depth information from LiDAR onto the 2D plane and employ current state-of-the-art (SOTA) 2D MLLM methods, MiniGPT-4 [53] and LLaMA-AdapterV2 [50], as our competitive counterparts. **Evaluation Metrics.** To assess the quality of language generation in the 3D captioning task, we adopt both BLEU [36] and BERT Score [51] to comprehensively gauge the quality of the model's responses. In the 3D grounding task, we evaluate the grounding ability of our models through a combination of Top-1 classification accuracy and BEV mean Intersection over Union (mIoU). For nuScenes-QA (high-level instruction), we assess model performance using Top-1 accuracy, in line with common practice in Visual Question Answering (VQA) research [3, 5]. We also conduct separate evaluations for different question types. ### Implementation Details Our LiDAR-LLM comprises three components: the LiDAR feature extraction backbone, the View-Aware Transformer (VAT), and the Large Language Model (LLM). For LiDAR feature extraction, we employ the standard pre-trained 3D detector CenterPoint-Voxel [52] with its default settings. The point cloud range is [-54.0m, -54.0m, -5.0m, 54.0m, 54.0m, 3.0m], and the BEV grid size is [0.6m, 0.6m]. For the VAT, we set the number of learnable query tokens to 576, with a token dimension of 768. For the LLM, we employ LLaMA-7B [42], considering both efficiency and efficacy. Throughout the three-stage training phase, we use the Adam optimizer with \\((\\beta_{1},\\beta_{2})=(0.9,0.999)\\) and an initial learning rate of 1e-4, halved every 2 epochs, and we fine-tune the VAT and the adapters injected in LLaMA for 6 epochs. All experiments are conducted on NVIDIA Tesla A100 GPUs. ### 3D Captioning **Dataset Construction.** Due to the absence of a captioning dataset tailored to LiDAR data, we integrate GPT-4 [34] and 2D MLLMs [28, 50] to construct a large-scale 3D captioning dataset on nuScenes (named nu-Caption), which consists of 420K high-quality LiDAR-text pairs. In nu-Caption, we employ 348K LiDAR-text pairs for the training set and 72K pairs for the validation set. The caption questions cover three aspects, progressing from simple to difficult: 1) a general description of the current scene or traffic conditions, 2) a detailed description of objects and their relationships, and 3) the recognition of potential risks on the road. Concrete examples are depicted in Fig. 3, and additional details can be found in Appendix C.
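A condensed sketch of this data pipeline is given below. The callables `caption_view` and `gpt4_filter` stand in for the 2D MLLM [28, 50] and GPT-4 [34] calls, and the prompts are our paraphrases; the paper specifies the two-step procedure (per-view captioning, then filtering of statements that do not apply to LiDAR) but not the exact prompts.

```python
def build_nu_caption(sample, caption_view, gpt4_filter):
    """Generate LiDAR-text caption pairs for one nuScenes sample.
    `sample.images` maps the six camera views to images; `sample.lidar`
    is the paired point cloud. Both callables are hypothetical wrappers
    around the respective model APIs."""
    pairs = []
    for view, image in sample.images.items():
        raw = caption_view(image, prompt="Describe this traffic scene in detail.")
        # Drop statements a LiDAR sensor cannot support (colour, weather, ...).
        kept = gpt4_filter(
            "Remove any sentence that relies on colour, weather, or lighting, "
            "since these are not observable in a LiDAR point cloud:\n" + raw)
        pairs.append({"lidar": sample.lidar, "view": view, "caption": kept})
    return pairs
```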
**Results Analysis.** We evaluate the methods on our generated nu-Caption dataset and report the results in Table 1. LiDAR-LLM outperforms previous 2D MLLMs on all evaluation metrics. Specifically, our model achieves 19.26% BLEU-4 and a 91.32% BERT Score, surpassing MiniGPT-4, which achieved 2.63% BLEU-4 and an 84.38% BERT Score, and improving on LLaMA-AdapterV2 by 11.81% BLEU-4 and 4.87% BERT Score. These results indicate that applying 2D MLLMs directly to LiDAR data produces unsatisfactory results and omits crucial details from the caption descriptions. At the same time, they confirm that our LiDAR-LLM exhibits basic 3D scene understanding capabilities, proficiently expressing geometric relationships and performing common-sense reasoning over sparse LiDAR data. In conclusion, although 2D MLLMs achieve superior performance in the imagery domain, they are not well suited to outdoor 3D LiDAR scenes, which further validates the necessity of developing large language models specialized for LiDAR data. \\begin{table} \\begin{tabular}{c|c|c c c c c} \\hline \\hline Tasks & Models & BLEU-1 & BLEU-2 & BLEU-3 & BLEU-4 & BERT Score \\\\ \\hline \\multirow{5}{*}{3D Captioning} & Mini-GPT4 [53] & 14.97 & 6.76 & 3.74 & 2.63 & 84.38 \\\\ & LLaVA1.5 [31] & 19.92 & 12.10 & 8.57 & 5.37 & 85.01 \\\\ & Instruct-BLIP [28] & 18.67 & 13.38 & 7.41 & 5.20 & 85.89 \\\\ & LLaMA-AdapterV2 [50] & 30.17 & 17.34 & 10.40 & 7.45 & 86.45 \\\\ & Ours & 40.98 & 29.96 & 23.43 & 19.26 & 91.32 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: Experimental results on the nu-Caption dataset. Our model outperforms all baseline models on all evaluation metrics. ### 3D Grounding **Dataset Construction.** Apart from the captioning task, 3D grounding requires the model to have superior object perception capabilities. Due to the absence of a grounding dataset tailored to outdoor LiDAR data, we first utilize the annotations from the nuScenes dataset to construct a comprehensive dataset, named nu-Grounding. It consists of 280K pairs of questions and answers for both the visual grounding and grounded captioning tasks, with 232K pairs allocated to the training set and 48K pairs to the validation set. For grounded captioning, we evaluate accuracy separately for scenarios with 19 categories and with 5 categories. For visual grounding, we focus on predicting bounding boxes for the "Car" category and compute the associated BEV mIoU. **Results Analysis.** For grounded captioning, as shown in Table 2, our model achieves 63.1% accuracy in the 5-category scenario, surpassing LLaMA-AdapterV2 and MiniGPT-4, which reach 23.4% and 21.2%, respectively. Meanwhile, when trained and tested on 19 categories, our approach still demonstrates a significant advantage over the 2D MLLMs. This indicates that our LiDAR-LLM possesses an understanding of localization information in 3D scenes and achieves object classification ability on LiDAR data. For visual grounding, our LiDAR-LLM achieves 14.3% BEV mIoU. These results demonstrate that our approach not only exhibits basic perceptual capabilities but also effectively generates bounding boxes. More visual grounding results for other categories are shown in Appendix A. \\begin{table} \\begin{tabular}{c|c|c c|c} \\hline \\hline Tasks & Models & ACC-19 & ACC-5 & mIoU \\\\ \\hline \\multirow{5}{*}{3D Grounding} & Mini-GPT4 [53] & 5.1 & 21.2 & - \\\\ & LLaVA1.5 [31] & 9.2 & 22.7 & - \\\\ & Instruct-BLIP [28] & 8.4 & 23.9 & - \\\\ & LLaMA-AdapterV2 [50] & 7.1 & 23.4 & - \\\\ & Ours & 34.4 & 63.1 & 14.3 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: Experimental results on the nu-Grounding dataset. ACC-19 and ACC-5 denote the mean Top-1 accuracy for scenarios with 19 and 5 categories, respectively. The mIoU is calculated for the "Car" category.
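The BEV mIoU above compares predicted and ground-truth boxes after projection onto the ground plane. The paper does not spell out the computation, so the following is one plausible reading, using rotated-rectangle intersection via shapely:

```python
from shapely import affinity
from shapely.geometry import box as rect

def bev_iou(a, b):
    """IoU of two boxes (x1, y1, z1, x2, y2, z2, theta) in bird's-eye view:
    drop z, rotate each rectangle about its centre by theta, intersect."""
    def footprint(bx):
        x1, y1, _, x2, y2, _, theta = bx
        return affinity.rotate(rect(x1, y1, x2, y2), theta,
                               origin="center", use_radians=True)
    pa, pb = footprint(a), footprint(b)
    inter = pa.intersection(pb).area
    return inter / (pa.area + pb.area - inter + 1e-9)

# The per-category mIoU is then the mean of bev_iou over matched
# prediction/ground-truth pairs.
```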
For the 3D grounding task, our goal is not solely to obtain localization information for objects but also to enhance the model's understanding of spatial relationships within LiDAR data. This training stage further strengthens our potential on high-level instruction tasks. ### High-level Instruction Task **Dataset.** nuScenes-QA [38] is a multi-modal visual question answering benchmark for autonomous driving scenarios, which contains five types of questions: existence, counting, query-object, query-status, and comparison. The questions can be classified into two types based on reasoning complexity: zero-hop and one-hop reasoning. **Results Analysis.** As illustrated in Table 3, we compare our model's performance on nuScenes-QA across different pre-training stages. To ensure a fair comparison, we perform an equal number of fine-tuning iterations on the nuScenes-QA dataset, irrespective of which pre-trained parameters are loaded. In Ex1, training LiDAR-LLM from scratch achieves an accuracy of 41.2%, validating the effectiveness of our model design on high-level instruction tasks. Ex2, pre-training on the captioning task, yields a 6.2% accuracy improvement over Ex1. Ex3, pre-training on the grounding task, achieves a 5.3% accuracy improvement over Ex1. These results show that the first two stages of our training strategy effectively improve accuracy on the final high-level instruction task: when LiDAR-LLM possesses basic 3D scene understanding or perception capabilities, it can better accomplish high-level reasoning tasks. Compared to pre-training on a single task, pre-training on both the captioning and grounding tasks (Ex4) shows a significant improvement, achieving a total accuracy of 48.6%; the improvement is observed for both zero-hop and one-hop reasoning questions. For comparative experiments with 2D MLLMs, please refer to Appendix A. Furthermore, we investigate the zero-shot planning capability of LiDAR-LLM. As illustrated in Fig. 3, the model can generate planning-related language instructions without any fine-tuning on a planning dataset. ### Ablation Study **Effectiveness of the View-Aware Transformer (VAT).** To demonstrate the effectiveness of each component of the VAT, we compare BLEU-4 and BERT Scores on the 3D captioning task. As shown in Table 4, \\(Ex_{1}\\) denotes the absence of any transformer structure and position embedding: the BEV feature is fed directly to an MLP to align the dimensions and is then passed to the LLM, yielding only an 88.14% BERT Score and 11.37% BLEU-4. In \\(Ex_{2}\\), with the Query Transformer, the BERT Score reaches 90.60% and BLEU-4 reaches 15.41%, demonstrating that the Query Transformer structure better aligns LiDAR features with text embeddings. Comparing \\(Ex_{2}\\) and \\(Ex_{3}\\), the introduction of the view position embedding brings a 0.7% BERT Score and 3.85% BLEU-4 improvement. This set of results affirms that incorporating the view position embedding aids the model in better understanding 3D scenes and spatial relationships. **Effectiveness of the three-stage training strategy.** As shown in Table 3, we validate the effectiveness of our proposed three-stage training strategy on the high-level instruction task (nuScenes-QA). We decompose the strategy into captioning, grounding, and the final high-level task. For a detailed experimental analysis, please refer to Section 4.5. ### Qualitative Analysis In the 3D captioning task, as indicated in green in Fig. 3,
LiDAR-LLM demonstrates its proficiency in aligning linguistic and LiDAR input, showcasing an understanding of contextual information within the LiDAR data and providing answers based on both the textual questions and the corresponding visual information. As the figure shows, our method excels at identifying crucial objects and evaluating their states, such as "parked" or "driving down", demonstrating its ability to comprehend 3D scenes. During the grounding stage, as indicated in blue in Fig. 3, the model showcases its ability to identify the referred object, exhibit spatial relation awareness, and demonstrate precise localization skills. In the yellow section of Fig. 3, we illustrate the interpretability of LiDAR-LLM in planning tasks. Based on the capabilities unleashed by our method, LiDAR-LLM generates coherent high-level actions. For instance, in the second sub-figure, we prompt the model to describe the safest measures for the car. The model precisely identifies the critical object, the 'bus', and analyzes its potential impact on the ego car. With these reasoning priors, when instructed to plan the 'safest measures', it generates 'brake and stop' to avoid a collision with the obstructing bus. Furthermore, on another high-level task (nuScenes-QA [38]), represented by the pink section in Fig. 3, LiDAR-LLM demonstrates strong capabilities by accurately counting objects, identifying object classifications, and reasoning about spatial relationships among surrounding objects. \\begin{table} \\begin{tabular}{l|c c|c c} \\hline \\hline & QTrans & VPE & BERT Score & BLEU-4 \\\\ \\hline \\(Ex_{1}\\) & & & 88.14 & 11.37 \\\\ \\(Ex_{2}\\) & ✓ & & 90.60 & 15.41 \\\\ \\(Ex_{3}\\) & ✓ & ✓ & 91.32 & 19.26 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 4: Ablation study of the VAT, where VPE denotes the View Position Embedding and QTrans the Query Transformer. \\begin{table} \\begin{tabular}{c|c|c c c|c c c|c c c|c c c|c c c|c} \\hline \\hline \\multirow{2}{*}{Method} & \\multirow{2}{*}{Pretrain} & \\multicolumn{3}{c|}{Exist} & \\multicolumn{3}{c|}{Count} & \\multicolumn{3}{c|}{Object} & \\multicolumn{3}{c|}{Status} & \\multicolumn{3}{c|}{Comparison} & \\multirow{2}{*}{Acc} \\\\ & & H0 & H1 & All & H0 & H1 & All & H0 & H1 & All & H0 & H1 & All & H0 & H1 & All & \\\\ \\hline Ex1 & None & 69.4 & 62.3 & 65.5 & 10.4 & 9.7 & 10.0 & 54.8 & 31.1 & 34.5 & 23.0 & 32.5 & 29.2 & 47.8 & 54.2 & 53.8 & 41.2 \\\\ Ex2 & C & 80.0 & 71.7 & 75.5 & 13.8 & 12.6 & 13.2 & 59.6 & 32.0 & 36.0 & 45.8 & 40.8 & 42.5 & 73.9 & 54.8 & 56.2 & 47.4 \\\\ Ex3 & G & 79.7 & 69.0 & 73.9 & 12.7 & 12.0 & 12.4 & 58.6 & 33.9 & 37.4 & 46.0 & 34.7 & 38.6 & 62.6 & 55.3 & 55.9 & 46.5 \\\\ Ex4 & C+G & 79.1 & 70.6 & 74.5 & 15.3 & 14.7 & 15.0 & 59.6 & 34.1 & 37.8 & 53.4 & 42.0 & 45.9 & 67.0 & 57.0 & 57.8 & 48.6 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 3: High-level instruction results on nuScenes-QA. "C" and "G" denote loading the pre-trained model parameters from the 3D captioning task and the 3D grounding task, respectively. H0 and H1 represent zero-hop and one-hop reasoning questions, respectively. Figure 3: Qualitative examples of prompt questions and LiDAR-LLM's predictions. Additional examples are shown in Appendix B. ## 5 Conclusion In conclusion, our paper represents a pioneering effort to unleash the reasoning capabilities of LLMs to comprehend outdoor LiDAR data. Our proposed LiDAR-LLM reformulates the intricate challenge of 3D outdoor scene understanding as a language modeling problem.
To train LiDAR-LLM, we generate a comprehensive set of LiDAR-text paired datasets, encompassing 420K 3D captioning and 280K 3D grounding data. We then introduce a three-stage training strategy, involving cross-modal alignment, perception, and high-level instruction, aligning the LiDAR modality with the language embedding space of the LLM. Our architectural innovation introduces the View-Aware Transformer (VAT) to connect the 3D encoder with the LLM. This design effectively bridges the modality gap and enhances the LLM's comprehension of spatial orientation in LiDAR features. Through extensive experimentation on both our generated datasets and open-source datasets, our LiDAR-LLM demonstrates promising performance across diverse tasks, including 3D captioning, 3D grounding, 3D question answering, and autonomous driving planning. In future work, we will explore the continual transfer learning [20, 32, 47] and lightweight operations [8, 29] of MLLMs, making it feasible to deploy MLLMs on edge devices. ## References * [1] Panos Achlioptas, Ahmed Abdelreheem, Fei Xia, Mohamed Elhoseiny, and Leonidas Guibas. Referit3d: Neural listeners for fine-grained 3d object identification in real-world scenes. In _Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part I 16_, pages 422-440. Springer, 2020. * [2] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 6077-6086, 2018. * [3] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In _Proceedings of the IEEE international conference on computer vision_, pages 2425-2433, 2015. * [4] Eduardo Arnold, Omar Y Al-Jarrah, Mehrdad Dianati, Saber Fallah, David Oxtoby, and Alex Mouzakitis. A survey on 3d object detection methods for autonomous driving applications. _IEEE Transactions on Intelligent Transportation Systems_, 20(10):3782-3795, 2019. * [5] Daichi Azuma, Taiki Miyanishi, Shuhei Kurita, and Motoaki Kawanabe. Scanqa: 3d question answering for spatial scene understanding. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 19129-19139, 2022. * [6] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _Advances in neural information processing systems_, 33:1877-1901, 2020. * [7] Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 11621-11631, 2020. * [8] Jiahao Chang, Shuo Wang, Hai-Ming Xu, Zehui Chen, Chenhongyi Yang, and Feng Zhao. Detrdistill: A universal knowledge distillation framework for detr-families. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 6898-6908, 2023. * [9] Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pretraining to recognize long-tail visual concepts. 
In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 3558-3568, 2021. * [10] Dave Zhenyu Chen, Angel X Chang, and Matthias Niessner. Scanrefer: 3d object localization in rgb-d scans using natural language. In _European conference on computer vision_, pages 202-221. Springer, 2020. * [11] Zhenyu Chen, Ali Gholami, Matthias Niessner, and Angel X Chang. Scan2cap: Context-aware dense captioning in rgb-d scans. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 3193-3203, 2021. * [12] Zehui Chen, Zhenyu Li, Shiquan Zhang, Liangji Fang, Qinhong Jiang, and Feng Zhao. Bevdistill: Cross-modal bev distillation for multi-view 3d object detection. _arXiv preprint arXiv:2211.09386_, 2022. * [13] Zhenyu Chen, Ronghang Hu, Xinlei Chen, Matthias Niessner, and Angel X Chang. Unit3d: A unified transformer for 3d dense captioning and visual grounding. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 18109-18119, 2023. * [14] Xiaowei Chi, Jiaming Liu, Ming Lu, Rongyu Zhang, Zhaoqing Wang, Yandong Guo, and Shanghang Zhang. Bev-san: Accurate bev 3d object detection via slice attention networks. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 17461-17470, 2023. * [15] DriveLM Contributors. DriveLM: Drive on language. [https://github.com/OpenDriveLab/DriveLM](https://github.com/OpenDriveLab/DriveLM), 2023. * [16] Helisa Dhamo, Fabian Manhardt, Nassir Navab, and Federico Tombari. Graph-to-3d: End-to-end generation and manipulation of 3d scenes using scene graphs. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 16352-16361, 2021. * [17] Jean-Baptiste Alayrac et al. Flamingo: a visual language model for few-shot learning. 2022. * [18] Mingtao Feng, Zhen Li, Qi Li, Liang Zhang, XiangDong Zhang, Guangming Zhu, Hui Zhang, Yaonan Wang, and Ajmal Mian. Free-form description guided 3d visual graph network for object grounding in point cloud. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 3722-3731, 2021. * [19] Luciano Floridi and Massimo Chiriatti. Gpt-3: Its nature, scope, limits, and consequences. _Minds and Machines_, 30:681-694, 2020. * [20] Yulu Gan, Mingjie Pan, Rongyu Zhang, Zijian Ling, Lingran Zhao, Jiaming Liu, and Shanghang Zhang. Cloud-device collaborative adaptation to continual changing environments in the real-world. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 12157-12166, 2023. * [21] Ziyu Guo, Yiwen Tang, Renrui Zhang, Dong Wang, Zhigang Wang, Bin Zhao, and Xuelong Li. Viewrefer: Grasp the multi-view knowledge for 3d visual grounding with gpt and prototype guidance. _arXiv preprint arXiv:2303.16894_, 2023. * [22] Ziyu Guo, Renrui Zhang, Xiangyang Zhu, Yiwen Tang, Xiangheng Ma, Jiaming Han, Kexin Chen, Peng Gao, Xianzhi Li, Hongsheng Li, et al. Point-bind & point-llm: Aligning point cloud with multi-modality for 3d understanding, generation, and instruction following. _arXiv preprint arXiv:2309.00615_, 2023. * [23] Yining Hong, Yilun Du, Chunru Lin, Josh Tenenbaum, and Chuang Gan. 3d concept grounding on neural fields. _Advances in Neural Information Processing Systems_, 35:7769-7782, 2022. * [24] Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, and Chuang Gan. 3d-llm: Injecting the 3d world into large language models. _arXiv preprint arXiv:2307.12981_, 2023.
* [25] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. _arXiv preprint arXiv:2106.09685_, 2021. * [26] Pin-Hao Huang, Han-Hung Lee, Hwann-Tzong Chen, and Tyng-Luh Liu. Text-guided graph neural networks for referring 3d instance segmentation. In _Proceedings of the AAAI Conference on Artificial Intelligence_, pages 1610-1618, 2021. * [27] Yang Jiao, Shaoxiang Chen, Zequn Jie, Jingjing Chen, Lin Ma, and Yu-Gang Jiang. More: Multi-order relation mining for dense captioning in 3d scenes. In _European Conference on Computer Vision_, pages 528-545. Springer, 2022. * [28] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models, 2023. * [29] Xinwei Li, Li Lin, Shuai Wang, and Chen Qian. Unlock the power: Competitive distillation for multi-modal large language models. _arXiv preprint arXiv:2311.08213_, 2023. * [30] Xiaoqi Li, Yanzi Wang, Yan Shen, Ponomarenko Iaroslav, Haoran Lu, Qianxu Wang, Boshi An, Jiaming Liu, and Hao Dong. Imagemannip: Image-based robotic manipulation with affordance-guided next view selection. _arXiv preprint arXiv:2310.09069_, 2023. * [31] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning, 2023. * [32] Jiaming Liu, Senqiao Yang, Peidong Jia, Ming Lu, Yandong Guo, Wei Xue, and Shanghang Zhang. Vida: Homeostatic visual domain adapter for continual test time adaptation. _arXiv preprint arXiv:2306.04344_, 2023. * [33] Xiaojian Ma, Silong Yong, Zilong Zheng, Qing Li, Yitao Liang, Song-Chun Zhu, and Siyuan Huang. Sqa3d: Situated question answering in 3d scenes. _arXiv preprint arXiv:2210.07474_, 2022. * [34] OpenAI. GPT-4 technical report, 2023. * [35] Mingjie Pan, Jiaming Liu, Renrui Zhang, Peixiang Huang, Xiaoqi Li, Li Liu, and Shanghang Zhang. Renderocc: Vision-centric 3d occupancy prediction with 2d rendering supervision. _arXiv preprint arXiv:2309.09502_, 2023. * [36] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th annual meeting of the Association for Computational Linguistics_, pages 311-318, 2002. * [37] Maria Parelli, Alexandros Delitzas, Nikolas Hars, Georgios Vlassis, Sotirios Anagnostidis, Gregor Bachmann, and Thomas Hofmann. Clip-guided vision-language pre-training for question answering in 3d scenes. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5606-5611, 2023. * [38] Tianwen Qian, Jingjing Chen, Linhai Zhuo, Yang Jiao, and Yu-Gang Jiang. Nuscenes-qa: A multi-modal visual question answering benchmark for autonomous driving scenario. _arXiv preprint arXiv:2305.14836_, 2023. * [39] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _International conference on machine learning_, pages 8748-8763. PMLR, 2021. * [40] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. _Advances in Neural Information Processing Systems_, 35:25278-25294, 2022.
* [41] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 2556-2565, 2018. * [42] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. _arXiv preprint arXiv:2302.13971_, 2023. * [43] Hongcheng Wang, Andy Guan Hong Chen, Xiaoqi Li, Mingdong Wu, and Hao Dong. Find what you want: Learning demand-conditioned object attribute space for demand-driven navigation. _arXiv preprint arXiv:2309.08138_, 2023. * [44] Wenhai Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Yu Qiao, et al. Visionllm: Large language model is also an open-ended decoder for vision-centric tasks. _arXiv preprint arXiv:2305.11175_, 2023. * [45] Zehan Wang, Haifeng Huang, Yang Zhao, Ziang Zhang, and Zhou Zhao. Chat-3d: Data-efficiently tuning large language model for universal dialogue of 3d scenes. _arXiv preprint arXiv:2308.08769_, 2023. * [46] Runsen Xu, Xiaolong Wang, Tai Wang, Yilun Chen, Jiangmiao Pang, and Dahua Lin. Pointllm: Empowering large language models to understand point clouds. _arXiv preprint arXiv:2308.16911_, 2023. * [47] Senqiao Yang, Jiarui Wu, Jiaming Liu, Xiaoqi Li, Qizhe Zhang, Mingjie Pan, and Shanghang Zhang. Exploring sparse visual prompt for cross-domain semantic segmentation. _arXiv preprint arXiv:2303.09792_, 2023. * [48] Zhengyuan Yang, Songyang Zhang, Liwei Wang, and Jiebo Luo. Sat: 2d semantics assisted training for 3d visual grounding. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 1856-1866, 2021. * [49] Shunyu Yao, Tzu Ming Hsu, Jun-Yan Zhu, Jiajun Wu, Antonio Torralba, Bill Freeman, and Josh Tenenbaum. 3d-aware scene manipulation via inverse graphics. _Advances in neural information processing systems_, 31, 2018. * [50] Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, and Yu Qiao. Llama-adapter: Efficient fine-tuning of language models with zero-init attention. _arXiv preprint arXiv:2303.16199_, 2023. * [51] Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. Bertscore: Evaluating text generation with bert. _arXiv preprint arXiv:1904.09675_, 2019. * [52] Yin Zhou and Oncel Tuzel. Voxelnet: End-to-end learning for point cloud based 3d object detection. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 4490-4499, 2018. * [53] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. _arXiv preprint arXiv:2304.10592_, 2023. * [54] Xiangyang Zhu, Renrui Zhang, Bowei He, Ziyao Zeng, Shanghang Zhang, and Peng Gao. Pointclip v2: Adapting clip for powerful 3d open-world learning. _arXiv preprint arXiv:2211.11682_, 2022. **Supplementary Materials - Overall** Due to space constraints in the main paper, we provide a thorough quantitative and qualitative analysis of the proposed method in this supplementary material. In Appendix A, we offer more extensive experiments on 3D captioning, 3D grounding, and the high-level instruction task.
In Appendix B, additional qualitative analyses are presented across multiple downstream tasks, encompassing both good and challenging cases in question answering. Finally, in Appendix C, we showcase the details of our generated nu-Caption and nu-Grounding datasets. ## Appendix A Additional Experiments In this section, we supplement the main paper with additional baseline comparisons, including Mini-GPT4 [53], LLaVA [31], Instruct-BLIP [28], and LLaMA-AdapterV2 [50]. In Section A.2, we also extend the visual grounding experiment to a broader range of categories. ### 3D Captioning As shown in Table 5, we evaluate the methods on our generated nu-Caption dataset. Compared to the previous 2D MLLMs, our model achieves 19.26% BLEU-4 and a 91.32% BERT Score, surpassing the previous SOTA method LLaMA-AdapterV2 by 11.81% BLEU-4 and 4.87% BERT Score. These results suggest that the direct application of 2D MLLMs to LiDAR data yields unsatisfactory outcomes and excludes vital information from the caption descriptions. At the same time, they confirm that our LiDAR-LLM demonstrates fundamental 3D scene understanding abilities, effectively capturing geometric relationships and performing common-sense reasoning over sparse LiDAR data. ### 3D Grounding For grounded captioning, as shown in Table 6, our model achieves 63.1% accuracy in the 5-category scenario, significantly surpassing the previous SOTA method Instruct-BLIP, which reaches 23.9%. To further demonstrate the effectiveness of LiDAR-LLM, we also train and test on 19 categories; our approach still shows an advantage of more than 25% over the 2D MLLMs. This indicates that our LiDAR-LLM possesses an understanding of localization information in 3D scenes and achieves object classification ability on LiDAR data. As shown in Table 7, we extend the visual grounding task to location outputs for five categories: car, pedestrian, bus, truck, and construction vehicle. The results indicate that our method efficiently produces object localizations based on the question prompt. In comparison to the single category presented in the main paper, our method does not decrease the BEV mIoU even when expanding to five categories. Specifically, for the challenging pedestrian category, our method achieves an mIoU of 9.05%. We refrain from using the LLM tokenizer to directly generate the 3D bounding box, as that can only leverage the classification (cross-entropy) loss to optimize the word tokens corresponding to the box location. To attain a more accurate bounding box, we project the last hidden state of the LLM through an MLP network and employ a regression (L2) loss for optimization. ### High-level Instruction Task As illustrated in Table 8, we compare our model's performance on nuScenes-QA with previous SOTA 2D MLLMs. The results highlight a significant improvement of our model over the previous methods. Notably, our model achieves an accuracy of 45.9% on the status task, a substantial advantage over the previous SOTA method, which attains only 9.0% accuracy. These results show the capability of our model to effectively handle high-level reasoning tasks. ## Appendix B Additional Qualitative Analysis We include additional samples for visualization, as shown in Fig. 4 and Fig. 5. **(a) Good cases.** In Fig. 4,
we show LiDAR-LLM's proficiency in aligning linguistic and LiDAR input and comprehending the LiDAR scene. For example, given the top scene in Fig. 4, under the captioning task (green section), the model accurately interprets the scene as a "street lined with trees" and deduces that it is "set in a city". It demonstrates the ability to extract information from the LiDAR data, respond to textual queries, and reason about its understanding of the scene. Moreover, LiDAR-LLM consistently generates coherent high-level actions. For example, in the bottom scene of Fig. 4, when tasked with providing the safest measures for the car, the model correctly identifies that the "traffic conditions are empty with only a few cars parked on the side" during the captioning task, indicating its observation and assessment of potential risks to the ego car. Guided by these reasoning priors, when planning "safe measures", it generates the action "go ahead", as this action avoids potential hazards. **(b) Bad cases.** Inevitably, there are also instances of failure, as illustrated in Fig. 5. Because LiDAR inherently provides less rich semantic information than cameras, LiDAR-LLM may exhibit "illusions" when confronted with elements beyond the scope of LiDAR coverage. For instance, in the captioning task for the top scene, LiDAR-LLM erroneously describes a clear and sunny scene as "cloudy" due to the absence of color information. Additionally, in the bottom scene, LiDAR-LLM predicts a "red" traffic light that does not even exist, as LiDAR data lacks color information for the model to learn from. Furthermore, there are errors in the grounding and high-level action tasks, which are highlighted in red. ## Appendix C LiDAR-Language Data In this section, we showcase the details of our generated nu-Caption and nu-Grounding datasets. **nu-Caption** As shown in Table 9, nu-Caption follows a progression of increasing complexity. Initially, it offers a broad depiction of the current scene or traffic conditions. Subsequently, it delves into finer details, providing a comprehensive and intricate description of objects and their relationships. Finally, it covers the recognition of potential risks on the road. To ensure high-quality questions aligned with these captions, we follow a systematic approach leveraging the LLaMA-Adapter and GPT-4. We begin by drafting the questions by hand; we then employ the capabilities of GPT-4 to refine and extend these questions, ensuring they meet the required standard of quality. Additionally, to generate high-quality QA pairs, we first utilize LLaMA-AdapterV2 to generate the original captions and then employ GPT-4 to refine and enhance the QA pairs. Consequently, we have collected a total of 349K LiDAR-text pairs for the training set and 72K pairs for the validation set. \\begin{table} \\begin{tabular}{c|c|c c} \\hline \\hline Tasks & Models & ACC-19 & ACC-5 \\\\ \\hline \\multirow{5}{*}{3D Grounding} & Mini-GPT4 & 5.1 & 21.2 \\\\ & LLaVA & 9.2 & 22.7 \\\\ & Instruct-BLIP & 8.4 & 23.9 \\\\ & LLaMA-AdapterV2 & 7.1 & 23.4 \\\\ & Ours & 34.4 & 63.1 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 6: Experimental results on the nu-Grounding dataset. ACC-19 and ACC-5 denote the mean Top-1 accuracy for scenarios with 19 and 5 categories, respectively.
\\begin{table} \\begin{tabular}{c|c|c c c c c} \\hline \\hline Tasks & Models & BLEU-1 & BLEU-2 & BLEU-3 & BLEU-4 & BERT Score \\\\ \\hline \\multirow{5}{*}{3D Captioning} & Mini-GPT4 & 14.97 & 6.76 & 3.74 & 2.63 & 84.38 \\\\ & LLaVA & 19.92 & 12.10 & 8.57 & 5.37 & 85.01 \\\\ & Instruct-BLIP & 18.67 & 13.38 & 7.41 & 5.20 & 85.89 \\\\ & LLaMA-AdapterV2 & 30.17 & 17.34 & 10.40 & 7.45 & 86.45 \\\\ & Ours & 40.98 & 29.96 & 23.43 & 19.26 & 91.32 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 5: Experimental results on the nu-Caption dataset. Our model outperforms all baseline models on all evaluation metrics. Figure 4: Some correct qualitative examples of LiDAR-LLM's predictions. **nu-Grounding** To further endow the model with instance-level perception abilities, we collect the nu-Grounding dataset, which consists of two main parts: grounded captioning and visual grounding. As shown in Table 10, in grounded captioning the questions contain bounding boxes and aim to generate captions that specifically describe the localized areas. Visual grounding, conversely, names objects in the questions and requires the model to answer with bounding boxes. These challenging data, with bounding boxes described by 7 values [xmin, ymin, zmin, xmax, ymax, zmax, angle], serve to enhance the model's instance-level perception. All nu-Grounding data are collected from the ground-truth annotations of the nuScenes dataset. **The general description of current scenes:** Question: ["This is the car's back right view. How would you uniquely describe the road scene?"] Answer: ["The road scene features a one-way street with a car driving on it, a motorcycle parked on the side, and a person riding a bicycle. The presence of a car, motorcycle, and bicycle on the same street suggests that the area is a mix of different modes of transportation, which is not common in many urban settings."] **The detailed description of objects:** Question: ["This is the car's back left view. In the view, are there any moving objects, and provide a description of them."] Answer: ["Yes, there are cars moving in the background of the scene. They are likely driving on the road, and their presence adds to the overall activity and movement in the scene."] **The recognition of potential risks:** Question: ["This is the car's back left view. From the current scene, identify any potential challenges a driver might face."] Answer: ["In this scene, a car is parked on a street next to a forest. The driver might face challenges such as limited visibility due to the trees and the dense forest. The presence of trees and branches on the street could obstruct the driver's view of other vehicles, pedestrians, or obstacles, increasing the risk of accidents."]