# From FAIR to RHIC, hyper clusters and an effective strange EoS for QCD†
Footnote †: Presented at the international conference on Strangeness in Quark Matter 2011
J. Steinheimer\\({}^{1}\\), A. Botvina\\({}^{1,4}\\), K. Gudima\\({}^{1,5}\\), I. Mishustin\\({}^{1,6}\\), S. Schramm\\({}^{1}\\), M. Bleicher\\({}^{1}\\), H. Stocker\\({}^{1,2,3}\\)
\\({}^{1}\\) FIAS, Johann Wolfgang Goethe University, Frankfurt am Main, Germany
\\({}^{2}\\) Institut für Theoretische Physik, Goethe-Universität, Max-von-Laue-Str. 1, D-60438 Frankfurt am Main, Germany
\\({}^{3}\\) GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstr. 1, D-64291 Darmstadt, Germany
\\({}^{4}\\)Institute for Nuclear Research, Russian Academy of Sciences, 117312 Moscow, Russia
\\({}^{5}\\)Institute of Applied Physics, Academy of Sciences of Moldova, MD-2028 Kishinev, Moldova
\\({}^{6}\\)Kurchatov Institute, Russian Research Center, 123182 Moscow, Russia
## 1 Introduction
The objective of the low-energy heavy-ion collider programs at the RHIC facility on Long Island and of the planned projects NICA in Dubna and FAIR near the GSI facility is to find evidence for the onset of a deconfined phase [1, 2]. At the highest RHIC energies, experiments [3, 4, 5, 6] have already confirmed a collective behavior of the created system, signaling a change in the fundamental degrees of freedom. Lattice QCD calculations indeed predict a deconfinement crossover to occur in systems created at RHIC. As theoretical predictions for the thermodynamics of finite-density QCD are difficult (see e.g. [7, 8, 9, 10]), one hopes to experimentally confirm a possible first-order phase transition, and consequently the existence of a critical endpoint, by mapping out the phase diagram of QCD in small steps. Hadronic bulk observables that are usually connected to the onset of deconfinement are the particle flow and its anisotropies, as well as particle yields and ratios [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]. It has often been proposed that, e.g., the equilibration of strangeness would be an indication for the onset of a deconfined phase, although this idea is still under heavy debate [27, 28, 29, 30, 31]. Two main aspects of strangeness physics, closely connected to the equilibration of strangeness and to the hyperon interactions, are the formation of nuclear clusters with strange content and the bulk properties of very dense nuclear matter with finite strangeness content.
## 2 Hypernuclei
Exotic forms of deeply bound objects with strangeness have been proposed [32] as states of matter consisting either of baryons or of quarks. The H di-baryon was predicted by Jaffe [33], and later many more bound di-baryon states with strangeness were proposed using quark potentials [34, 35] or the Skyrme model [36]. However, the non-observation of multi-quark bags, e.g. strangelets, is still one of the open problems of intermediate- and high-energy physics. On the hadronic side, hypernuclei have long been known to exist and to be produced in heavy-ion collisions [37, 38, 39, 40]. Metastable exotic multi-hypernuclear objects (MEMOs) as well as purely hyperonic systems of \\(\\Lambda\\)'s and \\(\\Xi\\)'s were introduced in [41, 42] as the hadronic counterparts to multi-strange quark bags [43, 44]. A motivation of hypernuclear physics is that it offers a direct experimental way to study hyperon-nucleon (\\(YN\\)) and hyperon-hyperon (\\(YY\\)) interactions (\\(Y=\\Lambda,\\Sigma,\\Xi,\\Omega\\)): the nucleus serves as a laboratory offering the unique opportunity to study basic properties of hyperons and their interactions.
### Hypernuclei production in the spectator fragments
In this work we focus on the production of hypernuclei in high-energy collisions of Au+Au ions. In such systems strangeness is produced abundantly and is likely to form clusters of different sizes. We can discriminate two distinct mechanisms for hypercluster formation in heavy-ion collisions. The first is the formation of hypernuclei in the hot and dense fireball of the most central collisions, where the general assumption is that hypernuclei are formed at, or shortly after, the hadronisation/chemical freeze-out of the produced hadrons. Here we focus on a different production mechanism: the absorption of hyperons in the spectator fragments of non-central heavy-ion collisions. In this scenario we are interested in hyperons which propagate with velocities close to the initial velocities of the nuclei, i.e., in the vicinity of the nuclear spectators. To calculate the absorption rate we employed the Ultra-relativistic Quantum Molecular Dynamics model (UrQMD v2.3) [45, 46] and the intra-nuclear cascade model (DCM) developed in Dubna [47] to estimate the model dependence of the predictions. The hyperons produced in the hot and dense stage of a heavy-ion collision can be absorbed by the spectators if their kinetic energy in the rest frame of the residual nucleus is lower than the attractive potential
Figure 1: Probability per event for the formation of conventional and strange spectator residuals (top panels), and their mean mass numbers (bottom panels) versus the number of captured \\(\\Lambda\\) hyperons (H), calculated with DCM and UrQMD model for p + Au and Au + Au collisions with energy of 2 GeV per nucleon (left panels), and 20 GeV per nucleon (right panels). The reactions and energies are noted in the figure by different histograms.
energy, i.e., the hyperon potential given by [48]:
\\[V_{\\Lambda}(\\rho)=-\\alpha\\frac{\\rho}{\\rho_{0}}\\left[1-\\beta\\left(\\frac{\\rho}{\\rho_{0}}\\right)^{2/3}\\right], \\tag{1}\\]
where \\(\\alpha=57.5\\) MeV and \\(\\beta=0.522\\). The local nucleon density \\(\\rho\\) at the hyperon's position is calculated within the hadronic transport models; the details of the computation and further results on the properties of the absorbed hyperons can be found in [49]. Figure 1 shows the resulting probabilities for the formation of conventional and strange spectator residuals (top panels), and their mean mass numbers (bottom panels), versus the number of captured \\(\\Lambda\\) hyperons (H), calculated with the DCM and UrQMD models for p + Au and Au + Au collisions at an energy of 2 GeV per nucleon (left panels) and 20 GeV per nucleon (right panels). One clearly observes that the production of heavy multi-hypernuclei is possible at FAIR.
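For a quick numerical check (not part of the original analysis), Eq. (1) and the capture criterion can be evaluated directly. The sketch below, in Python, uses the parameter values quoted above and treats the capture condition as a simple threshold comparison between the hyperon kinetic energy in the spectator rest frame and the depth of the attractive potential.

```python
# Lambda-hyperon potential of Eq. (1) [48]:
#   V(rho) = -alpha*(rho/rho0)*(1 - beta*(rho/rho0)**(2/3))
ALPHA = 57.5   # MeV
BETA = 0.522

def v_lambda(rho_ratio):
    """Potential energy (MeV) of a Lambda at local density rho/rho0."""
    return -ALPHA * rho_ratio * (1.0 - BETA * rho_ratio ** (2.0 / 3.0))

def is_captured(e_kin_mev, rho_ratio):
    """Capture occurs if the kinetic energy in the rest frame of the
    residual nucleus is smaller than the depth of the attractive well."""
    return e_kin_mev < -v_lambda(rho_ratio)

# At saturation density (rho = rho0) the well depth is
# 57.5*(1 - 0.522) ~ 27.5 MeV, so slow hyperons are captured:
well_depth = -v_lambda(1.0)
```

A hyperon with, say, 20 MeV of kinetic energy in the spectator frame would be captured at saturation density, while one with 30 MeV would escape.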
## 3 The strange equation of state
The strange EoS is of particular interest for the understanding of several aspects of QCD:
1. As has been shown in [50], the net-strangeness distribution in the phase space of a heavy-ion collision can fluctuate, even though the total net strangeness is zero. To treat such a system dynamically, the equation of state for \\(\\rho_{s}\\neq 0\\) needs to be evaluated.
2. Compact stars are very dense and long-lived objects. Due to \\(\\beta\\)-equilibrium inside the star, net-strangeness conservation is violated by the weak interaction.
3. Lattice QCD results at finite \\(\\mu_{B}\\) are often evaluated through a Taylor expansion in \\(\\mu_{B}\\) at \\(\\mu_{B}=\\mu_{S}=0\\). A vanishing strange number chemical potential induces a non-vanishing net strangeness, which means that the equation of state of net-strange matter is calculated.
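The expansion mentioned in point 3 can be written explicitly in standard lattice notation (this equation is a textbook form, not reproduced from the paper): the pressure is expanded in even powers of \\(\\mu_{B}/T\\), with coefficients evaluated at vanishing chemical potentials,

```latex
\frac{p(T,\mu_B)}{T^4}=\sum_{n=0}^{\infty} c_{2n}(T)\left(\frac{\mu_B}{T}\right)^{2n},
\qquad
c_{2n}(T)=\frac{1}{(2n)!}\,
\left.\frac{\partial^{2n}\,(p/T^4)}{\partial(\mu_B/T)^{2n}}\right|_{\mu_B=\mu_S=0}.
```

Since the derivatives are taken at \\(\\mu_{S}=0\\) rather than at vanishing net strangeness, the expanded equation of state generally carries a non-zero net strange density, as stated in point 3.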
First investigations of the strange equation of state were carried out in [51], where one usually considered a first-order transition from a hadronic to a quark phase. In our study we employ the recently developed SU(3)\\({}_{f}\\) parity doublet model for hadronic matter and its extension to quark degrees of freedom. In this approach an explicit mass term for baryons is possible, and the signature for chiral symmetry restoration is the degeneracy of the baryons and their respective parity partners. An effective quark and gluon contribution is added via a PNJL-like approach [52, 53]. This model uses the Polyakov loop \\(\\Phi\\) as the order parameter for deconfinement. \\(\\Phi\\) is defined via \\(\\Phi=\\frac{1}{3}\\mathrm{Tr}[\\exp{(i\\int d\\tau A_{4})}]\\), where \\(A_{4}=iA_{0}\\) is the temporal component of the SU(3) gauge field. To suppress the hadronic contributions to the equation of state at high temperatures and densities, effects of the finite volume of hadrons are included in a thermodynamically consistent way. This model allows for a smooth transition from a hadronic to a quark dominated system, where the order parameters and thermodynamic quantities are in reasonable agreement with recent lattice data. For a detailed description of the parity model and comparisons with lattice data we refer to [54].
Figure 2 presents our results for the order parameter of the chiral phase transition as a function of \\(\\mu_{B}\\) and \\(\\mu_{S}\\) at fixed temperature. The red lines indicate paths of constant values of \\(f_{s}=\\rho_{s}/\\rho_{B}\\), the strangeness per baryon fraction. The critical endpoint of the chiral phase transition is located at a temperature \\(T=56\\) MeV and \\(\\mu_{B}^{cep}\\approx 1150\\) MeV. We observe that for increasing \\(f_{s}\\) the change in the order parameter becomes steeper, and the value of \\(T_{CEP}\\) increases slightly, to \\(T_{CEP}=68\\) MeV for \\(f_{s}=0.5\\). For a gas of deconfined quarks there is a strong correlation between baryon number and strangeness. In a hadronic medium such a correlation is usually not trivial, as strangeness can be found in mesons and baryons. These considerations led to the idea that the so-called strangeness-baryon correlation factor \\(c_{BS}=-3\\frac{\\langle N_{B}N_{S}\\rangle-\\langle N_{B}\\rangle\\langle N_{S} \\rangle}{\\left\\langle N_{S}^{2}\\right\\rangle-\\left\\langle N_{S}\\right\\rangle^ {2}}\\) is sensitive to the deconfinement and/or chiral phase transition [55]. On the other hand, the strangeness to baryon ratio \\(f_{s}\\) should also be sensitive to any phase transition at finite baryon densities. On the lattice such quantities are usually calculated as functions of the expansion coefficients. The information that can be extracted from these quantities is exemplified in figure 3. Here we show \\(c_{BS}\\) as a function
Figure 2: (Color online) Contour plots of the normalized chiral condensates as a function of the chemical potentials \\(\\mu_{B}\\) and \\(\\mu_{S}\\) for fixed temperature (a: \\(T=56\\) MeV, b: \\(T=150\\) MeV). The red lines correspond to different values of a fixed strangeness to baryon fraction \\(f_{s}\\).
of temperature for \\(\\mu_{B}/T=3\\) and \\(\\mu_{S}=0\\). One can observe a distinct peak at \\(T\\approx 150\\) MeV (corresponding to \\(\\mu_{B}=450\\) MeV), which can be identified with the crossover transition of the chiral condensate. Such a behavior of \\(c_{BS}\\) has been predicted and has also been shown to exist in lattice data [56]. At higher temperatures the strangeness-baryon correlation approaches unity, closely resembling the behavior of the quark and gluon fraction \\(\\lambda=e_{Quarks+Gluons}/e_{Tot}\\) of the system. For comparison, figure 3 also shows the temperature dependence of \\(f_{s}\\) at \\(\\mu_{S}/T=1\\) and \\(\\mu_{B}=0\\). This quantity is even more sensitive to the quark-gluon fraction than \\(c_{BS}\\), while it appears not very sensitive to the chiral phase transition.
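As an illustration of the definition of \\(c_{BS}\\) given above, the correlator can be estimated from event-by-event multiplicities. The toy Monte Carlo below (our own sketch, not from the paper) samples an idealized quark gas in which every strange quark carries baryon number \\(1/3\\) and strangeness \\(-1\\); for such a perfectly correlated system the estimator returns \\(c_{BS}\\simeq 1\\), the high-temperature limit discussed in the text.

```python
import random

def c_bs(nb_samples, ns_samples):
    """c_BS = -3 * cov(N_B, N_S) / var(N_S), from event-by-event samples."""
    n = len(nb_samples)
    mb = sum(nb_samples) / n
    ms = sum(ns_samples) / n
    cov = sum((b - mb) * (s - ms)
              for b, s in zip(nb_samples, ns_samples)) / n
    var = sum((s - ms) ** 2 for s in ns_samples) / n
    return -3.0 * cov / var

random.seed(0)
events = 100_000

# Toy "quark gas": each strange quark contributes B = +1/3 and S = -1;
# an independent light-quark sector adds uncorrelated baryon number.
ns_q = [random.randint(0, 10) for _ in range(events)]
nb = [n / 3.0 + random.randint(0, 5) for n in ns_q]
ns = [-n for n in ns_q]
c_quark_gas = c_bs(nb, ns)   # close to 1 for the correlated quark gas
```

Replacing the correlated strange-quark baryon number by an independent random variable would instead drive the estimator toward zero, mimicking an uncorrelated hadronic medium.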
## 4 Summary
We presented results on the production of hypernuclear systems in high energy collisions of heavy ions. In particular we have investigated the production of hyperons in peripheral relativistic heavy ion collisions and their capture by the attractive potential of spectator residues. The absorption rate of hyperons in the excited spectators is shown to be quite substantial. This opens the possibility to study the phase transition in nuclear matter with a strangeness admixture and reveal information about the properties of hypernuclei, their binding energies, and, finally, \\(YN\\) and \\(YY\\) interactions. In the second part of this work we discuss properties of the phase
Figure 3: (Color online) Shown are the strangeness to baryon correlation coefficient \\(c_{BS}\\) (red short dashed line) compared to the quark-gluon fraction \\(\\lambda=e_{Quarks+Gluons}/e_{Tot}\\) (black solid line) as a function of temperature for \\(\\mu_{B}/T=3\\) and \\(\\mu_{S}=0\\). The plot also shows the strangeness per baryon fraction \\(f_{s}\\) (green dashed line) and the quark-gluon fraction \\(\\lambda\\) (grey solid line) as a function of temperature for \\(\\mu_{S}/T=1\\) and \\(\\mu_{B}=0\\).
diagram at finite net-strangeness density within an SU(3) parity doublet model. We find that the location of the critical endpoint shifts to a slightly higher temperature for a finite net strangeness, a prediction that can be confronted with lattice results. In particular, the strangeness-baryon correlation factor \\(c_{BS}\\) and the strangeness per baryon fraction \\(f_{s}\\) both prove to be sensitive to the deconfined fraction of the system, while \\(c_{BS}\\) also shows a distinct peak at the chiral crossover at finite chemical potential.
This work was supported by the Hessian LOEWE initiative Helmholtz International Center for FAIR, EMMI and used computational resources provided by the (L)CSC at Frankfurt.
## References
* [1] M. Gyulassy and L. McLerran, Nucl. Phys. A **750**, 30 (2005)
* [2] C. Hohne [CBM Collaboration], Nucl. Phys. A **749**, 141 (2005).
* [3] J. Adams _et al._ [STAR Collaboration], Nucl. Phys. A **757**, 102 (2005)
* [4] B. B. Back _et al._, Nucl. Phys. A **757**, 28 (2005)
* [5] I. Arsene _et al._ [BRAHMS Collaboration], Nucl. Phys. A **757**, 1 (2005)
* [6] K. Adcox _et al._ [PHENIX Collaboration], Nucl. Phys. A **757**, 184 (2005)
* [7] Z. Fodor, S. D. Katz and K. K. Szabo, Phys. Lett. B **568**, 73 (2003)
* [8] C. R. Allton _et al._, Phys. Rev. D **66**, 074507 (2002)
* [9] E. Laermann and O. Philipsen, Ann. Rev. Nucl. Part. Sci. **53**, 163 (2003)
* [10] P. de Forcrand and O. Philipsen, JHEP **0811**, 012 (2008)
* [11] J. Y. Ollitrault, Phys. Rev. D **46**, 229 (1992).
* [12] D. H. Rischke, Nucl. Phys. A **610**, 88C (1996)
* [13] H. Sorge, Phys. Rev. Lett. **78**, 2309 (1997)
* [14] H. Heiselberg and A. M. Levy, Phys. Rev. C **59**, 2716 (1999)
* [15] S. Scherer _et al._, Prog. Part. Nucl. Phys. **42**, 279 (1999).
* [16] S. Soff, S. A. Bass, M. Bleicher, H. Stoecker and W. Greiner, arXiv:nucl-th/9903061.
* [17] J. Brachmann _et al._, Phys. Rev. C **61**, 024909 (2000)
* [18] L. P. Csernai and D. Rohrich, Phys. Lett. B **458**, 454 (1999)
* [19] B. Zhang, M. Gyulassy and C. M. Ko, Phys. Lett. B **455**, 45 (1999)
* [20] P. F. Kolb, J. Sollfrank and U. W. Heinz, Phys. Rev. C **62**, 054909 (2000)
* [21] M. Bleicher and H. Stoecker, Phys. Lett. B **526**, 309 (2002)
* [22] H. Stoecker, Nucl. Phys. A **750**, 121 (2005)
* [23] X. l. Zhu, M. Bleicher and H. Stoecker, Phys. Rev. C **72**, 064911 (2005)
* [24] H. Petersen, Q. Li, X. Zhu and M. Bleicher, Phys. Rev. C **74**, 064908 (2006)
* [25] M. Gazdzicki _et al._ [NA49 Collaboration], J. Phys. G **30**, S701 (2004)
* [26] M. Gazdzicki and M. I. Gorenstein, Acta Phys. Polon. B **30**, 2705 (1999)
* [27] P. Koch, B. Muller and J. Rafelski, Phys. Rept. **142**, 167 (1986).
* [28] C. Greiner, P. Koch and H. Stoecker, Phys. Rev. Lett. **58**, 1825 (1987).
* [29] E. L. Bratkovskaya, W. Cassing, C. Greiner, M. Effenberger, U. Mosel and A. Sibirtsev, Nucl. Phys. A **681**, 84 (2001)
* [30] P. Braun-Munzinger, J. Stachel and C. Wetterich, Phys. Lett. B **596**, 61 (2004)
* [31] C. Greiner, P. Koch-Steinheimer, F. M. Liu, I. A. Shovkovy and H. Stoecker, J. Phys. G **31**, S725 (2005)
* [32] A. R. Bodmer, Phys. Rev. D **4**, 1601 (1971).
* [33] R. L. Jaffe, Phys. Rev. Lett. **38**, 195 (1977)
* [34] J. T. Goldman, K. Maltman, G. J. Stephenson, K. E. Schmidt and F. Wang, Phys. Rev. Lett. **59**, 627 (1987).
* [35] J. T. Goldman, K. Maltman, G. J. Stephenson, J. L. Ping and F. Wang, Mod. Phys. Lett. A **13**, 59 (1998)
* [36] B. Schwesinger, F. G. Scholtz and H. B. Geyer, Phys. Rev. D **51**, 1228 (1995)
* [37] P. Braun-Munzinger and J. Stachel, J. Phys. G **21**, L17 (1995)
* [38] J. K. Ahn _et al._, Phys. Rev. Lett. **87**, 132504 (2001).
* [39] H. Takahashi _et al._, Phys. Rev. Lett. **87**, 212502 (2001).
* [40] A. Andronic, P. Braun-Munzinger, J. Stachel and H. Stocker, Phys. Lett. B **697**, 203 (2011)
* [41] J. Schaffner, H. Stoecker and C. Greiner, Phys. Rev. C **46**, 322 (1992).
* [42] J. Schaffner, C. B. Dover, A. Gal, C. Greiner and H. Stoecker, Phys. Rev. Lett. **71**, 1328 (1993).
* [43] E. P. Gilson and R. L. Jaffe, Phys. Rev. Lett. **71**, 332 (1993)
* [44] J. Schaffner-Bielich, C. Greiner, A. Diener and H. Stoecker, Phys. Rev. C **55**, 3038 (1997)
* [45] M. Bleicher _et al._, J. Phys. G **25**, 1859 (1999)
* [46] S. A. Bass _et al._, Prog. Part. Nucl. Phys. **41**, 255 (1998)
* [47] V. D. Toneev and K. K. Gudima, Nucl. Phys. A **400**, 173C (1983).
* [48] I. Ahmad, M. Mian and M. Z. Rahman Khan, Phys. Rev. C **31**, 1590 (1985).
* [49] A. S. Botvina, K. K. Gudima, J. Steinheimer, M. Bleicher and I. N. Mishustin, arXiv:1105.1341 [nucl-th].
* [50] J. Steinheimer, M. Mitrovski, T. Schuster, H. Petersen, M. Bleicher and H. Stoecker, Phys. Lett. B **676**, 126 (2009)
* [51] K. S. Lee and U. W. Heinz, Phys. Rev. D **47**, 2068 (1993).
* [52] K. Fukushima, Phys. Lett. B **591**, 277 (2004)
* [53] C. Ratti, M. A. Thaler and W. Weise, Phys. Rev. D **73**, 014019 (2006)
* [54] J. Steinheimer, S. Schramm and H. Stocker, Phys. Rev. C **84**, 045208 (2011)
* [55] V. Koch, A. Majumder and J. Randrup, Phys. Rev. Lett. **95**, 182301 (2005)
* [56] C. Schmidt, PoS **CPOD2009**, 024 (2009)

**Abstract.** Two major aspects of strange-particle physics at the upcoming FAIR and NICA facilities and at the RHIC low-energy scan are discussed. First, a distinct production mechanism for hypernuclei is presented, namely the production of hypernuclei from \\(\\Lambda\\)'s absorbed in the spectator matter of peripheral heavy-ion collisions. Second, since strangeness is not uniformly distributed in the fireball of a heavy-ion collision, the properties of the equation of state depend on the local strangeness fraction; likewise, inside neutron stars strangeness is not conserved, and lattice studies of finite-density QCD usually rely on an expansion of thermodynamic quantities at zero strange chemical potential, hence at non-zero strange densities. We therefore discuss recent investigations of the EoS of strange QCD and present results from an effective EoS of QCD that includes the correct asymptotic degrees of freedom and a deconfinement and chiral phase transition.
PACS numbers: 25.75.-q, 21.80.+a, 21.65.Mn, 12.38.Aw, 12.39.Fe
# Real-time, chirped-pulse heterodyne detection at room-temperature with 100GHz 3dB-bandwidth mid-infrared quantum-well photodetectors
Quyang Lin\\({}^{1}\\), Michael Hakl\\({}^{1}\\), Sylvie Lepilliet\\({}^{1}\\), Hua Li\\({}^{2}\\), Jean-François Lampin\\({}^{1}\\), Emilien Peytavit\\({}^{1}\\) and Stefano Barbieri\\({}^{1,*}\\)

\\({}^{1}\\) Institute of Electronics, Microelectronics and Nanotechnology, CNRS, Univ. Lille, Univ. Polytechnique Hauts-de-France, UMR 8520, F-59000 Lille, France

\\({}^{2}\\) Key Laboratory of Terahertz Solid State Technology, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, 865 Changning Road, Shanghai 200050, China

\\({}^{*}\\) [email protected]
## 1 Introduction
The quest for broadband photodetectors in the mid-infrared (MIR - \\(\\lambda\\)=3-12 \\(\\mu\\)m), with radio-frequency (RF) bandwidths in the tens of GHz or more, has gained momentum since the end of the 80s with the advent of unipolar devices based on intersubband (ISB) transitions in III-V semiconductor heterostructures (GaAs/AlGaAs and InGaAs/InAlAs) [1]. Thanks to ultrafast electronic non-radiative lifetimes, these structures offer intrinsic response times in the ps range, potentially leading to RF bandwidths of tens of GHz, provided that the detector RC time constant is short enough [2, 3, 4, 5, 6, 7]. In this respect, the recent exploitation of metallic antennas of micrometric size to in-couple the impinging mid-IR radiation to the semiconductor heterostructure active region has opened new perspectives by allowing the detector area to be shrunk without compromising the light-collection efficiency [8, 9]. On the one hand, compared to standard detectors based on the so-called "mesa" geometry, this allows the detector's dark current to be reduced without affecting the responsivity. The other advantage is a reduction of the RC time constant, which can be exploited to increase the device speed [6].
In the first part of this work we have pushed forward the study and optimisation of antenna-coupled MIR quantum-well infrared photodetectors (QWIPs), in order to improve their performance both in terms of responsivity and bandwidth and, at the same time, to assess experimentally what their limiting factors are. To this end we have fabricated and characterised experimentally three sets of GaAs/AlGaAs-based QWIPs, based on two-dimensional matrices of metallic patch-antennas, and measured their frequency response at room-temperature in the 0-110GHz and 140GHz-220GHz frequency bands. Depending on the number of antenna elements, we find that the latter remains within 3dB up to 100GHz (3\\(\\times\\)3 and 2\\(\\times\\)2 matrices), the broadest bandwidth reported to date for photodetectors based on ISB transitions. At higher frequencies we find a roll-off between 7 and 9dB/octave. By fitting the frequency response with the help of a small-signal circuit model that we extract from impedance measurements, we conclude unequivocally that the high-frequency roll-off is limited by the intrinsic carrier capture time, of \\(\\sim 2.5\\)ps.
By optimizing the QWIP design, a maximum responsivity of 0.3 A/W is obtained at 10.3\\(\\mu\\)m wavelength, a value significantly larger than those previously reported for patch-antenna QWIPs at 300K (\\(\\sim\\) 0.15-0.2A/W) [6, 8]. The responsivity decreases with increasing incident optical power, a fact that we attribute to optical saturation of the ISB transition [10, 11]. The corresponding saturation intensity, of only a few tens of kW/cm\\({}^{2}\\), is consistent with the fact that the antennas provide a radiation-collection area that is larger than the physical area of the detector [12].
Applications of ultrafast QWIPs are only in their early stages, with many exciting developments in disparate fields, such as free-space communications [13, 14, 15, 16, 17, 18], gas sensing and spectroscopy [19, 20, 21, 22, 23, 24, 25, 26], metrology [27, 28], ultrafast physics [29], and astrophysics [30, 31, 32]. In the second part of this work, to assess the potential of our QWIPs for fast sensing/spectroscopy applications, we have used them to detect the heterodyne beating between a quantum cascade laser (QCL) operated in pulsed mode and another one driven in continuous wave (CW). In this way, with the help of a fast oscilloscope, we show that it is possible to measure in real time the frequency down-chirp resulting from the thermal transient of the pulsed QCL, spanning a range of more than 60GHz. By allowing the acquisition of frequency-calibrated gas spectra with a high signal-to-noise ratio in a single shot, over timescales from tens of ns to ms, this technique appears particularly promising for active remote sensing and laser-ranging applications.
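The principle behind the real-time chirp measurement above can be sketched numerically: the beat note between a chirped and a fixed-frequency laser is digitized, and the instantaneous frequency is recovered window by window. The sketch below (all numbers illustrative, not taken from the paper, and the effective sampling rate deliberately unrealistic) synthesizes a 60 GHz linear down-chirp and tracks it with a simple zero-crossing frequency estimator.

```python
import math

# Synthetic chirped heterodyne beat: linear down-chirp from 80 to 20 GHz.
fs = 4e11                 # effective sampling rate, samples/s (illustrative)
n = 200_000               # 0.5 us record
f0, f1 = 8e10, 2e10       # start/stop beat frequencies, Hz
k = (f1 - f0) / (n / fs)  # chirp rate, Hz/s
sig = [math.cos(2 * math.pi * (f0 * (i / fs) + 0.5 * k * (i / fs) ** 2))
       for i in range(n)]

def freq_by_zero_crossings(x, fs):
    """Estimate the mean frequency of a window by counting sign changes:
    a tone of frequency f crosses zero ~2*f*T times in a window of length T."""
    crossings = sum(1 for a, b in zip(x, x[1:]) if a * b < 0)
    return 0.5 * crossings * fs / (len(x) - 1)

win = 20_000   # 50 ns analysis window
f_inst = [freq_by_zero_crossings(sig[i:i + win], fs)
          for i in range(0, n, win)]
# f_inst now tracks the down-chirp from ~80 GHz down to ~20 GHz
```

In practice one would use a short-time Fourier transform of the oscilloscope trace rather than zero-crossing counting, but the windowed-estimation idea is the same.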
## 2 Results
### Spectral characterisation and device responsivity
The QWIP semiconductor active region consists of six, 6nm-thick, \\(n\\)-doped GaAs quantum wells (QWs) separated by 40nm-thick, undoped Al\\({}_{0.2}\\)Ga\\({}_{0.8}\\)As barriers, yielding a nominal bound to quasi-bound ISB transition energy of \\(\\sim 115\\)meV (\\(\\lambda\\sim 10.8\\mu\\)m). Details on the heterostructure layers and device fabrication are given in Methods. The final device geometry is a matrix of square metallic (Ti/Au) patches of side \\(s\\) and separated by a period \\(p\\). Around each patch the semiconductor is etched down to a bottom metallic ground-plane. As shown in the SEM pictures in Fig. 1(a) the patches are electrically connected together, and to a 50\\(\\Omega\\) microwave coplanar line for RF extraction, by \\(\\sim 150\\)nm wide, Ti/Au wire air-bridges.
In this work we have studied matrices with different number of patches in order to probe the effect on the photodetectors RC time constant. The devices are based on a 5\\(\\times\\)5 and a 3\\(\\times\\)3 matrix of period \\(p=5\\mu\\)m, and a 2\\(\\times\\)2 matrix of period \\(p=10\\mu\\)m, that we label M5, M3 and M2 respectively. For all the devices \\(s=1.8\\mu\\)m. This parameter defines the frequency of the fundamental TM\\({}_{010}\\) mode of a single resonator, the one we are interested in, which is, essentially, a \\(\\lambda/2\\) Fabry-Perot mode oscillating in the plane of the patches, perpendicularly to the connecting wire bridges [33, 34]. The TM\\({}_{100}\\) mode oscillating in the orthogonal direction is instead perturbed by the wire bridges (despite their small size), leading to a lower overlap with the QWIP active region, and therefore a weaker absorption [6].
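Since the TM\\({}_{010}\\) mode is essentially a half-wavelength resonance across the patch, its wavelength can be estimated as \\(\\lambda_{res}\\approx 2\\,n_{eff}\\,s\\). The snippet below illustrates this scaling; the effective index value is an assumption chosen for illustration (it is not given in the text), so the resulting mode energy is only indicative of the \\(\\sim\\)120 meV range quoted later.

```python
H_PLANCK_EV = 4.135667e-15  # Planck constant, eV*s
C = 2.998e8                 # speed of light, m/s

def tm010_wavelength_um(side_um, n_eff):
    """Half-wave patch resonance: lambda_res = 2 * n_eff * s."""
    return 2.0 * n_eff * side_um

def energy_mev(lambda_um):
    """Photon energy in meV for a wavelength in micrometres."""
    return 1e3 * H_PLANCK_EV * C / (lambda_um * 1e-6)

# s = 1.8 um from the text; n_eff = 2.85 is an assumed effective index.
lam = tm010_wavelength_um(1.8, 2.85)   # ~10.3 um
e_cav = energy_mev(lam)                # ~120 meV
```

Changing the assumed effective index shifts the estimate accordingly, which is why the cavity-mode energy is extracted from measured absorption spectra rather than from this simple formula.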
For a given \\(s\\), changing the periodicity \\(p\\) affects the radiation collection area of each individual patch in the array [35, 36]. The experimental characterisation and optimisation of the optical absorption of the patch-antenna arrays, made with the help of a MIR microscope coupled to a Fourier transform (FTIR) spectrometer, was carried out over a large number of matrices by varying \\(s\\) and \\(p\\). The main results are summarised in Supplement 1. In the case where the optical excitation area is smaller than the surface of the matrix (i.e. "infinite" matrix approximation), for the TM\\({}_{010}\\) mode we find peak absorptions at \\(\\sim 10.5\\mu\\)m (i.e. virtually coincident with the nominal wavelength of the ISB transition) of 96% and 40% for \\(p=5\\mu\\)m and \\(p=10\\mu\\)m respectively. In the former case we are therefore very close to so-called "critical" coupling (100% peak optical absorption). The choice of \\(p=10\\mu\\)m for device M2 is the result of a compromise between the need to keep a sizeable antenna collection area while having a reasonable spatial overlap with the waist of the focused QCLs used throughout this work, of approximately 25\\(\\mu\\)m diameter (see below).
The room-temperature responsivity of the devices _vs_ wavelength in the range 9.9\\(\\mu\\)m-10.8\\(\\mu\\)m, obtained with an extended cavity (EC) QCL polarized perpendicularly to the connecting wires is reported in Fig. 1(a) (dots), for an incident power of 4.3mW. The QCL beam was focused with an AR coated aspheric chalcogenide-glass lens (NA = 0.56; 5 mm focal length), yielding a waist diameter of \\(\\sim\\) 25\\(\\mu\\)m, that we measured with a razor blade. We obtain a maximum responsivity close to 0.3A/W at 10.3\\(\\mu\\)m for device M5. As expected the responsivity is reduced by decreasing the number of patches. Indeed the waist area roughly matches that of a 5\\(\\times\\)5 matrix. As a consequence, especially for devices M3 and M2, part of the incident radiation is directly reflected by the metallic ground-plane.
The dashed lines in Fig. 1(a) represent the experimental optical absorption for each device, normalised to its peak responsivity (Supplement 1). The observed systematic red shift between the peak absorption and peak responsivity is a consequence of the fact that the QWIP ISB transition energy is not perfectly coincident with the energy of the TM\\({}_{010}\\) cavity mode. The
Figure 1: (a) Room-temperature responsivity _vs_ wavelength of the QWIPs studied in this work, measured with an extended-cavity QCL (dots). The incident power is of 4.3mW. For comparison, the absorption spectrum of each device measured with an FTIR spectrometer, normalised to its peak responsivity, is also reported (dashed lines). The devices are labelled respectively M5, M3 and M2, and their SEM pictures are shown on the right. For all of them the patches consist of metallic squares of lateral side \\(s=1.8\\mu\\)m. In the M5 and M3 devices, individual patches are separated by a period \\(p=5\\mu\\)m, while for device M2 we used \\(p=10\\mu\\)m. As shown in the pictures, patches are electrically connected together, and to a 50\\(\\Omega\\) integrated coplanar line, by suspended gold wires of \\(\\sim\\) 150nm diameter (only the first part of the coplanar line is visible in the SEM images). (b) Responsivity _vs_ incident power at \\(\\lambda=10.3\\mu\\)m for the 3 devices studied, measured at 300K. The applied biases are 3.9V (devices M5 and M3) and 3.75V (device M2). The error bars take into account the uncertainty in the measurement of the incident power (\\(\\sim\\pm 0.4mW\\)). The dashed lines are the fits of the responsivities using the function \\(R=R_{0}/(1+P_{inc}/P_{sat})\\) (see the main text). Inset. Device M5: ratio between the photocurrent at 32.3mW incident power and the dark current.
QWIP absorption can be computed analytically using Coupled Mode Theory (CMT) [35, 37, 12]: for device M5 we find good agreement with the experimental absorption spectrum assuming an ISB transition energy \\(E_{isb}=115\\) meV and a cavity mode energy of \\(E_{cav}=122.5\\)meV (Supplement 1). This gives an external quantum efficiency of \\(\\sim 15\\%\\) for detector M5. We note that if the ISB transition energy were perfectly coincident with that of the cavity mode (\\(E_{isb}=E_{cav}=122.5\\)meV), this value would rise to \\(\\sim 25\\%\\), with a corresponding peak responsivity of \\(\\sim 0.5\\)A/W.
As reported in Fig. 1(b), the responsivity of the devices measured at \\(\\lambda=10.3\\mu\\)m displays a sizeable decrease (up to \\(\\sim 40-60\\%\\) depending on the number of patches) with increasing power. In Ref. [12] it was shown that the optical saturation intensity of an ISB transition system can be strongly reduced if the latter is embedded inside an optical cavity of sub-wavelength volume, as is the case here. Using CMT, we compute a saturation intensity for our patch-antenna \\(I_{sat}\\sim 35\\)kW/cm\\({}^{2}\\) at \\(\\lambda=10.3\\mu\\)m. To estimate the corresponding incident saturation power, \\(P_{sat}\\), we must take into account the fact that each patch-antenna in the array collects photons on a surface larger than its physical area. As a result, at critical coupling, the incident saturation intensity is obtained by multiplying \\(I_{sat}\\) by the factor \\(s^{2}/p^{2}\\) (Supplement 1). Considering a waist diameter of \\(\\sim 25\\mu\\)m, and taking into account the different peak absorptions of each detector we finally obtain \\(P_{sat}\\sim 30\\)mW, \\(45\\)mW, and \\(20\\)mW for QWIPs M5, M3 and M2 respectively. The dashed lines in Fig. 1(b) represent the fits of the responsivities using the function \\(R=R_{0}/(1+P_{inc}/P_{sat})\\), where \\(P_{inc}\\) is the incident power and \\(R_{0}\\) and \\(P_{sat}\\) are used as fitting parameters (\\(R_{0}\\) is the responsivity at low incident power) [12]. From the fits we obtain \\(P_{sat}=47\\pm 3\\)mW, \\(50\\pm 20\\)mW and \\(20\\pm 0.1\\)mW for QWIPs M5, M3 and M2 respectively, in fairly good agreement with the computed values.
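The saturation fit described above can be reproduced with a few lines of code. Since \\(1/R=1/R_{0}+P_{inc}/(R_{0}P_{sat})\\) is linear in \\(P_{inc}\\), an ordinary least-squares line fit of \\(1/R\\) versus \\(P_{inc}\\) recovers both parameters; the sketch below uses synthetic data mimicking device M5 (\\(R_{0}=0.30\\) A/W, \\(P_{sat}=47\\) mW from the text), with an arbitrary small noise level.

```python
import random

def resp(p_inc, r0, p_sat):
    """Saturable responsivity model R = R0 / (1 + P_inc / P_sat)."""
    return r0 / (1.0 + p_inc / p_sat)

def fit_saturation(p, r):
    """Linearized fit: 1/R = 1/R0 + P/(R0*P_sat), ordinary least squares."""
    y = [1.0 / ri for ri in r]
    n = len(p)
    mx, my = sum(p) / n, sum(y) / n
    sxx = sum((pi - mx) ** 2 for pi in p)
    slope = sum((pi - mx) * (yi - my) for pi, yi in zip(p, y)) / sxx
    intercept = my - slope * mx
    return 1.0 / intercept, intercept / slope   # (R0, P_sat)

random.seed(2)
p = [1 + 3 * i for i in range(12)]                      # incident power, mW
r = [resp(pi, 0.30, 47.0) * (1 + 0.01 * (random.random() - 0.5))
     for pi in p]                                       # +/-0.5% noise
r0_fit, psat_fit = fit_saturation(p, r)
```

The published analysis used a direct nonlinear fit of \\(R(P_{inc})\\); the linearization here is just a convenient stdlib-only equivalent for noise levels this low.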
### Frequency response
The experimental setup for the measurement of the QWIPs' frequency response is based on the heterodyne mixing of a DFB QCL emitting at \\(\\sim 10.3\\mu\\)m with an EC QCL (the same used for Fig. 1(a)). Both lasers are operated in CW, and a MIR isolator is used to minimise optical feedback. As a consequence, the incident radiation is linearly polarised along the diagonal of the square patches, resulting in a \\(\\sim 50\\%\\) drop of absorption compared to Fig. 1. The incident powers on the QWIPs are \\(P_{1}=13\\)mW and \\(P_{2}=17.5\\)mW from the EC and DFB QCLs respectively. To avoid parasitic effects due to wire-bonding/packaging, the measurement of the heterodyne signal, oscillating at the difference between the emission frequencies of the two QCLs, is done directly on-wafer by positioning two sets of coplanar probes at the edge of the integrated \\(50\\Omega\\) coplanar line, followed by a bias-tee and a calibrated power meter covering respectively the 0-110GHz and 140GHz-220GHz frequency bands.
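For ideal heterodyne mixing, the photocurrent contains a DC term \\(R(P_{1}+P_{2})\\) and a beat term of amplitude \\(2R\\sqrt{P_{1}P_{2}}\\) at the difference frequency. The sketch below evaluates both for the incident powers quoted above; the responsivity value (0.13 A/W, roughly half the peak value, reflecting the diagonal polarization) is our assumption, and the resulting RF power is an ideal upper bound that ignores impedance mismatch and roll-off.

```python
import math

def beat_current(resp_a_per_w, p1_w, p2_w):
    """DC photocurrent and ideal heterodyne beat amplitude for two CW lasers:
    i(t) = R*(P1 + P2) + 2*R*sqrt(P1*P2)*cos(2*pi*df*t)."""
    i_dc = resp_a_per_w * (p1_w + p2_w)
    i_ac = 2.0 * resp_a_per_w * math.sqrt(p1_w * p2_w)
    return i_dc, i_ac

# Powers from the text; responsivity is an assumed illustrative value.
i_dc, i_ac = beat_current(0.13, 13e-3, 17.5e-3)
# Ideal beat power delivered to a 50-ohm load, in dBm:
p_rf_dbm = 10.0 * math.log10((i_ac ** 2 / 2.0) * 50.0 / 1e-3)
```

With these numbers the DC photocurrent comes out at \\(\\sim\\)4 mA, comparable to the 4.1 mA quoted for device M5 in Fig. 2, and the beat amplitude is necessarily slightly below the DC term since the two powers are unequal.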
In Fig. 2 we report representative experimental frequency response functions for devices M5, M3 and M2, obtained by sweeping the emission frequency of the EC QCL using the external grating, while the DFB QCL is kept at constant current. The devices are biased at 3.8V (M5), 3.85V (M3) and 4V (M2), corresponding to the maximum generated photocurrents (Supplement 1). The experimental power values are corrected for the attenuation of the bias-tees and coplanar probes, measured with a Vector Network Analyser (VNA). We obtain 3dB cutoffs of \\(\\sim 90\\)GHz for device M5 and of \\(\\sim 100\\)GHz for devices M3 and M2 (the cutoffs are defined relative to the peak response). These are the largest bandwidths reported to date for unipolar MIR photodetectors and, more generally, for any MIR photodetector. Beyond the 3dB cutoff the response drops by approximately 8dB/octave.
The frequency response of the photodetector is essentially the product of two transfer functions: the first describes the electrical response, while the second takes into account the intrinsic response time of the photo-excited electrons [1]. To obtain the electrical response functions of the devices studied, we first measured their impedance and then used the latter to derive an equivalent small-signal circuit model (Supplement 1). The frequency response can then be obtained by computing the average power \\(P_{L}(\\omega)\\) dissipated in the 50\\(\\Omega\\) input impedance of the power meter, where \\(\\omega\\) is the difference frequency between the two QCLs, and considering an \\(ac\\) current source term of amplitude \\(I_{s}\\) proportional to the total \\(dc\\) photocurrent generated by the two QCLs (Methods). The dashed lines in Fig. 2 are the so-obtained electrical frequency responses. Clearly, the predicted cutoff frequencies are much larger than those observed experimentally, i.e. the response time of our photodetectors is not limited by the electrical time constant but rather by the intrinsic response time of the ISB system. The latter can be taken into account by multiplying the electrical transfer function by the term \\([1+(\\omega\\tau)^{2}]^{-1/2}\\), where \\(\\tau\\) represents the shorter of the carrier capture time and the transit time [1]. The best agreement with the experimental frequency responses is shown by the solid lines in Fig. 2, obtained with \\(\\tau=2.5\\)ps, which we identify with the carrier capture time. Indeed, under the experimentally applied biases we estimate a drift velocity at room temperature of \\(2-3\\times 10^{6}\\) cm/s, yielding a transit time of \\(\\sim 10\\)ps [6, 9].

Figure 2: (a) Room-temperature experimental frequency response of devices M5 (red dots), M3 (blue dots) and M2 (green dots) in the frequency bands 0-110GHz and 140-220GHz. The measurements were obtained at \\(\\lambda\\simeq 10.3\\mu\\)m by heterodyne mixing of two single-mode QCLs, and the experimental values are corrected for the attenuation of the bias-tees and coplanar probes, measured with a VNA. All data were recorded without the use of any amplification stage. The incident MIR radiation is linearly polarised at 45° with respect to the metallic wires connecting the patches, and the incident powers are 13mW and 17.5mW. The corresponding \\(dc\\) photocurrents are 4.1mA, 2.8mA, and 1.25mA for devices M5, M3 and M2 respectively. The dashed lines are the computed electrical frequency responses obtained from the small-signal equivalent circuit (Supplement 1, Fig. S4). The solid lines include the intrinsic ISB frequency response with a carrier capture time of 2.5ps. Inset: example of heterodyne beatnote close to 110GHz, recorded with a spectrum analyser using the M3 photodetector. The spectrum is not corrected for the attenuation through the 110GHz probe and bias-tee.
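As an illustration, the product of the electrical transfer function and the ISB roll-off \\([1+(\\omega\\tau)^{2}]^{-1/2}\\) can be evaluated numerically. Only \\(\\tau=2.5\\)ps is taken from the fit above; the circuit values below are placeholders (the actual small-signal elements are in Supplement 1), so the resulting cutoff is indicative only:

```python
import numpy as np

# Total response = electrical transfer function x intrinsic ISB roll-off
# [1 + (omega*tau)^2]^(-1/2), with tau = 2.5 ps (carrier capture time).
# Circuit values below are illustrative placeholders, NOT the measured ones.
R_det, C_det = 300.0, 5e-15        # detector resistance (ohm), capacitance (F) - assumed
L_par, R_load = 30e-12, 50.0       # parasitic inductance (H), load (ohm) - assumed
tau = 2.5e-12                      # carrier capture time (s), from the text

f = np.linspace(1e9, 300e9, 3000)
w = 2 * np.pi * f
h_el = np.abs(R_det / (R_det + (R_load + 1j * w * L_par) * (1 + 1j * w * R_det * C_det)))
h_isb = 1.0 / np.sqrt(1.0 + (w * tau) ** 2)
h_tot = h_el * h_isb

# -3 dB point relative to the low-frequency response
h_db = 20 * np.log10(h_tot / h_tot[0])
f_3db = f[np.argmax(h_db < -3.0)]
print(f"3 dB cutoff ~ {f_3db/1e9:.0f} GHz")
```

With these placeholder elements the electrical pole sits well above the ISB pole, so the combined cutoff lands close to \\(1/(2\\pi\\tau)\\), mirroring the conclusion in the text that the bandwidth is capture-time limited.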
### Heterodyne frequency-chirp spectroscopy
It is well-known that driving a QCL in pulsed mode generates a down-chirp of the emission frequency of thermal origin, which can reach up to several tens of GHz. This effect can be exploited to detect different gas species in real time for applications in environmental and combustion monitoring, plasma diagnostics, or high-resolution spectroscopy [38, 39, 40, 41, 42, 43, 44].
In a typical experiment, the beam emitted by a pulsed QCL is transmitted through a gas cell, then focused on a detector of sufficiently high speed to resolve the optical pulse, which is finally connected to an oscilloscope. The resulting electrical pulse will display a number of dips generated each time the QCL frequency goes across a molecular absorption line. One weak point of this technique is that the value of the QCL emission frequency at each instant of time during the pulse is not known, a fact that can be problematic, for instance for the determination of unknown transition lines. For sufficiently short driving pulses the frequency chirp is approximately linear, allowing an absolute frequency pre-calibration using a Fourier transform (FT) spectrometer [40]. The generation of wider frequency spans requires instead longer driving pulses, typically ranging from tens of \\(\\mu\\)s to several ms, during which the time dependence of the QCL frequency is highly non-linear, requiring the use of an etalon for real-time relative frequency calibration [41]. An alternative solution to this problem is offered by the 100GHz bandwidth of our QWIP, giving the possibility to measure in real-time the relative emission frequency of a pulsed QCL through heterodyne detection.
The schematic of the heterodyne frequency-chirp spectroscopy (HFCS) experimental setup exploiting the same QCLs used to characterise the QWIPs frequency response is shown in Fig. 3. The \\(\\sim 10.3\\mu\\)m-wavelength DFB QCL is driven in pulsed mode, with 3.5ms-long pulses and 100Hz repetition rate, producing a frequency down-chirp of approximately 60GHz (see below). The emitted optical beam is transmitted through an 8cm-long gas cell containing NH\\({}_{3}\\) and finally focused on a QWIP nominally identical to the M5 device of Fig. 1(a). The tunable EC QCL is driven in CW and directly focused on the QWIP to provide the local oscillator for heterodyne detection. Its absolute frequency is monitored with a Fourier transform-based \\(\\lambda\\)-meter with a frequency resolution of 1GHz. As for the characterisation of the frequency response, an optical isolator (not shown in the figure) is placed before the QWIP. The QWIP is in series with a 34\\(\\Omega\\) resistor, and is connected to a 67GHz bias-tee. The \\(dc\\) port of the latter is used to bias the QWIP with a \\(dc\\) power supply (\\(\\sim 4.5\\)V applied bias - not shown in the Figure). Simultaneously, we measure the voltage across the 34\\(\\Omega\\) resistor, proportional to the QWIP current, with the help of a 200MHz bandwidth oscilloscope. The \\(ac\\) port of the bias-tee is connected to a 70GHz bandwidth oscilloscope, allowing real-time measurement of the heterodyne frequency pulse resulting from the mixing between the DFB and the EC QCLs. As for the heterodyne measurement of the frequency response, we note the absence of any RF amplification stage in this experimental setup.
An example of heterodyne pulse, recorded in single-shot with the gas cell empty, is shown in the inset of Fig. 4(b) (see the Figure caption for the QCLs driving conditions and the power incident on the QWIP). The heterodyne amplitude oscillations cannot be resolved directly using the full chirped pulse, since the latter does not contain a sufficiently large number of points. The instantaneous frequency is therefore obtained by measuring, at different instants of time, single-shot, 10ns-long time traces, and by computing their Fourier transform in real time with the help of the 70GHz oscilloscope. This gives rise to the type of RF spectra shown in Fig. 4(a), obtained without gas cell from a chirped pulse different from the one shown in the inset (see caption of Fig. 4). As shown by the one highlighted in blue in the Figure, each RF spectrum consists of a main peak followed by a few low power harmonics, with the former corresponding to the instantaneous beat frequency between the DFB and EC QCLs emission frequencies: \\(f_{b}(t)=\\nu_{DFB}(t)-\\nu_{EC}\\). As shown by the top arrow, from 0ms to 3.5ms \\(f_{b}(t)\\) spans approximately 60GHz. We note the high dynamic range obtained (up to 60dB) despite the fact that the chirped pulse is acquired without amplification and in single-shot. Indeed, we found that introducing averaging produced a reduction of the pulse amplitude that we attribute to the frequency fluctuations of the EC QCL operating in free-running, automatically transferred to
Figure 3: Schematic of the HFCS experiment (see main text for more details). A DFB QCL, with emission frequency \\(\\nu_{DFB}(t)\\), is driven in pulsed mode, producing a frequency down-chirp of approximately 60GHz. Its beam is transmitted through a gas cell containing NH\\({}_{3}\\) and focused on the QWIP. A tunable EC QCL is driven in CW and provides the local oscillator for heterodyne detection. Its absolute frequency, \\(\\nu_{EC}\\), is monitored with a \\(\\lambda\\)-meter. The QWIP, in series with a 34\\(\\Omega\\) resistor, is connected to a 67GHz bias-tee and biased with a \\(dc\\) power supply. A 200MHz bandwidth oscilloscope is used to measure the voltage across the 34\\(\\Omega\\) resistor, proportional to the QWIP current. The \\(ac\\) port of the bias-tee is connected to a 70GHz bandwidth oscilloscope, which measures in real-time the heterodyne frequency pulse, oscillating at \\(f_{b}(t)=\\nu_{DFB}(t)-\\nu_{EC}\\), resulting from the mixing between the DFB and the EC QCLs.
\\(f_{b}(t)\\). This problem could be solved by locking the EC QCL to a more stable reference [27, 28].
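The windowed-FFT estimate of the instantaneous beat frequency described above can be sketched as follows; the sampling rate and chirp parameters are illustrative, not the experimental ones (only the 10ns window length is taken from the text):

```python
import numpy as np

# Sketch of the instantaneous-frequency measurement: a 10 ns window is cut out
# of the heterodyne trace and its FFT peak gives f_b at that instant.
fs = 200e9                       # sampling rate (Hz) - assumed, plausible for a 70 GHz scope
t_win = 10e-9                    # 10 ns analysis window, as in the experiment
n = int(fs * t_win)
t = np.arange(n) / fs

f0, slope = 20e9, 1e17           # start frequency (Hz) and chirp rate (Hz/s) - assumed
signal = np.sin(2 * np.pi * (f0 * t + 0.5 * slope * t**2))

# FFT of the Hann-windowed trace; the peak bin is the instantaneous beat frequency
spec = np.abs(np.fft.rfft(signal * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)
f_beat = freqs[np.argmax(spec)]
print(f"instantaneous f_b ~ {f_beat/1e9:.2f} GHz")
```

The peak lands near the mid-window frequency \\(f_{0}+\\)slope\\(\\cdot t_{win}/2\\); the 10ns window sets the \\(\\sim 100\\)MHz frequency resolution of each spectrum.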
The temporal evolution of \\(f_{b}(t)\\) is highly non-linear. This is shown in Fig. 4(b), reporting the beat frequency as obtained from the chirped pulse in the inset. The observed down-chirp is of pure thermal origin and reflects the heating of the active region due to the applied current pulse. As discussed in Ref. [42], this process involves several time constants, corresponding to Joule heating diffusing through the laser active region, waveguide, substrate etc. We note that close to 1ms, \\(f_{b}(t)\\) goes through zero, which corresponds to the point where the DFB and EC QCLs frequencies are equal. This produces a smooth peak in the envelope of the heterodyne pulse since, as \\(f_{b}\\) moves away from \\(dc\\), the microwave propagation losses of the 1m-long, 67GHz coaxial cable connecting the \\(ac\\) port of the bias-tee to the 70GHz oscilloscope increase. Adding the emission frequency of the EC QCL measured with the \\(\\lambda\\)-meter to the heterodyne frequency of Fig. 4(b) provides the temporal evolution of the DFB QCL absolute emission frequency. This can then be used as a calibration for HFCS.
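A minimal sketch of this calibration step: the measured beat frequency is added to the EC QCL frequency and interpolated over time. Only \\(\\nu_{EC}=29004.6\\)GHz is taken from the text; the sampled \\(f_{b}\\) values and instants below are made up for illustration:

```python
import numpy as np

# Absolute-frequency calibration: nu_DFB(t) = nu_EC + f_b(t), interpolated so that
# any instant of the chirped pulse maps to an absolute frequency.
nu_ec = 29004.6                                     # GHz, from the lambda-meter
t_ms = np.array([0.0, 0.5, 1.0, 2.0, 3.5])          # sampling instants (ms) - assumed
f_b = np.array([25.0, 10.0, 0.0, -15.0, -35.0])     # beat frequency (GHz) - assumed

nu_dfb = nu_ec + f_b                                # absolute DFB frequency at each sample

# Linear interpolation gives the calibration curve nu_DFB(t)
t_query = 1.5                                       # ms
nu_q = np.interp(t_query, t_ms, nu_dfb)
print(f"nu_DFB({t_query} ms) ~ {nu_q:.1f} GHz")     # -> 28997.1 GHz for these samples
```
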
The result of a proof-of-principle HFCS experiment is shown in Fig. 5, obtained by filling the gas cell with pure NH\\({}_{3}\\) at a nominal pressure of 100Pa. The top panel shows the chirped-frequency pulse, while the current pulse measured on the \\(dc\\) port of the bias-tee is reported in the bottom panel, together with the pulse without gas for comparison. In both time-traces, several absorption dips are visible, corresponding to NH\\({}_{3}\\) absorption lines, while the spike at \\(\\sim 1\\)ms in the QWIP current is an experimental artefact produced by \\(f_{b}(t)\\) passing through 0. It is worth noting that, contrary to the chirped pulse, recorded in single-shot, the current pulse is obtained by averaging
Figure 4: (a) Example of heterodyne spectra obtained by computing, in real time, the FFT of a chirped pulse over a 10ns-long sliding temporal window. Time increases from right to left as schematically shown by the top arrow. For clarity, the lowest frequency spectrum is displayed in blue, showing the fundamental oscillation, close to 5GHz, and a few lower power harmonics, stemming from the QWIP I/V non-linearity and/or a circuit non-linearity (Supplement 1) [45]. The chirped pulse is different from the one shown in the inset of panel (b), with the QCLs operating conditions set to obtain an always positive \\(f_{b}(t)\\). The pulse was recorded without gas cell, yielding \\(\\sim 15\\)mW of peak and CW power incident on the QWIP. (b) Beat-frequency \\(vs\\) time obtained from the heterodyne chirped pulse shown in the inset. The frequency was obtained by recording the same type of heterodyne spectra shown in panel (a). Inset: single-shot chirped pulse obtained by driving the DFB QCL at 23.8\\({}^{\\circ}\\)C with 3.5ms-long 995mA pulses, and 100Hz repetition rate. The beam of the DFB QCL is transmitted through the empty gas cell, producing a \\(\\sim 30\\%\\) power attenuation. The EC QCL is driven in CW at 1A and 17.8\\({}^{\\circ}\\)C, yielding an emission frequency of 29004.6 GHz (10.343\\(\\mu\\)m). The peak and CW powers incident on the QWIP are approximately 10mW.
over 100 time-traces (see Methods for a comparison between the chirped pulse and the current pulse in single-shot, and for the pressure detection limit).
The solid green line in Fig. 6 represents the NH\\({}_{3}\\) transmission spectrum extracted from the heterodyne pulses, where the time axis has been replaced by the absolute frequency of the chirped QCL based on the linear interpolation of the frequency \\(vs\\) time curve displayed in Fig. 4(b). The spectrum is the result of the ratio between the squares of the voltage heterodyne pulses (proportional to the transmitted power) with and without gas (the pulse with gas is the one displayed in Fig. 5(a)). To remove the heterodyne oscillations both time traces were numerically averaged. For comparison, the red line shows the NH\\({}_{3}\\) spectrum derived from the ratio between the current pulses with and without gas of Fig. 5(b). As expected, the frequencies of the absorption lines in the two spectra are perfectly coincident. The orange stars represent the frequencies and the transmission intensities of the closest NH\\({}_{3}\\) ro-vibrational transitions, based on the HITRAN database and computed with the commercial software Spectracalc(r), using a gas pressure of 90Pa and an 8-cm gas cell length, i.e. equal to the nominal one. The agreement with the computed line intensities is very good, considering that the difference with the nominal gas pressure of 100Pa is within the measurement error. In Table 1 we report the HITRAN and measured frequencies, showing that for all the lines except the highest frequency one, we find a nearly constant shift of \\(\\sim 600\\)MHz that is within the resolution (1GHz) of the \\(\\lambda\\)-meter used to measure the frequency of the CW QCL. The fact that the saQ(1,1) transition is shifted by only 300MHz could be due to a drift of the EC QCL during the acquisition of the chirped frequency values displayed in Fig. 4(b), which were necessarily measured at different times. Further measurements would be needed to clarify this point, which is however beyond the scope
Figure 5: (a) Single-shot chirped-frequency pulse obtained by driving the QCLs under the same conditions used for Fig. 4(b). The beam from the pulsed DFB QCL is transmitted through the gas cell filled with NH\\({}_{3}\\) at a nominal pressure of 100Pa. (b) Current pulse measured on the \\(dc\\) port of the bias-tee (see Fig. 3), with the gas cell empty (purple) and filled with NH\\({}_{3}\\) (red).
of this work. On this issue, it is nevertheless important to note that the frequency calibration procedure based on the linear interpolation of the data-points of Fig. 4(b), which has been used here for illustrative reasons, is not strictly necessary. Indeed, a faster and possibly more precise way of determining the absolute frequency of a given transition line is to directly measure the value of the chirped frequency by using a 10-ns time-window positioned right on top of the corresponding transmission dip (after removing the gas if the transmission is too low).
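The transmission extraction from the heterodyne pulses can be sketched on synthetic traces; the pulse envelope and the single Gaussian absorption dip below are assumed shapes standing in for the measured data:

```python
import numpy as np

# Transmission spectrum as the ratio of the SQUARES of the heterodyne voltage
# envelopes with and without gas (voltage^2 ~ transmitted power), after the fast
# heterodyne oscillations have been averaged out.
t = np.linspace(0.0, 3.5e-3, 5000)                   # time (s)
envelope = np.exp(-((t - 1e-3) / 2e-3) ** 2)         # pulse envelope - assumed shape
dip = 1.0 - 0.4 * np.exp(-((t - 2e-3) / 2e-5) ** 2)  # one absorption line - assumed

v_ref = envelope                                     # reference trace, cell empty
v_gas = envelope * np.sqrt(dip)                      # with gas (field ~ sqrt(T))

transmission = (v_gas ** 2) / (v_ref ** 2)           # pulse envelope cancels out
print(f"minimum transmission ~ {transmission.min():.2f}")
```

The ratio cancels the pulse envelope exactly, leaving only the absorption profile; in the real data the residual standing-wave oscillations discussed below survive this normalisation.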
The higher noise visible in the green spectrum compared to the red one is partly due to slow amplitude oscillations in the heterodyne pulse, caused by standing-wave effects (see Fig. 4(b)), that could not be completely removed by the normalisation process. Another source of noise is due to the QCLs losing their coherence, giving rise to short random frequency fluctuations. This problem should be removed by stabilising the two laser sources. Finally, in Fig. 6, we observe that, at the three highest frequency NH\\({}_{3}\\) transition lines, the red spectrum shows a systematically higher transmission compared to the green one, as well as a slight line asymmetry. We attribute these facts to the finite transient response time of the voltage source used to bias the QWIP, effectively limiting the current rise time when the frequency of the pulsed QCL sweeps across the absorption lines (see Methods). This experimental artifact is not present on the \\(ac\\) port of the bias-tee, where slow bias variations are filtered out, yielding a transmission spectrum with perfectly symmetrical lines (see Supplement 1 for a comparison between the transmittance
Figure 6: Green line. NH\\({}_{3}\\) transmission spectrum obtained from the ratio between the squares of the heterodyne pulses with and without gas (the pulse with gas is the one displayed in Fig. 5(a)). Red line. NH\\({}_{3}\\) spectrum derived from the ratio between the current pulses with and without gas of Fig. 5(b). The orange stars represent the frequencies and the transmission intensities of the closest NH\\({}_{3}\\) ro-vibrational transitions, based on the HITRAN database and computed with the commercial software Spectracalc®.
of all the measured transitions as obtained from the chirped pulse, with those computed with Spectracalc(r).
## 3 Conclusions
In this work we have demonstrated that antenna-coupled MIR unipolar quantum-well photodetectors based on ISB transitions can reach a 3dB RF bandwidth of 100GHz at room temperature, with a responsivity of \\(\\sim 0.3\\)A/W at 10.3\\(\\mu\\)m wavelength. By fabricating and characterising photodetectors containing different numbers of patch-antennas we have demonstrated that the high frequency cutoff is not limited by the device parasitics, but rather by the intrinsic properties of the semiconductor heterostructure itself, namely the carrier capture time, of the order of 2.5ps.
Thanks to their ultra-broad bandwidth we believe that the demonstrated detectors are particularly appealing as heterodyne receivers for applications as diverse as MIR astronomy, light detection and ranging (LIDAR), spectroscopy or free-space communications [13, 14, 15, 17, 19, 20, 21, 22, 25, 30, 31, 32, 33, 34]. Indeed, operating these devices as direct detectors at room temperature is less attractive due to their high dark current. Instead, besides the obvious benefits of coherent detection, adopting a heterodyne configuration gives, in principle, the possibility of reaching photon-noise-limited detection if the local-oscillator photocurrent is larger than the thermally activated dark current. As shown in the inset of Fig. 1(b), at the actual operating wavelength of \\(\\sim 10\\mu\\)m this seems out of reach at T=300K, due to the elevated dark current and to the observed decrease of the responsivity with increasing power, which we interpret as the result of a partial optical saturation. This phenomenon had never been observed before in a QWIP [10, 11, 46] and is, in a way, the drawback of coupling the ISB structure to an antenna, which permits a higher detectivity at the price of a lower saturation power [11, 12]. Although \\(I_{sat}\\) can be increased by increasing the doping in the QWs (Supplement 1), according to our estimates this gain would be quickly compensated by the growth of the dark current, which depends exponentially on \\(n_{s}\\). On the other hand, preliminary data as a function of temperature indicate that it should be possible, with the present detector, to achieve a shot-noise limited detection near T=250K (or possibly higher in the case where the frequencies of the ISB transition and of the patch resonators were perfectly matched, see Section 2.1), which can be reached with a thermoelectric cooler.
In terms of RF bandwidth, although the present 100GHz is probably enough for most applications, one possibility to improve it would be to reduce the capture time, for instance by reducing the barrier
\\begin{table}
\\begin{tabular}{l c c c} Transition & \\multicolumn{3}{c}{Frequency (GHz)} \\\\ \\hline & HITRAN & This work & \\(\\Delta f\\) \\\\ saQ(3,3) & 29000.3 & 28999.7 & 0.6 \\\\ saQ(3,2) & 29002.1 & 29001.5 & 0.6 \\\\ saQ(3,1) & 29003.4 & 29002.7 & 0.7 \\\\ saQ(2,2) & 29012.1 & 29011.5 & 0.6 \\\\ saQ(2,1) & 29013.2 & 29012.6 & 0.6 \\\\ saQ(1,1) & 29019.8 & 29019.5 & 0.3 \\\\ \\end{tabular}
\\end{table}
Table 1: NH\\({}_{3}\\) ro-vibrational transitions. The table reports the line center frequencies, rounded to 0.1 GHz, obtained from the HITRAN database and from Fig. 6. \\(\\Delta f\\) is their difference.
width, which at the moment is comparable to the estimated carrier mean free path [1]. In this respect we note that an experimental study of the dependence of MIR patch-antenna QWIP performance (e.g. responsivity, bandwidth, etc.) on parameters such as the active region thickness or the number of QWs is presently lacking [36].
To demonstrate the potential of our detectors as heterodyne receivers we have set up a proof-of-principle experiment where the chirped frequency emitted by a QCL driven in pulsed mode is down-converted into the microwave range through the mixing with a second QCL operated in CW. In this way it is possible to record in real-time molecular spectra spanning up to 100GHz (and beyond), limited by the bandwidth of our detector. Contrary to conventional chirped-pulse spectroscopy, our HFCS technique simplifies the absolute calibration of the chirped frequency. Most importantly, it permits high SNRs (\\(\\sim 60\\)dB in 100MHz bandwidth with \\(\\sim 15\\)mW of peak and CW power respectively from the pulsed and CW QCLs - see Fig. 4(a)), which, in our opinion, makes patch-antenna QWIPs particularly attractive for remote sensing applications and also free-space communications. In particular the reported high SNR shows that the pulsed QCL beam should still be detectable after propagating through the atmosphere over several tens of km in adverse weather conditions [47].
To this end we note that much higher SNRs could be reached by locking the CW QCL to a more stable reference such as a frequency comb, or by replacing it with an intrinsically more stable MIR source such as a CO\\({}_{2}\\) laser.
## 4 Methods
### Devices structure and fabrication
A 100nm-thick, lattice-matched Ga\\({}_{0.51}\\)In\\({}_{0.49}\\)P etch-stop layer followed by the Al\\({}_{0.2}\\)Ga\\({}_{0.8}\\)As/GaAs heterostructure is grown by MBE on top of a semi-insulating GaAs substrate. The heterostructure is sandwiched between 50 and 100nm-thick top and bottom n-doped contact layers with concentrations \\(3\\times 10^{18}\\)cm\\({}^{-3}\\) and \\(4\\times 10^{11}\\)cm\\({}^{-3}\\), and consists of six, 6nm-thick GaAs QWs with the central 5nm \\(n\\)-doped at \\(6\\times 10^{17}\\)cm\\({}^{-3}\\), separated by 40nm-thick, undoped Al\\({}_{0.2}\\)Ga\\({}_{0.8}\\)As barriers.
The epi-layer is first transferred onto a 2"-diameter high-resistivity Si wafer using Au-Au thermo-compression bonding. The fabrication begins by wet etching the GaAs substrate and the etch-stop layer. Next, a Ti/Au (8nm/300nm) top Schottky contact is realized through e-beam lithography, followed by e-beam evaporation and lift-off. The epi-layers are subsequently ICP etched using the top metal layer as etch-mask. The ground metal layer is dry-etched by an Ar+ ion-beam around the patch-antenna matrix down to the Si substrate. A 100-nm-thick Si\\({}_{3}\\)N\\({}_{4}\\) layer is then deposited on the Si by plasma enhanced chemical vapor deposition. To electrically connect the patch-antennas, suspended \\(\\sim\\)150-nm-wide Ti/Au (20nm/600nm) wire-bridges are fabricated by a two-step e-beam lithography process. A first resist layer is used as a support after deposition, e-beam lithography and reflow, followed by a second one to define the wires by a standard lift-off process. The same process is used to realize the air-bridge connecting the 2D array to the 50\\(\\Omega\\) coplanar line. The latter is deposited on the Si\\({}_{3}\\)N\\({}_{4}\\) to prevent current leakage between the line's electrodes and the Si substrate.
### Derivation of the electrical frequency response
If \\(P_{1}\\) and \\(P_{2}\\) are the incident powers generated by the two QCLs, the total optical power incident on the biased photo-conductor is given by:
\\[P(t)=P_{tot}\\left[1+m\\cdot\\sin(\\omega t)\\right], \\tag{1}\\]
where \\(P_{tot}=P_{1}+P_{2}\\), \\(\\omega\\) is the difference between the two optical frequencies, and \\(m=2\\sqrt{P_{1}P_{2}}/P_{tot}\\) is the modulation index. If \\(R\\) is the photodetector responsivity, the generated photocurrent \\(I_{ph}(t)=R\\cdot P(t)\\) can be split into a \\(dc\\) component \\(I_{dc}=R\\cdot P_{tot}\\), which corresponds to the measured \\(dc\\) photocurrent, and an \\(ac\\) component of amplitude \\(I_{ac}=m\\cdot R\\cdot P_{tot}=m\\cdot I_{dc}\\). In the absence of a sizeable resistance in series with the QWIP active region, as is the case here, it can be shown that the amplitude of the current source \\(I_{s}\\) in the photodetector small signal equivalent circuit (Supplement 1) is precisely equal to \\(I_{ac}\\simeq I_{dc}\\) (since \\(m\\simeq 1\\) for the powers used in this work) [45, 6]. The electrical frequency response of the QWIP is then obtained from the expression of the average \\(ac\\) power dissipated in the \\(R_{L}=50\\Omega\\) input impedance of the microwave power-meter:
\\[P_{L}(\\omega)=\\frac{1}{2}I_{dc}^{2}\\left|\\frac{R}{R+(R_{L}+i\\omega L)(1+i \\omega RC)}\\right|^{2}R_{L}. \\tag{2}\\]
To match quantitatively the power levels obtained experimentally in Fig. 2 in the main text, we used an amplitude of the \\(ac\\) current source \\(I_{s}=I_{dc}/2\\), where \\(I_{dc}\\) is the experimental \\(dc\\) photocurrent generated by the two QCLs (\\(I_{dc}=4.1\\)mA, \\(2.8\\)mA, and \\(1.25\\)mA for devices M5, M3 and M2 respectively). However, as discussed above, ideally we would rather expect \\(I_{s}=I_{dc}\\), i.e. the generated heterodyne power should be \\(\\sim 4\\) times higher than what is found experimentally. At the moment we don't have a clear explanation for this discrepancy, which could be partly attributed to a partial saturation of the ISB transition each time the incident optical power, oscillating at the difference frequency between the two QCLs, reaches its maximum. Further measurements will be needed to validate this hypothesis.
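As a numerical check of the above, the modulation index for the powers used in the frequency-response measurement and the low-frequency value of Eq. (2) can be evaluated; the circuit elements \\(R\\), \\(L\\), \\(C\\) below are illustrative placeholders (the fitted values are in Supplement 1), while \\(I_{dc}=4.1\\)mA and the incident powers are taken from the text:

```python
import numpy as np

# Modulation index m = 2*sqrt(P1*P2)/(P1+P2) for P1 = 13 mW, P2 = 17.5 mW
p1, p2 = 13.0, 17.5                  # incident powers (mW), from the text
m = 2.0 * np.sqrt(p1 * p2) / (p1 + p2)
print(f"m = {m:.3f}")                # close to 1, as assumed in the text

# Eq. (2): average power dissipated in the 50-ohm load at difference frequency f
R, L, C = 300.0, 30e-12, 5e-15       # detector R (ohm), L (H), C (F) - assumed
R_L = 50.0                           # load resistance (ohm)
I_dc = 4.1e-3                        # dc photocurrent (A), device M5

def p_load(f):
    w = 2 * np.pi * f
    h = R / (R + (R_L + 1j * w * L) * (1 + 1j * w * R * C))
    return 0.5 * I_dc ** 2 * np.abs(h) ** 2 * R_L

p_dc = p_load(1e9)                   # low-frequency heterodyne power
print(f"P_L(1 GHz) ~ {p_dc*1e6:.0f} uW")
```
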
### Comparison of single-shot acquisition and pressure detection limit
In Fig. M1 we report the absorption dip in the time domain corresponding to the saQ(3,3) transition at a nominal pressure of 10Pa, obtained from the chirped pulse (panel (a)) and from the QWIP current pulse (panel (b)). The black lines were recorded in single-shot, while the red one was obtained with 100 averages (the same averaging used for Fig. 5(b)). The single-shot SNRs from the chirped and current pulses are respectively \\(\\sim 8\\) and \\(2\\). From these numbers, based on the transmission intensities computed with Spectracalc(r), we estimate, for our 8cm-long gas cell, minimum detectable gas pressures in single-shot of \\(\\sim 0.3\\)Pa and \\(\\sim 1.2\\)Pa.
### Voltage source response time
In Fig. 6, the three highest frequency NH\\({}_{3}\\) transition lines of the red spectrum (derived from the current pulse) present a systematically higher transmission compared to the green one (derived from the heterodyne pulse), as well as a slight line asymmetry. We attribute these facts to the finite transient response time, of approximately 30\\(\\mu\\)s, of the voltage source used to bias the QWIP (Keithley 2440 5A SourceMeter). Indeed, from longer to shorter times (i.e. from lower to higher absolute frequencies in Fig. 6) the increase of the frequency chirp (see Fig. 4(b)) leads to progressively temporally narrower transmission dips, as shown in Fig. 5(b). As a result, at some point the rise time associated with a given transition becomes too short compared to the time needed by the voltage source to change its current in order to maintain a constant bias across the QWIP. Eventually this prevents reaching the transmission minimum. This is clearly the case for the highest frequency transition (i.e. the temporally narrowest), for which the associated rise time is of only \\(\\sim 10\\mu\\)s, compared to the \\(\\sim 100\\mu\\)s of the lowest frequency one. This experimental artifact is not present on the \\(ac\\) port of the bias-tee, where slow bias variations are filtered out.
We gratefully acknowledge Raffaele Colombelli for helpful discussions on intersubband saturation and Etienne Okada for technical support during the RF measurements.
ANR Project Hispanid; RENATECH (French Network of Major Technology Centres); Project COMPTERA - ANR 22-PEEL-0003; Contrat de Plan Etat-Region (CPER) WaveTech. Wavetech is supported by the Ministry of Higher Education and Research, the Hauts-de-France Regional council, the Lille European Metropolis (MEL), the Institute of Physics of the French National Centre for Scientific Research (CNRS) and the European Regional Development Fund (ERDF).
The authors declare no conflicts of interest.
See Supplement 1 for supporting content.
## References
* [1] H. Schneider and H. C. Liu, _Quantum Well Infrared Photodetectors: Physics and Applications_ (Springer, 2007).
* [2] S. Ehret, H. Schneider, J. Fleissner, P. Koidl, and G. Bohm, \"Ultrafast intersubband photocurrent response in quantum-well infrared photodetectors,\" Appl. Phys. Lett. **71**, 641 (1997).
* [3] P. D. Grant, R. Dudek, M. Buchanan, and H. C. Liu, "Room-temperature heterodyne detection up to 110GHz with a quantum well infrared photodetector," IEEE Photon. Technol. Lett. **18**, 2218 (2006).
* [4] T. Dougakiuchi, A. Ito, M. Hitaka, K. Fujita, and M. Yamanishi, \"Ultimate response time in mid-infrared high-speed low-noise quantum cascade detectors,\" Appl. Phys. Lett. **118**, 041101 (2021).
* [5] J. Hillbrand, L. M. Kruger, S. D. Cin, H. Knotig, J. Heidrich, A. M. Andrews, G. Strasser, U. Keller, and B. Schwarz, "High-speed quantum cascade detector characterized with a mid-infrared femtosecond oscillator," Opt. Expr. **29**, 5774 (2021).
* [6] M. Hakl, Q. Lin, S. Lepillet, M. Billet, J.-F. Lampin, S. Pirotta, R. Colombelli, W. Wan, J. C. Cao, H. Li, E. Peytavit, and S. Barbieri, \"Ultrafast quantum-well photodetectors operating at 10\\(\\mu\\)m with a flat frequency response up to 70GHz at room temperature,\" ACS Photon. **8**, 464 (2021).
* [7] G. Quinchard, C. Mismer, M. Hakl, J. Pereira, Q. Lin, S. Lepillet, V. Trinite, A. Evrgen, E. Peytavit, J. L. Reverchon, J. F. Lampin, S. Barbieri, and A. Delga, \"High speed, antenna-enhanced 10.3\\(\\mu\\)m quantum cascade detector,\" Appl. Phys. Lett. **120**, 091108 (2022).
* [8] D. Palaferri, Y. Todorov, A. Mottaghizadeh, G. Frucci, G. Biasiol, and C. Sirtori, \"Ultra-subwavelength resonators for high temperature high performance quantum detectors,\" New J. Phys. **18**, 113016 (2016).
* [9] D. Palaferri, Y. Todorov, A. Bigioli, A. Mottaghizadeh, D. Gacemi, A. Calabrese, A. Vasanelli, L. Li, A. G. Davies, E. H. Linfield, F. Kapsalidis, M. Beck, J. Faist, and C. Sirtori, \"Room-temperature 9\\(\\mu\\)m wavelength photodetectors and GHz-frequency heterodyne receivers,\" Nature **556**, 85 (2018).
* [10] K. L. Vodopyanov, V. Chazapis, C. C. Phillips, B. Sung, and J. S. H. Jr, \"Intersubband absorption saturation study of narrow iii-v multiple quantum wells in the \\(\\lambda=2.8\\)-9 \\(\\mu\\)m spectral range,\" Semicond. Sci. Technol. **12**, 708 (1997).
* [11] M. Jeannin, E. Cosentino, S. Pirotta, M. Malerba, G. Biasiol, J. Mancceau, and R. Colombelli, \"Low intensity saturation of an ISB transition by a mid-IR quantum cascade laser,\" Appl. Phys. Lett. **122**, 241107 (2023).
* [12] M. Jeannin, J. Mancceau, and R. Colombelli, \"Unified description of saturation and bistability of intersubband transitions in the weak and strong light-matter coupling regimes,\" Phys. Rev. Lett. **127**, 187401 (2021).
* [13] L. Flannigan, L. Yoell, and C.-Q. Xu, \"Mid-wave and long-wave infrared transmitters and detectors for optical satellite communications--a review,\" J. Opt. **24**, 043002 (2022).
* [14] X. Pang, O. Ozolins, R. S. J. Storck, A. Udalcovs, J. R. Navarro, A. Kakkar, G. Maisons, M. Carras, G. Jacobsen, S. Popov, and S. Lourdudoss, \"Gigabit free-space multi-level signal transmission with a mid-infrared quantum cascade laser operating at room temperature,\" Opt. Lett. **42**, 3646 (2017).
* [15] H. Dely, T. Bonazzi, O. Spitz, E. Rodriguez, D. Gacemi, Y. Todorov, K. Pantzas, G. Beaudoin, I. Sagnes, L. Li, A. G. Davies, E. H. Linfield, F. Grillot, A. Vasanelli, and C. Sirtori, \"10 Gbit/s free space data transmission at 9um wavelength with unipolar quantum optoelectronics,\" Laser Photonics Rev. **16**, 2100414 (2022).
* [16] F. Wang, S. Slivken, and M. Razeghi, \"Harmonic injection locking of high-power mid-infrared quantum cascade lasers,\" Photon. Res. **9**, 1078 (2021).
* [17] P. Didier, H. Deby, T. Bonazzi, E. A. O. Spitz, E. Rodriguez, A. Vasanelli, C. Sirtori, and F. Grillot, \"High-capacity free-space optical link in the midinfrared thermal atmospheric windows using unipolar quantum devices,\" Adv. Photonics **4**, 056004 (2022).
* [18] P. Didier, H. Knotig, O. Spitz, L. Cerutti, A. Lardschneider, E. Awwad, D. Diaz-Thomas, A. N. B. R. Weih, J. Koeth, B. Schwarz, and F. Grillot, \"Interband cascade technology for energy-efficient mid-infrared free-space communication,\" Photon. Res. **11**, 582 (2023).
* [19] D. Weidmann, W. J. Reburn, and K. M. Smith, \"Ground-based prototype quantum cascade laser heterodyne radiometer for atmospheric studies,\" Rev. Sci. Instrum. **78**, 073017 (2007).
* [20] Y. Wang, M. G. Soskind, W. Wang, and G. Wysocki, \"High-resolution multi-heterodyne spectroscopy based on Fabry-Perot quantum cascade lasers,\" Appl. Phys. Lett **104**, 031114 (2014).
* [21] N. A. Macleod, F. Molero, and D. Weidmann, \"Broadband standoff detection of large molecules by mid-infrared active coherent laser spectrometry,\" Opt. Expr. **23**, 912 (2015).
* [22] A. Diaz, B. Thomas, P. Castillo, B. Gross, and F. Moshary, \"Active standoff detection of CH\\({}_{4}\\) and N\\({}_{2}\\)O leaks using hard-target backscattered light using an open-path quantum cascade laser sensor,\" Appl. Phys. B **122**, 121 (2016).
* [23] C. S. Goldenstein, R. M. Spearrin, J. B. Jeffries, and R. K. Hanson, \"Progress in energy and combustion science,\" Progr. Energy Combust. Sci. **60**, 132 (2016).
* [24] P. Asselin, Y. Berger, T. R. Huet, L. Margules, R. Motiyenko, R. J. Hendricks, M. R. Tarbutt, S. K. Tokunagaef, and B. Darquie, \"Characterising molecules for fundamental physics: an accurate spectroscopic model of methyl-trioxorhenium derived from new infrared and millimetre-wave measurements,\" Phys.Chem.Chem.Phys. **19**, 4576 (2017).
* [25] A. Kawai, K. Hashimoto, T. Dougakiuchi, V. R. Badarla, T. Imamura, T. Edamura, and T. Ideguchi, \"Time-stretch infrared spectroscopy,\" Comm. Phys. **3**, 152 (2020).
* [26] T. Dougakiuchi and N. Akikusa, \"Application of high-speed quantum cascade detectors for mid-infrared, broadband, high-resolution spectroscopy,\" Sensors **21**, 5706 (2021).
* [27] P. L. T. Sow, S. Mejri, S. K. Tokunaga, O. Lopez, A. Goncharov, B. Argence, C. Chardonnet, A. Amy-Klein, C. Daussy, and B. Darquie, \"A widely tunable 10-um quantum cascade laser phase-locked to a state-of-the-art mid-infrared reference for precision molecular spectroscopy,\" Appl. Phys. Lett. **104**, 264101 (2014).
* [28] B. Argence, B. Chanteau, O. Lopez, D. Nicolodi, M. Abgrall, C. Chardonnet, C. Daussy, B. Darquie, Y. L. Coq, and A. Amy-Klein, \"Quantum cascade laser frequency stabilization at the sub-Hz level,\" Nat. Photon. **9**, 456 (2015).
* [29] H. Pires, M. Baudisch, D. Sanchez, M. Hemmer, and J. Biegert, \"Ultrashort pulse generation in the mid-IR,\" Progr. Quantum Electron. **43**, 1 (2015).
* [30] D. D.S.Hale, M. Bester, W. C. Danchi, W. Fitelson, S. Hoss, E. A. Lipman, J. D. Monnier, P. G. Tuthill, and C. H. Townes, \"The berkeley infrared spatial interferometer: a heterodyne stellar interferometer for the mid-infrared,\" Astrophys. J. **537**, 998 (2000).
* [31] G. Sonnabend, D. Stupar, M. Sornig, T. Stangier, T. Kostiuk, and T. Livengood, \"A search for methane in the atmosphere of Mars using ground-based mid infrared heterodyne spectroscopy,\" J. Mol. Spectr. **291**, 98 (2013).
* A first laboratory proof of concept,\" Astron. Astrophys. **639**, A53 (2020).
* [33] C. A. Balanis, _Antenna Theory: Analysis and Design, 3rd Edition_ (Wiley, 2005).
* [34] Y. Todorov, L. Tosetto, J. Teissier, A. M. Andrews, P. Klang, R. Colombelli, I. Sagnes, G. Strasser, and C. Sirtori, \"Optical properties of metal-dielectric-metal microcavities in the THz frequency range,\" Opt. Expr. **18**, 13886 (2010).
* [35] M. Jeannin, T. Bonazzi, D. Gacemi, A. Vasanelli, L. Li, A. G. Davies, E. Linfield, C. Sirtori, and Y. Todorov, \"Absorption engineering in an ultrasubwavelength quantum system,\" Nano Lett. **20**, 4430 (2020).
* [36] E. Rodriguez, T. Bonazzi, H. Dely, M. Mastrangelo, K. Pantzas, B. G, I. Sagnes, A. Vasanelli, Y. Todorov, and C. Sirtori, \"Metamaterial engineering for optimized photon absorption in unipolar quantum devices,\" Opt. Expr. **30**, 20515 (2020).
* [37] H. A. Haus, _Waves and Fields in Optoelectronics_ (Prentice Hall, 2004).
* [38] K. Namjou, S. Cai, E. A. Whittaker, J. Faist, C. Gmachl, F. Capasso, D. L. Sivco,, and A. Y. Cho, \"Sensitive absorption spectroscopy with a room-temperature distributed-feedback quantum-cascade laser,\" Opt. Lett. **23**, 219 (1998).
* [39] M. T. McCulloch, E. L. Normand, N. Langford, G. Duxbury, and D. A. Newnham, \"Highly sensitive detection of trace gases using the time-resolved frequency downchirp from pulsed quantum-cascade lasers,\" J. Opt. Soc. Am. B **20**, 1761 (2003).
* [40] E. Normand, M. McCulloch, G. Duxbury, and N. Langford, \"Fast, real-time spectrometer based on a pulsed quantum-cascade laser,\" Opt. Lett. **28**, 16 (2003).
* [41] B. Grouiez, V. Zeninari, L. Joly, and B. Parvi, \"Pulsed quantum-cascade-laser spectroscopy with intermediate-size pulses: application to NH\\({}_{3}\\) in the 10\\(\\mu\\)m region,\" Appl. Phys. B **100**, 265 (2010).
* [42] L. Tombez, F. Cappelli, S. Schilt, G. D. Domenico, S. Bartalini, and D. Hofstetter, \"Wavelength tuning and thermal dynamics of continuous-wave mid-infrared distributed feedback quantum cascade lasers,\" Appl. Phys. Lett. **103**, 031111 (2013).
* [43] M. T. Michael, N. Langford, and G. Duxbury, \"Real-time trace-level detection of carbon dioxide and ethylene in car exhaust gases,\" Appl. Opt. **44**, 2887 (2005).
* [44] S. Welzel, F. Hempel, M. Hubner, N. Lang, P. B. Davies, and J. Ropcke, \"Quantum cascade laser absorption spectroscopy as a plasma diagnostic tool: An overview,\" Sensors **10**, 06861 (2010).
* [45] E. Peytavit, G. Ducournau,, and J.-F. Lampin, _\"THz Photomixers,\" in Fundamentals of Terahertz Devices and Applications_ (John Wiley & Sons, 2021).
* [46] J. S. Gomez-Diaz, M. Tymchenko, J. Lee, M. A. Belkin,, and A. Alu, \"Nonlinear processes in multi-quantum-well plasmonic metasurfaces: electromagnetic response, saturation effects, limits, and potentials,\" Phys. Rev. B **92**, 125429 (2015).
* [47] P. Corrigan, R. Martini, E. A. Whittaker, and C. Bethea, \"Quantum cascade lasers and the Kruse model in free space optical communication,\" Opt. Expr. **17**, 4355 (2021). | Thanks to intrinsically short electronic relaxation on the ps time scale, III-V semiconductor unipolar devices are ideal candidates for ultrahigh-speed operation at mid-infrared frequencies. In this work, antenna-coupled, GaAs-based multi quantum-well photodetectors operating in the 10-11\\(\\mu\\)m range are demonstrated, with a responsivity of 0.3A/W and a 3dB-cutoff bandwidth of 100GHz at room-temperature. The frequency response is measured up to 220GHz: beyond 100GHz we find a roll-off dominated by the 2.5 ps-long recombination time of the photo-excited electrons. The potential of the detectors is illustrated by setting up an experiment where the time dependent emission frequency of a quantum cascade laser operated in pulsed mode is measured electronically and in real-time, over a frequency range \\(>\\)60GHz. By exploiting broadband electronics, and thanks to its high signal-to-noise ratio, this technique allows the acquisition, in a single-shot, of frequency-calibrated, mid-infrared molecular spectra spanning up to 100GHz and beyond, which is particularly attractive for fast, active remote sensing applications in fields such as environmental or combustion monitoring.
oeurm | Provide a brief summary of the text. | 258 |
# On Efficient and Scalable Time-Continuous Spatial Crowdsourcing -- Full Version
Ting Wang12, Xike Xie12, Xin Cao3, Torben Bach Pedersen4, Yang Wang12 and Mingjun Xiao12
1School of Computer Science and Technology, University of Science and Technology of China, China
2Suzhou Institute for Advanced Study, University of Science and Technology of China, China
3University of New South Wales, Australia
4Aalborg University, Denmark
Email: {tingwt@mail, xkxie@, angyan@, xiaomj@}ustc.edu.cn, [email protected], [email protected]
## I Introduction
Spatial crowdsourcing or crowdsensing refers to harnessing human knowledge or the sensors of participants' smartphones to retrieve qualitative or quantitative details related to the physical locations of crowdsourced tasks. For conventional spatial crowdsourcing, task assignment and fulfillment are often "atomic" in that either they are fully executed or not at all [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. In this work, we consider a special type of spatial crowdsourcing, called _time-continuous spatial crowdsourcing_ (TCSC _in short_). TCSC differs in its temporal continuity: a crowdsourced task takes a long time to finish, necessitating the time-sharing collaboration of multiple workers. It finds broad applications in capturing the presence and duration of environmental features, e.g., air/water pollution monitoring [12] and traffic surveillance [13]-[14], which are prevalent in citizen science projects [15].
For example, in Fig. 1, a crowdsourcer would like to analyze the microbial content in the water for a period. Upon receiving the task, the TCSC server looks up the records of preregistered workers' spatiotemporal information. The registered spatiotemporal information, which indicates the commitments workers make to the TCSC server, consists of workers' available time slots, working regions, and so on, e.g., \(\{worker_{1},\langle place\ A,1-2pm\rangle,\langle place\ B,7-8pm\rangle,\dots\}\). The task is then decomposed into a set of subtasks. Each subtask corresponds to a specific time slot and location. Subtasks are assigned to appropriate workers according to the assignment policy. Workers finish the assignment, e.g., probing environmental values, and send their results to the server. The crowdsourced results are aggregated and delivered to the crowdsourcer.
Quality is essential for such applications. It is infeasible to accomplish a crowdsourced task for all time slots, due to limited budgets and availability of workers. So, the probed data in crowdsourced results is inherently incomplete. Interpolation (or extrapolation) alleviates the data incompleteness problem by inferring unprobed values from the probed ones. However, the interpolation error may further affect the data precision, incurring the so-called data deficiency problem. Ignoring the facts of data incompleteness and data imprecision would cause unreliable crowdsourced results. Thus, it is of paramount importance to consider the data quality problem in the TCSC setting. To this end, we propose a general entropy-based metric for summarizing the amount of incompleteness and impreciseness of the crowdsourced results, which enables quality-aware TCSC assignment and balances the plannable expense and observable essence.
Fig. 1: General TCSC Framework

There is a substantial difference between the TCSC problem and existing task assignment problems in spatial crowdsourcing. For example, existing works mostly focused on maximizing the total number of completed tasks [1], on maximizing the number of performed tasks for an individual worker [2], or on maximizing the reliability-and-diversity score of assignments [3]. These solutions cannot be directly applied, as none of them look into the temporally continuous nature and corresponding quality issues of the TCSC problem. To our best knowledge, we are the first to study the TCSC problem.
Nevertheless, the computational overhead of quality-aware TCSC assignment is high. Even a simplified version of the problem, i.e., single TCSC task assignment, is NP-hard, as shown in Section III. In this work, we study how the quality-aware TCSC assignment can be handled in an efficient and scalable way. For ease of presentation, we start with the simplified version, single-task assignment, with the target of maximizing the task quality under budget constraints. We prove that the problem is NP-hard and further show that it can be approximated by a polynomial-time solution with guaranteed ratios. We also devise novel pruning and indexing techniques for efficiency enhancement. Based on that, we introduce the multi-task case, where technical challenges arise in handling the correlations between a given set of single tasks. We devise a parallel framework by distributing multiple correlated tasks to independently running computation cores, so as to maximally utilize the independence between tasks.
The main contributions of this paper are as follows.
* We propose and formalize the novel TCSC problem.
* We prove that the problem is NP-hard and therefore study approximation algorithms for accelerating the task assignment with quality guarantees, for both single- and multi-task scenarios.
* We investigate novel indexing and pruning techniques for the efficiency of the single-task case.
* We devise an efficient parallelization framework for the multi-task case, by breaking ties of correlated task groups with devised synchronization mechanisms.
* We conduct extensive experiments on synthetic and real data to evaluate the efficiency and scalability.
The rest of this paper is organized as follows. Section II introduces preliminaries, including concepts, quality metrics, and properties. Section III starts with the single-task assignment. Section IV extends the solution to the multi-task scenario. Section V evaluates our proposals with extensive experiments. Section VI presents related works. Section VII concludes the paper. Table I summarizes the symbols and notations used.
## II Preliminaries
In this section, we introduce basic concepts, propose the quality metric, and investigate its properties. For ease of presentation, we start with the single-task case (Sections II and III), and extend it to the multi-task case (Section IV).
### _Basic Concepts_
**TCSC tasks and subtasks.** A single TCSC task \\(\\tau\\) has its location \\(\\tau.loc\\) and duration \\(\\tau.dur\\). According to the batch size that tasks arrive in, the duration consists of at most \\(m\\) equal-sized time slots. Thus, \\(\\tau\\) can be represented by a set of subtasks \\(\\{\\tau^{(j)}\\}_{j=1}^{m}\\) such that each subtask \\(\\tau^{(j)}\\) takes \\(\\tau.loc\\) as its location, and the corresponding time slot as its duration. Formally, \\(\\tau^{(j)}.loc=\\tau.loc\\) and \\(\\tau^{(j)}.dur=\\frac{\\tau.dur}{m}\\).
**Worker.** Let \\(W=\\{w_{1},w_{2}, ,w_{n}\\}\\) be a set of \\(n\\) workers. Each worker is registered with a set of consecutive states to the SC server, indicating whether s/he is online for providing crowdsourcing services. A worker \\(w_{i}\\)'s temporal state can be represented by \\(w_{i}^{(j)}\\), indicating the availability of worker \\(w_{i}\\) at time slot \\(t_{j}\\).
**Task Assignment.** Task assignment is the mapping of workers to subtasks. In Fig. 2, there are \\(5\\) subtasks in \\(\\tau\\). At each time slot, there exist a set of workers. In this example, workers \\(w_{2}^{(2)}\\) and \\(w_{4}^{(4)}\\) are assigned to subtasks \\(\\tau^{(2)}\\) and \\(\\tau^{(4)}\\), respectively. \\(\\tau^{(3)}\\) is not assigned to any worker as none are available at that time slot. \\(\\tau^{(1)}\\) is not mapped to any worker because of cost and budget limits, e.g., workers at time slot \\(1\\) are far away from \\(\\tau^{(1)}\\).
**Cost.** The cost of a subtask \(\tau^{(j)}\) is denoted as \(c(\tau^{(j)})\). Following the common setting of spatial crowdsourcing, we assume the travel cost \(c(\tau^{(j)})\) of a subtask is the Euclidean distance between the location of the subtask \(\tau^{(j)}\) and that of the assigned worker \(w\)1. For simplicity, we assume the unit cost of all workers is the same. Our work is general w.r.t. the type of cost. The cost of a task is the summation of all its subtasks' costs, i.e., \(c(\tau)=\sum_{j=1}^{m}c(\tau^{(j)})\).
Footnote 1: If considering traveling distances as costs [4][16], the nearest worker is usually selected in order to minimize the cost of taking a subtask. It is also possible to take the second nearest neighbor or even farther ones in multi-task scenarios, as covered by Section IV.
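For concreteness, the per-subtask and per-task costs above can be sketched in a few lines of Python (a minimal illustration assuming 2-D coordinates and the nearest-worker policy; the function names are ours, not from the paper):

```python
import math

def subtask_cost(task_loc, worker_loc):
    # Euclidean distance between the subtask location and its assigned worker
    return math.dist(task_loc, worker_loc)

def task_cost(task_loc, assigned_worker_locs):
    # c(tau) = summation of the costs of all assigned subtasks
    return sum(subtask_cost(task_loc, w) for w in assigned_worker_locs)

print(task_cost((0.0, 0.0), [(3.0, 4.0), (0.0, 5.0)]))   # 10.0
```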
Fig. 2: An Example of TCSC Task Assignment (\\(m=5\\))
### _Quality Metric_
Due to the limited budget or the availability of workers, it is necessary to measure the _quality_ of a TCSC task, given the fact that a task cannot be fully assigned. There can be two possible states for a subtask, _executed_ and _unexecuted_, corresponding to whether the subtask value is _probed_ by assigned workers or _interpolated_ from other probed subtasks. For example, in Fig. 2, \(\tau^{(2)}\) and \(\tau^{(4)}\) are executed subtasks, while the other \(3\) are unexecuted and need to be interpolated. Initially, all subtasks are "null". If some subtasks are assigned and probed, their states are changed from "unexecuted" to "executed". Meanwhile, remaining unexecuted subtasks can be inferred from the executed subtasks by interpolation (or extrapolation). So, the quality metric of a TCSC task should, 1) distinguish the quality between a probed item and an interpolated item; 2) be universal in covering the integrative contribution of subtasks in different states to the overall quality measurement.
**Quality Metric.** For condition 1), we define the concept of _subtask finishing probability_, based on the calculation of potential interpolation errors. For condition 2), we utilize an entropy function that returns a real-valued score for conveniently indicating the amount of inaccuracies in accordance with specific assignment strategies, as shown below.
**Definition 1**: _(**Task Quality.**) Let \\(\\tau\\) be a task consisting of \\(m\\) subtasks, \\(\\tau=\\{\\tau^{(j)}\\}_{1\\leq j\\leq m}\\). Each subtask \\(\\tau^{(j)}\\) is associated with a finishing probability \\(p^{(j)}\\). The quality of \\(\\tau\\), denoted by \\(q(\\tau)\\), is:_
\\[q(\\tau)=-\\sum_{j=1}^{m}p^{(j)}\\log_{2}\\left(p^{(j)}\\right) \\tag{1}\\]
Next, we introduce the finishing probability \\(p^{(j)}\\) for subtask \\(\\tau^{(j)}\\), that serves as building blocks for the quality metric.
**Subtask Finishing Probability.** The finishing probability of a TCSC task equals \(1\), if all its subtasks are done, in correspondence to the ideal case that all subtasks are executed. In practice, for a task consisting of \(m\) equally sized time slots, the finishing probability \(p^{(j)}\) for each subtask \(\tau^{(j)}\) is at most \(\frac{1}{m}\). Without loss of generality, we can thus use an error ratio \(\rho_{err}\) to measure the amount of information loss caused by interpolation errors.
\\[p^{(j)}=\\frac{1}{m}(1-\\rho_{err}(\\tau^{(j)})) \\tag{2}\\]
Accordingly, the probability \\(p^{(j)}\\) equals \\(0\\) for the \"null\" case, representing the zero knowledge about the subtask, and equals \\(\\frac{1}{m}\\) for the executed case, representing the total information gain of finishing the subtask. Next, we show the calculation of error ratio \\(\\rho_{err}\\) for an interpolated subtask.
**Interpolation Error Ratio.** A common way of inferring a missing value from a discrete set of known values is known as inverse distance interpolation [17][18][19], which averages the values of its \\(k\\) nearest neighbors, i.e., \\(k\\) nearest subtasks on the timeline of a TCSC task.
Intuitively, the error ratio is proportional to the distances between the interpolated value and its neighboring values [17][18]. Between two subtasks \\(\\tau^{(i)}\\) and \\(\\tau^{(j)}\\), the temporal distance is denoted as \\(|\\tau^{(i)},\\tau^{(j)}|_{i}\\), referring to the absolute difference of \\(\\tau^{(i)}\\) and \\(\\tau^{(j)}\\)'s timestamps. For example, in Fig. 2, we have \\(|\\tau^{(1)},\\tau^{(2)}|=1\\) and \\(|\\tau^{(2)},\\tau^{(4)}|=2\\). Then, the interpolation error of an unexecuted subtask can be evaluated by the distances from a set of executed subtasks. Assume function \\(S_{kNN}(.)\\) returns the set of \\(k\\) executed subtasks with the smallest distances. An unexecuted subtask \\(\\tau^{(j)}\\) can thus be interpolated by \\(S_{kNN}(\\tau^{(j)})\\), with the interpolation error measured by the error ratio function \\(\\rho_{err}\\) as follows.
\\[\\rho_{err}(\\tau^{(j)})=\\frac{\\sum_{e\\in S_{kNN}(\\tau^{(j)})}|\\tau^{(j)},e|_{i} }{k\\cdot m} \\tag{3}\\]
The value range of the error ratio \(\rho_{err}\) is from \(0\) to \(1\). The error ratio equals \(100\%\), if none of the subtasks of a task is executed. Accordingly, a lower error ratio value is achieved, if the target subtask has more proximate executed subtasks (i.e., smaller interpolation distances). For example, in Fig. 2, \(\tau^{(1)}\)'s \(2\)-NN results are \(\tau^{(2)}\) and \(\tau^{(4)}\), whose distances from \(\tau^{(1)}\) are \(1\) and \(3\), respectively. So, with \(m=5\), \(\rho_{err}(\tau^{(1)})\) can be calculated as \(\frac{1+3}{2\cdot 5}=0.4\). Since \(\tau^{(2)}\) is an executed subtask, its error ratio equals zero 2. Similarly, we can derive the error ratios and the subtask finishing probabilities for all the subtasks. Based on that, we can calculate the task quality, by Equation 1.
Footnote 2: It is possible that a subtask \\(\\tau^{(j)}\\) has less than \\(k\\) nearest neighbors, e.g., at the starting stage of subtask assignment. If \\(|S_{kNN}|<k\\), we let \\(|\\tau^{(j)},e|_{i}\\) be \\(m\\), indicating the largest possible interpolation distance.
In summary, the concept of entropy is adopted to quantify the quality of crowdsourced results. The value of \\(q(\\tau)\\) ranges from \\(0\\), i.e., the lowest information degrees (none of the subtasks are executed), to \\(log_{2}m\\), i.e., the highest information degree (all subtasks are executed).
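To make the metric concrete, the following Python sketch implements Equations 1-3 (the function names are ours; an executed slot is given error ratio \(0\), and missing neighbors are padded with the largest distance \(m\), per Footnote 2). It is run here on the configuration of Fig. 3, i.e., \(m=100\) with executed slots \(\{2,4,7,9\}\):

```python
import math

def error_ratio(j, executed, m, k=2):
    """Eq. 3: interpolation error ratio of slot j; zero for executed slots."""
    if j in executed:
        return 0.0
    nn = sorted(executed, key=lambda e: (abs(e - j), e))[:k]
    # fewer than k executed neighbours: pad with the largest distance m (Footnote 2)
    dists = [abs(e - j) for e in nn] + [m] * (k - len(nn))
    return sum(dists) / (k * m)

def finishing_prob(j, executed, m, k=2):
    """Eq. 2: subtask finishing probability."""
    if j in executed:
        return 1.0 / m        # probed slot: full information gain
    if not executed:
        return 0.0            # "null" state: zero knowledge
    return (1.0 - error_ratio(j, executed, m, k)) / m

def task_quality(executed, m, k=2):
    """Eq. 1: entropy-based quality of the whole task."""
    probs = [finishing_prob(j, executed, m, k) for j in range(1, m + 1)]
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(error_ratio(1, {2, 4, 7, 9}, 100))   # (1 + 3) / (2 * 100) = 0.02
```

By construction, `task_quality(set(), m)` is \(0\) and `task_quality(set(range(1, m + 1)), m)` is \(\log_{2}m\), matching the stated value range.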
**Extension for Worker Reliability.** Our quality metric, i.e., Equation 1, is general in addressing the reliability issues [3], where workers are not assumed to be entirely trustable. Instead, each worker \\(w_{i}\\) is assumed to have a reliability score, represented by \\(\\lambda_{i}\\in[0,1]\\). Incorporating the reliability, the subtask finishing probability can be defined by both worker confidence and potential interpolation error. In particular, given subtask \\(e\\), the finishing probability of an assigned task is \\(\\frac{\\lambda_{e}}{m}\\), where \\(\\lambda_{e}\\) is the reliability of the worker assigned to subtask \\(e\\).
Due to the incorporation of worker reliability, for an interpolated subtask \\(\\tau^{(j)}\\), the maximum finishing probability is no longer \\(\\frac{1}{m}\\), but the product of \\(\\frac{1}{m}\\) and the average of \\(\\{\\lambda_{e}\\}_{e\\in S_{kNN}(\\tau^{(j)})}\\). The extended form of subtask probability is as Equation 4.
\\[p^{(j)}=\\frac{1}{m}\\cdot[\\frac{\\sum_{e\\in S_{kNN}(\\tau^{(j)})}\\lambda_{e}}{k}- \\rho_{err}(\\tau^{(j)})] \\tag{4}\\]
Accordingly, the error ratio is determined by the summation of interpolation distances weighted by the worker reliability and then divided by \(k\cdot m\), similar to Equation 3. The extended form of the error ratio is given in Equation 5.
\\[\\rho_{err}(\\tau^{(j)})=\\frac{\\sum_{e\\in S_{kNN}(\\tau^{(j)})}\\lambda_{e}\\cdot|\\tau^ {(j)},e|_{i}}{k\\cdot m} \\tag{5}\\]
Notice that probability \(p^{(j)}\) equals \(0\), if \(S_{kNN}(\tau^{(j)})\) is an empty set, meaning that none of the subtasks are currently done. The probability \(p^{(j)}\) is \(\frac{\lambda_{e(j)}}{m}\), if \(\tau^{(j)}\) is an executed subtask. If \(\tau^{(j)}\) is interpolated, and the reliability of each worker for executing \(S_{kNN}(\tau^{(j)})\) equals \(1\), Equation 5 degenerates into Equation 3.
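Under the simplifying assumption that an interpolated slot has a full set of \(k\) executed neighbors (so the plain average over \(k\) applies), Equations 4 and 5 can be sketched as follows; `rel` maps each executed slot to the reliability \(\lambda\) of its assigned worker, and the function names are ours:

```python
def error_ratio_rel(j, executed, rel, m, k=2):
    """Eq. 5: interpolation error ratio weighted by worker reliability."""
    if j in executed:
        return 0.0
    nn = sorted(executed, key=lambda e: (abs(e - j), e))[:k]
    return sum(rel[e] * abs(e - j) for e in nn) / (k * m)

def finishing_prob_rel(j, executed, rel, m, k=2):
    """Eq. 4: finishing probability with worker reliabilities rel[e] in [0, 1]."""
    if j in executed:
        return rel[j] / m                  # probed by a worker of reliability rel[j]
    nn = sorted(executed, key=lambda e: (abs(e - j), e))[:k]
    if not nn:
        return 0.0                         # "null" state
    avg_rel = sum(rel[e] for e in nn) / k
    return (avg_rel - error_ratio_rel(j, executed, rel, m, k)) / m
```

With all reliabilities set to \(1\), the expression reduces to the unweighted probability of Equation 2, as stated above.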
**Summary.** We have shown the effectiveness of the metric in capturing the quality of various scenarios. In the sequel, we show two important properties of the metric, submodularity and non-decreasingness, enabling efficient task assignment algorithms with approximation guarantees.
### _Properties_
We derive the properties of the task quality metric. We first recall the definition of submodular functions.
**Definition 2**: _(**Submodular Function [20].**) Let \(S\) be a finite set, and \(2^{S}\) be the power set of \(S\). A set function \(f:2^{S}\rightarrow\mathbb{R}\) is submodular if, for any \(X,Y\subseteq S\), it satisfies \(f(X\cap Y)+f(X\cup Y)\leq f(X)+f(Y)\)._
**Lemma 1**: _(**Composite Submodular Functions [21].**) Assume a set \\(V\\) and a function \\(h:2^{V}\\rightarrow\\mathbb{R}\\) that returns a real value for any subset \\(S\\subseteq V\\). If \\(h\\) is a non-decreasing submodular function and \\(\\phi\\) is a non-decreasing concave function \\(\\phi:\\mathbb{R}\\rightarrow\\mathbb{R}\\), then function \\(\\phi(h(S))\\) is non-decreasing and submodular._
**Lemma 2**: _Function \\(q(\\tau)\\) is non-decreasing and submodular._
**Proof** _The quality metric function \(q(.)\) is a composite function of the entropy function and the finishing probability function, i.e., \(p(.)\). It can be proved that function \(p(.)\) is submodular and non-decreasing, as shown by Lemmas 7 and 8 in the Appendix. Moreover, the entropy function is known to be non-decreasing and concave. According to Lemma 1, the quality metric function \(q(.)\) is non-decreasing and submodular w.r.t. \(\tau\)._
The submodularity property captures the effect of diminishing marginal benefits in measuring the TCSC quality: as the number of workers assigned to a task increases, the marginal value of adding a new worker decreases. Also, the submodular property of the quality metric enables efficient optimization, e.g., submodular function maximization, where constant-factor approximation algorithms are often available [22]. In the sequel, we discuss how the property is utilized and generalized for optimization problems in the TCSC task assignment applications.
## III Single Task Assignment
We proceed to study the single TCSC task assignment problem. In particular, we consider maximizing the quality of a single task with budget limits. We prove that the problem is NP-hard. For efficient task assignment, we devise heuristic algorithms that approximate the optimization targets in polynomial time with quality guarantees. Nevertheless, we show that the polynomial solution incurs overheads unaffordable for the real-time task assignment scenarios. Therefore, we propose novel indexing and pruning techniques to further enhance the task assignment efficiency.
We formalize and analyze the problem in Section III-A. We devise the approximation algorithm in Section III-B. We study efficient indexing and pruning techniques in Section III-C.
### _Problem Definition and Analysis_
**Problem 1** **Single-task Quality Maximization with Fixed Budgets (sQM in short).**_Given a TCSC task \\(\\tau\\) and a fixed budget \\(b\\), the sQM problem is to find an assignment for \\(\\tau\\), such that the quality \\(q(\\tau)\\) is maximized, and the cost \\(c(\\tau)\\) does not violate the given budget \\(b\\)._
\\[Maximize q(\\tau)\\] \\[subject to \\sum_{j=1}^{m}c(\\tau^{(j)})\\leq b\\]
Solving the sQM problem is equivalent to finding a task assignment matrix that yields the maximum quality. To give a sense of the size of the solution space, assume a given set of \(n\) workers and a task consisting of \(m\) subtasks/timeslots. There could then be an exponential number of possible worker-and-subtask assignments, \(O(m^{n})\). Next, we prove that the \(sQM\) problem is NP-hard, by Lemma 3.
**Lemma 3**: _The \\(sQM\\) problem is NP-hard._
**Proof** _It is well known that maximizing a submodular function under a cardinality constraint (i.e., selecting at most \\(k\\) elements) is NP-hard [23]. If we consider the workers have unit cost, the budget constraint in our problem becomes the cardinality constraint. It has been proved in Lemma 2 that the quality function \\(q(\\tau)\\) (i.e., the objective function to be maximized) is submodular, and thus a special case of our sQM problem is also NP-hard._
### _Approximation Algorithms_
Hereby, we provide a suboptimal solution with guaranteed approximation ratios, based on the submodular property of the quality metric, as discussed in Section II-C. Since the problem is a budgeted maximization of a submodular function, we can use a greedy heuristic algorithm, which repeatedly selects an element (e.g., a subtask) that maximizes the quality increment until the budget is exceeded. The process is detailed in Algorithm 1.
Hence, the heuristic value is defined as the quality increment divided by the corresponding cost. Let the currently assigned set of subtasks be \\(T_{cur}\\). At each iteration, the algorithm greedily selects a subtask \\(\\tau^{(*)}\\) from the set \\(\\tau-T_{cur}\\), such that the heuristic value is maximized. Formally, the greedy rule is to find a subtask \\(\\tau^{(*)}\\) as follows.
\\[\\tau^{(*)}=\\underset{\\tau^{(j)}}{argmax}\\frac{\\Delta q(\\tau)}{c(\\tau^{(j)})}= \\underset{\\tau^{(j)}}{argmax}\\frac{q(\\,T_{cur}\\cup\\tau^{(j)})-q(\\,T_{cur})}{c( \\tau^{(j)})}\\]We use an example to illustrate the process. Assume that there are \\(4\\) executed subtasks, i.e., \\(T_{cur}=\\{\\tau^{(2)},\\tau^{(4)},\\tau^{(7)},\\tau^{(9)}\\}\\), of a TCSC task, represented by shaded slots, as shown in Fig. 3 (a). Algorithm 1 enumerates all remaining slots/subtasks. At each iteration, a slot is selected for tentative execution in order to find the one with the highest heuristic value. If \\(\\tau^{(1)}\\) is chosen and tentatively executed, \\(\\rho(\\tau^{(1)})\\) is reduced to \\(0\\) and the heuristic value increment is \\(0.0016\\). However, if \\(\\tau^{(5)}\\) is tentatively executed, \\(\\tau^{(6)}\\)'s \\(2\\)-NN result is changed from \\(\\{\\tau^{(4)},\\tau^{(7)}\\}\\) to \\(\\{\\tau^{(5)},\\tau^{(7)}\\}\\). Therefore, both \\(\\rho(\\tau^{(5)})\\) and \\(\\rho(\\tau^{(6)})\\) should be recalculated for getting the quality and the heuristic values. The process is repeated for all the unexecuted slots. Finally, \\(\\tau^{(1)}\\) is selected, since it derives the maximum heuristic value in this example. By setting the greedy strategy as such, Algorithm 1 guarantees a \\((1-1/\\sqrt{e})\\) approximation to the optimal solution, as shown in [22].
**Complexity Analysis**. First, the number of iterations, i.e., the outer loop of Algorithm 1 (line \(4\)), is at the level of \(O(m)\). Second, the inner loop (line \(5\)) is at the level of \(O(m)\), since one needs to try all \(m\) subtasks and get their heuristic values in order to find the one maximizing the overall quality. Third, for each trial of a subtask, one needs to calculate the corresponding heuristic value, i.e., the overall quality increase of implementing the subtask, according to the quality metric function. The overall quality increase is the summation of individual quality increments of all other \(m-1\) subtasks, so the complexity is \(O(m)\) (line \(6\)). Fourth, for the quality increment of an individual subtask, \(k\) timeslots in the neighborhood should be visited. In our implementation, we maintain a sorted list of subtasks in ascending order of the corresponding time slots. During the query evaluation, \(O(log(m))\) cost is used for finding the nearest assigned subtask, and then \(O(k)\) cost is used for refining the exact \(k\)-NN. Therefore, the total time complexity is \(O(m^{3}log(m))\).
In summary, the approximation algorithm gives a polynomial alternative for tackling the NP-hard optimization problem. However, the computational overhead makes it impractical in real-time task assignment scenarios. In the sequel, we propose a series of techniques for better efficiency and scalability.
### _Efficient Heuristic Value Calculation_
We accelerate the algorithm in two ways. First, when searching for the maximum heuristic value, we avoid unnecessarily enumerating all \\(m\\) subtasks. Second, since calculating a heuristic value requires summing the partial qualities of all subtasks, we maximally reuse previous computation so as to avoid unnecessarily checking all time slots.
**Locality of \\(k\\)-NN Searching.** We try to scale down the problem by exploiting the locality of \\(k\\)-NN searching. We observe in Fig. 3 (b) that if two slots are proximate, their \\(k\\)-NN results tend to be similar. For example, \\(\\tau^{(1)}\\) and \\(\\tau^{(2)}\\) share the same \\(k\\)-NN result, i.e., \\(\\{\\tau^{(2)},\\tau^{(4)}\\}\\). Theoretically, the solution space of \\(k\\)-NN searching over the \\(m\\) subtasks is a one-dimensional order-\\(k\\) Voronoi diagram. The domain space is a one-dimensional interval, i.e., from \\(1\\) to \\(m\\). The diagram splits the \\(m\\) slots into disjoint intervals, called Voronoi cells, such that \\(k\\)-NN searching returns the same result for all queries within the same Voronoi cell. In Fig. 3 (c), for example, the Voronoi cell \\(V(\\tau^{(2)},\\tau^{(4)})\\) covers the slots from \\(\\tau^{(1)}\\) to \\(\\tau^{(4)}\\), meaning that all slots from \\(\\tau^{(1)}\\) to \\(\\tau^{(4)}\\) take \\(\\{\\tau^{(2)},\\tau^{(4)}\\}\\) as their \\(2\\)-NN result.
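The order-\\(k\\) cells can be enumerated brute-force by grouping consecutive slots that share the same \\(k\\)-NN set; a sketch for the example of Fig. 3 (the helper is our own, for illustration only):

```python
def order_k_cells(m, executed, k):
    """Partition slots 1..m into maximal runs sharing the same k-NN set
    over the executed slots (brute force, for illustration)."""
    def knn(j):
        # sort executed slots by distance to j; stable sort breaks ties
        return frozenset(sorted(executed, key=lambda e: abs(e - j))[:k])
    cells, start, cur = [], 1, knn(1)
    for j in range(2, m + 1):
        s = knn(j)
        if s != cur:                       # a new Voronoi cell begins
            cells.append((start, j - 1, cur))
            start, cur = j, s
    cells.append((start, m, cur))
    return cells
```

For \\(m=10\\), \\(k=2\\), and executed slots \\(\\{2,4,7,9\\}\\), the first cell is \\([1,4]\\) with 2-NN set \\(\\{2,4\\}\\), matching \\(V(\\tau^{(2)},\\tau^{(4)})\\) in Fig. 3 (c).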
Fig. 3: An Example, (\\(k=2\\), \\(t_{s}=4\\), \\(m=100\\)): (a) Current state \\(T_{cur}\\); (b) \\(2\\)-Nearest Neighbors; (c) \\(1\\)-dimensional Order-\\(2\\) Voronoi Diagram; (d) Tree-structured Index; (e) Auxiliary Information.
Such a structure facilitates the algorithm evaluation in two aspects. First, the time cost of \\(k\\)-NN searching can be reduced from \\(O(log(m))\\) to constant time \\(O(1)\\), as the diagram precomputes the solution space of the \\(k\\)-NN queries. Second, the diagram accelerates the calculation of heuristic values: without it, the heuristic value of a tentatively executed subtask is calculated by enumerating all other slots, which takes \\(O(m)\\) (Algorithm 1, line 6).
Recall that the finishing probability of an unexecuted subtask depends on its \\(k\\)-NN interpolation. Hence, the finishing probability of an unexecuted subtask does not change under a tentative execution, as long as the order-\\(k\\) Voronoi cell to which it belongs does not change. This way, the problem of quality increment calculation is transformed into the reformulation of Voronoi cells w.r.t. the tentative subtask execution, which can be handled locally, since the diagram handles such updates locally [24]. We cover more details below.
The technical challenge is that there can be a large number of order-\\(k\\) Voronoi cells, so the gains of local computation may not justify the overhead of Voronoi diagram construction. In particular, the average number of order-\\(k\\) cells is \\(O(k(m-k))\\)[24].
**Approximated One-dimensional Voronoi Diagram.** To handle this, we propose an approximate version of the one-dimensional order-\\(k\\) Voronoi diagram. The idea is to use an aggregated binary tree for Voronoi cell indexing and Voronoi diagram approximation (Fig. 3 (d)). In the tree, each node represents a time segment \\([l,r]\\), where \\(l\\) and \\(r\\) are the two slots at the segment's left and right ends, respectively. The root node is the interval of the entire \\(m\\) slots. For each node, we store auxiliary information in the form of a quadruple, i.e., \\(\\langle k\\text{-}set,knn(l),knn(r),q^{\\prime}\\rangle\\), as shown in Fig. 3 (e).
The \\(k\\)-set of a node is the union of the \\(k\\)-NN results of all its descendants. \\(knn(l)\\) and \\(knn(r)\\) are the \\(k\\)-NN results of the two end slots. The \\(k\\)-NN results are sorted in ascending order of the distance to \\(l\\) (or \\(r\\)), so that the distance from \\(l\\) (or \\(r\\)) to its \\(k\\)-th nearest neighbor can be quickly retrieved, denoted by \\(k_{max}(l)\\) (or \\(k_{max}(r)\\)). Hence, we can derive the influence range of a node as \\([max(1,l-k_{max}(l)),min(m,r+k_{max}(r))]\\), such that the node's quadruple can be affected only if a tentatively executed slot falls within this influence range. \\(q^{\\prime}\\) is the partial quality value of the node. For subtask \\(\\tau^{(j)}\\), its partial quality equals \\(p^{(j)}log(p^{(j)})\\). The quality of a node is the sum of the partial qualities of all subtasks among its descendants. This way, the quality value can be quickly retrieved by querying the upper-level nodes of the tree structure. Upon updating, only the necessary subtrees are retrieved and revised, in the style of updating an aggregated tree structure.
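The influence range can be computed directly from the quadruple fields; below, `knn_l` and `knn_r` are the stored \\(k\\)-NN lists sorted by distance to \\(l\\) and \\(r\\) (a minimal sketch, assuming slot indices are used as positions):

```python
def influence_range(l, r, knn_l, knn_r, m):
    """A tentatively executed slot inside this range may change the
    node's quadruple. knn_l / knn_r are k-NN lists sorted by distance
    to the end slots l and r, so the last element is the k-th NN."""
    k_max_l = abs(knn_l[-1] - l)   # distance from l to its k-th NN
    k_max_r = abs(knn_r[-1] - r)   # distance from r to its k-th NN
    return max(1, l - k_max_l), min(m, r + k_max_r)
```

For the node \\([1,4]\\) in the Fig. 3 example with \\(k=2\\), \\(knn(1)=[2,4]\\) and \\(knn(4)=[4,2]\\) give the influence range \\([1,6]\\).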
**Maximum Heuristic Value Calculation.** With the tree structure, finding the maximum heuristic value takes much less computational overhead than enumerating all \\(m\\) slots. The process is implemented by traversing the tree in a best-first manner with an associated heap. The elements of the heap are tree nodes, sorted in descending order of their upper bounds. The higher an element ranks in the heap, the more likely it corresponds to the maximum heuristic value increment.
Next, we study how to upper bound the effect of a tentative insertion. A tentative insertion affects the \\(k\\)-NN interpolation results, so the quality value varies. The effect can be categorized into _inter-node_ and _intra-node_ cases3.
Footnote 3: We consider the leaf node for ease of presentation. The calculation of upper bounds for non-leaf nodes can be done in a bottom-up manner, as the construction process of an aggregated tree-structure.
For the intra-node case, if an unexecuted slot of a node is tentatively executed, the \\(k\\)-th NN distance of another unexecuted slot in the same node is reduced to at most \\(1\\). This corresponds to the extreme case that the slot is adjacent to the tentatively executed slot, so the distance between them is \\(1\\). The interpolation error ratio can therefore be lower bounded as follows.
\\[\\rho_{err}(\\tau^{(j)})\\geq\\frac{1}{k\\cdot m}[\\sum_{e\\in S_{(k-1)NN}(\\tau^{(j)}) }|e,\\tau^{(j)}|_{i}+1] \\tag{6}\\]
Therefore, the upper bound of a node's quality can be derived from the lower bound of \\(\\rho_{err}\\) (Equation 6), since the value of \\(q(.)\\) decreases as \\(\\rho_{err}(.)\\) increases. The upper bound of a node's heuristic value is then calculated as the maximum quality change divided by the minimum cost over all unexecuted subtasks in the node.
For the inter-node case, a tentatively executed slot of a node would also change the \\(k\\)-NN interpolation result of other nodes. For example, in Fig. 3 (d), we have two leaf nodes, \\(n_{1}\\) and \\(n_{2}\\). If \\(\\tau^{(3)}\\in n_{1}\\) is tentatively executed, \\(\\tau^{(5)}\\in n_{2}\\)'s second nearest neighbor changes from \\(\\tau^{(7)}\\) to \\(\\tau^{(3)}\\). The corresponding quality value change should also be incorporated and updated.
At each iteration, the top element of the heap is popped and its child nodes are inserted. If the top element is a time slot/subtask, the exact heuristic value increase can be obtained. Suppose this value is \\(\\theta\\). Then, all other elements in the heap whose upper bounds are below \\(\\theta\\) can be pruned. The process repeats until the heap is empty. The slot that gives the maximum heuristic value is thereby obtained.
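The best-first traversal with upper-bound pruning can be sketched with `heapq` (negated keys simulate a max-heap); `upper_bound`, `children`, `exact_value`, and `is_leaf` are hypothetical callbacks onto the tree of Fig. 3 (d):

```python
import heapq

def best_first_max(root, upper_bound, children, exact_value, is_leaf):
    """Return the leaf (slot) with the maximum exact heuristic value,
    pruning subtrees whose upper bound falls below the best exact value."""
    heap = [(-upper_bound(root), 0, root)]
    counter, best_slot, best_val = 1, None, float("-inf")
    while heap:
        neg_ub, _, node = heapq.heappop(heap)
        if -neg_ub <= best_val:          # everything left is dominated
            break
        if is_leaf(node):
            v = exact_value(node)
            if v > best_val:
                best_slot, best_val = node, v
        else:
            for c in children(node):
                # counter breaks ties so nodes are never compared directly
                heapq.heappush(heap, (-upper_bound(c), counter, c))
                counter += 1
    return best_slot
```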
The advantages of using the tree structure are two-fold. First, using best-first searching for obtaining the slot with the potentially maximum heuristic value is expected to take \\(O(log(m))\\) time. Second, the locality of \\(k\\)-NN searching is reflected by the decomposition of tree leaf nodes. Suppose that calculating the updated heuristic value of a leaf node takes constant time. Then, instead of enumerating \\(O(m)\\) slots for calculating the updated heuristic value, one takes \\(O(log(m))\\) time to retrieve and update the relevant leaf nodes, reducing the corresponding computation overhead from \\(O(m)\\) to \\(O(log(m))\\). The overall cost is \\(O(mlog^{3}(m))\\).
**Tree Construction.** We construct the tree in an incremental manner. Suppose that a time slot \\(e\\) is to be tentatively executed, meaning that \\(e\\) has the maximum current heuristic value. The tree construction process is triggered accordingly, and is invoked recursively. At each iteration, we test whether a subtree will be affected. If yes, we update the associated quadruples and forward the updates to the descendants. Otherwise, the entire subtree is skipped.
We summarize two cases to determine whether the current node will be affected, and therefore updated, by the tentative execution. Based on this, we can disqualify irrelevant nodes at a higher level to save the overhead of tree construction.
* _Case 1._ A node will be affected, if the tentatively executed slot is within the influence range of the node.
* _Case 2._ A node will not be affected, if its parent node is not affected by \\(e\\).
By doing so, we can identify the appropriate nodes for updating. The updates are then propagated down to the leaf level, where node splitting is handled.
**Splitting and Stopping Conditions.** During splitting, a node is decomposed into two sub-nodes, with its time segment split into two equally sized sub-segments. The quadruple of the node can thus be partially inherited by its sub-nodes. For example, the left sub-node inherits the \\(knn(l)\\) of its parent, because they share the same left end slot. In particular, we consider two stopping conditions.
* _Condition 1._ For a node with segment \\([l,r]\\), splitting stops, if \\(knn(l)=knn(r)\\).
* _Condition 2._ If the length of a node's segment is smaller than a pre-specified threshold, \\(t_{s}\\), splitting stops.
Condition \\(1\\) guarantees that the current segment belongs to the same order-\\(k\\) Voronoi cell, so that there is no need for further splitting. Condition \\(2\\) limits the depth of the tree structure to \\(\\lceil log_{t_{s}}(m)\\rceil\\). It serves as a knob for tuning the approximation accuracy so as to control the construction overhead of the tree structure. The correctness of Condition \\(1\\) is guaranteed by Lemma 8 in the Appendix.
## IV Multiple Task Assignment
The multi-task assignment problem is essential to the practical deployment of crowdsourcing platforms, where multiple tasks are submitted, scheduled, and executed simultaneously. However, the computational overhead of multi-task assignment is high; even its simplified version, i.e., single-task assignment, is NP-hard. Even with the approximation solution, the algorithm has to iteratively retrieve, from all \\(|\\mathcal{T}|\\) given tasks, the subtask that maximizes the heuristic value, making the algorithm scale quadratically with \\(|\\mathcal{T}|\\), which is not scalable for handling a large number of tasks. A practical way of handling the multi-task case is to fully exploit the hardware capabilities of the TCSC server with parallel computing techniques.
In this section, we study the multi-task assignment scenario, by considering two variants of the optimization target. The first variant evaluates the overall quality as the sum of the qualities of the individual tasks in the given task set. The second variant improves the overall quality by reinforcing the "weakest" single task, i.e., maximizing the minimum single-task quality. Both variants aim at improving the overall quality of the given set of tasks under budget constraints4. In particular, we use \\(q_{sum}\\) and \\(q_{min}\\) to denote the two optimization target functions, respectively.
Footnote 4: A dual version of our problem can be minimizing the task costs with quality constraints. It can be handled with the primal-dual method [25], which reduces the problem to the one studied in this work.
### _Maximizing Summation Quality_
The first optimization target is to maximize the summation quality of all tasks, as follows.
**Definition 3** **Summation Quality**.: _Given a set of tasks \\(\\mathcal{T}=\\{\\tau_{1},\\tau_{2},\\dots\\}\\), we define the summation quality as:_
\\[q_{sum}(\\mathcal{T})=\\sum_{i=1}^{|\\mathcal{T}|}q(\\tau_{i}\\big{|}\\tau_{i}\\in \\mathcal{T}) \\tag{7}\\]
**Problem 2** **Multiple-task Summation Quality Maximization with Fixed Budgets (MSQM in short)**.: _Given a set of tasks \\(\\mathcal{T}=\\{\\tau_{1},\\tau_{2},\\dots\\}\\), the MSQM problem is to find an assignment for the tasks in \\(\\mathcal{T}\\), such that the summation quality is maximized, and the overall cost \\(\\sum c(\\tau_{i})\\) does not exceed the given budget \\(b\\)._
\\[\\begin{split} Maximize\\ \\ q_{sum}(\\mathcal{T})\\\\ subject\\ to\\ \\sum_{i=1}^{|\\mathcal{T}|}c(\\tau_{i})\\leq b\\end{split} \\tag{8}\\]
We can prove that the MSQM problem is NP-hard by a reduction from the sQM problem, whose NP-hardness is established by Lemma 3.
**Lemma 4**: \\(q_{sum}(.)\\) _is submodular and non-decreasing._
**Proof** _We have proved that \\(q(.)\\) is non-decreasing and submodular, by Lemma 2. The summation function is known to be both convex and concave. The \\(q_{sum}(.)\\) function is a composition of a summation function and the \\(q(.)\\) function. So, the lemma is proved, according to Lemma 1._
Based on the properties of submodularity and non-decreasingness, the framework of single-task assignment, i.e., Algorithm 1, can be applied to the multi-task assignment case. The heuristic value is set as the increase in summation quality divided by the corresponding cost (of a tentatively selected subtask), following the same greedy strategy. The algorithm then iteratively retrieves, from all \\(|\\mathcal{T}|\\) given tasks, the subtask that maximizes the heuristic value, so that the solution space is \\(|\\mathcal{T}|\\) times the size of the single-task case, making the algorithm scale quadratically with \\(|\\mathcal{T}|\\). The time complexity is \\(O(|\\mathcal{T}|^{2}mlog^{3}(m))\\). To improve scalability, we derive a parallelization framework for distributing the calculation workload onto multiple computation cores. Ideally, each task can run independently on a different core, so that the time cost would be \\(\\frac{|\\mathcal{T}|}{\\text{\\# of cores}}\\) times that of running a single task, assuming \\(|\\mathcal{T}|\\) is larger than the number of cores. However, correlations arise between tasks if two subtasks running on different cores "compete" for one worker at some time slot. This happens because two subtasks may choose the same worker with the lowest cost in order to minimize the budget decrement. We call this _worker conflicting_, as exemplified in Fig. 4 (a), where there are three tasks (\\(\\tau_{1}\\) to \\(\\tau_{3}\\)) and two workers (\\(w_{1}\\) and \\(w_{2}\\)). There exists a conflict between \\(\\tau_{2}\\) and \\(\\tau_{3}\\), since they both take \\(w_{1}\\) as the worker with the lowest cost.
#### Iv-A1 Group-level Parallelization
We can build a graph of independent groups by taking each task as a node and drawing an edge between any two conflicting tasks. If a group of nodes has no connection to any other group, it is an independent group. The optimization process of such a group is independent of the others, and therefore independent groups can be run in parallel. For example, in Fig. 4 (a), the tasks can be divided into two independent groups, \\(\\{\\tau_{1}\\}\\) and \\(\\{\\tau_{2},\\tau_{3}\\}\\), since no pair of tasks across the groups competes for the workers with the lowest costs.
Is it sufficient to consider only conflicts on the lowest costs when deriving the independent groups? The answer is NO. For example, in Fig. 4 (b), \\(\\tau_{2}\\) and \\(\\tau_{3}\\) have a conflict on \\(w_{1}\\), so \\(\\tau_{3}\\) opts for \\(w_{2}\\), the worker with the second lowest cost. Unfortunately, \\(\\tau_{3}\\) then conflicts with \\(\\tau_{1}\\), which has \\(w_{2}\\) as the worker with the lowest cost. In general, if two subtasks belonging to different tasks are to be executed, one of them has to choose the worker with the second lowest cost, or an even later one.
One may obtain the independence graph by gradually expanding the search regions. For example, assume costs are the travel distances from workers to tasks [16][4]. Then, a subtask takes the nearest worker as the worker with the lowest cost. For a given task, we call the circle centered at the task's position, with radius equal to the distance between the task and its nearest neighbor, its \\(1\\)-NN bound. The independence graph can be obtained by the following steps. Initially, we draw the \\(1\\)-NN bound of each task, as shown in Fig. 4 (c). An edge between \\(\\tau_{2}\\) and \\(\\tau_{3}\\) is added, since the two tasks share the same worker, which causes a conflict. Next, we draw \\(2\\)-NN bounds for both \\(\\tau_{2}\\) and \\(\\tau_{3}\\), so that there exist enough workers to be assigned to the two conflicting tasks, as shown in Fig. 4 (d). However, another worker turns out to be within \\(\\tau_{1}\\)'s \\(1\\)-NN bound and \\(\\tau_{3}\\)'s \\(2\\)-NN bound, so the two tasks conflict and the edge between them is added. After that, we draw the \\(2\\)-NN bound for \\(\\tau_{1}\\) and the \\(3\\)-NN bound for \\(\\tau_{3}\\), as shown in Fig. 4 (e). In general, if a node of the independence graph has degree \\(d\\), its \\((d+1)\\)-NN bound should be drawn. The process repeats until no more conflicts are detected. A drawback of this gradual expansion method is that it incurs large groups and heavyweight computation tasks, deteriorating the parallelization performance.
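Once the conflict edges are fixed, the independent groups are simply the connected components of the conflict graph, which can be sketched with union-find (the task and edge encodings are our own):

```python
def independent_groups(tasks, conflicts):
    """Connected components over tasks, with one edge per worker conflict."""
    parent = {t: t for t in tasks}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]   # path halving
            t = parent[t]
        return t

    for a, b in conflicts:                  # union the two endpoints
        parent[find(a)] = find(b)

    groups = {}
    for t in tasks:                         # collect components
        groups.setdefault(find(t), set()).add(t)
    return list(groups.values())
```

On the Fig. 4 (a) example, the single conflict edge between \\(\\tau_{2}\\) and \\(\\tau_{3}\\) yields the two groups \\(\\{\\tau_{1}\\}\\) and \\(\\{\\tau_{2},\\tau_{3}\\}\\).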
#### Iv-A2 Task-level Parallelization
Hereby, we devise the task-level parallelization framework, as depicted in Fig. 5. We set a thread pool, with a master thread and a set of worker threads waiting for tasks to be concurrently executed. The master thread is for maintaining the thread pooling on heartbeat monitoring, conflicting controlling, scheduling, and logging. To support the functionalities of the master thread, there are several associated data structures, _Heartbeat Table_, _Conflicting Table_, and _Logging Table_.
Heartbeat Table stores the heuristic values periodically reported by the currently executing tasks. Logging Table traces the historical records of the Heartbeat Table. Conflicting Table stores a series of records for breaking the ties of conflicts. Assume three tasks, \\(\\tau_{1}\\), \\(\\tau_{2}\\), and \\(\\tau_{3}\\), conflict at time slot \\(t\\). The information, including the conflicting task set and the corresponding conflicting time slot, is stored in the Conflicting Table of the master thread. In this example, a tuple \\(\\langle\\{\\tau_{1},\\tau_{2},\\tau_{3}\\},t,1\\rangle\\) is recorded, where \\(1\\) means the three tasks compete for the \\(1\\)-NN worker. Then, during task processing, if \\(\\tau_{1}\\) is about to execute the conflicting slot \\(t\\), its associated thread sends a message to inform the master thread. Upon receiving the message, the master thread looks up the Conflicting Table and the Heartbeat Table to retrieve the current heuristic values of \\(\\tau_{1}\\), \\(\\tau_{2}\\), and \\(\\tau_{3}\\). If \\(\\tau_{1}\\)'s current heuristic value is higher than those of \\(\\tau_{2}\\) and \\(\\tau_{3}\\), it continues with the execution of slot \\(t\\). Meanwhile, the master thread checks the availability of workers and updates the record in the Conflicting Table, changing the field "_k-th NN_" from \\(1\\) to \\(2\\), so that \\(\\tau_{2}\\) and \\(\\tau_{3}\\) will compete for the worker with the \\(2^{nd}\\) lowest cost next time, because the first one has been taken by \\(\\tau_{1}\\). Otherwise, \\(\\tau_{1}\\) is suspended and the process continues.
**Discussion.** The task-level parallel approach is deterministic, meaning that the parallelized task assignment plan is consistent with the non-parallel plan. The master thread periodically stores the heuristic values collected in the Heartbeat Table and sorts them in descending order, so that the derived plan is the same as that of serialized task execution (Algorithm 1). This way,
Fig. 4: Worker Conflicting and Group-level Parallelization: an example of considering travel distances between workers and subtasks as costs.
Fig. 5: Task-level Parallelization: (1) Periodically Sending Heartbeats to Master Thread; (2) Reporting to Master Thread that Conflicts Detected; (3) Looking Up Conflicting Table and Heartbeat Table, then Ask Worker Thread to Continue or Suspend; (4) Adjusting Priorities of Worker Threads.
the parallel algorithm follows the approximation framework with a guaranteed ratio. On the other hand, in the parallel environment, it is hard to strictly control the stopping condition, i.e., the moment when the given budget is exhausted. It is unavoidable that threads with lower heuristic values are sometimes executed earlier than those with higher values, but this can mostly be alleviated with our priority settings. We set the priorities of worker threads dynamically, in accordance with their heuristic values, so that tasks with higher heuristic values are more likely to be processed. This is also consistent with the greedy strategy of Algorithm 1. The priorities of worker threads are initialized to infinity to avoid thread starvation.
### _Maximizing Minimum Quality_
The second optimization target is to maximize the minimum quality over all tasks, so that the overall quality is optimized. The problem is formalized as follows.
**Definition 4**: _(Minimum Quality) Given a set of tasks \\(\\mathcal{T}=\\{\\tau_{1},\\tau_{2},\\dots\\}\\), we define the minimum quality as:_
\\[q_{min}(\\mathcal{T})=min\\left\\{q(\\tau_{i})\\big{|}\\tau_{i}\\in\\mathcal{T}\\right\\} \\tag{9}\\]
**Problem 3**: **Multi-task Minimum Quality Maximization with Fixed Budgets (MMQM).** _Given a set of tasks \\(\\mathcal{T}=\\{\\tau_{1},\\tau_{2},\\dots\\}\\), the MMQM problem is to find a task assignment for each task \\(\\tau_{i}\\in\\mathcal{T}\\), such that the minimum quality is maximized, and the overall cost \\(\\sum c(\\tau_{i})\\) does not exceed the given budget \\(b\\)._
\\[Maximize\\ q_{min}(\\mathcal{T})\\] \\[Subject\\ to\\ \\sum_{i=1}^{|\\mathcal{T}|}c(\\tau_{i})\\leq b\\]
We can prove the NP-hardness of the MMQM problem by a reduction from the sQM problem. The submodularity and non-decreasingness of the \\(q_{min}(.)\\) function are proved by Lemma 5. Hence, the \\((1-1/\\sqrt{e})\\) approximation ratio for \\(q_{min}\\) is achieved by iteratively executing the selected subtask from the task currently yielding the minimum quality. The subtask execution follows the framework of Algorithm 1. To quickly retrieve the task with minimum quality, we maintain a heap over the \\(|\\mathcal{T}|\\) tasks. Notice that no worker conflict issues arise for the MMQM problem, since the subtasks are executed in sequence. So, the total time complexity is \\(O(mlog^{3}(m)log(|\\mathcal{T}|))\\).
**Lemma 5**: \\(q_{min}(\\mathcal{T})\\) _is submodular and non-decreasing._
**Proof** _We have proved that \\(q(.)\\) is non-decreasing and submodular, by Lemma 2. The minimum function is known to be concave. The \\(q_{min}(.)\\) function is a composition of a minimum function and the \\(q(.)\\) function. So, the lemma is proved, according to Lemma 1._
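The iterative reinforcement of the weakest task described above can be sketched with a heap keyed on current task quality; `quality0` and `execute_best_subtask` are hypothetical callbacks, the latter standing in for one greedy step of Algorithm 1 and returning a (gain, cost) pair.

```python
import heapq

def max_min_assign(tasks, budget, quality0, execute_best_subtask):
    """Repeatedly pop the task with minimum quality and run one greedy
    subtask on it, until the budget is exhausted."""
    heap = [(quality0(t), t) for t in tasks]
    heapq.heapify(heap)
    while heap and budget > 0:
        q, t = heapq.heappop(heap)        # current weakest task
        gain, cost = execute_best_subtask(t)
        if cost > budget:                 # cannot afford another subtask
            break
        budget -= cost
        heapq.heappush(heap, (q + gain, t))
    return dict((t, q) for q, t in heap), budget
```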
## V Experiments
We cover the experimental setup in Section V-A, and report the performance of our proposals in Sections V-B and V-C.
### _Experiment Setup_
**Dataset.5** We use a real dataset6 of \\(10,357\\) worker trajectories to represent workers' movements. For each worker trajectory, we randomly cut out a set of pieces, ranging from \\(1\\) to \\(5\\) time slots, as the worker's active slots. We use a public data generator7 to generate a series of datasets simulating the locations of TCSC tasks, following **uniform**, Gaussian, and Zipfian distributions. For the parameters of the Gaussian distribution, the mean is set to the domain center and the sigma to \\(1/6\\) of the domain side length, so that most of the generated data fall within the domain space. For the Zipfian distribution, the exponent is set to \\(1\\), which is a common setting and the only option of the generator. We also use a Beijing POI dataset to represent tasks' locations8. We set the cost of an assignment to be the distance that a worker moves to the assigned task, following the common setting of spatial crowdsourcing [16][4]. We vary the number of TCSC tasks to test the scalability of our proposals, setting it to 100, **300**, and 500, respectively. For each TCSC task, we set the task length (i.e., the number of subtasks) to 300, **500**, and 1000, respectively. The budget is set to $\\(50\\), $\\(100\\), and $\\(200\\), corresponding to about 12.5%, 25%, and 50% of the average cost of a TCSC task under the default setting. By default, \\(k\\) is set to \\(3\\) for the \\(k\\)-NN interpolation, \\(t_{s}\\) is set to \\(4\\), and the number of cores is set to \\(10\\) for multi-task parallelization.
Footnote 5: Default parameters are bolded.
Footnote 6: [https://www.microsoft.com/en-us/research/publication/t-drive-trajectory-data-sample/](https://www.microsoft.com/en-us/research/publication/t-drive-trajectory-data-sample/)
Footnote 7: [http://chorochronos.datastories.org/sites/default/files/algorithms/SpatialDataGenerator.zip](http://chorochronos.datastories.org/sites/default/files/algorithms/SpatialDataGenerator.zip)
**Implementation.** All algorithms are implemented in Java and run on a PC with an Intel(R) Xeon(R) CPU \\(E5\\)-\\(2698v4\\) @ \\(2.20\\)GHz and \\(256\\)GB main memory. By default, we use \\(12\\) cores for running the experiments on multi-task assignment. Each reported value is the average of \\(20\\) runs.
### _Results on Quality_
We test the effectiveness of our quality-aware task assignment method in Figure 6. We compare the quality of our method, Approx, with two competitors, OPT and Rand. OPT offers the optimal result by traversing the solution space. Rand accomplishes a task by randomly assigning a subtask to its nearest worker. The results with different data distributions are shown in Fig. 6 (a). In all tests, Approx achieves a high-quality result that is: 1) close to the optimal result; and 2) better than that of the randomized heuristic algorithm. The randomized heuristic algorithm does not offer a deterministic solution, and therefore incurs quality fluctuations. The gap between Approx and Rand is bigger when the budget is smaller, as shown in Fig. 6 (b), which is the essential scenario of the TCSC problem.
We further test the results for the multi-task case in Fig. 7 (a-d). In Fig. 7 (a) and (c), it can be observed that the quality of Approx is much better than that of its competitors, for both the \\(q_{sum}\\) and \\(q_{min}\\) cases. We also examine how the quality changes w.r.t. the budget in Fig. 7 (b) and (d). In all cases, Approx gives much better quality than the baselines. The gap between them becomes smaller when the budget is sufficiently large, which is consistent with the problem setting.
In summary, Approx offers a high-quality task assignment solution, with a deterministic output and theoretical guarantees, and outperforms the baselines.
### _Results on Efficiency_
We examine the efficiency and scalability of our proposal by comparing two variants, Approx and Approx*. The Approx solution is described by Algorithm 1, without the optimization techniques of Section III-C. Approx* improves Approx by: 1) using tree-structured order-\\(k\\) Voronoi diagrams to avoid redundant \\(k\\)-NN pre-computation; and 2) using best-first searching and upper-bound pruning to identify the subtask with the largest heuristic value. We test the efficiency of single-task assignment in Fig. 8 and of multi-task assignment in Fig. 9.
**Single task assignment.** First, we test the efficiency of Approx and Approx* by varying the number of subtasks (\\(m\\)) in Fig. 8 (a). Approx* improves over Approx by two orders of magnitude, and the improvement becomes more significant as \\(m\\) increases. This shows that the optimization techniques, i.e., the tree-structured order-\\(k\\) Voronoi diagram (Section III-C), bring good scalability to the approximation framework. Second, we test the efficiency by varying the number of workers in Fig. 8 (b). The time cost remains stable, increasing only slightly w.r.t. \\(|W|\\). The reasons are two-fold: 1) the increasing trend is moderate due to the good scalability of the best-first NN searching algorithm; 2) the slight increase shows that, with a larger \\(|W|\\), the completion ratios of tasks increase, which comes with higher costs. In all cases, Approx* outperforms Approx by at least two orders of magnitude, showing good efficiency and scalability in terms of \\(m\\) and \\(|W|\\).
To understand how the efficiency is achieved, we make detailed analysis in Fig. 8 (c-e). The improvements made by Approx* are in two parts, as shown in Fig. 8 (c). First, Approx* utilizes the implementation of the approximation of order-\\(k\\) Voronoi diagram, and thus maximally reuses the computation of \\(k\\)-NN results. It can be observed that the cost of the interpolation (i.e., finding \\(k\\) nearest subtasks) can be reduced by \\(4\\) orders of magnitude. Second, the tree-based pruning techniques can further reduce the cost of heuristic value calculation by more than an order of magnitude. The little extra cost for Approx* on the tree-structure is well-spent, given the efficiency gained in the total execution time.
To examine the pruning effects supported by the tree structure, we report the pruning ratios by varying \\(m\\) on different task distributions in Fig. 8 (d). The ratio is calculated by dividing the number of slots executed with pruning (Section III-C) by the number executed without pruning. It can be observed that our methods prune away more than 70% of subtask executions and therefore effectively accelerate the entire task processing. A similar trend is observed for the result on the real data. We report the time spent on tree-structure construction by varying the fanout of the tree structure, \\(t_{s}\\), in Fig. 8 (e). In all tests, the construction time is no more than \\(25\\) ms. Also, the time decreases as \\(t_{s}\\) increases, since a larger \\(t_{s}\\) corresponds to a smaller number of tree nodes and therefore less construction time.
We continue to examine the effects of other factors on the efficiency. In Fig. 8 (f), we compare the two solutions by varying the distribution of tasks' locations. In all cases, the performance of Approx* dominates that of Approx by more than two orders of magnitude. Moreover, the time cost of Approx* remains relatively stable across location distributions. We also test the effect of the parameter \\(k\\) for data interpolation in Fig. 8 (g). The time cost increases with \\(k\\), since the cost of \\(k\\)-NN interpolation is higher for a bigger \\(k\\). We study the effect of budgets in Fig. 8 (h). The time cost increases moderately w.r.t. \\(b\\), since the number of executed subtasks also increases w.r.t. \\(b\\). The Zipfian distribution has the lowest construction time: a task tends to incur a higher cost under skewed distributions, so the number of executed subtasks is reduced and the corresponding time cost is lower. In summary, Approx* dominates Approx under different parameter settings, and adapts better to the skewness of data distributions.
**Multiple task assignment.** We provide the results on the summation quality case in Fig. 9 (a-f) and the results on the minimum quality case in Fig. 9 (g-h).
First, we compare the performance of the three variants, group-based parallelization, task-based parallelization, and the basic solution without parallelization, in Fig. 9 (a). It shows that parallelization with smaller granularity achieves better scaleup, and the task-based parallelization outperforms the other two. In particular, when the number of cores reaches \(10\), the task-based parallelization solution takes about an order of magnitude less running time than the basic solution, which is consistent with the analysis in Section IV-A. We then compare the two parallelization methods by varying task location distributions in Fig. 9 (b). We can see that the Gaussian and Zipf distributions incur higher costs. This is because skewed datasets tend to incur larger numbers of worker conflicts. The cost increases only moderately, thanks to the good scalability achieved by the optimization techniques for indexing and scheduling. Also, the number of worker conflicts increases with the number of tasks, as reported in Fig. 9 (c).
We then report the scalability of our proposal w.r.t. the number of tasks in Fig. 9 (d). It can be observed that the running time of the task-based parallelization solution grows only moderately with the number of tasks. We examine the performance of the algorithm by varying the parameter \(m\) in Fig. 9 (e). All methods increase moderately, where the Zipfian and Gaussian distributions take longer than the uniform case. The result is consistent with our analysis, because skewed task distributions have a larger chance of incurring worker conflicts. We then analyse the task-based parallelization method further. We test the effect of thread priority setting in Fig. 9 (f). It reflects that threads with lower heuristic values are scheduled to execute earlier by the priority adjustment module, which breaks the ties of blocked threads and improves the performance of parallelization.
Fig. 6: Quality of Single-task Case
We show the results on the minimum quality metric in Fig. 9 (g) and (h). First, we examine the time costs by varying \\(|\\mathcal{T}|\\) in Fig. 9 (g). It can be observed that the time cost increases w.r.t. the number of tasks. Second, we test the result by varying \\(m\\) in Fig. 9 (h). It can be observed that the running time increases as \\(m\\) increases. In both experiments, Approx* steadily outperforms Approx, demonstrating better scalability in terms of the number of tasks and subtasks.
## VI Related Work
There are many studies in spatial crowdsourcing, which require workers to travel to the locations of spatial tasks and perform them, such as taking photos/videos, repairing a house, or waiting in line at shopping malls. These works focus on assigning available workers to tasks with distinct goals, such as maximizing the number of assigned tasks [1][5][26], minimizing the total travel cost of all workers [7][8], maximizing the quality score [3][5][27], maximizing task assignment quality by considering both present and future workers/tasks [6], or minimizing the maximum task assignment delay [9]. Existing works cannot be directly used to handle the quality issues of the TCSC problem. According to our comprehensive survey [16], TCSC is related to the categories of data collection and of task matching with quality constraints. We thus review existing works in these two categories.
Regarding applications of data collection, there exist papers on floorplan generation [28], traffic anomalies detection [29], voluntary services [30], and geo-spatial linked open data post-processing [31], etc. They mostly collect data in a specified spatiotemporal context, and do not address the issues in long-term data acquisition. Our work can support extending these works for continuous data acquisition, e.g., monitoring routing behaviours, by incorporating the quality-aware crowdsourcing framework.
Fig. 7: Quality of Multi-task Case
Fig. 8: Results of single-task assignment
Fig. 9: Results of multi-task assignment
Regarding quality constraints, most existing papers address the quality of task responses, based on workers' expertise, reputation, or reliability [32][10][3][33]. They usually involve a pre-task qualification test [32], or assignment based on the expertise [10] or abilities [3][33] of the worker. These papers are similar to the TCSC problem in the sense that they require data aggregation from multiple workers, but the aggregation methods are totally different. To the best of our knowledge, the most relevant work is [3], which considers the diversity (or distribution) of spatial and temporal tasks. However, it does not consider the mutual interaction between the interpolated and crowdsourced data, so the optimization target and corresponding techniques are totally different.
To summarize, a TCSC task has a temporally continuous nature, and requires time-sharing collaboration of multiple workers, necessitating quality-aware data management.
## VII Conclusion
In this paper, we study the problem of TCSC, which enables time-sharing collaboration among multiple workers for long-term continuous spatial crowdsourcing applications. We propose an entropy-based quality metric for measuring the incompleteness of the crowdsourced results. Based on that, we study quality-aware task assignment algorithms with budget constraints for both single- and multi-task cases. For both variants, we prove NP-hardness and the submodularity of the quality functions, so that a unified approximation framework can be applied. We devise novel indexing and parallel mechanisms for accelerating the processing. Extensive experiments on real and synthetic datasets show that our proposals achieve good efficiency and scalability. In the future, we will extend the approximation framework and optimization techniques from supporting temporal interpolation to spatiotemporal interpolation scenarios.
## Acknowledgments
We thank anonymous reviewers for their insightful comments on crowdsourcing with spatiotemporal interpolation.
## References
* [1] L. Kazemi and C. Shahabi, \"Geocrowd: enabling query answering with spatial crowdsourcing,\" in _SIGSPATIAL_, 2012, pp. 189-198.
* [2] D. Deng, C. Shahabi, and U. Demiryurek, \"Maximizing the number of worker's self-selected tasks in spatial crowdsourcing,\" in _SIGSPATIAL_, 2013, pp. 324-333.
* [3] P. Cheng, X. Lian, Z. Chen, R. Fu, L. Chen, J. Han, and J. Zhao, \"Reliable diversity-based spatial crowdsourcing by moving workers,\" _PVLDB_, pp. 1022-1033, 2015.
* [4] Y. Tong, L. Wang, Z. Zhou, B. Ding, L. Chen, J. Ye, and K. Xu, \"Flexible online task assignment in real-time spatial data,\" _PVLDB_, pp. 1334-1345, 2017.
* [5] H. To, C. Shahabi, and L. Kazemi, \"A server-assigned spatial crowdsourcing framework,\" _ACM Transactions on Spatial Algorithms and Systems_, vol. 1, no. 1, pp. 2:1-2:28, 2015.
* [6] P. Cheng, X. Lian, L. Chen, and C. Shahabi, \"Prediction-based task assignment in spatial crowdsourcing,\" in _ICDE_, 2017, pp. 997-1008.
* [7] D. Deng, C. Shahabi, and L. Zhu, \"Task matching and scheduling for multiple workers in spatial crowdsourcing,\" in _SIGSPATIAL_, 2015, pp. 21:1-21:10.
* [8] X. Zhang, Z. Yang, Y. Liu, and S. Tang, \"On reliable task assignment for spatial crowdsourcing,\" _IEEE Transactions on Emerging Topics in Computing_, vol. 7, no. 1, pp. 174-186, 2019.
* [9] Z. Chen, P. Cheng, Y. Zeng, and L. Chen, \"Minimizing maximum delay of task assignment in spatial crowdsourcing,\" in _ICDE_, 2019, pp. 1454-1465.
* [10] P. Cheng, X. Lian, L. Chen, J. Han, and J. Zhao, \"Task assignment on multi-skill oriented spatial crowdsourcing,\" _IEEE Transactions on Knowledge and Data Engineering_, vol. 28, no. 8, pp. 2201-2215, 2016.
* [11] P. Cheng, L. Chen, and J. Ye, \"Cooperation-aware task assignment in spatial crowdsourcing,\" in _ICDE_, 2019, pp. 1442-1453.
* [12] J. Li, J. Wu, and Y. Zhu, \"Selecting optimal mobile users for long-term environmental monitoring by crowdsourcing,\" in _IWQos_, 2019, pp. 8:1-8:10.
* [13] C. Zhang, L. Zhu, C. Xu, X. Du, and M. Guizani, \"A privacy-preserving traffic monitoring scheme via vehicular crowdsourcing,\" _Sensors_, vol. 19, no. 6, p. 1274, 2019.
* [14] N. Sabir, A. A. Memon, and F. K. Shaikh, \"Threshold based efficient road monitoring system using crowdsourcing approach,\" _Wireless Personal Communications_, vol. 106, no. 4, pp. 2407-2425, 2019.
* [15] https://www.nationalgeographic.org/encyclopedia/citizen-science/, accessed: 2020-01-10.
* [16] S. R. B. Gummidi, X. Xie, and T. B. Pedersen, \"A survey of spatial crowdsourcing,\" _ACM Transactions on Database Systems_, vol. 44, no. 2, pp. 8:1-8:46, 2019.
* [17] J. Gao and P. Revesz, \"Voting prediction using new spatiotemporal interpolation methods,\" in _DGO_, 2006, pp. 293-300.
* [18] J. Gao, \"Adaptive interpolation algorithms for temporal-oriented datasets,\" in _TIME_, 2006, pp. 145-151.
* [19] O. Babak and C. V. Deutsch, \"Statistical approach to inverse distance interpolation,\" _Stochastic Environmental Research and Risk Assessment_, vol. 23, no. 5, pp. 543-553, 2009.
* [20] T. M. Cover and J. A. Thomas, _Elements of information theory_. John Wiley & Sons, 2012.
* [21] H. Lin and J. Bilmes, \"A class of submodular functions for document summarization,\" in _ACL_, 2011, pp. 510-520.
* [22] A. Krause and C. Guestrin, "A note on the budgeted maximization of submodular functions," 2005. [Online]. Available: http://reports-archive.adm.cs.cmu.edu/annol/cdd/CMU-CALD-05-103.pdf
* [23] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher, \"An analysis of approximations for maximizing submodular set functions--i,\" _Mathematical Programming_, vol. 14, no. 1, pp. 265-294, 1978.
* [24] A. Okabe, B. Boots, K. Sugihara, and S. Chiu, _Spatial Tessellations: Concepts and Applications of Voronoi Diagrams_. Wiley, 2000.
* [25] C. Chekuri and A. Kumar, \"Maximum coverage problem with group budget constraints and applications,\" in _APPROX-RANDOM_, 2004, pp. 72-83.
* [26] P. Cheng, X. Jian, and L. Chen, \"An experimental evaluation of task assignment in spatial crowdsourcing,\" _PVLDB_, pp. 1428-1440, 2018.
* [27] F. Daniel, P. Kucherbaev, C. Cappiello, B. Benatallah, and M. Allahbakhsh, \"Quality control in crowdsourcing: A survey of quality attributes, assessment techniques, and assurance actions,\" _ACM Computing Surveys_, vol. 51, no. 1, pp. 7:1-7:40, 2018.
* [28] M. Alzantot and M. Youssef, \"Crowdinside : Automatic construction of indoor floorplans,\" in _GIS_, 2012, pp. 99-108.
* [29] B. Pan, Y. Zheng, D. Wilkie, and C. Shahabi, \"Crowd sensing of traffic anomalies based on human mobility and social media,\" in _GIS_, 2013, pp. 344-353.
* [30] X. Xie, P. Jin, M. L. Yiu, J. Du, M. Yuan, and C. S. Jensen, \"Enabling scalable geographic service sharing with weighted imprecise voronoi cells,\" _TKDE_, vol. 28, no. 2, pp. 439-453, 2016.
* [31] R. Karam and M. Melchiori, \"Improving geo-spatial linked data with the wisdom of the crowds,\" in _EDBT Workshop_, 2013, pp. 68-74.
* [32] X. Liu, M. Lu, B. C. Ooi, Y. Shen, S. Wu, and M. Zhang, \"CDAS: A crowdsourcing data analytics system,\" _PVLDB_, pp. 1040-1051, 2012.
* [33] L. Kazemi, C. Shahabi, and L. Chen, \"Geotrucrowd: trustworthy query answering with spatial crowdsourcing,\" in _SIGSPATIAL_, 2013, pp. 304-
## VIII Appendix
### _Submodularity and Non-decreasingness of \\(p^{(j)}\\)_
We show Lemmas 6 and 7, which are on the properties of finishing probability functions. Assume a set \\(S\\) of executed subtasks, and a to-be-executed subtask \\(e\\), satisfying \\(S\\cap\\{e\\}=\\emptyset\\). We define \\(p^{S}(\\tau^{(j)})\\) as the finishing probability of subtask \\(\\tau^{(j)}\\) given that the subtasks in \\(S\\) are executed.
Similarly, we define \\(\\rho_{err}^{S}(\\tau^{(j)})\\) and \\(I^{S}(\\tau^{(j)})\\) as the error ratio and the interpolation distance (i.e., \\(I^{S}(\\tau^{(j)})=\\sum_{e\\in S_{KNN}}|\\tau^{(j)},e|_{i}\\)) of subtask \\(\\tau^{(j)}\\), respectively, given that all subtasks in \\(S\\) are executed. Without causing any ambiguities, in the proofs, we simplify \\(\\rho_{err}^{S}(\\tau^{(j)})\\) and \\(I^{S}(\\tau^{(j)})\\) as \\(\\rho_{err}^{S}\\) and \\(I^{S}\\), respectively.
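As a concrete reference for these definitions, the following sketch computes the temporal interpolation error ratio \(\rho_{err}^{S}(\tau^{(j)})\). The formula is reconstructed from the case analysis in the proofs below; the function name and data layout are ours, not the paper's.

```python
def temporal_error_ratio(j, executed, k, m):
    """rho_err^S(tau^(j)): interpolation error ratio of time slot j.

    Reconstructed from the proofs: with fewer than k executed subtasks the
    ratio is 1 - |S_KNN|/k + I^S/(k*m); with at least k it is I^S/(k*m),
    where I^S sums the time distances to the k nearest executed slots.
    """
    knn = sorted(executed, key=lambda e: abs(e - j))[:k]
    I = sum(abs(e - j) for e in knn)
    missing = (k - len(knn)) / k  # penalty for having fewer than k neighbours
    return missing + I / (k * m)
```

For example, with \(k=2\), \(m=10\) and executed slots \(\{4,6\}\), slot \(5\) has \(I^{S}=2\) and error ratio \(0.1\); with no executed slots the ratio is \(1\).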
**Lemma 6**: _The function \\(p^{(j)}\\) is submodular._
**Proof** _To prove that function \(p^{(j)}\) is submodular, it is equivalent to prove the following._
\\[p^{S\\cap\\{e\\}}(\\tau^{(j)})+p^{S\\cup\\{e\\}}(\\tau^{(j)})\\leq p^{S}(\\tau^{(j)})+p^{ \\{e\\}}(\\tau^{(j)}) \\tag{10}\\]
_By substituting Equation 2, we can rewrite it as \(\rho_{err}^{S\cap\{e\}}+\rho_{err}^{S\cup\{e\}}\geq\rho_{err}^{S}+\rho_{err}^{\{e\}}\). Equivalently, it is sufficient to prove that_
\\[\\rho_{err}^{S\\cap\\{e\\}}+\\rho_{err}^{S\\cup\\{e\\}}-\\rho_{err}^{S}-\\rho_{err}^{\\{ e\\}}\\geq 0 \\tag{11}\\]
_From Equation 3, we know that \(\rho_{err}\)'s value depends on \(\tau^{(j)}\)'s \(k\)-NN set, \(S_{KNN}\), which must be a subset of \(S\). Next, we show the correctness of Equation 11 by enumerating all three possible cases of \(S\)._
_Case 1. When \(S=\emptyset\), we have \(\rho_{err}^{S\cap\{e\}}+\rho_{err}^{S\cup\{e\}}-\rho_{err}^{S}-\rho_{err}^{\{e\}}=\rho_{err}^{\emptyset}+\rho_{err}^{\{e\}}-\rho_{err}^{\emptyset}-\rho_{err}^{\{e\}}=0\), and thus Equation 11 holds._
_Case 2. When \(0<|S|<k\), we have \(\rho_{err}^{S\cap\{e\}}=1\) and \(\rho_{err}^{\{e\}}=1-\frac{1}{k}+\frac{I^{\{e\}}}{km}\). Here, \(\rho_{err}^{S\cup\{e\}}\) has two subcases, depending on \(|S|\)._
_Subcase (a) refers to \(|S|=k-1\). In that case, there are \(k\) finished subtasks after executing \(e\). Then, \(\rho_{err}^{S\cup\{e\}}=\frac{I^{S\cup\{e\}}}{km}\), and \(\rho_{err}^{S}=\frac{I^{S}}{km}+\frac{1}{k}\). We have \(\rho_{err}^{S\cap\{e\}}+\rho_{err}^{S\cup\{e\}}-\rho_{err}^{S}-\rho_{err}^{\{e\}}=\frac{I^{S\cup\{e\}}}{km}-\frac{I^{S}}{km}-\frac{I^{\{e\}}}{km}=0\), since \(I^{S\cup\{e\}}=I^{S}+I^{\{e\}}\). Equation 11 holds._
_Subcase (b), with \(|S|<k-1\), means that the total number of finished subtasks does not exceed \(k\) after executing subtask \(e\). Thus, we have \(\rho_{err}^{S\cup\{e\}}=1-\frac{|S|+1}{k}+\frac{I^{S\cup\{e\}}}{km}\) and \(\rho_{err}^{S}=1-\frac{|S|}{k}+\frac{I^{S}}{km}\). Then, \(\rho_{err}^{S\cap\{e\}}+\rho_{err}^{S\cup\{e\}}-\rho_{err}^{S}-\rho_{err}^{\{e\}}=\frac{I^{S\cup\{e\}}}{km}-\frac{I^{S}}{km}-\frac{I^{\{e\}}}{km}=0\). Equation 11 holds._
_Case 3. When \(|S|\geq k\), we have \(\rho_{err}^{S\cap\{e\}}=1\), \(\rho_{err}^{\{e\}}=1-\frac{1}{k}+\frac{I^{\{e\}}}{km}\), \(\rho_{err}^{S}=\frac{I^{S}}{km}\), and \(\rho_{err}^{S\cup\{e\}}=\frac{I^{S\cup\{e\}}}{km}\). Based on whether the execution of subtask \(e\) changes \(S_{KNN}\) (\(\tau^{(j)}\)'s \(k\)-NN set), there are two subcases._
_If \\(S_{KNN}\\) is not affected by \\(e\\), we have that \\(\\rho_{err}^{S\\cup\\{e\\}}=\\rho_{err}^{S}\\). So, \\(\\rho_{err}^{S\\cap\\{e\\}}+\\rho_{err}^{S\\cup\\{e\\}}-\\rho_{err}^{S}-\\rho_{err}^{ \\{e\\}}=\\rho_{err}^{S\\cap\\{e\\}}-\\rho_{err}^{\\{e\\}}=\\frac{1}{k}-\\frac{I^{\\{e\\}} }{km}\\). As the interpolation distance \\(I^{\\{e\\}}\\) is less than \\(m\\), we have \\(\\frac{1}{k}+\\frac{I^{\\{e\\}}}{km}>0\\). So, Equation 11 holds._
_If \\(S_{KNN}\\) is affected by \\(e\\), it implies that a subtask in \\(S_{KNN}\\) is updated by \\(e\\). Suppose the replaced subtask in original \\(S_{KNN}\\) be \\(e^{\\prime}\\), and the updated interpolation distance be \\(I^{S\\cup\\{e\\}}=I^{S}-I^{\\{e^{\\prime}\\}}+I^{\\{e\\}}\\). We can have \\(\\rho_{err}^{S\\cap\\{e\\}}+\\rho_{err}^{S\\cup\\{e\\}}-\\rho_{err}^{S}-\\rho_{err}^{ \\{e\\}}=\\frac{I^{S\\cup\\{e\\}}}{km}-\\frac{I^{S}}{km}+\\frac{1}{k}-\\frac{I^{\\{e\\}} }{km}\\)._
_As the interpolation distance \\(I^{\\{e\\}}\\) is less than \\(m\\), we have \\(\\frac{1}{k}-\\frac{I^{\\{e^{\\prime}\\}}}{km}>0\\). So, Equation 11 holds._
_If \\(S_{KNN}\\) is affected by \\(e\\), it implies that a subtask in \\(S_{KNN}\\) is updated by \\(e\\). Suppose the replaced subtask in original \\(S_{KNN}\\) be \\(e^{\\prime}\\), and the updated interpolation distance be \\(I^{S\\cup\\{e\\}}=I^{S}-I^{\\{e^{\\prime}\\}}+I^{\\{e\\}}\\). We can have \\(\\rho_{err}^{S\\cap\\{e\\}}+\\rho_{err}^{S\\cup\\{e\\}}-\\rho_{err}^{S}-\\rho_{err}^{ \\{e\\}}=\\frac{I^{S\\cup\\{e\\}}}{km}-\\frac{I^{S}}{km}+\\frac{1}{k}-\\frac{I^{\\{e\\}} }{km}\\)._
_As the interpolation distance \\(I^{\\{e\\}}\\) is less than \\(m\\), we have \\(\\frac{1}{k}-\\frac{I^{\\{e^{\\prime}\\}}}{km}>0\\). Equation 11 holds._
_In summary, Equation 11 holds in all three cases. The lemma is proved._
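The inequality in Equation 11 can also be spot-checked numerically. The self-contained sketch below reconstructs the error-ratio formula from the case analysis above (names and the instance generator are ours) and verifies submodularity on random instances:

```python
import random

def rho(j, S, k, m):
    # temporal interpolation error ratio, reconstructed from the case analysis
    knn = sorted(S, key=lambda e: abs(e - j))[:k]
    I = sum(abs(e - j) for e in knn)
    return (k - len(knn)) / k + I / (k * m)

random.seed(0)
k, m, j = 2, 12, 5
slots = [s for s in range(1, m + 1) if s != j]
for _ in range(500):
    S = set(random.sample(slots, random.randint(0, 5)))
    e = random.choice([s for s in slots if s not in S])
    # Equation 11: rho^(S∩{e}) + rho^(S∪{e}) − rho^S − rho^{e} ≥ 0
    lhs = rho(j, S & {e}, k, m) + rho(j, S | {e}, k, m) \
        - rho(j, S, k, m) - rho(j, {e}, k, m)
    assert lhs >= -1e-12
```

Every random instance satisfies the inequality, matching the three cases of the proof.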
**Lemma 7**: _The function \\(p^{(j)}\\) is non-decreasing._
**Proof** _We prove that the finishing probability function \(p^{(j)}\) is non-decreasing by showing that the error ratio function \(\rho_{err}(\tau^{(j)})\) is non-increasing; or, equivalently,_
\\[\\rho_{err}^{S\\cup\\{e\\}}-\\rho_{err}^{S}\\leq 0 \\tag{12}\\]
_There can be two possible cases for set \\(S\\), \\(0\\leq|S|<k\\) and \\(|S|\\geq k\\). We hereby prove the correctness of Equation 12 by considering the two cases._
_Case 1. When \\(0\\leq|S|<k\\), \\(\\rho_{err}^{S\\cup\\{e\\}}\\) have two subcases, depending on the size of set \\(S\\)._
_The first subcase is \(|S|=k-1\), so the total number of subtasks is \(k\) after adding the executed subtask \(e\). Then, \(\rho_{err}^{S\cup\{e\}}=\frac{I^{S\cup\{e\}}}{km}\), and \(\rho_{err}^{S}=\frac{I^{S}}{km}+\frac{1}{k}\). So, we have \(\rho_{err}^{S\cup\{e\}}-\rho_{err}^{S}=\frac{I^{S\cup\{e\}}}{km}-\frac{I^{S}}{km}-\frac{1}{k}=\frac{I^{\{e\}}}{km}-\frac{1}{k}\). As the interpolation distance \(I^{\{e\}}\) is less than \(m\), Equation 12 holds._
_The second subcase is \(|S|<k-1\), meaning that the total number of executed subtasks is still less than \(k\) after the execution of \(e\). We thus have \(\rho_{err}^{S\cup\{e\}}=1-\frac{|S|+1}{k}+\frac{I^{S\cup\{e\}}}{km}\), and \(\rho_{err}^{S}=1-\frac{|S|}{k}+\frac{I^{S}}{km}\). So, \(\rho_{err}^{S\cup\{e\}}-\rho_{err}^{S}=\frac{I^{\{e\}}}{km}-\frac{1}{k}\leq 0\), and Equation 12 holds._
_The first case is for \(l\leq a\leq r\). As \(|a,e|_{t}+|l,a|_{t}\geq 0\) and \(r+|r,a|_{t}-(e+|a,e|_{t})=|r,e|_{t}-|a,e|_{t}+|r,a|_{t}\geq 0\), the range of \(b\) is within the range \([l-|l,a|_{t},r+|r,a|_{t}]\). The case cannot exist._
_The second case is for \(a<l\), where the range of \(b\) is \((e,e+|a,e|_{t})\). As \(a\) is the 1-NN of \(l\) and \(r\), there is no other executed subtask within the time segment \((a,r+|a,r|_{t})\), so the range of \(b\) is within \((a,r+|a,r|_{t})\). The case cannot exist._
_The third case is for \(a>r\), where the range of \(b\) is \((e-|a,e|_{t},a)\). As \(a\) is the 1-NN of \(l\) and \(r\), there is no other executed subtask within the time segment \((l-|a,l|_{t},a)\), so the range of \(b\) is within \((l-|a,l|_{t},a)\). The case cannot exist._
### _Extension to Spatiotemporal Interpolation_
_Spatiotemporal Interpolation._ Suppose a set of tasks \(\mathcal{T}=\{\tau_{1},\tau_{2},\ldots\}\). Each task \(\tau_{i}\) consists of a set of \(m\) subtasks, \(\tau_{i}=\{\tau_{i}^{(j)}\}_{1\leq j\leq m}\). If a subtask \(\tau_{i}^{(j)}\) is not probed, it can either be _temporally interpolated_ by the executed subtasks belonging to the same task \(\tau_{i}\), or be spatially interpolated by subtasks satisfying that: 1) they are executed at the same time slot \(j\); 2) they belong to tasks other than \(\tau_{i}\). For example, in Fig. 10, subtask \(\tau_{i}^{(5)}\) can either be temporally interpolated by \(\tau_{i}^{(3)}\) or spatially interpolated by \(\tau_{j}^{(5)}\).
_Extensions on Quality Metrics._ The spatial interpolation error is proportional to the spatial distances between the interpolated values and their neighboring values. The error ratio function \\(\\rho_{err}^{s}(\\tau_{i}^{(j)})\\) for spatial interpolation can thus be written as follows.
\\[\\rho_{err}^{s}(\\tau_{i}^{(j)})=\\frac{\\sum_{e\\in S_{kNN}^{s}(\\tau_{i}^{(j)})}| \\tau_{i}^{(j)},e|_{e}}{k\\cdot|\\mathcal{D}|} \\tag{13}\\]
Here, \\(e\\) represent an executed subtask, and \\(|\\tau_{i}^{(j)},e|_{e}\\) represents the spatial distance between subtask \\(\\tau_{i}^{(j)}\\) and \\(e\\). Function \\(S_{kNN}^{s}(.)\\) returns the \\(k\\) executed subtasks with the smallest spatial distances. \\(|\\mathcal{D}|\\) in the denominator represents the spatial domain size so that the value range of the spatial interpolation error ratio \\(\\rho_{err}^{s}\\) is from \\(0\\) to \\(1\\), to be consistent with the form of temporal interpolation error ratio function.
We can use a weighted summation function to combine the interpolation errors of both spatial and temporal domains.
\\[\\rho_{err}=w_{s}\\cdot\\rho_{err}^{s}+w_{t}\\cdot\\rho_{err}^{t} \\tag{14}\\]
Here, \\(w_{s}\\) and \\(w_{t}\\) are weights of the two components, whose sum equals \\(1\\). \\(\\rho_{err}^{t}\\) represents the temporal interpolation error (Equation 3 in the manuscript). So, the subtask finishing probability \\(p_{i}^{(j)}\\) can be written as:
\[p_{i}^{(j)}=\frac{1}{m}\left(1-\rho_{err}(\tau_{i}^{(j)})\right)=\underbrace{\frac{1}{m}\left(1-w_{t}\cdot\rho_{err}^{t}(\tau_{i}^{(j)})\right)}_{\text{the temporal interpolation part}}+\underbrace{\frac{1}{m}\left(1-w_{s}\cdot\rho_{err}^{s}(\tau_{i}^{(j)})\right)}_{\text{the spatial interpolation part}}-\frac{1}{m}\]
Both temporal and spatial interpolation parts can be proved to be submodular and non-decreasing, following the proofs of Lemmas 6 and 7. So, the summation of the two parts preserves the submodularity and non-decreasingness, according to the properties of composite submodular functions (Lemma 1 in the manuscript). Similarly, quality function \\(q(\\tau_{i})=-\\sum_{j=1}^{m}p_{i}^{(j)}\\log_{2}\\left(p_{i}^{(j)}\\right)\\) can be proved to be submodular and non-decreasing, since the entropy function is known as concave and non-decreasing.
_Extensions on Multiple Task Assignment._ Since the spatial interpolation process refers to the interactions between multiple TCSC tasks, we hereby examine the two variants of multi-task assignment scenarios, with aggregated quality metrics \\(q_{sum}\\) and \\(q_{min}\\) as the maximization targets, respectively.
**Problem 4**: _Spatio-Temporal Continuous Crowdsourcing (_**STCC in short**_) Given a set of tasks \(\mathcal{T}=\{\tau_{1},\tau_{2},\ldots\}\), the problem is to find a task assignment for each \(\tau_{i}\in\mathcal{T}\), such that the summation quality \(q_{sum}(\mathcal{T})=\sum_{i=1}^{|\mathcal{T}|}q(\tau_{i})\) or the minimum quality \(q_{min}(\mathcal{T})=\min\left\{q(\tau_{i})\,\middle|\,\tau_{i}\in\mathcal{T}\right\}\) is maximized under the given budgets._
In the settings of spatiotemporal interpolation, we can prove that the summation quality and minimum quality functions are still submodular and non-decreasing with Lemma 1, because 1) SUM and MIN are concave functions; 2) the extended quality function is proved to be submodular and non-decreasing. Therefore, the approximation framework of Algorithm 1 can be applied for handling the multi-task assignment scenario. The heuristic value is set as the increase of the quality metrics divided by the corresponding cost (of a tentatively selected subtask), following the same greedy strategy and approximation ratio. To this end, the framework of approximation algorithm can be preserved.
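The greedy strategy described here can be sketched as a budgeted cost-benefit selection. This is a generic rendition of the approach, not the paper's exact Algorithm 1: `gain` and `cost` are placeholder callbacks standing in for the marginal quality increase and the execution cost of a tentatively selected subtask.

```python
def budgeted_greedy(candidates, cost, gain, budget):
    """Repeatedly pick the feasible candidate with the largest marginal
    quality gain per unit cost, until the budget is exhausted."""
    chosen, spent = set(), 0.0
    while True:
        feasible = [e for e in candidates if e not in chosen
                    and spent + cost(e) <= budget]
        if not feasible:
            return chosen
        best = max(feasible, key=lambda e: gain(chosen, e) / cost(e))
        if gain(chosen, best) <= 0:  # no further quality improvement
            return chosen
        chosen.add(best)
        spent += cost(best)

# toy usage: modular gains, unit costs, budget 2 → picks the two largest
value = {"a": 3.0, "b": 1.0, "c": 2.0}
picked = budgeted_greedy(set(value), lambda e: 1.0,
                         lambda S, e: value[e], 2.0)
print(sorted(picked))  # → ['a', 'c']
```

Submodularity and non-decreasingness of the quality functions are what make this greedy rule carry a guaranteed approximation ratio.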
_Experimental Results._ We conduct experiments with the updated quality metric in Fig. 11, following the default setting of the manuscript. SAPprox refers to the results with spatiotemporal interpolation, and Approx refers to the results with only temporal interpolation. By default, we set \(w_{s}\) and \(w_{t}\) to \(0.3\) and \(0.7\), respectively, for SAPprox. For Approx, \(w_{t}\) is set to \(1\), since it does not perform spatial interpolation.
Fig. 11 (a) reports the quality values w.r.t. data distributions. It can be observed that both SAPprox and Approx are very close to the optimal result, OPT. SAPprox is better than Approx because of the quality improvement made by spatial interpolation. We also test how the quality varies w.r.t. the budgets. In all tests, SAPprox is better than Approx, and both achieve significant improvements over the baselines. To examine the effect of tuning \(w_{s}\) and \(w_{t}\), we plot Fig. 11 (c), where the X-axis is the value of \(w_{t}\). It shows that the highest quality value is achieved when \(w_{t}\) equals \(0.7\). Therefore, \(w_{t}\) is set to \(0.7\) by default in our experiments.
Fig. 10: Spatiotemporal Interpolation (Executed subtasks are shaded.)
_Extensions on Indexing._ To support efficient evaluation, current indexing techniques need to be redesigned. For TCSC, the index structure is based on a one-dimensional Voronoi diagram. For STCC, the index structure is based on a multi-dimensional weighted order-k Voronoi diagram. We should study how to approximate such a diagram with indexing structures, including node splitting and stopping conditions, index-based maximum heuristic value calculation, etc.
Fig. 11: Results with Spatiotemporal Interpolation | The proliferation of advanced mobile terminals opened up a new crowdsourcing avenue, spatial crowdsourcing, to utilize the crowd potential to perform real-world tasks. In this work, we study a new type of spatial crowdsourcing, called time-continuous spatial crowdsourcing (TCSC _in short_). It supports broad applications for long-term continuous spatial data acquisition, ranging from environmental monitoring to traffic surveillance in citizen science and crowdsourcing projects. However, due to limited budgets and limited availability of workers in practice, the data collected is often incomplete, incurring data deficiency problem. To tackle that, in this work, we first propose an entropy-based quality metric, which captures the joint effects of incompletion in data acquisition and the imprecision in data interpolation. Based on that, we investigate quality-aware task assignment methods for both single- and multi-task scenarios. We show the NP-hardness of the single-task case, and design polynomial-time algorithms with guaranteed approximation ratios. We study novel indexing and pruning techniques for further enhancing the performance in practice. Then, we extend the solution to multi-task scenarios and devise a parallel framework for speeding up the process of optimization. We conduct extensive experiments on both real and synthetic datasets to show the effectiveness of our proposals. | Give a concise overview of the text below. | 247 |
# Understanding the Plasmonics of Nanostructured Atomic Force Microscopy Tips
A. Sanders, R.W. Bowman, L. Zhang, V. Turek, D.O. Sigle, A. Lombardi, L. Weller, J.J. Baumberg
Nanophotonics Centre, Department of Physics, Cavendish Laboratory, Cambridge, CB3 0HE
May 19, 2022
Footnote †: Email: [email protected]; Site: www.np.phy.cam.ac.uk
Within the last decade nano-optics has benefited from the advent of metallic tip-based near-field enhancement techniques such as TERS and scanning near-field microscopy (SNOM), leading to successes in single molecule detection [1] and spatial mapping of chemical species [2]. Despite their high spatial resolution and scanning capabilities, there remains confusion about the plasmonic response of metallic tips. Tip systems built on AFM probes can exhibit electric field enhancements close to 100 at the apex (Raman enhancements up to \\(10^{8}\\)) [2], due to a combination of plasmonic localisation and a non-resonant lightning rod effect. The factors determining a tip's ability to enhance the near-field include the experimental excitation/collection geometry, tip sharpness, surface metal morphology, and constituent material.
Despite large measured near-field enhancements, the standard sharp AFM tip geometry does not support radiative plasmons. The extended (\\(\\sim\\)20 um) size and single curved metal-dielectric interface of an AFM tip supports only weakly confined localised surface plasmons (LSPs) [3] and propagating surface plasmon polaritons (SPPs), which may be localised by adiabatic nanofocussing [4; 5; 6; 7; 8; 9]. Lack of a dipole moment means that neither LSPs or SPPs strongly couple with radiative light in the same manner as multipolar plasmons in sub-wavelength nanoparticles [3]. For this reason, the tip near-field is often excited with evanescent waves [10] or via nanofabricated gratings [6] to access the optically-dark SPPs, with resonant scattering of evanescent waves [11; 12; 13], resonances in the TERS background [14; 15] and depolarised scattering images [16] providing evidence for localised plasmon excitation. For Au tips such plasmon resonances are typically found between 600-800 nm.
Improvements in enhancement are often found in roughened tips with grains acting as individual nano-antennae for more confined LSPs, however this approach lacks reproducibility [16]. In recent years controlled nanostructuring of the tip apex with a distinct sub-wavelength-size metallic feature has been explored in order to engineer and tune a plasmonic optical antenna precisely at the apex and better incorporate more localised multipolar plasmons [16; 17; 18; 19; 20; 21]. Etching [21; 22], focussed-ion-beam machining [23; 24; 25], selective deposition [26], nanoparticle pickup [27], nanostructure grafting [28] and electrochemical deposition [29] have all been successfully used to nanostructure optical antenna tips.
Scattering resonances in the visible-NIR spectrum have been directly measured on a subset of these [25; 26; 29] while other reports use improvements in the field enhancement as a measurement of antenna quality [20; 21; 28]. In such cases the field enhancement has been attributed to give improvements by an order of magnitude through plasmon excitation [20; 23; 24; 29].
The simplest geometry for a tip apex is a spherical nanoparticle (NP), giving LSPs similar to those in an
Figure 1: (a) Hyperspectral imaging with supercontinuum laser focussed onto the tip apex for imaging, and the tip is raster scanned across the beam with scattering spectra of both polarisations acquired at each position. (b) Ball-tip imaged in dark-field microscopy.
isolated spherical metallic nanoparticle. In this paper we demonstrate an effective method for characterising the radiative plasmon modes of a tip and clearly show the benefits of utilising spherically nanostructured tips as near-field enhancers.
The optical properties of AFM tips are studied using a custom-built confocal microscope with a supercontinuum laser source for dark-field scattering spectroscopy (Fig. 1). Both illumination and collection share the optical axis of a 0.8 NA IR objective. Supercontinuum laser light is filtered into a ring and incident on a tip at 0.6-0.8 NA while light scattered by the tip is confocally collected from the central laser focus using an iris to restrict the collection NA below 0.6. Broadband polarising beamsplitters are used to simultaneously measure spectra which are linearly polarised both along the tip axis (axial) and perpendicular to the tip axis (transverse).
A scanning hyperspectral imaging technique is applied to determine the local optical response at the tip apex. Tips are raster scanned under the laser spot and the dark field scattering from the confocal sampling volume measured at each point, forming a hyperspectral data cube. Images are formed at each wavelength contained in the cube, with each image pixel digitised into 1044 wavelengths between 400-1200 nm. Measured spectra are normalised to a spectrum of flat metal of the same material to show only structural effects. Image slices at individual wavelengths or wavelength bands are then readily constructed to display localised spectral features. Fast image acquisition is made possible by the high brightness supercontinuum laser source (100 \\(\\mu\\)W.\\(\\mu\\)m\\({}^{-2}\\)) and cooled benchtop spectrometers, enabling 10 ms integration times (5 mins per image). Within plasmonics, this approach to hyperspectral imaging has been used to identify distributed plasmon modes in aggregated AuNP colloids [30] and to image SPPs [31] but has yet to be applied to tips. By using this technique, radiative plasmons can be spatially identified with a resolution around 250 nm.
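As an illustration of this processing pipeline, a short sketch (array names and shapes are our own assumptions, not the authors' code) forms a normalised wavelength-band image slice from a hyperspectral data cube:

```python
import numpy as np

def band_image(cube, wavelengths, ref, lo, hi):
    """Average the reference-normalised scattering over one wavelength band.

    cube: (ny, nx, nw) spectra from the raster scan; ref: (nw,) flat-metal
    reference spectrum; returns an (ny, nx) image slice for [lo, hi] nm.
    """
    norm = cube / ref                      # remove source/material response
    band = (wavelengths >= lo) & (wavelengths <= hi)
    return norm[:, :, band].mean(axis=2)

# toy cube: 2x2 pixels, 4 wavelengths, flat spectra twice the reference
cube = np.full((2, 2, 4), 2.0)
wl = np.array([500.0, 600.0, 700.0, 800.0])
img = band_image(cube, wl, np.full(4, 1.0), 550, 750)
print(img.shape)  # → (2, 2)
```

Selecting a 600-700 nm band in this way is how localised apex scattering can be isolated from the bulk-tip response.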
To investigate the radiative plasmonic properties of nanostructured tips, hyperspectral images are taken of both standard (sharp) and spherical-tipped Au AFM tips. Spherical tips are either 300 nm diameter, 50 nm Au-coated NanoTools B150 AFM probes or electrochemically-deposited AuNP-on-Pt AFM probes, fabricated in-house [29] (shown in Fig. 2). Fabricated tips are pre-treated where possible prior to use with ambient air plasma and/or piranha solution to remove organic surface residue and, in some cases, smooth out surface roughness.
Comparisons between spherical- and sharp-tipped Au probes using hyperspectral image slices (Fig. 3) show that spherical tips exhibit a characteristic red (600-700 nm) scatter, separated from the bulk tip. No similar localised scattering is seen in the visible spectrum with sharp Au tips, which have a ten-fold weaker optical response and appear similar to non-plasmonic Pt tips. This delocalised apex scatter can also be directly seen in dark-field microscopy images (Fig. 1b). The AuNP-on-Pt structure behaves very similarly to the Au-coated spherical tip (which has diamond-like-carbon inside), likely because the 50 nm coating thickness is greater than the skin depth [32, 33]. As we show below, differences in plasmon resonances arise due to the Au-Pt and Au-Au neck boundaries.

Figure 2: SEM images of (a) sharp Au AFM tip, (b) Au-coated spherical AFM tip (Nanotools), and (c) electrochemically-deposited AuNP-on-Pt AFM tip.

Figure 3: Hyperspectral images of (a) sharp Au tip, (b) Au-coated spherical tip (Nanotools), and (c) electrochemically-deposited AuNP-on-Pt tip. Collected light is polarised along tip axis, colour maps all have same normalisation. Scale bar is 600 nm. (d,e) Scattering spectra of both sharp and spherical metal tips, extracted from hyperspectral images around the apex region, in (d) axial and (e) transverse polarisations. (f) Integrated SERS background from sharp and spherical Au tips. Scattering spectrum of spherical Au tip apex shown shaded.
Integrating spectra around each tip better shows the 600-700 nm scattering resonance from spherical Au tips (Fig. 3d,e), which are reliably present in all spherical-tipped AFM probes, both vacuum-processed and electrochemically deposited. We attribute these to localised surface plasmon excitation, while electron microscopy confirms this resonance correlates only with spherical Au tip shapes. The response of sharp Au tips shows no similar plasmonic features, while the slow rise in scattering towards the NIR is consistent with lightning rod scattering [3].
Broadband tuneable SERS measurements [34] confirm that the optical scattering resonance seen in spherical Au tips is indeed caused by radiative plasmon excitation. The trapped plasmon fields enhance optical processes on the surface such as surface-enhanced Raman scattering (SERS) and here we use the SERS background [34, 35] as a reporter of the plasmonic near-field strength. SERS background spectra are integrated across a range of excitation wavelengths between 500 and 700 nm, spaced 10 nm apart, to extract any scattering resonances. The resulting spectrum (Fig. 3f) shows a distinct peak around the spherical Au tip scattering resonance, while no such resonance is seen for sharp Au tips. Further confirmation stems from direct observation of plasmon coupling between spherical tips, as has been previously reported [36].
Plasmon resonances in spherical AuNP tips correspond to _radiative_ antenna-like modes, similar to those in plasmonic nanoparticles, that efficiently couple far-field light into strong collective free electron oscillations without the need for SPP momentum matching. As with nanoparticles, the signature of these plasmons is an optical resonance indicating their large dipole moment (Fig. 3d). Such radiative plasmons only form if multipolar surface charge oscillations are supported, requiring a structure with multiple metal-dielectric interfaces. Since spherical metallic tips possess a neck behind the tip, they can support NP plasmonics. Sharp tips do not have this back surface, hence cannot support radiative plasmon resonances, although the single metal-dielectric surface supports launching of evanescent SPPs and a strong lightning rod component.
Simulated near-field spectra (using the boundary element method) around the apex of 300 nm spherical Au and AuNP-on-Pt tips with 120 nm neck diameters (\(d_{\text{neck}}=0.4d_{\text{sphere}}\)) are shown in Fig. 4a. Tips are simulated with a length of 1.88 μm to avoid truncation artefacts which are commonly seen in tip simulations and erroneously suggest plasmonic performance even in sharp tips. Strong modes appear along the tip axis for all spherical tips between 550-700 nm, as in experiments with peak wavelengths that match our hyperspectral results. Near-field maps corresponding to the main resonance in each tip (Fig. 4b,c) show dipole-like resonances with the neck spatially splitting the underside of each mode, mixing it with quadrupolar modes and shifting it towards the blue.
In order to directly compare the _plasmonic_ behaviour of spherical and sharp Au tips independent of lightning rod contributions, the neck width is incrementally increased. This allows us to study structures which smoothly transition from a nanoparticle attached to the apex of a sharp Au tip, into a rounded tip geometry, without the apex radius ever changing. The field enhancement and peak positions extracted from this morphology transition (Fig. 4d) show resonances insensitive to the neck width until \\(d_{\\text{neck}}>0.8d_{\\text{sphere}}\\), explaining the
Figure 4: (a) Numerically simulated near-field apex spectra of spherical Au and AuNP-on-Pt tips with (b,c) near-field maps of the main resonance in each, as highlighted by circles in (a). Simulated tips have 300 nm spherical diameters, 120 nm neck widths, 20\({}^{\circ}\) opening angles and 1.88 μm lengths to best match typical experimental tip geometries and avoid truncation artefacts. Tips are illuminated by plane waves orientated along the tip axis. (d) Interpolated field enhancement map with superimposed resonant wavelengths, as the neck width varies from a spherical to a sharp tip. Tips have a 250 nm apex diameter, 1.88 μm length, and 10\({}^{\circ}\) opening angle.
robustness of observed spherical tip plasmons between different tip morphologies. However a steady decrease in the field enhancement is observed once \\(d_{\\text{neck}}>0.4d_{\\text{sphere}}\\), decreasing faster once \\(d_{\\text{neck}}>0.8d_{\\text{sphere}}\\). This supports the claim that sharp tips cannot sustain antenna-like plasmons and that the majority of enhancement is from lightning rod effects. We note that the lateral spatial localisation of the field approaches \\(0.3d_{\\text{sphere}}\\) independent of this neck diameter.
These results demonstrate the importance of considering which plasmons might exist in a particular experiment and nanostructure geometry, and that it is vital to characterise nanostructures prior to their application. Apex nanostructuring can controllably introduce radiative plasmons into the tip geometry, lifting the evanescent illumination restriction of sharp tips and permitting use of a wider range of microscope configurations. While the lightning rod effect will always contribute to the field enhancement and favour sharp tips, exploiting resonant plasmonic enhancement in a carefully optimised spherical tip can further improve the near-field enhancement. The spherical tip geometry and materials shown here are optimised for use with the typically-used \\(633\\,\\mathrm{nm}\\) laser wavelengths.
Demonstrated interactions between spherical tip plasmons [36] also suggest coupling with an image charge in a planar surface is possible and could be used in nanometric tip-surface gaps to further localise the field on resonance with near infrared lasers. Exploiting radiative tip plasmons in this manner bridges the gap between SERS and conventional TERS, forming a spatially-mappable version of the highly successful nanoparticle-on-mirror geometry [37; 38]. These systems repeatedly produce Raman enhancements of up to \(10^{7}\) with nanometric mode volumes, much like tips, and demonstrate that plasmonic gaps can exhibit comparatively large field enhancements without relying only on the lightning rod effect.
Secondly, without prior knowledge of the tip-system spectral response it is difficult to properly interpret any measurements, such as TERS spectra. Improved tip characterisation is crucial to understanding variations in TERS spectra. Standard, wide-field microscopy/spectroscopy is not a particularly effective tool for optically characterising tips. Instead, confocal hyperspectral imaging provides a viable method for mapping the local scattering response while broadband tuneable SERS offers a unique way of optically characterising the near-field. Incorporating these techniques into existing microscopes is relatively simple and will greatly improve the reliability of tip-based near-field microscopy.
###### Acknowledgements.
The authors thank EPSRC grants EP/G060649/1 and EP/L027151/1, and ERC grant LINASS 320503 for funding and NanoTools for their services providing Au-coated spherical AFM tips. RWB thanks Queens' College and the Royal Commission for the Exhibition of 1851 for financial support.
## References
* Zhang _et al._ [2013]R. Zhang, Y. Zhang, Z. Dong, S. Jiang, C. Zhang, L. Chen, L. Zhang, Y. Liao, J. Aizpurua, Y. Luo, _et al._, Nature **498**, 82 (2013).
* Pettinger _et al._ [2012]B. Pettinger, P. Schambach, C. J. Villagomez, and N. Scott, Annual review of physical chemistry **63**, 379 (2012).
* Zhang _et al._ [2009]W. Zhang, X. Cui, and O. J. Martin, Journal of Raman Spectroscopy **40**, 1338 (2009).
* Stockman [2004]M. I. Stockman, Physical review letters **93**, 137404 (2004).
* Pile and Gramotnev [2006]D. Pile and D. K. Gramotnev, Applied Physics Letters **89**, 041111 (2006).
* Berweger _et al._ [2010]S. Berweger, J. M. Atkin, R. L. Olmon, and M. B. Raschke, The Journal of Physical Chemistry Letters **1**, 3427 (2010).
* Lee _et al._ [2011]J. S. Lee, S. Han, J. Shirdel, S. Koo, D. Sadiq, C. Lienau, and N. Park, Optics express **19**, 12342 (2011).
* Berweger _et al._ [2012]S. Berweger, J. M. Atkin, R. L. Olmon, and M. B. Raschke, The Journal of Physical Chemistry Letters **3**, 945 (2012).
* Lindquist _et al._ [2013]N. C. Lindquist, J. Jose, S. Cherukulappurath, X. Chen, T. W. Johnson, and S.-H. Oh, Laser & Photonics Reviews **7**, 453 (2013).
* Hamann _et al._ [1998]H. F. Hamann, A. Gallagher, and D. J. Nesbitt, Applied Physics Letters **73**, 1469 (1998).
* Neacsu _et al._ [2005]C. Neacsu, G. Steudle, and M. Raschke, Applied Physics B **80**, 295 (2005).
* Mehtani _et al._ [2006]D. Mehtani, N. Lee, R. Hartschuh, A. Kisliuk, M. Foster, A. Sokolov, F. Csajko, and I. Tsukerman, Journal of Optics A: Pure and Applied Optics **8**, S183 (2006).
* Barrios _et al._ [2009]C. A. Barrios, A. V. Malkovskiy, A. M. Kisliuk, A. P. Sokolov, and M. D. Foster, The Journal of Physical Chemistry C **113**, 8158 (2009).
* Pettinger _et al._ [2007]B. Pettinger, K. F. Domke, D. Zhang, R. Schuster, and G. Ertl, Physical Review B **76**, 113409 (2007).
* Pettinger _et al._ [2009]B. Pettinger, K. F. Domke, D. Zhang, G. Picardi, and R. Schuster, Surface Science **603**, 1335 (2009).
* Mino _et al._ [2014]T. Mino, Y. Saito, and P. Verma, ACS nano **8**, 10187 (2014).
* Hayazawa _et al._ [2001]N. Hayazawa, Y. Inouye, Z. Sekkat, and S. Kawata, Chemical Physics Letters **335**, 369 (2001).
* Bailo and Deckert [2008]E. Bailo and V. Deckert, Chemical Society Reviews **37**, 921 (2008).
* Hayazawa _et al._ [2012]N. Hayazawa, T.-a. Yano, and S. Kawata, Journal of Raman Spectroscopy **43**, 1177 (2012).
* Umakoshi _et al._ [2012]T. Umakoshi, T.-a. Yano, Y. Saito, and P. Verma, Applied Physics Express **5**, 052001 (2012).
* Kharintsev _et al._ [2013]S. Kharintsev, G. Hoffmann, A. Fishman, and M. K. Salakhov, Journal of Physics D: Applied Physics **46**, 145501 (2013).
* Uebel _et al._ [2013]P. Uebel, S. T. Bauerschmidt, M. A. Schmidt, and P. S. J. Russell, Applied Physics Letters **103**, 021101 (2013).
* Weber-Bargioni _et al._ [2010]A. Weber-Bargioni, A. Schwartzberg, M. Schmidt, B. Harteneck, D. Ogletree, P. Schuck, and S. Cabrini, Nanotechnology **21**, 065306 (2010).
* Fleischer _et al._ [2011]M. Fleischer, A. Weber-Bargioni, M. V. P. Altoe, A. M. Schwartzberg, P. J. Schuck, S. Cabrini, and D. P. Kern, ACS nano **5**, 2570 (2011).
* Maouli _et al._ [2015]I. Maouli, A. Taguchi, Y. Saito, S. Kawata, and P. Verma, Applied Physics Express **8**, 032401 (2015).
* Zou _et al._ [2009]Y. Zou, P. Steinvurzel, T. Yang, and K. B. Crozier, Applied Physics Letters **94**, 171107 (2009).
* Denisyuk _et al._ [2012]A. I. Denisyuk, M. A. Tinskaya, M. I. Petrov, A. V. Shelaev, and P. S. Dorozhkin, Journal of nanoscience and nanotechnology **12**, 8651 (2012).
* Huth _et al._ [2013]F. Huth, A. Chuvilin, M. Schnell, I. Amenabar, R. Krutokhvostov, S. Lopatin, and R. Hillenbrand, Nano letters **13**, 1065 (2013).
* Sanders _et al._ [2015]A. Sanders, L. Zhang, R. W. Bowman, L. O. Herrmann, and J. J. Baumberg, Particle & Particle Systems Characterization **32**, 182 (2015).
* Herrmann _et al._ [2013]L. Herrmann, V. Valev, J. Aizpurua, and J. J. Baumberg, Optics express **21**, 32377 (2013).
* Bashevoy _et al._ [2007]M. Bashevoy, F. Jonsson, Y. Chen, and N. Zheludev, Optics Express **15**, 11313 (2007).
* Stockman [2011]M. I. Stockman, Optics express **19**, 22029 (2011).
* Huber _et al._ [2014]C. Huber, A. Trugler, U. Hohenester, Y. Prior, and W. Kautek, Physical Chemistry Chemical Physics **16**, 2289 (2014).
* Lombardi _et al._ [2016]A. Lombardi, A. Demetriadou, L. Weller, P. Andrae, F. Benz, R. Chikkaraddy, J. Aizpurua, and J. J. Baumberg, ACS Photonics **3**, 471 (2016).
* Hugall and Baumberg [2015]J. T. Hugall and J. J. Baumberg, Physical Review Letters (2015).
* Savage _et al._ [2012]K. J. Savage, M. M. Hawkeye, R. Esteban, A. G. Borisov, J. Aizpurua, and J. J. Baumberg, Nature **491**, 574 (2012).
* Mertens _et al._ [2013]J. Mertens, A. L. Eiden, D. O. Sigle, F. Huang, A. Lombardo, Z. Sun, R. S. Sundaram, A. Colli, C. Tserkezis, J. Aizpurua, _et al._, Nano letters **13**, 5033 (2013).
* Taylor _et al._ [2014]R. W. Taylor, F. Benz, D. O. Sigle, R. W. Bowman, P. Bao, J. S. Roth, G. R. Heath, S. D. Evans, and J. J. Baumberg, Scientific reports **4** (2014).
# Target-less registration of point clouds: A review
Yue Pan
ETH Zurich, D-BAUG
[email protected]
## 1 Introduction
In recent decades, the point cloud has become an increasingly common representation of the 3D world. Point clouds collected by laser scanners or RGB-D cameras can be used for landslide monitoring [32], solar potential analysis [21], three-dimensional model reconstruction [3], cultural heritage protection [25], forest management [19][8], robot ego-localization [47] and high-definition map production for self-driving cars [45].
The issue is that point cloud collection is limited by the sensor's viewpoint, so it is not possible to achieve a complete 360-degree sampling of the target object's surface from a single viewpoint (station). Usually, we set up several stations around the target object to obtain a thorough scan. However, the scans from different viewpoints are each expressed in the coordinate system of their own station center. To unify these scans into a common mapping coordinate system, we must accomplish the so-called point cloud registration procedure, as shown in Fig.1.
As the basic step of point cloud processing and the prerequisite for segmentation, classification and 3D model reconstruction, point cloud registration plays an important role in various remote sensing and robotics applications. By adopting point cloud registration to obtain the transformation between two adjacent frames (scans), we can recover the change in pose of a robot or an unmanned vehicle. This is called LiDAR odometry, which is a hot topic in Simultaneous Localization and Mapping (SLAM) technology.
The traditional solution to point cloud registration uses highly-reflective targets as tie points for the coordinate system transformation. Since this solution needs the assistance of artificial targets and the manual picking of targets in the point clouds, the process is labor-consuming and time-consuming. To automate the process, plenty of target-less point cloud registration approaches have been proposed in the fields of remote sensing and computer vision over the past twenty years, so as to register point clouds together automatically.
In the literature, the task of point cloud registration generally follows a two-step workflow: determine correspondences and then estimate the transformation. The first step is correspondence determination. The correspondences can be geometric primitives like points, lines, planes and even specific objects. As preparation, we usually need to detect the key points, fit the key lines or planes, or extract the specific objects. Then we can extract neighborhood features and match those geometric primitives according to feature similarity. Alternatively, geometric or adjacency relationships can be adopted to obtain correspondences. Besides, we can repeatedly sample a minimum set of correspondences at random and finally choose the set which leads to the transformation with the largest number of inliers.
Figure 1: An example of registration of laser scans [38]

The second step is transformation estimation. Given the correspondences, our goal is to solve for the transformation (namely, translation and rotation) between two point clouds. Generally, we first define a reasonable target function with regard to the transformation parameters. It guarantees a good registration result when the function's value is minimized. Then we can minimize (optimize) the target function using methods like Singular Value Decomposition (SVD), Linear Least Squares (LLS) and also non-linear optimization algorithms such as Gauss-Newton and Levenberg-Marquardt. The transformation parameters corresponding to the minimal target function value are what we'd like to achieve.
The closely related studies are briefly reviewed and discussed as follows. In Section 2, common transformation estimation methods shared by various registration algorithms are reviewed. In Sections 3, 4 and 5, registration methods based on feature matching, iterative closest points and randomized hypothesize-and-verify are reviewed respectively. Section 6 consists of a summary of the introduced algorithms and an outlook on existing challenges and open questions.
## 2 Transformation estimation
### Target function
Given the corresponding points \(p_{i},q_{i}\) in the source (moving) point cloud and the target (referenced) point cloud, we'd like to estimate the transformation \(\{R^{*},t^{*}\}\) from the source to the target point cloud, as shown in Fig.2. The target function under the point-to-point distance metric can be written as Eq.1, which leads to the minimum sum of distances between correspondences after registration. In this case, at least three pairs of correspondences are needed.
\\[\\{R^{*},t^{*}\\}=\\underset{\\{R,t\\}}{\\arg\\min}\\left(\\sum_{i}\\|Rp_{i}+t-q_{i}\\|^{ 2}\\right) \\tag{1}\\]
If the geometric primitives used are planes instead of points, as shown in Fig.3, the target function is given by Eq.2. When fitting the planes, we can get the normal vector \(n\) and the distance from the coordinate origin to the plane \(\rho\). The registration target is to minimize the sum of differences of the normal vectors and of the distances between corresponding planes after updating the estimated transformation. In this case, still at least three pairs of corresponding planes are needed.

\[\{R^{*},t^{*}\}=\underset{\{R,t\}}{\arg\min}\left(\sum_{i}\left\|Rn_{i}^{(s)}-n_{i}^{(t)}\right\|^{2}+\left(\rho_{i}^{(s)}+t\cdot Rn_{i}^{(s)}-\rho_{i}^{(t)}\right)^{2}\right) \tag{2}\]
### Singular Value Decomposition (SVD)
A popular closed-form solution to the point-to-point target function Eq.1 is the method based on SVD [15]. Firstly, we calculate the centroids of the source and target point clouds (Eq.3) and the decentralized coordinates of all the correspondences (Eq.4). After that, we apply SVD (Eq.5) and obtain the rotation matrix \(R\) and translation vector \(t\) from the decomposed matrices as Eq.6.
\\[\\bar{p}=\\frac{1}{N}\\sum_{i=1}^{N}p_{i},\\bar{q}=\\frac{1}{N}\\sum_{i=1}^{N}q_{i} \\tag{3}\\]
\\[p_{i}^{{}^{\\prime}}=p_{i}-\\bar{p},\\ q_{i}^{{}^{\\prime}}=q_{i}-\\bar{q} \\tag{4}\\]
\\[U\\Sigma V^{T}=\\sum_{i=1}^{N}p_{i}^{\\prime}q_{i}^{T} \\tag{5}\\]
\\[R^{*}=VU^{T},\\ t^{*}=\\bar{q}-R^{*}\\bar{p} \\tag{6}\\]
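As a concrete sketch (not from the paper; function and variable names are illustrative), Eqs. 3-6 translate into a few lines of numpy. The determinant check is a standard practical safeguard against reflection solutions and is not part of Eq. 6 itself:

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Closed-form {R*, t*} minimizing Eq. 1 for corresponding points.

    P, Q: (N, 3) arrays of corresponding source / target points.
    """
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)   # Eq. 3: centroids
    P_c, Q_c = P - p_bar, Q - q_bar                 # Eq. 4: decentralized coords
    U, _, Vt = np.linalg.svd(P_c.T @ Q_c)           # Eq. 5: SVD of sum p'_i q'_i^T
    R = Vt.T @ U.T                                  # Eq. 6: R* = V U^T
    if np.linalg.det(R) < 0:                        # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = q_bar - R @ p_bar                           # Eq. 6: t* = q_bar - R* p_bar
    return R, t

# synthetic check: recover a known rotation about z and a known translation
angle = 0.7
K = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
R_true = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
t_true = np.array([1.0, -2.0, 0.5])
P = np.random.default_rng(0).standard_normal((50, 3))
Q = P @ R_true.T + t_true
R_est, t_est = estimate_rigid_transform(P, Q)
```

With noise-free correspondences the recovery is exact up to floating-point precision.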
### Linear Least Square (LLS)
Another popular solution to Eq.1 is linear least square parameter estimation [22]. Since we can do the approximation \\(\\sin\\left(\\alpha\\right)\\approx\\alpha\\ when\\ \\alpha\\to 0\\), the rotation matrix can be represented as Eq.7. Then we can construct the observation function as Eq.8, which can be arranged as Eq.9. Since the design matrix \\(A\\) and observation vector \\(l\\) can be calculated, the transformation parameters \\(\\hat{x}\\) can then be estimated as Eq.10. The rotation matrix is then restored from the rotation vector.
\\[R\\approx\\begin{bmatrix}1&-\\gamma&\\beta\\\\ \\gamma&1&-\\alpha\\\\ -\\beta&\\alpha&1\\end{bmatrix}=\\left[\\begin{array}{c}\\alpha\\\\ \\beta\\\\ \\gamma\\end{array}\\right]_{\\times}+I \\tag{7}\\]
\\[R\\approx\\begin{bmatrix}1&-\\gamma&\\beta\\\\ \\gamma&1&-\\alpha\\\\ -\\beta&\\alpha&1\\end{bmatrix}=\\left[\\begin{array}{c}\\alpha\\\\ \\beta\\\\ \\gamma\\end{array}\\right]_{\\times}+I \\tag{8}\\]\\[\\underbrace{\\left[\\begin{array}{c}v_{i}^{(x)}\\\\ v_{i}^{(y)}\\\\ v_{i}^{(z)}\\end{array}\\right]}_{v_{i}}=\\underbrace{\\left[\\begin{array}{ccccc}0&z_{i}^ {(p)}&-y_{i}^{(p)}&1&0&0\\\\ -z_{i}^{(p)}&0&x_{i}^{(p)}&0&1&0\\\\ y_{i}^{(p)}&-x_{i}^{(p)}&0&0&0&1\\end{array}\\right]}_{A_{i}}\\underbrace{\\left[ \\begin{array}{c}\\alpha\\\\ \\beta\\\\ \\gamma\\\\ t_{x}\\\\ t_{y}\\\\ t_{z}\\end{array}\\right]}_{x}\\] \\[\\underbrace{\\left[\\begin{array}{c}x_{i}^{(q)}-x_{i}^{(p)}\\\\ y_{i}^{(q)}-y_{i}^{(p)}\\\\ z_{i}^{(q)}-z_{i}^{(p)}\\end{array}\\right]}_{l} \\tag{9}\\]
\\[\\hat{x}=\\left[\\begin{array}{ccccc}\\alpha&\\beta&\\gamma&t_{x}&t_{y}&t_{z}\\end{array} \\right]^{T}=\\left(A^{T}A\\right)^{-1}A^{T}l \\tag{10}\\]
### Three plane correspondence method
As for the plane-to-plane target function Eq.2, a simple solution is the three-plane-plus-one-intersection-point method [20]. The selected planes have to be linearly independent and intersect at a unique point in order for the transformation parameters to be fully recovered. We can calculate the rotation matrix from the normal vectors as Eq.11 and Eq.12. The intersection point is calculated as Eq.13. The translation vector is then calculated from the vector between the corresponding intersection points of these planes as Eq.14.
\\[getRotation\\left(v_{1},v_{2}\\right)=I+\\left[v_{1}\\times v_{2}\\right]_{\\times}+ \\left[v_{1}\\times v_{2}\\right]_{\\times}^{2}\\frac{1-v_{1}\\cdot v_{2}}{\\left\\|v _{1}\\times v_{2}\\right\\|} \\tag{11}\\]
\\[\\begin{split} R_{1}=& getRotation\\left(n_{1}^{(s)},n_{1}^{(t)} \\right)\\\\ R_{2}=& getRotation\\left(R_{1}n_{2}^{(s)},n_{2}^{(t)} \\right)\\\\ R_{3}=& getRotation\\left(R_{2}R_{1}n_{3}^{(s)},n_{3}^{(t)} \\right)\\\\ R^{*}=& R_{3}R_{2}R_{1}\\end{split} \\tag{12}\\]
Figure 2: An example of transformation estimation with point correspondences.

Figure 3: An example of transformation estimation with plane correspondences.
\\[x_{int}^{(t)}=\\begin{bmatrix}a_{1}^{(t)}&b_{1}^{(t)}&c_{1}^{(t)}\\\\ a_{2}^{(t)}&b_{2}^{(t)}&c_{2}^{(t)}\\\\ a_{3}^{(t)}&b_{3}^{(t)}&c_{3}^{(t)}\\end{bmatrix}^{-1}\\Bigg{[}\\begin{array}{c }-d_{1}^{(t)}\\\\ -d_{2}^{(t)}\\\\ -d_{3}^{(t)}\\end{array}\\Bigg{]} \\tag{13}\\]
\[t^{*}=x_{int}^{(t)}-R^{*}x_{int}^{(s)} \tag{14}\]
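The procedure of Eqs. 11-14 can be sketched as below (planes written as \(n\cdot x+d=0\); names are illustrative; the translation uses the rotated source intersection point). The toy check uses three mutually orthogonal planes, for which the sequential normal alignment of Eq. 12 recovers the rotation exactly:

```python
import numpy as np

def get_rotation(v1, v2):
    """Minimal rotation aligning unit vector v1 onto v2 (Eq. 11)."""
    v = np.cross(v1, v2)
    s2, c = v @ v, v1 @ v2
    if s2 < 1e-12:               # (anti)parallel vectors: no unique axis;
        return np.eye(3)         # handle only the parallel case here
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K * (1 - c) / s2   # note the squared norm

def register_three_planes(src, tgt):
    """src, tgt: three corresponding planes (n, d) with n . x + d = 0."""
    R1 = get_rotation(src[0][0], tgt[0][0])                   # Eq. 12
    R2 = get_rotation(R1 @ src[1][0], tgt[1][0])
    R3 = get_rotation(R2 @ R1 @ src[2][0], tgt[2][0])
    R = R3 @ R2 @ R1
    def intersection(planes):                                 # Eq. 13
        A = np.array([n for n, _ in planes])
        ds = np.array([dd for _, dd in planes])
        return np.linalg.solve(A, -ds)
    t = intersection(tgt) - R @ intersection(src)             # Eq. 14
    return R, t

# three orthogonal source planes, transformed by a known {R, t}
src = [(np.array([1.0, 0.0, 0.0]), -1.0),
       (np.array([0.0, 1.0, 0.0]), 2.0),
       (np.array([0.0, 0.0, 1.0]), 0.5)]
u = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
K = np.array([[0, -u[2], u[1]], [u[2], 0, -u[0]], [-u[1], u[0], 0]])
a = 0.6
R_true = np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)
t_true = np.array([0.4, -0.2, 1.0])
# a plane n.x + d = 0 maps to (Rn).x + (d - (Rn).t) = 0 under x' = Rx + t
tgt = [(R_true @ n, d - (R_true @ n) @ t_true) for n, d in src]
R_est, t_est = register_three_planes(src, tgt)
```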
## 3 Feature matching based algorithms
### Feature matching workflow
Most of the feature matching based registration algorithms (as shown in Fig.4) follow a similar workflow.
Firstly, keypoint detectors such as Intrinsic Shape Signatures [48], 3D Harris [36] and local curvature maxima are employed to detect keypoints in the original point clouds. These keypoints are geometrically more significant, so more representative features can be extracted from them.
Secondly, local feature descriptors such as Spin Image [18], Fast Point Feature Histograms (FPFH) [33], SHOT descriptor [40], Rotational Projection Statistics (RoPS)[13], 3D Shape Context [39] and Binary Shape Context [10] are generated to encode the local neighborhood information of each keypoint. These feature descriptors should be invariant or insensitive to rigid transformation (translation and rotation) and have high precision and recall for matching. Several popular handcrafted feature descriptors are shown in Fig.5.
Recently, apart from these handcrafted features, some learned features using deep neural networks have appeared. A state-of-the-art point-based model is PointNet [31], which is able to learn descriptive point-wise features of the point cloud for classification and semantic segmentation. A better network structure for point feature extraction used for matching and registration is the so-called siamese network with a triplet loss function. An example is the Perfect Match Net [12], which outperforms all the existing handcrafted and learned features in matching accuracy and efficiency with an only about 16-dimensional learned feature. Since deep learning has already proved its superiority to traditional methods in both 2D and 3D computer vision, point cloud registration based on deep learning may eventually become the mainstream solution in the near future.
Thirdly, various feature matching strategies such as reciprocal nearest neighbor, nearest neighbor similarity ratio test and bipartite graph minimum weight match [27], as shown in Fig.6, are adopted to identify the initial matches.
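As an illustration of such strategies, a reciprocal nearest-neighbor check combined with a nearest-neighbor similarity ratio test can be sketched as follows (a generic sketch, not code from any cited method; brute-force distances for clarity, whereas a k-d tree would be used for large descriptor sets):

```python
import numpy as np

def match_features(desc_s, desc_t, ratio=0.8):
    """Match descriptors by nearest neighbor, keeping pairs that are both
    mutual (reciprocal) nearest neighbors and pass the ratio test.
    """
    d = np.linalg.norm(desc_s[:, None, :] - desc_t[None, :, :], axis=2)
    nn_st = d.argmin(axis=1)                 # best target for each source
    nn_ts = d.argmin(axis=0)                 # best source for each target
    matches = []
    for i, j in enumerate(nn_st):
        if nn_ts[j] != i:                    # not reciprocal -> reject
            continue
        first, second = np.partition(d[i], 1)[:2]
        if second > 0 and first > ratio * second:
            continue                         # 1st and 2nd NN too similar
        matches.append((i, int(j)))
    return matches

# toy check: three source descriptors are noisy copies of target rows 3, 0, 7
rng = np.random.default_rng(2)
desc_t = rng.standard_normal((10, 32))
desc_s = desc_t[[3, 0, 7]] + 1e-3 * rng.standard_normal((3, 32))
```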
However, there may still be many outlier matches (red lines in Fig.7) among them. The incorrect correspondences are then eliminated using methods such as RANSAC [11], geometric consistency constraints [43] or the Game Theory based matching algorithm [46]. Correspondence-based RANSAC selects the best correspondence triplet, namely the one which gives rise to the most inliers after transformation.

Figure 4: An example of feature matching based registration [14].

Figure 5: Popular handcrafted point features: (a) Spin Images (SI) [18], (b) Fast Point Feature Histogram (FPFH) [33], (c) 3D Shape Context (3DSC) [39], (d) SHOT descriptor [40], (e) Binary Shape Context (BSC) [10].

Figure 6: Feature matching by bipartite graph minimum weight match [27].
Finally, the spatial transformation between the point cloud pair is estimated based on the correspondences after filtering.
### Analysis and evaluation
Feature matching based registration methods are global registration approaches because no initial guess of the transformation is required. The drawback is that they are not accurate enough, since they are based on keypoints instead of the denser raw point cloud, so they are often regarded as coarse registration [9]. Only with a proper matching strategy and outlier filter can feature matching based methods be robust to noise, occlusion and low overlap ratio. Besides, these methods are usually time-consuming due to their complex feature extraction, matching and filtering procedures.
## 4 Iterative Closest Points based algorithms
### Classic ICP
The Iterative Closest Point (ICP) algorithm [2] is the most commonly used fine registration method due to its conceptual simplicity and high usability. With a good initial transformation, ICP accomplishes a locally optimal registration by alternately solving for point-to-point closest correspondences and the optimal rigid transformation until convergence. At the correspondence determination step, ICP simply takes the closest points of the source and target point clouds as correspondences. The transformation is then estimated from these closest points by minimizing Eq.1. The source point cloud is updated with the estimated transformation and new closest-point correspondences can be computed, so the aforementioned process is repeated iteratively, as shown in Fig.8(a).
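The classic ICP loop can be sketched as below, reusing the SVD solver of Eqs. 3-6 for the transformation step (names are illustrative; brute-force closest-point search for clarity, whereas real implementations use a k-d tree):

```python
import numpy as np

def fit_rigid(P, Q):
    """Closed-form {R, t} minimizing Eq. 1 (SVD solution, Eqs. 3-6)."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - p_bar).T @ (Q - q_bar))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, q_bar - R @ p_bar

def icp(source, target, max_iter=50, tol=1e-10):
    """Classic point-to-point ICP: alternate closest-point association and
    transform re-estimation until the mean residual stops improving."""
    R, t = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        moved = source @ R.T + t
        d = np.linalg.norm(moved[:, None, :] - target[None, :, :], axis=2)
        idx = d.argmin(axis=1)                    # closest-point matches
        err = d[np.arange(len(source)), idx].mean()
        if prev_err - err < tol:
            break
        prev_err = err
        R, t = fit_rigid(source, target[idx])     # re-estimate {R, t}
    return R, t

# toy check: a 3x3x3 grid under a small motion, identity initial guess
g = np.arange(-1.0, 2.0)
src = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T     # 27 grid points
a = 0.1
K = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
R_true = np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)
t_true = np.array([0.05, -0.02, 0.03])
tgt = src @ R_true.T + t_true
R_est, t_est = icp(src, tgt)
```

With a bad initial guess the same loop converges to a wrong local optimum, which is why ICP is preceded by a coarse registration in practice.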
### ICP variants
The variants of ICP mainly focus on different processing steps (correspondence determination, outlier correspondence rejection and transformation estimation target function construction) of classic ICP algorithm [28].
For correspondence determination, as shown in Fig.9, there are alternative principles like normal shooting, which is suitable for registration of smooth structures, and viewpoint projection, which is more efficient when the viewpoint is already known.
As for outlier correspondence rejection, as shown in Fig.10, we can use a correspondence distance threshold, normal vector compatibility checks and matching uniqueness to get rid of correspondence outliers. [7] proposes the Trimmed-ICP algorithm, which estimates the distance threshold according to the approximate overlap ratio.
There are also some variants of ICP that focus on the distance metrics [14] of the transformation estimation target function, as shown in Fig.11. In comparison with the point-to-point distance, the point-to-plane [6] and point-to-line [5] distance metrics perform better in scenarios with plenty of facades (planes) or pillars (lines). Their target functions for transformation estimation are listed in Eq.15 and Eq.16 respectively, in which \(n_{i}^{(p)}\) and \(n_{i}^{(q)}\) are the normal vectors of \(p_{i}\)'s and \(q_{i}\)'s neighborhoods. The idea of multiple distance metrics is applied in the state-of-the-art LiDAR odometry solution LOAM [47], which uses non-linear optimization to solve point-to-line and point-to-plane ICP. Furthermore, [34] proposed the Generalized ICP, which adopts the neighborhood covariance matrix to combine different distance metrics together. For these methods, neighborhood Principal Component Analysis (PCA) needs to be done to obtain the normal vectors as well as the neighborhood covariances.

Figure 7: An example of feature matching and correspondence filter [4].

Figure 8: Example of ICP algorithm: (a) successful registration over different iterations, (b) bad initial guess with too large a rotation difference, (c) failed registration result of ICP with bad initial guess.

Figure 9: ICP variants with different correspondence determining principles.
\\[\\{R^{*},t^{*}\\}=\\underset{\\{R,t\\}}{\\text{arg}\\min}\\left(\\sum_{i}\\|(Rp_{i}+t-q_{i}) \\cdot n_{i}^{(q)}\\|^{2}\\right) \\tag{15}\\]
\\[\\{R^{*},t^{*}\\}=\\underset{\\{R,t\\}}{\\text{arg}\\min}\\left(\\sum_{i}\\|(Rp_{i}+t-q_{ i})\\times\\widetilde{\\left(n_{i}^{(q)}\\times n_{i}^{(p)}\\right)}\\|^{2}\\right) \\tag{16}\\]
Since ICP tends to converge to a wrong local optimum given a bad initial transformation [29], as shown in Fig.8(b-c), other variants of ICP focus on broadening the basin of convergence and avoiding local optima. ICP with Invariant Features (ICPIF) [35] combines invariant features with the geometric distance in the closest-distance calculation. ICPIF is more likely to converge to the global optimum than ICP under ideal, noise-free conditions. [44] proposed the Globally Optimal ICP (Go-ICP), which integrates ICP with a branch-and-bound (BnB) scheme so that a coarse registration is not needed. However, Go-ICP is much more time-consuming than ICP and sensitive to outliers.
### Analysis and evaluation
In conclusion, the advantages and limitations of ICP are as follows. On one hand, ICP depends heavily on a good initialization, without which the algorithm is likely to be trapped in a local optimum; it is therefore a local registration method. On the other hand, ICP can achieve high registration accuracy when the rotation deviation from the ground truth is small, so ICP is often the preferred fine registration method. The general strategy for registering TLS point clouds is to apply a coarse registration method first and then use ICP to refine the coarse result. Besides, since ICP is fairly efficient and follows a simple and versatile processing structure, it is nowadays the most popular algorithm for SLAM-related applications [29].
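The basic loop can be sketched in a few lines of numpy: brute-force nearest neighbours for the correspondence step and an SVD-based closed-form absolute-orientation solve (Kabsch/Arun style) for the transformation step. This is a toy sketch in the easy regime (small motion, identical point sets), not production code:

```python
import numpy as np

def best_rigid(A, B):
    """Closed-form rigid (R, t) minimizing sum ||R a_i + t - b_i||^2
    via SVD, with a reflection-safe sign correction."""
    ca, cb = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

def icp(source, target, iters=50):
    """Minimal point-to-point ICP with brute-force nearest neighbours."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = source @ R.T + t
        # nearest neighbour in target for every moved source point
        d2 = ((moved[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid(source, target[d2.argmin(axis=1)])
    return R, t

# toy check: recover a known small rigid motion (good initial-guess regime)
rng = np.random.default_rng(2)
src = rng.normal(size=(30, 3))
ang = 0.15
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang), np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
tgt = src @ Rz.T + np.array([0.1, -0.05, 0.15])
R, t = icp(src, tgt)
err = np.abs(src @ R.T + t - tgt).max()
```

With a larger initial rotation the same loop readily falls into a local optimum, which is exactly the failure mode of Fig.8(b-c).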
## 5 Randomized hypothesize-and-verify based algorithms
### RANSAC without global correspondence determination
The most representative randomized hypothesize-and-verify algorithm is RANSAC [11]. Even without determining correspondences (feature matching) beforehand, RANSAC can be applied to find the largest common point set and hence the correct registration: randomly select three different points from the source point cloud and three from the target point cloud to form a group of correspondence bases, estimate the candidate transformation that registers the base pair, and then count the number of points of the transformed source point cloud that lie within an inlier distance threshold of their nearest points in the target point cloud. The transformation estimated from the base pair with the most points within the distance threshold is finally accepted.
Figure 11: ICP variants with different distance metrics [14].
Figure 10: ICP variants with different correspondence rejection methods [14].
The problem is efficiency, as dictated by the minimum number of iterations required for a trustworthy sample set, given in Eq.17, in which \\(r\\) is the point inlier ratio, \\(N\\) is the sample number (6 here), \\(0.99\\) is the confidence and \\(M\\) is the trial number.
\\[p=1-\\left(1-r^{N}\\right)^{M}>0.99 \\tag{17}\\]
A better solution is to pick the base points randomly from the source point cloud only, and efficiently search for geometrically congruent corresponding points in the target point cloud. Although the time complexity of congruent group searching is \\(O(n^{3})\\), the sample number \\(N\\) decreases to 3 and the inlier ratio \\(r\\) increases considerably, leading to a much smaller total number of trials.
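Plugging numbers into Eq.17 makes the effect concrete (the inlier ratios below are purely illustrative): lowering the sample number from N = 6 to N = 3 while raising the effective inlier ratio shrinks the required trial number by orders of magnitude.

```python
import math

def min_trials(r, N, confidence=0.99):
    """Minimum trial number M so that the success probability
    p = 1 - (1 - r^N)^M exceeds the confidence (Eq. 17)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - r ** N))

M6 = min_trials(r=0.3, N=6)   # correspondence-free six-point sampling
M3 = min_trials(r=0.6, N=3)   # congruent-base three-point sampling
```

With these hypothetical ratios, M drops from thousands of trials to a few dozen, which is why congruent-base searching pays off despite its own cost.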
### 4-Points Congruent Sets (4PCS) algorithm
The 4-Points Congruent Sets (4PCS) algorithm [1] adopts an \\(O(n^{2})\\) algorithm [16] for searching affine-invariant coplanar 4-point groups across point sets to improve congruent group searching efficiency, exploiting the fact that rigid transformations are a subset of affine transformations. 4PCS determines corresponding four-point base sets by taking advantage of the intersection distance ratios of these four points, which are invariant under the transformation, as shown in Fig.12 and Fig.13.
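The invariance the algorithm relies on can be checked numerically: for a coplanar base \\(\{a,b,c,d\}\\) whose segments \\(ab\\) and \\(cd\\) intersect at \\(e\\), the ratios \\(r_{1}=\|a-e\|/\|a-b\|\\) and \\(r_{2}=\|c-e\|/\|c-d\|\\) are unchanged by any rigid transform. A toy sketch with made-up coordinates:

```python
import numpy as np

def intersection_ratios(a, b, c, d):
    """Solve a + r1*(b - a) = c + r2*(d - c) in the least-squares sense
    and return the two intersection ratios (r1, r2)."""
    A = np.stack([b - a, c - d], axis=1)        # 3x2 linear system
    r, *_ = np.linalg.lstsq(A, c - a, rcond=None)
    return r

a, b = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
c, d = np.array([0.4, -0.5, 0.0]), np.array([0.4, 1.0, 0.0])

ang = 0.7                                       # arbitrary rigid motion
R = np.array([[np.cos(ang), -np.sin(ang), 0.0],
              [np.sin(ang), np.cos(ang), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 2.0, 3.0])
before = intersection_ratios(a, b, c, d)
after = intersection_ratios(R @ a + t, R @ b + t, R @ c + t, R @ d + t)
```

Because the ratios survive the unknown transformation, they can be used to index candidate bases in the target cloud without knowing the pose.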
Since 4PCS performs well on challenging global registration cases but is still time-consuming due to the huge total number of points, more efficient variants of 4PCS have been proposed recently. Super4PCS [24] decreases the time complexity of congruent set searching from \\(O(n^{2})\\) to \\(O(n)\\) by smart indexing of points. K-4PCS [37] operates on significant keypoints instead of the raw point cloud, thereby decreasing the number of processed points.
The randomized hypothesize-and-verify strategy can also be used with plane correspondences. [30] uses RANSAC for plane-correspondence-based registration. [42] proposed V4PCS, a plane-based version of 4PCS: a plane primitive is fitted in each voxel, and the invariant intersection angles between the normal vectors of a 4-plane set are exploited to speed up the congruent plane searching procedure.
### Analysis and evaluation
These random-sample-based algorithms do not need an initial transformation guess either, so they can perform global registration. Since candidates are checked under a certain confidence, these methods are somewhat robust to noise, outliers and similar (repetitive) structures, although the total trial number also increases under such circumstances because the likelihood of picking outlier-free subsets degrades rapidly. Besides, these algorithms are mostly coarse registration solutions, since the final transformation, estimated from a base pair with only the minimum required number of correspondences, is not accurate enough.
## 6 Summary and Outlook
Apart from the aforementioned methods, there are also some probability-based methods that do not follow the correspondence determination and transformation estimation workflow. These probability-based algorithms fit some kind of probability distribution to the target point cloud and then maximize the product of the probabilities of the transformed points of the source point cloud under that distribution. Examples are the 3D Normal Distribution Transformation (NDT) algorithm [23], the Coherent Point Drift (CPD) algorithm [26] and the Gaussian Mixture Model Registration (GMMReg) algorithm [17]. NDT has been widely used in LiDAR-assisted localization. Since there is no exact one-to-one correspondence between two point clouds, due to measurement noise and sampling, the probability-based strategy can handle this problem better than correspondence-based methods. However, apart from NDT, these probability-based methods are still not robust enough to handle registration in large-scale real-world scenarios.
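The idea behind NDT can be sketched in a few lines: fit one Gaussian per occupied voxel of the target cloud, then score a candidate pose by the summed log-density of the transformed source points. This is a deliberately simplified toy version (real NDT also optimizes the pose, e.g. with Newton's method, and handles empty cells more carefully):

```python
import numpy as np

def ndt_fit(target, voxel=1.0):
    """Fit a Gaussian (mean, inverse covariance) per occupied voxel."""
    cells = {}
    for p in target:
        cells.setdefault(tuple(np.floor(p / voxel).astype(int)), []).append(p)
    model = {}
    for k, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) < 5:                          # too few points for a stable fit
            continue
        cov = np.cov(pts.T) + 1e-3 * np.eye(3)    # regularized covariance
        model[k] = (pts.mean(0), np.linalg.inv(cov), np.log(np.linalg.det(cov)))
    return model

def ndt_score(model, source, R, t, voxel=1.0):
    """Summed Gaussian log-density of the transformed source points."""
    score = 0.0
    for p in source @ R.T + t:
        key = tuple(np.floor(p / voxel).astype(int))
        if key not in model:
            score -= 50.0                         # crude penalty for empty cells
            continue
        mu, icov, logdet = model[key]
        d = p - mu
        score -= 0.5 * (d @ icov @ d + logdet)
    return score

# toy check: the correct pose scores higher than a shifted one
rng = np.random.default_rng(3)
centers = np.array([[i + 0.5, j + 0.5, 0.5] for i in range(4) for j in range(4)])
target = np.vstack([c + 0.1 * rng.normal(size=(20, 3)) for c in centers])
model = ndt_fit(target)
good = ndt_score(model, target, np.eye(3), np.zeros(3))
bad = ndt_score(model, target, np.eye(3), np.array([0.5, 0.0, 0.0]))
```

The per-voxel Gaussians give a smooth, differentiable surrogate for "how well does this pose explain the target", which is what makes gradient-based pose refinement possible without explicit correspondences.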
Generally speaking, for practical registration cases with many scans, we often follow the 'coarse to fine' and 'pairwise to multi-view' processing idea, as shown in Fig.14. Coarse registration algorithms such as feature matching and 4PCS are applied first, and then fine registration algorithms such as ICP and NDT are adopted to refine the former result. A global adjustment strategy such as pose graph optimization [38][41] is then usually used to jointly register multiple scans together and minimize the misclosure.

Figure 12: 4PCS algorithm: an example of 4-points congruent set pair [1].

Figure 13: 4PCS algorithm: an example of congruent set match using the invariant intersection distance ratio [1].
Although there are many different target-less registration methods suitable for different kinds of datasets, they still face some common challenges, namely the huge number of points, low overlap rates, the existence of clutter, occlusion and noise, as well as repetitive structures. Besides, the trade-off between accuracy and efficiency is still a big issue in practice. These challenges will be the main focus of future improvements of registration methods.
There are some other open questions to solve in the near future. For example, cross-platform registration (such as between ALS and TLS point clouds) is still a challenging problem due to the huge differences in perspective, range and point density. Besides, the low-overlap registration or shape matching problem, which can be very useful in digital cultural relic restoration, still awaits good solutions.
## References
* [1] D. Aiger, N. J. Mitra, and D. Cohen-Or. 4-points congruent sets for robust pairwise surface registration. _ACM Transactions on Graphics (TOG)_, 27(3):1-10, 2008.
* [2] P. Besl and N. D. McKay. A method for registration of 3-d shapes. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 14(2):239-256, 1992.
* [3] F. Biljecki, J. Stoter, H. Ledoux, S. Zlatanova, and A. Coltekin. Applications of 3d city models: State of the art review. _ISPRS International Journal of Geo-Information_, 4(4):2842-2889, 2015.
* [4] A. P. Bustos and T.-J. Chin. Guaranteed outlier removal for point cloud registration with correspondences. _IEEE transactions on pattern analysis and machine intelligence_, 40(12):2868-2882, 2017.
* [5] A. Censi. An icp variant using a point-to-line metric. In _2008 IEEE International Conference on Robotics and Automation_, 2008.
* [6] Y. Chen and G. Medioni. Object modelling by registration of multiple range images. _Image and vision computing_, 10(3):145-155, 1992.
* [7] D. Chetverikov, D. Svirko, D. Stepanov, and P. Krsek. The trimmed iterative closest point algorithm. In _Object recognition supported by user interaction for service robots_, volume 3, pages 545-548. IEEE, 2002.
* [8] W. Dai, B. Yang, X. Liang, Z. Dong, R. Huang, Y. Wang, and W. Li. Automated fusion of forest airborne and terrestrial point clouds through canopy density analysis. _ISPRS Journal of Photogrammetry and Remote Sensing_, 156:94-107, 2019.
* [9] Y. Diez, F. Roure, X. Llado, and J. Salvi. A qualitative review on 3d coarse registration methods. _ACM Computing Surveys (CSUR)_, 47(3):45, 2015.
* [10] Z. Dong, B. Yang, Y. Liu, F. Liang, B. Li, and Y. Zang. A novel binary shape context for 3d local surface description. _ISPRS Journal of Photogrammetry and Remote Sensing_, 130:431-452, 08 2017.
* [11] M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. _Communications of the ACM_, 24(6):381-395, 1981.
* [12] Z. Gojcic, C. Zhou, J. D. Wegner, and A. Wieser. The perfect match: 3d point cloud matching with smoothed densities. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 5545-5554, 2019.
* [13] Y. Guo, F. Sohel, M. Bennamoun, M. Lu, and J. Wan. Rotational projection statistics for 3d local surface description and object recognition. _International journal of computer vision_, 105(1):63-86, 2013.
* [14] D. Holz, A. E. Ichim, F. Tombari, R. B. Rusu, and S. Behnke. Registration with the point cloud library: A modular framework for aligning in 3-d. _IEEE Robotics & Automation Magazine_, 22(4):110-124, 2015.
* [15] B. K. Horn. Closed-form solution of absolute orientation using unit quaternions. _Josa a_, 4(4):629-642, 1987.
* [16] D. P. Huttenlocher. Fast affine point matching: An output-sensitive method. In _Proceedings. 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition_, pages 263-268. IEEE, 1991.
* [17] B. Jian and B. C. Vemuri. Robust point set registration using gaussian mixture models. _IEEE transactions on pattern analysis and machine intelligence_, 33(8):1633-1645, 2010.
* [18] A. E. Johnson and M. Hebert. Using spin images for efficient object recognition in cluttered 3d scenes. _IEEE Transactions on pattern analysis and machine intelligence_, 21(5):433-449, 1999.
* [19] D. Kelbe, J. van Aardt, P. Romanczyk, M. van Leeuwen, and K. Cawse-Nicholson. Marker-free registration of forest terrestrial laser scanner data pairs with embedded confidence metrics. _IEEE Transactions on Geoscience and Remote Sensing_, 54(7):4314-4330, July 2016.
* [20] P. Kim, J. Chen, and Y. K. Cho. Automated point cloud registration using visual and planar features for construction environments. _Journal of Computing in Civil Engineering_, 32(2):04017076, 2017.
Figure 14: Common point cloud registration workflow: from pairwise to multi-view, from coarse to fine [38].
* [21] F. Liang and B. Yang. Multilevel solar potential analysis of building based on ubiquitous point clouds. In _2018 26th International Conference on Geoinformatics_, pages 1-4. IEEE, 2018.
* [22] K.-L. Low. Linear least-squares optimization for point-to-plane icp surface registration. _Chapel Hill, University of North Carolina_, 4(10):1-3, 2004.
* [23] M. Magnusson, A. Lilienthal, and T. Duckett. Scan registration for autonomous mining vehicles using 3d-ndt. _Journal of Field Robotics_, 24(10):803-827, 2007.
* [24] N. Mellado, D. Aiger, and N. J. Mitra. Super 4pcs fast global pointcloud registration via smart indexing. In _Computer Graphics Forum_, volume 33, pages 205-215. Wiley Online Library, 2014.
* [25] A. Montuori, G. Luzi, S. Stramondo, G. Casula, C. Bignami, E. Bonali, M. G. Bianchi, and M. Crosetto. Combined use of ground-based systems for cultural heritage conservation monitoring. In _2014 IEEE Geoscience and Remote Sensing Symposium_, pages 4086-4089, July 2014.
* [26] A. Myronenko and X. Song. Point set registration: Coherent point drift. _IEEE transactions on pattern analysis and machine intelligence_, 32(12):2262-2275, 2010.
* [27] Y. Pan, B. Yang, F. Liang, and Z. Dong. Iterative global similarity points : A robust coarse-to-fine integration solution to pairwise 3d point cloud registration. _In 2018 International Conference on 3D Vision(3DV)_, 2018.
* [28] F. Pomerleau, F. Colas, R. Siegwart, and S. Magnenat. Comparing icp variants on real-world data sets. _Autonomous Robots_, 34(3):133-148, 2013.
* [29] F. Pomerleau, F. Colas, R. Siegwart, et al. A review of point cloud registration algorithms for mobile robotics. _Foundations and Trends® in Robotics_, 4(1):1-104, 2015.
* [30] M. Previtali, L. Barazzetti, R. Brumana, and M. Scaioni. Scan registration using planar features. _The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences_, 40(5):501, 2014.
* [31] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 652-660, 2017.
* [32] K. Rowlands, L. D. Jones, and M. Whitworth. Landslide laser scanning: a new look at an old problem. _Quarterly Journal of Engineering Geology and Hydrogeology_, 36(2):155-157, 2003.
* [33] R. B. Rusu, N. Blodow, and M. Beetz. Fast point feature histograms (fpfh) for 3d registration. In _2009 IEEE International Conference on Robotics and Automation_, pages 3212-3217, May 2009.
* [34] A. Segal, D. Haehnel, and S. Thrun. Generalized-icp. In _Robotics: science and systems_, volume 2, page 435. Seattle, WA, 2009.
* [35] G. C. Sharp, S. W. Lee, and D. K. Wehe. Icp registration using invariant features. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 24(1):90-102, 2002.
* [36] I. Sipiran and B. Bustos. Harris 3d: a robust extension of the harris operator for interest point detection on 3d meshes. _The Visual Computer_, 27(11):963, 2011.
* [37] P. W. Theiler, J. D. Wegner, and K. Schindler. Keypoint-based 4-points congruent sets-automated marker-less registration of laser scans. _ISPRS journal of photogrammetry and remote sensing_, 96:149-163, 2014.
* [38] P. W. Theiler, J. D. Wegner, and K. Schindler. Globally consistent registration of terrestrial laser scans via graph optimization. _ISPRS journal of photogrammetry and remote sensing_, 109:126-138, 2015.
* [39] F. Tombari, S. Salti, and L. Di Stefano. Unique shape context for 3d data description. In _Proceedings of the ACM workshop on 3D object retrieval_, pages 57-62. ACM, 2010.
* [40] F. Tombari, S. Salti, and L. Di Stefano. Unique signatures of histograms for local surface description. In _European conference on computer vision_, pages 356-369. Springer, 2010.
* [41] D. Wujanz, L. Barazzetti, M. Previtali, M. Scaioni, et al. A comparative study among three registration algorithms: Performance, quality assurance and accuracy. In _8th International Workshop on 3D Virtual Reconstruction and Visualization of Complex Architectures, 3D-ARCH 2019_, volume 42, pages 779-786. Copernicus GmbH, 2019.
* [42] Y. Xu, R. Boerner, W. Yao, L. Hoegner, and U. Stilla. Pairwise coarse registration of point clouds in urban scenes using voxel-based 4-planes congruent sets. _ISPRS journal of photogrammetry and remote sensing_, 151:106-123, 2019.
* [43] B. Yang, Z. Dong, F. Liang, and Y. Liu. Automatic registration of large-scale urban scene point clouds based on semantic feature points. _ISPRS Journal of Photogrammetry and Remote Sensing_, 113:43-58, 2016.
* [44] J. Yang, H. Li, D. Campbell, and Y. Jia. Go-icp: A globally optimal solution to 3d icp point-set registration. _IEEE transactions on pattern analysis and machine intelligence_, 38(11):2241-2254, 2015.
* [45] S. Yang, X. Zhu, X. Nian, L. Feng, X. Qu, and T. Mal. A robust pose graph approach for city scale lidar mapping. In _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, pages 1175-1182. IEEE, 2018.
* [46] D. Zai, J. Li, Y. Guo, M. Cheng, P. Huang, X. Cao, and C. Wang. Pairwise registration of tls point clouds using covariance descriptors and a non-cooperative game. _ISPRS Journal of Photogrammetry and Remote Sensing_, 134:15-29, 2017.
* [47] J. Zhang and S. Singh. Loam: Lidar odometry and mapping in real-time. In _Robotics: Science and Systems_, volume 2, page 9, 2014.
* [48] Y. Zhong. Intrinsic shape signatures: A shape descriptor for 3d object recognition. In _2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops_, pages 689-696. IEEE, 2009. | Point cloud registration has been one of the basic steps of point cloud processing, which has a lot of applications in remote sensing and robotics. In this report, we summarized the basic workflow of target-less point cloud registration,namely correspondence determination and transformation estimation. Then we reviewed three commonly used groups of registration approaches, namely the feature matching based methods, the iterative closest points algorithm and the randomly hypothesis and verify based methods. Besides, we analyzed the advantage and disadvantage of these methods are introduced their common application scenarios. At last, we discussed the challenges of current point cloud registration methods and proposed several open questions for the future development of automatic registration approaches. | Write a summary of the passage below. | 129 |
## 1 Introduction
Synthetic Aperture Radar (SAR) is a coherent imaging system and as such it strongly suffers from the presence of speckle, a signal-dependent granular noise. Speckle noise makes SAR images difficult to interpret, hampering the effectiveness of scene analysis algorithms for, e.g., image segmentation, detection and recognition. Several despeckling methods for SAR images have been proposed, working either in the spatial or in a transform domain. The first attempts at despeckling employed filtering-based techniques operating in the spatial domain, such as the Lee filter [1], Frost filter [2], Kuan filter [3], and Gamma-MAP filter [4]. Wavelet-based methods [5, 6] enabled multi-resolution analysis. More recently, non-local filtering methods attempted to exploit self-similarities and contextual information. A combination of the non-local approach, wavelet-domain shrinkage and Wiener filtering in a two-step process led to SAR-BM3D [7], a SAR-oriented version of BM3D [8].
In recent years, deep learning techniques have set the benchmark in many image processing tasks, achieving exceptional results in problems such as image restoration [9], super-resolution [10], and semantic segmentation [11]. Recently, some despeckling methods based on convolutional neural networks (CNNs) have been proposed [12, 13], attempting to leverage the feature learning capabilities of CNNs. Such methods use a supervised training approach where the network weights are optimized by minimizing a distance metric between noisy inputs and clean targets. However, clean SAR images do not exist, and supervised training methods resort to synthetic datasets where optical images are used as ground truth and their artificially speckled versions as noisy inputs. This creates a domain gap between the features of synthetic training data and those of real SAR images, possibly leading to the presence of artifacts or poor preservation of radiometric features. SAR-CNN [13] addressed this problem by averaging multi-temporal SAR data of the same scene to obtain a ground truth. However, the acquisition of multi-temporal data, scene registration and robustness to scene variations can be challenging.
Self-supervised denoising methods represent an alternative to train CNNs without having access to the clean images. Noise2Noise [14] proposed to use pairs of images with the same content but independent noise realizations. This method is not suitable for SAR despeckling due to the difficulty of accessing multiple images of the same scene with independently drawn noise realizations. Noise2Void [15] further relaxes the constraints on the dataset, requiring only a single noisy version of the training images, by introducing the concept of blind-spot networks. Assuming spatially uncorrelated noise, and excluding the center pixel from the receptive field of the network, the network learns to predict the value of the center pixel from its receptive field by minimizing the \\(\\ell_{2}\\) distance between the prediction and the noisy value. The network is prevented from learning the identity mapping because the pixel to be predicted is removed from the receptive field. The blind-spot scheme used in Noise2Void [15] is carried out by a simple masking method, keeping only a few pixels active in the learning process. Laine et al. [16] devised a novel convolutional blind-spot network architecture capable of processing the entire image at once, increasing efficiency. They also introduced a Bayesian framework to include noise models and priors on the conditional distribution of the blind spot given the receptive field.
In this paper, we use the self-supervised Bayesian denoising with blind-spot networks proposed in [16], adapting the model to the noise and image statistics of SAR images, thus enabling direct training on real SAR images. Our method bypasses the problem of training a CNN on synthetically-speckled optical images and using it to denoise SAR images: in general, transferring knowledge from optical to SAR images is a very difficult task, as imaging geometries and content are quite dissimilar due to the different imaging mechanisms. To the best of our knowledge, this is the first self-supervised method to deal with real SAR images.
## 2 Background
CNN denoising methods estimate the clean image by learning a function that takes each noisy pixel and combines its value with the local neighboring pixel values (receptive field) by means of multiple convolutional layers interleaved with non-linearities. From a statistical inference perspective, a CNN is a point estimator of \\(p(x_{i}|y_{i},\\Omega_{y_{i}})\\), where \\(x_{i}\\) is the \\(i^{th}\\) clean pixel, \\(y_{i}\\) is the \\(i^{th}\\) noisy pixel and \\(\\Omega_{y_{i}}\\) represents the receptive field composed of the noisy neighboring pixels, excluding \\(y_{i}\\) itself. Noise2Void predicts the clean pixel \\(x_{i}\\) by relying solely on the neighboring pixels and using \\(y_{i}\\) as a noisy target. In the presence of Gaussian noise and with the \\(\\ell_{2}\\) loss, the CNN learns to produce an estimate of \\(\\mathbb{E}_{x_{i}}[x_{i}|\\Omega_{y_{i}}]\\). The drawback of Noise2Void is that the value of the noisy pixel \\(y_{i}\\) is never used to compute the clean estimate.
The Bayesian framework devised by Laine et al. [16] explicitly introduces the noise model \\(p(y_{i}|x_{i})\\) and conditional pixel prior given the receptive field \\(p(x_{i}|\\Omega_{y_{i}})\\) as follows:
\\[p(x_{i}|y_{i},\\Omega_{y_{i}})\\propto p(y_{i}|x_{i})p(x_{i}|\\Omega_{y_{i}}).\\]
The role of the CNN is to predict the parameters of the chosen prior \\(p(x_{i}|\\Omega_{y_{i}})\\). The denoised pixel is then obtained as the MMSE estimate, i.e., it seeks to find \\(\\mathbb{E}_{x_{i}}[x_{i}|y_{i},\\Omega_{y_{i}}]\\). Under the assumption that the noise is pixel-wise i.i.d., the CNN is trained so that the data likelihood \\(p(y_{i}|\\Omega_{y_{i}})\\) for each pixel is maximized. The main difficulty involved with this technique is the definition of a suitable prior distribution that, when combined with the noise model, allows for close-form posterior and likelihood distributions. We also remark that while imposing a handcrafted distribution as \\(p(x_{i}|\\Omega_{y_{i}})\\) may seem very limiting, it is actually not since i) that is the _conditional_ distribution given the receptive field rather than the raw pixel distribution, and ii) its hyperparameters are predicted by a powerful CNN on a pixel-by-pixel basis.
## 3 Proposed Method
Following the notation in Sec. 2, this section presents the Bayesian model we adopt for SAR despeckling and the training procedure. A summary is shown in Fig. 1.
### Model
We consider the multiplicative SAR speckle noise model: \\(y_{i}=n_{i}x_{i}\\) where \\(x\\) represents the unobserved clean image and \\(n\\) the uncorrelated multiplicative speckle. Concerning noise modeling, we choose the widely-used \\(\\Gamma(L,L)\\) distribution for an \\(L\\)-look image. We model the conditional prior distribution given the receptive field as an inverse Gamma distribution with shape \\(\\alpha_{x_{i}}\\) and scale \\(\\beta_{x_{i}}\\):
\\[p(x_{i}|\\Omega_{y_{i}})=\\mathrm{inv}\\Gamma(\\alpha_{x_{i}},\\beta_{x_{i}}),\\]
where \\(\\alpha_{x_{i}}\\) and \\(\\beta_{x_{i}}\\) depend on \\(\\Omega_{y_{i}}\\), since they are the outputs of the CNN at pixel \\(i\\). For the chosen prior and noise models, the posterior distribution is also an inverse Gamma:
\\[p(x_{i}|y_{i},\\Omega_{y_{i}})=\\mathrm{inv}\\Gamma(L+\\alpha_{x_{i}},\\beta_{x_{i }}+Ly_{i}). \\tag{1}\\]
Finally, the noisy data likelihood \\(p(y_{i}|\\Omega_{y_{i}})\\) can be obtained in closed form:
\\[p(y_{i}|\\Omega_{y_{i}})=\\frac{L^{L}y_{i}^{L-1}}{\\beta_{x_{i}}^{-\\alpha_{x_{i} }}Beta(L,\\alpha_{x_{i}})(\\beta_{x_{i}}+Ly_{i})^{L+\\alpha_{x_{i}}}},\\]
with the Beta function defined as \\(Beta(L,\\alpha_{x_{i}})=\\frac{\\Gamma(L)\\Gamma(\\alpha_{x_{i}})}{\\Gamma(L+\\alpha_{x_{i}})}\\). This distribution is also known as the \\(G_{I}^{0}\\) distribution introduced in [17]. It has been observed to be a good model of highly heterogeneous SAR data in intensity format, such as urban areas, primary forests and deforested areas.
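The \\(\\Gamma(L,L)\\) speckle model assumed above is easy to simulate and check numerically (the values below are illustrative only): the noise has unit mean and variance \\(1/L\\), so the equivalent number of looks, mean\\({}^{2}\\)/variance, of a homogeneous noisy region recovers \\(L\\).

```python
import numpy as np

rng = np.random.default_rng(0)
L, x = 4, 10.0                                  # looks and clean intensity
n = rng.gamma(shape=L, scale=1.0 / L, size=1_000_000)   # Gamma(L, L) speckle
y = n * x                                       # multiplicative noise model
enl = y.mean() ** 2 / y.var()                   # should be close to L
```

Note that numpy's gamma sampler is parameterized by shape and scale, so \\(\\Gamma(L,L)\\) (rate \\(L\\)) corresponds to `scale = 1/L`.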
### Training
The training procedure learns the weights of the blind-spot CNN, which is used to produce the estimates for parameters \\(\\alpha_{x_{i}}\\) and \\(\\beta_{x_{i}}\\) of the inverse gamma distribution \\(p(x_{i}|\\Omega_{y_{i}})\\). We refer the reader to [16] on how to implement a CNN so that it has a central blind spot. The blind-spot CNN is trained to minimize the negative log likelihood \\(p(y_{i}|\\Omega_{y_{i}})\\) for each pixel, so that the estimates of \\(\\alpha_{x_{i}}\\) and \\(\\beta_{x_{i}}\\) fit the noisy observations. Our loss function is as follows:
\\[l=-\\sum_{i}\\log p(y_{i}|\\Omega_{y_{i}}).\\]
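A direct transcription of this loss for a single pixel, using the closed-form likelihood above, may look as follows. This is a sketch with scalar inputs; a real implementation would vectorize over the image, e.g. with `scipy.special.gammaln` in place of `math.lgamma`:

```python
import numpy as np
from math import lgamma

def neg_log_likelihood(y, alpha, beta, L):
    """-log p(y | Omega) for Gamma(L, L) noise and an
    inverse-Gamma(alpha, beta) conditional prior (single pixel)."""
    log_p = (L * np.log(L) + (L - 1) * np.log(y) + alpha * np.log(beta)
             - (lgamma(L) + lgamma(alpha) - lgamma(L + alpha))   # log Beta(L, alpha)
             - (L + alpha) * np.log(beta + L * y))
    return -log_p
```

The CNN outputs \\(\\alpha_{x_{i}}\\) and \\(\\beta_{x_{i}}\\) per pixel, and this quantity, summed over pixels, is what gets minimized.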
### Testing
In testing, the blind-spot CNN processes the SAR image to estimate \\(\\alpha_{x_{i}}\\) and \\(\\beta_{x_{i}}\\) for each pixel. The despeckled image is then obtained through the MMSE estimator, i.e., the expected value of the posterior distribution in Eq. (1):
\\[\\hat{x}_{i}=\\mathbb{E}[x_{i}|y_{i},\\Omega_{y_{i}}]=\\frac{\\beta_{x_{i}}+Ly_{i} }{L+\\alpha_{x_{i}}-1}.\\]
Figure 1: Scheme depicting the training and the testing phases.
Notice that this estimator combines both the per-pixel prior estimated by the CNN and the noisy realization.
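A quick numerical sanity check of the conjugate update and of this estimator, with illustrative parameter values (not taken from the paper's experiments): the closed-form posterior mean should agree with brute-force numerical integration of likelihood times prior.

```python
import numpy as np
from math import lgamma

alpha, beta, L, y = 3.0, 2.0, 4, 1.5

# closed-form MMSE estimate: mean of the invGamma(L+alpha, beta+L*y) posterior
x_hat = (beta + L * y) / (L + alpha - 1)

# numerical check: E[x|y] = int x p(y|x) p(x) dx / int p(y|x) p(x) dx
x = np.linspace(1e-3, 60.0, 400_000)
log_joint = (L * np.log(L) + (L - 1) * np.log(y) - lgamma(L)
             - L * np.log(x) - L * y / x            # Gamma(L, L) likelihood
             + alpha * np.log(beta) - lgamma(alpha)
             - (alpha + 1) * np.log(x) - beta / x)  # invGamma(alpha, beta) prior
joint = np.exp(log_joint)
x_num = (x * joint).sum() / joint.sum()             # grid spacing cancels
```

Agreement of the two values confirms the inverse-Gamma/Gamma conjugacy exploited by the method.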
## 4 Experimental Results and Discussions
In this section we describe the results of our method through a two-step validation analysis. First, we train and test the network on a synthetic dataset where the availability of ground truth images allows computing objective performance metrics. We compare our method with the following despeckling algorithms: PPB [18], SAR-BM3D [7] and SAR-CNN [13]. This allows us to assess the denoising capability of our self-supervised method in comparison with both traditional methods and a CNN-based one with supervised training. In the second experiment, training is conducted directly on real SAR images. To compare the despeckling methods, we rely on some no-reference performance metrics, such as the equivalent number of looks (ENL) and the moments of the ratio image (\\(\\mu_{r}\\), \\(\\sigma_{r}\\)), and on visual inspection.
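These no-reference metrics are simple to compute. As a sanity check on simulated data (an ideal despeckler applied to synthetic \\(\\Gamma(L,L)\\) speckle; values are illustrative), the ratio image has unit mean and standard deviation \\(\\approx\\sqrt{1/L}\\), and the ENL of a homogeneous noisy region recovers \\(L\\):

```python
import numpy as np

def ratio_moments(noisy, despeckled):
    """Mean and standard deviation of the ratio image noisy/despeckled."""
    r = noisy / despeckled
    return r.mean(), r.std()

def enl(region):
    """Equivalent number of looks of a homogeneous intensity region."""
    return region.mean() ** 2 / region.var()

rng = np.random.default_rng(1)
L, clean = 4, 5.0
noisy = clean * rng.gamma(L, 1.0 / L, size=1_000_000)
mu_r, sigma_r = ratio_moments(noisy, np.full_like(noisy, clean))  # ideal despeckler
```

Deviations of \\(\\mu_{r}\\) and \\(\\sigma_{r}\\) from these reference values indicate either residual noise or suppression of actual signal.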
The network architecture used in the experiments is composed of four branches with shared parameters, handling the four directions of the blind-spot receptive field (see [16]). Each branch consists of 17 blocks composed of a 2D convolution with \\(3\\times 3\\) kernel, batch normalization and a Leaky ReLU nonlinearity. The branches are then merged by a series of three \\(1\\times 1\\) convolutions.
### Synthetic dataset
In this experiment we employ natural images to construct a synthetic SAR-like dataset. Pairs of noisy and clean images are built by generating speckle to simulate a single-look intensity image (\\(L=1\\)). During training, patches are extracted from 450 different images of the Berkeley Segmentation Dataset (BSD) [19]. The network has been trained for around 400 epochs with a batch size of 16 and a learning rate equal to \\(10^{-5}\\) with the Adam optimizer. Table 1 shows performance results on a set of well-known test images in terms of PSNR. It can be noticed that our self-supervised method outperforms PPB and SAR-BM3D. Moreover, it is interesting to notice that, while the proposed approach does not use the clean data for training, it achieves comparable results with respect to the supervised SAR-CNN method. Fig. 2 confirms this from a qualitative perspective: despite the absence of the true clean images during training, our method produces images as visually pleasing as those produced by SAR-CNN, with comparable edge-preservation capabilities.
### TerraSAR-X dataset
In this experiment we employ single-look TerraSAR-X images1. Most despeckling works in the literature assume the multiplicative speckle noise to be a white process. However, the transfer function of SAR acquisition systems can introduce a statistical correlation across pixels. One of the assumptions for the blind-spot network training to work is that the noise must be pixel-wise independent, so that the network cannot predict the noise component from the receptive field. Hence, both training and testing images are pre-processed through a blind speckle decorrelator [20] to whiten them. During training, patches are extracted from 16000 \\(256\\times 256\\) whitened SAR images. The network has been trained for around 100 epochs with a batch size of 16 and a learning rate of \\(10^{-5}\\) with the Adam optimizer.
Footnote 1: [https://tpm-ds.eo.esa.int/oads/access/collection/TerraSAR-X/tree](https://tpm-ds.eo.esa.int/oads/access/collection/TerraSAR-X/tree)
Table 2 and Fig. 3 show the results obtained on three \\(1000\\times 1000\\) test images disjoint from the training ones. ENL is computed over manually-selected homogeneous areas. It can be noticed that the proposed method is very close to the desired statistics of the ratio image, showing that it indeed removes a significant noise component, and that it better preserves edges and fine textures. It also does not hallucinate artifacts over homogeneous regions, while SAR-CNN tends to oversmooth and produce cartoon-like edges. However, the degree of smoothing over homogeneous areas is somewhat limited, as confirmed by the ENL values, and deserves further investigation. We conjecture that residual spatial correlation in the speckle may affect the network on real images, since excellent performance is observed on synthetic speckle.
## 5 Conclusion
In this paper we introduced the first self-supervised deep learning SAR despeckling method, which only requires real single-look complex images. Learning directly from true SAR data rather than from simulated imagery avoids transferring between domains, for improved fidelity.
## References
* [1] J. S. Lee, \"Speckle analysis and smoothing of synthetic aperture radar images,\" _Computer Graphics and Image Processing_, vol. 17, no. 1, pp. 24-32, 1981.
* [2] V. S. Frost, J. A. Stiles, K. S. Shanmugan, and J. C. Holtzman, \"A model for radar images and its application to adaptive digital filtering of multiplicative noise,\" _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. PAMI-4, no. 2, pp. 157-166, March 1982.

\\begin{table}
\\begin{tabular}{l c c c c}
Image & PPB [18] & SAR-BM3D [7] & SAR-CNN [13] & Proposed \\\\ \\hline
Cameraman & 23.02 & 24.76 & 26.15 & 25.90 \\\\ \\hline
House & 25.51 & 27.55 & 28.60 & 27.96 \\\\ \\hline
Peppers & 23.85 & 24.92 & 26.02 & 25.99 \\\\ \\hline
Starfish & 21.13 & 22.71 & 23.37 & 23.32 \\\\ \\hline
Butterfly & 22.76 & 24.48 & 26.05 & 25.82 \\\\ \\hline
Airplane & 21.22 & 22.71 & 23.93 & 23.67 \\\\ \\hline
Parrot & 21.88 & 24.17 & 25.92 & 25.44 \\\\ \\hline
Lena & 26.64 & 27.85 & 28.70 & 28.54 \\\\ \\hline
Barbara & 24.08 & 25.37 & 24.70 & 24.36 \\\\ \\hline
Boat & 24.22 & 25.43 & 26.05 & 26.02 \\\\ \\hline
_Average_ & _23.43_ & _24.99_ & _25.95_ & _25.67_ \\\\ \\hline
\\end{tabular}
\\end{table}
Table 1: Synthetic images - PSNR (dB)

\\begin{table}
\\begin{tabular}{l c c c c}
Metrics & PPB [18] & SAR-BM3D [7] & SAR-CNN [13] & Proposed \\\\ \\hline
\\(\\mu_{r}\\) & 1.0021 & 1.0628 & 0.9845 & 1.0271 \\\\ \\hline
\\(\\sigma_{r}\\) & 1.4004 & 1.7322 & 0.8458 & 0.9837 \\\\ \\hline
ENL & 44.56 & 22.80 & 29.98 & 8.91 \\\\ \\hline
\\end{tabular}
\\end{table}
Table 2: Quantitative results on SAR real images
* [3] D. Kuan, A. Sawchuk, T. Strand, and P. Chavel, \"Adaptive restoration of images with speckle,\" _IEEE Transactions on Acoustics, Speech, and Signal Processing_, vol. 35, no. 3, pp. 373-383, March 1987.
* [4] A. Lopes, E. Nezry, R. Touzi, and H. Laur, \"Structure detection and statistical adaptive speckle filtering in SAR images,\" _International Journal of Remote Sensing_, vol. 14, no. 9, pp. 1735-1758, 1993.
* [5] Hua Xie, L. E. Pierce, and F. T. Ulaby, \"SAR speckle reduction using wavelet denoising and Markov random field modeling,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 40, no. 10, pp. 2196-2212, Oct 2002.
* [6] F. Argenti and L. Alparone, \"Speckle removal from SAR images in the undecimated wavelet domain,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 40, no. 11, pp. 2363-2374, Nov 2002.
* [7] S. Parrilli, M. Poderico, C. V. Angelino, and L. Verdoliva, \"A nonlocal SAR image denoising algorithm based on LLMMSE wavelet shrinkage,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 50, no. 2, pp. 606-616, Feb 2012.
* [8] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, \"Image denoising by sparse 3-D transform-domain collaborative filtering,\" _IEEE Transactions on Image Processing_, vol. 16, no. 8, pp. 2080-2095, Aug 2007.
* [9] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, \"Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,\" _IEEE Transactions on Image Processing_, vol. 26, no. 7, pp. 3142-3155, July 2017.
* [10] A. B. Molini, D. Valsesia, G. Fracastoro, and E. Magli, \"DeepSUM: Deep Neural Network for Super-Resolution of Unregistered Multitemporal Images,\" _IEEE Transactions on Geoscience and Remote Sensing_, pp. 1-13, 2019.
* [11] J. Long, E. Shelhamer, and T. Darrell, \"Fully convolutional networks for semantic segmentation,\" in _2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2015, pp. 3431-3440.
* [12] P. Wang, H. Zhang, and V. M. Patel, \"SAR Image Despeckling Using a Convolutional Neural Network,\" _IEEE Signal Processing Letters_, vol. 24, no. 12, pp. 1763-1767, Dec 2017.
* [13] G. Chierchia, D. Cozzolino, G. Poggi, and L. Verdoliva, \"SAR image despeckling through convolutional neural networks,\" in _2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_, July 2017, pp. 5438-5441.
* [14] J. Lehtinen, J. Munkberg, J. Hasselgren, S. Laine, T. Karras, M. Aittala, and T. Aila, \"Noise2Noise: Learning image restoration without clean data,\" in _Proceedings of the 35th International Conference on Machine Learning_. 2018, Proceedings of Machine Learning Research, pp. 2965-2974, PMLR.
* [15] A. Krull, T.-O. Buchholz, and F. Jug, \"Noise2Void - Learning Denoising from Single Noisy Images,\" in _CVPR_, 2018.
* [16] S. Laine, T. Karras, J. Lehtinen, and T. Aila, \"High-quality self-supervised deep image denoising,\" in _Advances in Neural Information Processing Systems_, 2019, pp. 6968-6978.
* [17] A. C. Frery, H.-J. Müller, C. C. F. Yanasse, and S. J. S. Sant'Anna, \"A model for extremely heterogeneous clutter,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 35, no. 3, pp. 648-659, May 1997.
* [18] C. Deledalle, L. Denis, and F. Tupin, \"Iterative weighted maximum likelihood denoising with probabilistic patch-based weights,\" _IEEE Transactions on Image Processing_, vol. 18, no. 12, pp. 2661-2672, Dec 2009.
* [19] D. Martin, C. Fowlkes, D. Tal, and J. Malik, \"A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,\" in _Proc. 8th Int'l Conf. Computer Vision_, July 2001, vol. 2, pp. 416-423.
* [20] A. Lapini, T. Bianchi, F. Argenti, and L. Alparone, \"Blind speckle decorrelation for SAR image despeckling,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 52, no. 2, pp. 1044-1058, Feb 2014.
Figure 3: Real SAR images: Noisy, PPB, SAR-BM3D, SAR-CNN, our method.
Figure 2: Synthetic images: Noisy, PPB (21.13 dB), SAR-BM3D (22.71 dB), SAR-CNN (23.37 dB), our method (23.32 dB).

Abstract: SAR despeckling is a problem of paramount importance in remote sensing, since it represents the first step of many scene analysis algorithms. Recently, deep learning techniques have outperformed classical model-based despeckling algorithms. However, such methods require clean ground truth images for training, thus resorting to synthetically speckled optical images since clean SAR images cannot be acquired. In this paper, inspired by recent works on blind-spot denoising networks, we propose a self-supervised Bayesian despeckling method. The proposed method is trained employing only noisy images and can therefore learn features of real SAR images rather than synthetic data. We show that the performance of the proposed network is very close to the supervised training approach on synthetic data and competitive on real data.

Andrea Bordone Molini, Diego Valsesia, Giulia Fracastoro, Enrico Magli†, Politecnico di Torino, Italy

Index terms: SAR, speckle, convolutional neural networks, unsupervised

Footnote †: This research has been funded by the Smart-Data@PoliTO center for Big Data and Machine Learning technologies.
# MinkLoc3D-SI: 3D LiDAR place recognition with sparse convolutions, spherical coordinates, and intensity
Kamil Zywanowski*\\({}^{1}\\), Adam Banaszczyk*\\({}^{1}\\), Michal R. Nowicki\\({}^{1}\\), and Jacek Komorowski\\({}^{2}\\)
* Equal contribution
\\({}^{1}\\) The authors are with the Institute of Robotics and Machine Intelligence, Faculty of Control, Robotics, and Electrical Engineering, Poznan University of Technology, Poznan, Poland [email protected]
\\({}^{2}\\) Jacek Komorowski is with the Faculty of Electronics and Information Technology, Warsaw University of Technology, Warsaw, Poland [email protected]
M. R. Nowicki is supported by the Foundation for Polish Science (FNP). This research was supported by the Nvidia Hardware Grant programme with a single Nvidia A100.
## I Introduction
Place recognition modules, determining if the sensor revisits a previously observed location, are vital whenever long-term autonomous operation is required, e.g., for simultaneous localization and mapping (SLAM). In SLAM, the ability to relocalize can reduce the accumulated localization drift while correcting the past trajectory to build a consistent map of the environment. Among place recognition applications, robust place recognition for autonomous cars is a commonly tackled problem. City-wide localization in dynamic environments with moving objects, changing weather conditions, and seasonal changes requires robust methods capable of capturing low-level and high-level features from raw sensory data.
Designing an efficient learning-based 3D LiDAR place recognition system is still an open problem. A key challenge is to find the best 3D data representation that can be efficiently processed using neural networks to extract meaningful features for robust place recognition. As a community, we have already explored image-like [1], voxel-based [2], bird's-eye view [3, 4, 5], or unordered point set [6, 7] representations. More recently, we also see a rise in sparse convolution-based approaches [8, 9] and attention-based modules [10] that might be combined together [11] as well. Many of these methods are trained, evaluated, and compared on the Oxford RobotCar dataset [6, 8, 12, 13, 14, 15, 16, 17], which was gathered by concatenating multiple 2D LiDAR scans covering a 20 m distance and subsampling to 4096 points [6]. In contrast, a single scan from a modern 3D LiDAR covers a much larger area (even up to 160-200 meters in diameter) and has a greater number of points (up to 260k points for Ouster OS1-128), as presented in Fig. 1.
We propose a new 3D LiDAR place recognition system called MinkLoc3D-SI, extending the concept of sparse convolution-based MinkLoc3D [8] to improve performance on scans from 3D LiDARs. The proposed MinkLoc3D-SI uses spherical coordinates of 3D points and utilizes the intensity value in 3D LiDAR scans. We evaluate our approach on the USyd Campus [18], Oxford RobotCar [19],
Fig. 1: The LiDAR place recognition solutions are most commonly evaluated on combined 2D scans containing 4096 equally distributed points from the Oxford RobotCar dataset (A). In contrast, actual 3D scans have a single point of observation, contain more points, and include intensity as in the KITTI dataset (B). The proposed MinkLoc3D-SI targets the irregular distributions of points in the 3D LiDAR scans while utilizing intensity to generate descriptors that are compared to determine if the query location matches one of the locations in the database.
and KITTI [20] datasets.
The main contributions of our work are:
* The first 3D sparse convolution-based place recognition system, MinkLoc3D-SI, utilizing intensity and spherical point representation suited for place recognition based on a single scan from 3D LiDAR.
* A new place recognition dataset utilizing Velodyne VLP-16 based on the USyd Campus dataset.
* A modified Oxford RobotCar Intensity dataset including intensity for each 3D point.
All the code and datasets enabling the replication of results presented in the paper are publicly available1.
Footnote 1: [https://github.com/Kamilzywanowski/MinkLoc3D-SI](https://github.com/Kamilzywanowski/MinkLoc3D-SI)
## II Related work
### _3D LiDAR data representation for deep learning_
The community explored different options of 3D point cloud representation for deep learning. We started with 2D range image representations processed by 2D convolutions [1], being the closest representation to those typically used for RGB images. Later approaches focused on the volumetric representations [2] that capture the structure of the 3D point cloud. The problem is that their computational complexity usually grows cubically with the number of voxels. In turn, the community explored the possibility of reducing dimensionality to increase efficiency, i.e., by introducing a 2.5D bird's-eye view point pillars [3] representation, suitable for autonomous cars.
An alternative approach assumes processing raw point clouds, e.g., using a PointNet-like [21, 22] architecture. In the PointNet architecture, to enforce invariance to permutations of the input point cloud, a large part of the processing is done separately for each point. Thus, this design is not well-suited to extract informative global features.
The most recent approaches use 3D convolutional architectures and a sparse volumetric representation of a point cloud. These methods are efficient due to optimized implementations which use hashing-based addressing techniques to quickly perform convolutions on sparse data [23, 24]. As a result, this representation is gaining popularity with successful applications in many areas, such as semantic segmentation of 3D LiDAR scans [25].
### _3D LiDAR place recognition_
Existing 3D LiDAR place recognition solutions are closely related to developments in data representation. The range image-based representation of LiDAR scans is used in [1] to achieve localization robust to changing weather conditions. More recently, X. Chen _et al._ presented OverLapNet [26], which combines image-based representation with normal and semantic layers for metric localization. In AttDLNet [10], authors combine the proxy representation of range images with attention modules to prove that attention is a critical component in the descriptor learning.
The first notable example of a method operating on a raw 3D point cloud is PointNetVLAD [6]. It combines PointNet [21] local feature extraction with NetVLAD [27] to aggregate local features into a global place descriptor. Unfortunately, the method suffers from PointNet weakness to capture high-level features. Therefore, many solutions like [12, 13, 15] focus on the data representation problem leaving the NetVLAD part intact. PCAN [12] improves PointNet by estimating the significance of each point. DAGC [15] uses graph CNN to combine information at multiple scales. LPDNET [13] computes hand-crafted features, which are later processed using a pipeline similar to PointNet architecture. The most recent method, NDT-Transformer [7], combines local distribution of raw points with Normal Distributions Transform (NDT). Combined features are processed by the Transformer module and aggregated using NetVLAD to achieve state-of-the-art performance.
SegMap [28] presents another approach for combining individual scan features, based on segments extracted from a 3D point cloud. This approach is extended in [29] by combining SegMap with intensity information to improve descriptor discriminativity. Locus [30] analyzes topological relationships between segments in a single scan and temporal correspondences between segments in a sequence of scans.
A popular approach is to represent the scan as the bird's-eye view (BEV) image. Scan Context [4] is a hand-crafted descriptor efficiently computed from a BEV scan representation in polar coordinates. Scan Context sparked a new family of BEV approaches, including a trained descriptor called DiSCO [31], Scan Context augmented with intensity information [32], or semantic-based extension called SSC [33]. Recently presented Scan Context++ [5] extends Scan Context, by providing metric localization on top of the existing topological localization. In BVMatch [34], BEV is combined with Log-Gabor filters to detect keypoints later used in a typical topological localization with a bag-of-visual-words (BoVW) approach for topological and metric localization.
The availability of the efficient 3D sparse convolution library sparked an interest in using sparse volumetric representation for place recognition purposes [8, 9, 11]. The first method, MinkLoc3D [8], surpasses previous methods by a significant margin when evaluated on the Oxford RobotCar dataset, proving that the data representation is a critical component of a 3D LiDAR place recognition method. MinkLoc++ [9] further improves MinkLoc3D with a channel attention mechanism ECA [35] while fusing 3D LiDAR scans and camera images. Similar approach is used in TransLoc3D [11], which combines sparse convolutions, adaptive receptive field module (ARFM) based on ECA, and Transformer module.
Our work combines the idea presented in Scan Context [4] to use non-Cartesian point representation well suited for 3D LiDAR scans with efficient sparse 3D convolutional architecture while utilizing intensity information available for each 3D point.
## III Proposed solution
### _MinkLoc3D_
MinkLoc3D [8] network architecture is based on a local feature extraction network utilizing sparse convolutions and generalized-mean (GeM) pooling layer [36]. In this approach, presented in Fig. 2, the input 3D LiDAR point cloud (\\(x_{i},y_{i},z_{i}\\)) is quantized into a finite number of cuboids with a single 3D point located in each cuboid. The processed information is then used to create a single sparse tensor of quantized points \\(\\mathbf{\\hat{C}}\\) and an associated feature input \\(\\mathbf{F}\\)[23]:
\\[\\mathbf{\\hat{C}}=\\begin{bmatrix}\\hat{x}_{1}&\\hat{y}_{1}&\\hat{z}_{1}\\\\ \\vdots&\\vdots&\\vdots\\\\ \\hat{x}_{N}&\\hat{y}_{N}&\\hat{z}_{N}\\end{bmatrix},\\quad\\mathbf{F}=\\begin{bmatrix}f_{1}^{T}\\\\ \\vdots\\\\ f_{N}^{T}\\end{bmatrix}. \\tag{1}\\]
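The quantization step above can be illustrated with a short NumPy sketch; the actual pipeline relies on the MinkowskiEngine sparse tensor machinery, so this stand-alone version is only a conceptual stand-in showing how \\(\\mathbf{\\hat{C}}\\) and \\(\\mathbf{F}\\) are formed for the purely geometric variant.

```python
import numpy as np

def quantize_cloud(points: np.ndarray, cell: float):
    """Quantize an (N, 3) point cloud into integer cell coordinates.

    Returns the deduplicated cell coordinates C_hat (one entry per occupied
    cuboid) and a feature column F of ones, mirroring MinkLoc3D's purely
    geometric input where each feature only marks the existence of a point.
    """
    coords = np.floor(points / cell).astype(np.int64)
    c_hat = np.unique(coords, axis=0)  # one row per occupied cuboid
    f = np.ones((c_hat.shape[0], 1), dtype=np.float32)
    return c_hat, f
```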
In the case of MinkLoc3D, it is a purely geometrical approach with each associated feature equal to one \\(\\left(f_{i}=\\begin{bmatrix}1\\end{bmatrix}\\right)\\), indicating the existence of a 3D point. The local feature extraction part of the network is built using the Feature Pyramid Network (FPN) [37] design pattern. The bottom-up part contains four convolutional blocks (Conv0..3), producing sparse 3D feature maps with decreasing spatial resolution and increasing receptive field. The top-down part contains a transposed convolution (TConv3) generating an upsampled feature map. The upsampled feature map is concatenated with features from the corresponding layer of the bottom-up pass using a lateral connection (\\(1\\times 1\\mathrm{Conv}\\)). Such architecture produces a feature map with a large receptive field and high spatial resolution [8]. As a result, the feature extraction network generates a sparse feature map \\((h_{j}^{1},h_{j}^{2},\\ldots,h_{j}^{256})\\) for the \\(j\\)-th non-zero element of the sparse local feature map. The feature map produced by the local feature extraction part of the network is pooled with generalized-mean (GeM) pooling [36] to produce a global point cloud descriptor \\(\\mathbf{g}\\), with its \\(k\\)-th component \\(g^{k}\\) equal to:
\\[g^{k}=\\left(\\frac{1}{M}\\sum_{j=1}^{M}\\left(h_{j}^{k}\\right)^{p}\\right)^{\\frac{ 1}{p}}, \\tag{2}\\]
where \\(M\\) is the number of non-zero elements in the sparse local feature map and \\(p\\) is a learnable parameter of GeM pooling that determines a smooth transition from the global max pooling to the global average pooling.
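Eq. (2) can be sketched in a few lines of NumPy; this is an illustration rather than the library implementation, and it assumes non-negative features (as produced by ReLU activations), which the fractional power requires.

```python
import numpy as np

def gem_pool(h: np.ndarray, p: float = 3.0) -> np.ndarray:
    """Generalized-mean pooling of an (M, C) feature map into a C-dim descriptor.

    p = 1 reduces to global average pooling; large p approaches global max
    pooling. In MinkLoc3D, p is a learnable scalar.
    """
    return np.power(np.mean(np.power(h, p), axis=0), 1.0 / p)
```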
The network is trained with a triplet margin loss [38, 39]. The goal of the triplet loss is to minimize the distance between descriptors of point clouds showing the same location (a reference point cloud descriptor \\(a\\) and its positive match descriptor \\(p\\)) and maximize the distance between descriptors of points clouds representing different locations (\\(a\\) and its negative match descriptor \\(n\\)):
\\[L=\\sum_{i}\\max(d(a_{i},p_{i})-d(a_{i},n_{i})+m,0), \\tag{3}\\]
where \\(d(x,y)\\) is the Euclidean distance between global descriptors \\(x\\) and \\(y\\), and \\(m\\) is a chosen margin parameter. We use a hard negative mining strategy to ensure that only triplets with non-zero loss \\(L\\) (active triplets) are selected for batch processing. We utilize data augmentation: random jitter, random removal of points, random translations, random flips, etc. The network is trained with the Adam optimizer with learning rate \\(l_{r}=10^{-3}\\) and decay \\(d=10^{-3}\\). We used a single Nvidia RTX 3080 card for training, which lasted approximately 4 hours due to efficient hard-negative triplet mining. More information about 3D sparse convolutions can be found in [23], while a detailed presentation of MinkLoc3D is given in [8].
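A simplified sketch of the loss in Eq. (3) together with hardest-negative selection; the real training loop operates on PyTorch tensors with in-batch mining, so the NumPy helpers below (and their names) are illustrative only.

```python
import numpy as np

def triplet_margin_loss(d_ap: np.ndarray, d_an: np.ndarray, margin: float = 0.2) -> float:
    """Mean triplet margin loss, averaged over active (non-zero loss) triplets only."""
    losses = np.maximum(d_ap - d_an + margin, 0.0)
    active = losses > 0.0
    return float(losses[active].mean()) if active.any() else 0.0

def hardest_negative(anchor: np.ndarray, negatives: np.ndarray) -> int:
    """Index of the negative descriptor closest to the anchor (hardest case)."""
    return int(np.argmin(np.linalg.norm(negatives - anchor, axis=1)))
```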
### _MinkLoc3D-SI_
3D point clouds captured by the 3D LiDARs have a varying density with more points closer to the scanner's origin and usually a smaller range of values in the elevation direction. Therefore, a regular grid of Cartesian coordinates is not well-suited for 3D points further away from the scanner. The distances between points naturally increase, making it harder to extract high-level features.
We propose MinkLoc3D-S utilizing a spherical representation of 3D coordinates of points, which is a natural representation of 3D LiDAR's measurements. MinkLoc3D-S performs sparse convolutions using this alternative representation. In the proposed approach, each 3D point \\((x,y,z)\\) with elevation represented by the \\(z\\) component is converted into a corresponding spherical representation \\((r,\\theta,\\phi)\\):
\\[r =\\sqrt{x^{2}+y^{2}+z^{2}}, \\tag{4}\\] \\[\\theta =\\mathrm{atan2}(y,x),\\] (5) \\[\\phi =\\mathrm{atan2}(z,\\sqrt{x^{2}+y^{2}}), \\tag{6}\\]
where \\(r\\) is the distance between the 3D point and the scanner, \\(\\theta\\) is the horizontal scanning angle, while \\(\\phi\\) is the vertical scanning angle. As a result, the area of the quantization cuboid increases for 3D points further away from the 3D LiDAR.

Fig. 2: Overall processing steps of the MinkLoc3D-SI architecture. The modifications proposed to the MinkLoc3D model, marked by green color, include the spherical representation (MinkLoc3D-S) and intensity usage (MinkLoc3D-I), combined into MinkLoc3D-SI. Presented numbers (e.g., 1/32) indicate the stride and number of channels of the feature map produced by each block in the local feature extractor.
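Eqs. (4)-(6) translate directly into code. In this sketch the angles are returned in degrees so that they align with the angular quantization steps discussed later (e.g., \\(\\theta=2^{\\circ}\\)); the unit choice is a presentation detail here, not necessarily the authors' implementation.

```python
import numpy as np

def to_spherical(points: np.ndarray) -> np.ndarray:
    """Convert an (N, 3) array of (x, y, z) points to (r, theta, phi).

    r is the range, theta the horizontal scanning angle, phi the vertical
    scanning angle, matching Eqs. (4)-(6); angles are in degrees here.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    theta = np.degrees(np.arctan2(y, x))
    phi = np.degrees(np.arctan2(z, np.sqrt(x ** 2 + y ** 2)))
    return np.stack([r, theta, phi], axis=1)
```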
The original MinkLoc3D utilizes only the geometry of 3D points to perform place recognition. However, as stated in [40, 41], the intensity of the signal returned by 3D LiDARs is valuable and can be used to construct a local place recognition descriptor. Therefore, we propose to include the LiDAR intensity information in the sparse convolutions of MinkLoc3D-I.
The intensity \\(s_{i}\\) of the \\(i\\)-th 3D point is stored in the feature part of the sparse tensor, \\(f_{i}=\\left[s_{i}\\right]\\), which does not increase the dimensionality of the convolutions. The raw intensity values are normalized to the \\(0-1\\) interval. When multiple points fall into the same quantization cuboid, we randomly choose one of the intensity values from the cuboid during training to improve robustness, while during inference we average the intensity values to achieve repeatable results.
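The per-cuboid intensity handling described above can be sketched as follows (an illustrative helper, not the actual implementation):

```python
import numpy as np

def cell_intensity(values: np.ndarray, training: bool, rng=None) -> float:
    """Single intensity feature for a cuboid that received several points.

    During training one raw value is sampled, acting as a mild augmentation;
    at inference the values are averaged so the descriptor is repeatable.
    """
    if training:
        rng = rng if rng is not None else np.random.default_rng()
        return float(rng.choice(values))
    return float(values.mean())
```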
MinkLoc3D-SI combines both improvements: spherical representation and usage of measurement intensities.
## IV Datasets
### _USyd Campus_
The USyd Campus Dataset (USyd) [18] contains recordings from a buggy-like car registered over 50 weeks in varying weather conditions. The used sensory setup consists of a Velodyne VLP-16 LiDAR, cameras, and GPS that serve as ground truth in our application.
After skipping recordings with errors or faulty sensor readings (i.e., due to lack of GPS signal inside tunnels), we sampled the LiDAR data with interpolated GPS so that the distance between consecutive scans from the same recording is 5 meters resulting in approx. 735 locations per each of 40 runs. Each location for place recognition contains information from a single 3D scan that contains up to 25000 3D points with intensity measurement that might be as far as 100 m from the sensor. The number of points and their distribution for each location are not modified or limited compared to the raw 3D scan, i.e., the ground plane is not removed.
The division into test, buffer and train sections is presented in Fig. 3. Randomly selected four 100\\(\\times\\)100 m areas constitute test sections. Each test section has a 10 m buffer area that is not used for testing or training. The rest of the data was used to train the networks. In total, we obtained 19138 training and 8797 test 3D point clouds, which we make publicly available in Oxford RobotCar [19] compatible format for future comparisons.
### _Oxford RobotCar_
The Oxford RobotCar dataset [19] consists of data recorded over one year and more than 1000 km of driving, making it well-suited for comparing 3D place recognition solutions [6]. The available 3D data is generated from 2D scans accumulated over time to create a unified environment map. The map is then divided into segments of 20 m length, with each segment containing exactly 4096 equally distributed points, as opposed to the greater range and larger number of points in the 3D scans in USyd. The 3D map segments in Oxford RobotCar do not reflect the point clouds obtained from single scans of 3D LiDARs. In practice, we have more points, and these points are not equally distributed, coming from a single point of view. As a result, the 3D structure of the data is not compatible with the proposed 3D spherical representation.
To enable evaluation of intensity-based solutions, we modified the typical preprocessing of the Oxford RobotCar dataset for the place recognition task introduced in [6] to additionally include intensity for each 3D point based on original, raw data. In our processing, the training and testing split is the same as in the original Oxford RobotCar. We use the same number of 4096 points per point cloud, the same length of segments, and whenever possible, we use the same preprocessing steps as for the original Oxford RobotCar dataset. We will refer to this dataset as the Oxford RobotCar Intensity.
### _Kitti_
Similarly to [9, 42], we use the KITTI dataset [20] to test the generalization ability of place recognition, with the proposed systems trained on the Oxford RobotCar or USyd datasets. The first \\(170\\) seconds of Sequence 00 constitute the reference database. The remaining part of the sequence is used as localization queries [42]. The achieved performance indicates how well the place recognition system can operate in a previously unknown environment.
### _Evaluation measures_
Regardless of the chosen dataset, the evaluation on the testing dataset is performed by selecting a single query location and matching it to the collection of remaining locations called the database. Matching is performed using the Euclidean distance between the descriptors computed by the network for these locations. The performance of the place recognition system is measured with average recalls \\(AR@X\\). The location for \\(AR@X\\) is assumed to be correctly recognized if the \\(X\\) most similar locations matched from the database contain at least one location within the distance \\(c\\) from the query location. Among the different \\(X\\) values, the value at 1 (\\(AR@1\\)) is the most important for robotic (i.e., SLAM) applications, as it measures how often the first match from the database matches the query location. In SLAM scenarios, including a wrong recognition may ultimately break the localization solution. In our evaluation, we also list \\(AR@1\\%\\) to compare our results to state-of-the-art solutions.

Fig. 3: The visualization of 40 runs on the USyd dataset. The recorded data is divided into the testing areas (four black squares), buffer areas not used at all (between black and green squares), and the training data (outside green squares).
For Oxford RobotCar and KITTI datasets, we assume that the place is correctly recognized if the determined location is within \\(c=25\\)\\(m\\) of the ground truth location [6, 8, 42]. In the case of the USyd dataset, we decided to use a more challenging threshold of \\(c=10\\)\\(m\\). Since the LiDAR sensor is omnidirectional, the correct place recognition is determined purely by the sensor's position.
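The \\(AR@X\\) evaluation can be sketched as a brute-force nearest-neighbour search over descriptors; the helper below is illustrative (names and signature are ours), with `x` and `c` corresponding to \\(X\\) and the distance threshold.

```python
import numpy as np

def average_recall_at(db_desc, db_pos, q_desc, q_pos, x: int, c: float) -> float:
    """Fraction of queries whose x most similar database descriptors contain
    at least one location within c meters of the query's true position."""
    hits = 0
    for d, p in zip(q_desc, q_pos):
        desc_dist = np.linalg.norm(db_desc - d, axis=1)  # descriptor distances
        top = np.argsort(desc_dist)[:x]                  # x best matches
        geo_dist = np.linalg.norm(db_pos[top] - p, axis=1)
        hits += int((geo_dist <= c).any())
    return hits / len(q_desc)
```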
## V Experiments
### _USyd Campus_
As a reference, we trained and tested the original version of MinkLoc3D [8] on USyd Campus, achieving the \\(AR@1\\%\\) of \\(98.1\\%\\) and \\(AR@1\\) of \\(91.7\\%\\). The evaluation of the Scan Context [4] with default parameters achieved the \\(AR@1\\%\\) of \\(88.7\\%\\) and \\(AR@1\\) of \\(86.0\\%\\) in our runs.
The obtained numerical results for proposed modifications are gathered in Tab. I with a visual representation of \\(AR@X\\) measure for X from 1 to 25 presented in Fig. 4.
MinkLoc3D-I reports an improved performance of \\(AR@1\\%\\) of \\(98.2\\%\\) and \\(AR@1\\) of \\(92.3\\%\\), which confirms that the intensity is complementary to the pure geometric location of the 3D points. MinkLoc3D-S with the spherical representation of 3D point coordinates also improves the performance to \\(AR@1\\%\\) of \\(98.8\\%\\) and \\(AR@1\\%\\) of \\(93.9\\). The spherical representation, in this case, is better suited to capture the inter-point relations for points further from the scanner, which improves the model's ability to create high-level map features. The most significant gains for both analyzed solutions are visible for \\(AR@1\\), as the \\(AR@1\\%\\) is already at a very high level and returning the single correct location is a much more challenging task. Combining both ideas into MinkLoc3D-SI reports the best \\(AR@1\\) (94.7) and the best \\(AR@1\\%\\) (99.0) among the evaluated solutions, as shown in Fig. 4.
### _Oxford RobotCar and Oxford RobotCar Intensity_
The results achieved when trained and evaluated on Oxford RobotCar Intensity are gathered in Tab. II. Adding intensity in MinkLoc3D-I improves the \\(AR@1\\%\\) to 98.1, slightly better than the version without intensity (MinkLoc3D with \\(AR@1\\%\\) of 97.6). MinkLoc3D-I's \\(AR@1\\) of 93.6 is also greater than the 92.8 of MinkLoc3D. More significant gains from intensity are not evident because a single 3D point in Oxford RobotCar can be measured from different distances, so its intensity is not repeatable even for the same location.
MinkLoc3D-S utilizing the spherical representation of 3D points achieved the \\(AR@1\\%\\) of \\(92.0\\) and \\(AR@1\\) of \\(79.9\\). As expected, this approach harms the system's performance, as the points are equally distributed in the 3D point cloud. We also performed experiments with MinkLoc3D-SI, which improves over the version without intensity, reaching \\(AR@1\\%\\) of 93.4 and \\(AR@1\\) of 82.2, but works worse than MinkLoc3D-I due to the spherical coordinates.
On the original formulation of the Oxford RobotCar dataset, MinkLoc3D-S performs worse than MinkLoc3D as 3D point clouds are created from multiple 2D scans as presented in Tab. III. Similar results obtained by both methods on Oxford RobotCar and Oxford RobotCar Intensity suggest that the proposed variant of Oxford RobotCar can be used to evaluate methods when intensity information is required.
### _Kitti_
We used the MinkLoc3D-SI solution trained on the Oxford RobotCar Intensity or USyd to verify its performance on the KITTI dataset as in [6, 13, 9]. The only modification is the division of the quantization parameter of angle \\(\\phi\\) by four to include information from all 64 layers of the HDL-64E used in KITTI, compared to 16 layers in USyd. The results are presented in Tab. IV.
In this generalization task, MinkLoc3D-SI outperforms other solutions, whether trained on the USyd or the Oxford RobotCar Intensity. The selected spherical representation with intensity information in MinkLoc3D-SI is a better fit for 3D LiDAR scans recorded at a single location, which is the case for the KITTI dataset.
## VI Ablation study
### _Quantization_
Minkowski Engine performs 3D sparse convolutions on cuboids, and each cuboid can contain only one point. Therefore, the size of the smallest cuboid provided at the network's input determines the granularity of the input data.
In the case of MinkLoc3D-SI, each 3D point is converted into spherical coordinates \\((r,\\theta,\\phi)\\). As a basis, we chose the best configuration of MinkLoc3D-SI with \\(r=2.5,\\theta=2^{\\circ}\\). The influence of \\(\\phi\\) was not tested as VLP-16 has only 16 scanning layers, and we assume the quantization parameter that preserves each layer. The obtained results are presented in Fig. 5.
The \\(AR@1\\%\\) values are similar for different quantization values, showing that the method is robust while a noticeable difference can be observed for \\(AR@1\\). We observe a drop in the performance for small range quantizations below 1 m as the MinkLoc3D-SI network cannot capture the high-level features correctly. On the other hand, the granularity of the range measurement above 3 m makes the performance worse, which stems from losing necessary details in the 3D point clouds. We determined the sweet spot for USyd to be equal to \\(r=2.5~{}m\\).
A similar analysis performed for \\(\\theta\\) reveals a range of values from \\(\\theta=2^{\\circ}\\) to \\(\\theta=5^{\\circ}\\) resulting in a similar, best performance when it comes to \\(AR@1\\). Choosing to use more cuboids (\\(\\theta\\) below \\(2^{\\circ}\\)) as well as fewer cuboids (\\(\\theta\\) above \\(5^{\\circ}\\)) results in worse \\(AR@1\\) and \\(AR@1\\%\\) measures.
### _The maximum range of 3D LiDAR_
Velodyne VLP-16 used in the USyd Campus dataset has a range of approx. 100 m. MinkLoc3D-I and MinkLoc3D-SI results obtained for varying maximum range of points from the sensor are presented in Fig. 6.
The best results were obtained with the maximum range of measurement set to at least 60 m. The 3D points are sparsely located for greater maximal ranges, making it harder to determine meaningful features for place recognition. For these ranges, the spherical representation of MinkLoc3D-SI outperforms the Cartesian representation used in MinkLoc3D-I. The _AR@1_ for the limited ranges lower than 60 m drops while the _AR@1%_ remains above \\(96\\%\\) for both methods. For the smallest maximum range of 20 m, MinkLoc3D-I reports a better _AR@1_ of \\(89.3\\) than MinkLoc3D-SI's \\(85.2\\), further confirming that the Cartesian formulation works well when measured points are close to each other.
\\begin{table}
\\begin{tabular}{l|c|c|c c}
KITTI dataset & Trained on & Source of results & \\(AR@1\\%\\) & \\(AR@1\\) \\\\ \\hline
PointNetVLAD [6] & Oxford RC & [42] & 72.4 & – \\\\
LPD-Net [13] & Oxford RC & [42] & 74.6 & – \\\\
MinkLoc++ (3D) [9] & Oxford RC & [9] & 72.6 & – \\\\
Scan Context [4] & – & our eval. & 75.0 & 71.4 \\\\
MinkLoc3D [8] & USyd & our eval. & 73.8 & 69.1 \\\\
MinkLoc3D-SI (our) & Oxford RCI & our eval. & **81.0** & 72.6 \\\\
MinkLoc3D-SI (our) & USyd & our eval. & **81.0** & **78.6** \\\\
\\end{tabular}
\\end{table} TABLE IV: The generalization results on the KITTI dataset. MinkLoc3D-SI outperforms other solutions. Oxford RC stands for Oxford RobotCar, while Oxford RCI stands for Oxford RobotCar Intensity.
Fig. 5: The influence of the range and \\(\\theta\\) quantization on the obtained results by the MinkLoc3D-SI on USyd.
### _Number of points in the 3D point cloud_
The 3D LiDAR scans in the USyd dataset contain up to 25k points per scan, which is significantly more than the 4096 points used in the Oxford RobotCar dataset. The performance of the proposed solutions depends on the number of randomly subsampled points, as shown in Fig. 7.
The best results are obtained when all of the points from the 3D LiDAR are considered. This stands in contrast to the typical processing of the Oxford RobotCar dataset, which limits the number of points to 4096. In our ablation study, the drop in performance is visible when the number of points falls below 8192. Based on these results, we recommend using all points from the sensor for MinkLoc3D-SI, as the measured inference time per 3D point cloud only increases from \\(8.9\\) ms to \\(11.4\\) ms on an Nvidia RTX 3080 when we process 23000 points instead of 2048, as presented in Fig. 8.
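The random subsampling used in this ablation can be sketched as follows; the helper name and seed handling are our own assumptions.

```python
import numpy as np

def subsample(points, n_points, seed=0):
    """Randomly subsample a scan to n_points without replacement;
    scans that already have fewer points are returned unchanged."""
    if points.shape[0] <= n_points:
        return points
    rng = np.random.default_rng(seed)
    idx = rng.choice(points.shape[0], size=n_points, replace=False)
    return points[idx]
```

In the ablation above, performance drops once the point budget falls below 8192, so in practice the full scan is passed through.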
### _Performance across different weather conditions_
The USyd dataset was gathered along similar routes for over a year under varying weather conditions. Therefore, the performance of MinkLoc3D-SI across weather conditions was evaluated to determine its robustness; the results are presented in Tab. V.
The obtained results suggest that the performance of the proposed MinkLoc3D-SI is mostly independent of the weather conditions. MinkLoc3D-SI yields an improvement in \\(AR@1\\) of \\(1.7\\) to \\(3.5\\) percentage points over MinkLoc3D across all weather scenarios, proving that the spherical representation and intensity information are valuable additions in all cases.
## VII Conclusions
In this article, we propose MinkLoc3D-SI, a sparse convolution-based method utilizing the natural, spherical representation of 3D points from a single 3D LiDAR scan and the commonly available intensity information associated with each 3D point measurement. The proposed method targets the problem of place recognition when using a single scan from a 3D LiDAR.
MinkLoc3D-SI is evaluated on USyd Campus, KITTI, and Oxford RobotCar Intensity datasets. On the USyd Campus dataset, the gains from the spherical point representation, intensity, and combined improvements are notable compared to the state-of-the-art MinkLoc3D and Scan Context. We observe minor improvements on the proposed Oxford RobotCar Intensity dataset when intensity is used, but the spherical representation is unsuitable for map segments created from accumulated 2D scans. The further evaluation of the generalization ability on the KITTI dataset yields the best results among the 3D point cloud-based algorithms. The performed ablation study confirms that the best results should be expected with rather large quantization steps and when all of the available points are processed.
\\begin{table}
\\begin{tabular}{c|c c c c c c|c|c} \\hline \\hline
 & **S** & **C** & **S/C** & **AR** & **SS** & **VC** & **Mean** & **Imp.** \\\\ \\hline
**S** & 95.1 & 93.1 & 96.0 & 95.0 & 93.4 & 96.0 & 94.6 & +3.1 \\\\
**C** & 95.2 & 92.8 & 96.0 & 94.5 & 92.7 & 94.7 & 94.5 & +2.6 \\\\
**S/C** & 96.5 & 94.7 & 96.6 & 95.6 & 94.9 & 96.4 & 95.9 & +2.7 \\\\
**AR** & 95.3 & 93.1 & 95.9 & 94.8 & 94.3 & 95.5 & 94.8 & +2.8 \\\\
**SS** & 94.0 & 91.8 & 94.6 & 95.3 & 94.9 & 96.2 & 93.9 & +3.5 \\\\
**VC** & 95.7 & 93.0 & 95.6 & 94.7 & 94.8 & – & 94.9 & +1.7 \\\\ \\hline
**Mean** & 95.2 & 93.1 & 95.9 & 94.9 & 93.6 & 95.7 & 94.7 & +2.9 \\\\ \\hline \\hline
\\end{tabular}
\\end{table} TABLE V: MinkLoc3D-SI performance measured with \\(AR@1\\) across different weather conditions: sunny (S), cloudy (C), sunny/cloudy (S/C), after/slight rain (AR), sunset (SS), very cloudy (VC). The **Imp.** stands for the percentage point improvement over MinkLoc3D in the same weather conditions.
Fig. 8: Inference time for a single 3D point cloud place recognition with MinkLoc3D-SI on USyd. Total inference time is below 12 ms, regardless of the number of points used.
Fig. 6: The maximum distance of points from the 3D LiDAR center on the results obtained by MinkLoc3D-I and MinkLoc3D-SI on USyd. A drop in performance is visible for maximum ranges below 60 m.
Fig. 7: The influence of a randomly chosen number of points on the performance of MinkLoc3D-SI on USyd. A drop in performance is visible when the number of points is below 8192.
The obtained results suggest that the spherical coordinates with intensity for 3D points are promising modifications to processing point clouds from a rotating 3D LiDAR and thus could be applied to other solutions with a sparse 3D convolutional architecture or for other applications.
## References
* [1] K. Zywanowski, A. Banaszczyk, and M. R. Nowicki (2020) Comparison of camera-based and 3D LiDAR-based place recognition across weather conditions. In _2020 16th Inter. Conf. on Control, Automation, Robotics and Vision (ICARCV)_, pp. 886-891.
* [42] Y. Pan, X. Xu, W. Li, Y. Wang, and R. Xiong, \"Coral: Colored structural representation for bi-modal place recognition,\" _arXiv preprint arXiv:2011.10934_, 2020. | The 3D LiDAR place recognition aims to estimate a coarse localization in a previously seen environment based on a single scan from a rotating 3D LiDAR sensor. The existing solutions to this problem include hand-crafted point cloud descriptors (e.g., ScanContext, M2DP, LiDAR IRIS) and deep learning-based solutions (e.g., PointNetVLAD, PCAN, LPDNet, DAGC, MinkLoc3D), which are often only evaluated on accumulated 2D scans from the Oxford RobotCar dataset. We introduce MinkLoc3D-SI, a sparse convolution-based solution that utilizes spherical coordinates of 3D points and processes the intensity of 3D LiDAR measurements, improving the performance when a single 3D LiDAR scan is used. Our method integrates the improvements typical for hand-crafted descriptors (like ScanContext) with the most efficient 3D sparse convolutions (MinkLoc3D). Our experiments show improved results on single scans from 3D LiDARs (USyd Campus dataset) and great generalization ability (KITTI dataset). Using intensity information on accumulated 2D scans (RobotCar Intensity dataset) improves the performance, even though spherical representation doesn't produce a noticeable improvement. As a result, MinkLoc3D-SI is suited for single scans obtained from a 3D LiDAR, making it applicable in autonomous vehicles. | Condense the content of the following passage. | 297 |
Brian F. Farrell
[email protected] Harvard University
Dorian S. Abbot
[email protected] Harvard University
November 3, 2021
## I Introduction
A central problem in climate science is understanding the mechanism producing abrupt climate change such as the glacial/interglacial transitions and the Dansgaard-Oeschger interstadial transitions during the glacial. The rapidity with which the glacial climate state changes argues for an underlying nonlinear mechanism taking the form of a switch or trigger [5; 7]. Climate records from ice and deep sea cores establish a relation between Milankovitch cycles and glacial climate variability but not a deterministic causal connection [29]. Additional mechanisms must be involved in producing climate variability, and the millennial time scale climate variation recorded in Greenland ice [1] and Chinese cave deposits [26] shows that these rapid climate variations do not depend on orbital parameter, ice albedo or CO\\({}_{2}\\) variation, which have longer time scales. In addition to the decadal and shorter time scale of these transitions, particularly telling is their hemispheric to global spatial scale and the intimate association of these events with the hydrological cycle implied by polar records of ice accumulation [1; 22], dust and aerosol loading [12; 16], oxygen isotope and methane [3; 4], which are corroborated by midlatitude glacier records [24] and widespread cave oxygen isotope records [8; 11; 26; 30].
Rapid increase in atmospheric methane during abrupt warming events implies tropical precipitation changes on time scales as short as three to six years coincident with equally rapid variation in the midlatitude Asian monsoon precipitation intensity [19] and Greenland ice accumulation rates [17]. It was remarked early in the study of these events that atmospheric dust varies by at least an order of magnitude on the same decadal time scale [12]. It is also well-established that a great deal of dust was in flux during glacial periods producing widely distributed loess deposits [23] and that during glacial periods the Pleistocene megafauna roamed vast areas of a dusty Mammoth steppe in what is now wet permafrost tundra [15]. Large increases in atmospheric dust load observed during these cycles [16; 24] is usually explained as a consequence of the reduced hydrological cycle [31] and the observed decrease in dust coincident with warming is consistent with return of pluvial conditions to source regions and increase in atmospheric dust wash out in association with a stronger hydrological cycle.
Clues to the mechanism of abrupt climate change are provided by these records. The bimodality, abrupt onset and memoryless nature of the change implies random occurrence of a switch-like transition between states. The hemispheric scale, extreme rapidity and concurrent change in the hydrological cycle at both low and mid latitudes implies this switch-like rearrangement in the state of the atmosphere is related to precipitation on a hemispheric scale [18]. Moreover, this rearrangement must involve strong enough feedbacks to produce and maintain two stable states, one dry/cold and one wet/warm.
## II The mechanism of dust destabilization
A suggestive analogue of the abrupt transitions seen in Earth's glacial climate has been observed on Mars, where transition between a clear and a dusty state occurs with a bias toward the dusty state transition during Martian northern summer [10]. In fact, the analogy with Martian dust storms is apt, as the Earth's climate also has a crucial and vulnerable interaction among atmospheric dynamics, dust, and radiation, with the additional involvement of the hydrological cycle in the case of the Earth. The interaction between radiation and the hydrological cycle drives the monsoons, and the instability of this interaction is notorious in limiting monsoon prediction. With these observations and concepts in view we next examine more closely the influence of dust on glacial climate variability.
Calculations with a radiative-convective model show that both atmospheric reflection and absorption by dust reduce radiation reaching the surface (Fig. 1b), which exerts a strong influence on global precipitation rate (Fig. 1a) essentially by reducing the energy available to evaporate water. A more subtle influence of radiation absorption on precipitation becomes important when the dust is increased to 5-20 times modern levels (State 2 in Fig. 1,2). Absorption of radiation by dust begins to replace vertical transport of latent energy by convection in maintaining the marginal convective stability of the troposphere (Fig. 2c). This cuts off mid-tropospheric dust from wet deposition, eliminating the primary process by which dust is removed from the atmosphere. For example, when dust is increased to ten times modern levels, deep convective precipitation production is reduced by a factor of 2.6 at 607 mb (Fig. 2c), the altitude of maximum dust mixing ratio (Fig. 2a). This drastic reduction in the dust sink means that dust input would only need to be increased by a factor of 2-8 for the midtropospheric dust concentrations to reach 5-20 times modern levels so that this state could persist, well within the range of estimated dust source increase during cold Pleistocene periods [12; 16; 24; 28]. If the dust is further increased to \\(\\sim\\)30 times modern, an inversion develops near the altitude of maximum dust concentration (State 3, Fig. 2b, 1a). In this state deep convection shuts off completely (Fig. 2c) and total precipitation is reduced by more than an order of magnitude (Fig. 1a).
The state in which the role of convective latent heat transport and precipitation in maintaining the marginal convective stability of the troposphere is partially assumed by absorption of radiation by dust (State 2) is a model for the large-scale behavior of the atmosphere during cold, dry, glacial climates. Switch-like transitions between the cold, dry, dusty glacial climate (State 2) and the warm, wet, less dusty interglacial climate (State 1) occur when atmospheric dust load exceeds a threshold resulting in switch-like behavior as indicated in Figs. 1 and 2. Crossing this threshold would depend on a sequence of weather events that result in sufficiently high atmospheric dust load. Dust load is a strong function of surface drying which in turn is related to the particular sequence of global weather events that happens to occur. Even with identical boundary conditions individual realizations of global weather events result in large integrated dispersion in continental scale drying as seen among realizations in ensemble AGCM simulations [21]. In addition dust lofting is a strong nonlinear function of surface wind speed [2] which also varies among ensemble members in simulations both with and without boundary condition influences. The effect of changes in atmospheric boundary conditions, such as are associated with ENSO and the PDO in a coupled AOGCM, is to produce an additional stochastic variation in regional weather patterns that serves to enhance the variance of the stochastic process that triggers the dust state transition. Explicit simulation of the dust state transition is not possible in the present generation of climate models since they do not solve for dust as a prognostic variable and since dust lofting depends crucially on meso and micro scale processes [20] that they do not resolve. 
Therefore the calculations presented here, while indicative, can not establish precisely a sequence of events that would cause such a transition, although we can say that the occurrence would be quite random.
The switch-like nature of the dust transition and the highly nonlinear relation between wind speed and dust lofting imply in the free running climate system random occurrences of widespread excursions in dust loading triggering associated abrupt transitions even in the absence of external influences. This is consistent with a memoryless Poisson process characterized only by a time scale for transitions as is seen in the glacial climate record. Consistent with these statistical properties, the glacial climate can be phenomenologically modeled using a Langevin equation with a pseudo-potential having two minima representing the two stable states separated by a potential barrier and forced by white noise [7]. With suitable parameter choice a stochastic model of this kind produces a time series of dust concentration in good agreement with the ice core record. The contribution of our dust-induced transition theory to this Langevin model for abrupt transition is to make a physical identification of the two phenomenological states as being the warm/wet state and cold/dry state and to identify the pseudo-potential as corresponding to the switch-like dust-induced instability between these states. As an illustrative example, using a state variable of \\(x=log_{10}(dust)\\), we choose the interstadial state to correspond to the present dust level (\\(x=0\\)) and a typical stadial dust level of 10 times the present dust level (\\(x=1\\)). We then choose a potential (\\(U\\)) corresponding to these equilibria, with a switch between them at \\(x=0.5\\) (Fig. 3a). The corresponding Langevin equation is
\\[\\dot{x}=-\\frac{dU}{dx}+\\sigma\\xi,\\]
with \\(\\sigma=0.1\\) and \\(\\xi\\) Gaussian distributed white noise with zero mean and unit variance. This equation produces a time series (Fig. 3c) in which the system abruptly switches between states with exponentially distributed waiting time indicative of a stochastic Poisson process, consistent with the glacial records. We have chosen to put the glacial equilibrium at 10 times present dust in agreement with observations of dust, temperature and precipitation in glacial climates. We believe the far dustier and colder equilibrium indicated by the relationship between dust and precipitation in Fig. 1a is not accessible because dust levels can not increase further once all potential dust source regions have been fully activated. This observation suggests that the progressive increase in the severity of stadials during the Pleistocene culminating in the particularly severe LGM interval can be related to increased availability and activation of dust source regions. Over long time scales the distribution and activation of dust source regions is clearly related to seasonal distribution of insolation and to variations in orography.
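As an illustration, the Langevin equation above can be integrated with an Euler-Maruyama step. The text does not give the functional form of \\(U\\), so the double well below, with minima at \\(x=0\\) and \\(x=1\\) and a barrier at \\(x=0.5\\), is our stand-in; the barrier height, together with \\(\\sigma\\), sets the mean waiting time between transitions.

```python
import numpy as np

def dUdx(x):
    # Stand-in pseudo-potential U(x) = 16 * x**2 * (1 - x)**2, with
    # minima at x = 0 (interstadial) and x = 1 (stadial) and a barrier
    # at x = 0.5; the barrier height (here 1.0) is an assumed value.
    return 32.0 * x * (1.0 - x) * (1.0 - 2.0 * x)

def simulate(n_steps, dt=1e-3, sigma=0.1, x0=0.0, seed=0):
    """Euler-Maruyama integration of dx = -U'(x) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for n in range(1, n_steps):
        x[n] = (x[n - 1] - dUdx(x[n - 1]) * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())
    return x
```

With a barrier this high relative to \\(\\sigma\\), transitions are exponentially rare, mimicking the long, exponentially distributed waiting times of the record; lowering the barrier or raising \\(\\sigma\\) shortens the mean waiting time.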
We have thus far concentrated on the mechanism underlying the stochastic component of climate variability. There is also a substantial component of variability in phase with orbital forcing. Weak, but deterministic, external influences such as orbital-parameter-induced variation in seasonal insolation can produce clear signals in stochastically forced systems such as the dust-destabilized glacial climate by the process of stochastic resonance [13]. This provides an explanation for the appearance of orbital periodicities in the climate record even though the effects of these insolation changes are insufficient to directly drive the climate system. In particular the precessional cycle, which strongly controls the Monsoon, influences dust input from large regions of the planet and would be expected to show prominently in the record of dust-induced climate variation. The highly nonlinear nature of the dust feedback is consistent with the precessional cycle forcing at \\(\\sim\\)20 kyr producing both the early Pleistocene \\(\\sim\\)40 kyr and late Pleistocene \\(\\sim\\)100 kyr glacial cycles if these are seen as nonlinear responses to cyclic forcing, which characteristically results in power appearing at some integer fraction of the forcing signal frequency [25]. The Himalayan uplift over the Neogene would have slowly isolated the interior of Asia from the Monsoon, eventually drying Asia enough that precession-induced aridity variations could interact with integrated stochastic weather variation to generate sufficient dust to initiate glacial cycles. The long-standing puzzle of the correlation between global glaciations and Northern Hemisphere insolation variations can be explained by the influence of the dust-induced precipitation decrease in the tropics which would effectively transmit the influence across the equator.
## III Discussion
Understanding past climate change presents a fundamental theoretical challenge, and predicting future climate change is important for society. In pursuit of these goals particularly important issues are establishing the physical mechanisms controlling the climate system and placing bounds on the rapidity with which climate change can occur. Evidence preserved in ice cores and widespread tropical and mid-latitude ice and cave deposits reveal that glacial climates were characterized by abrupt stochastic bimodal transitions. These observations constrain mechanisms for explaining glacial climate variability. Abrupt transitions can be produced by thermohaline [27], sea ice [14], and storm track [9] switches, but the effects of these transitions are local and fail to predict the observed abrupt and global involvement of the hydrological cycle. In this work we described a theory implicating the precipitation/dust feedback destabilization of the hydrological cycle in explaining rapid climate transitions.
## Appendix A Radiative-Convective Model
In order to model the mechanism of dust-induced destabilization of glacial climates we use NCAR's single column atmospheric model (SCAM). This model contains all aerosol, cloud, convection, and radiation representations of CAM, NCAR's atmospheric general circulation model. We couple the model atmosphere to a mixed layer ocean with a depth of 50 m, so that the equilibrated model conserves energy, which is important for determining precipitation. We apply an ocean heat flux divergence of -70 W m\\({}^{-2}\\) to this slab ocean to represent heat transport to higher latitudes. We apply dust to the model by multiplying the tropical-average of the seasonally-varying standard CAM aerosol dust climatology by a \"dust factor\" at each model level for each of the CAM dust size bins, which represent dust particles with diameter 0.1-1.0 \\(\\mu\\)m, 1.0-2.5 \\(\\mu\\)m, 2.5-5.0 \\(\\mu\\)m, and 5.0-10.0 \\(\\mu\\)m. Each dust bin has characteristic optical properties. We use a sea salt aerosol profile taken from the tropical Pacific and set all other aerosols to zero. We apply a surface wind speed of 8 m s\\({}^{-1}\\) to the model. We find similar results with surface wind speeds of 4 and 12 m s\\({}^{-1}\\). We average model results over 20 years of converged solutions. We use a time step of 1200 s; our results are very similar when we reduce the time step to 600 s.
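The setup in this appendix can be summarized as a configuration sketch. The key names below are our own shorthand, not SCAM's actual namelist variables; the values are taken from the text.

```python
# Illustrative configuration mirroring the appendix description.
SCAM_SETUP = {
    "mixed_layer_depth_m": 50.0,
    "ocean_heat_flux_divergence_W_m2": -70.0,
    "surface_wind_m_s": 8.0,     # 4 and 12 m/s give similar results
    "time_step_s": 1200,         # halving to 600 s changes little
    "averaging_years": 20,
    "dust_bin_diameters_um": [(0.1, 1.0), (1.0, 2.5), (2.5, 5.0), (5.0, 10.0)],
}

def scaled_dust_profile(base_profile, dust_factor):
    """Multiply the climatological dust mixing ratio at every model
    level and in every size bin by a single dust factor."""
    return [[level * dust_factor for level in bin_profile]
            for bin_profile in base_profile]
```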
###### Acknowledgements.
This work was partially supported by NSF ATM-0736022 and NSF ATM-0754332.
## References
* (1) Alley, R., D. Meese, C. Shuman, A. Gow, and K. Taylor (1993), Abrupt increase in Greenland snow accumulation at the end of the Younger Dryas event, _Nature_, _362_, 527-529.
* (4) Chappellaz, J., T. Bluniert, D. Raynaud, and J. Barnola (1993), Synchronous changes in atmospheric CH\\({}_{4}\\) and Greenland climate between 40 and 8 kyr BP, _Nature_, _366_, 443-445.
* (5) Clement, A., and L. Peterson (2008), Mechanisms of abrupt climate change of the last glacial period, _Reviews of Geophysics_, (2006RG000204).
* (6) Ditlevsen, P., and O. Ditlevsen (2009), On the stochastic nature of the rapid climate shifts during the last ice age, _J Climate_, pp. 446-457.
* (7) Ditlevsen, P. (2000), Observation of \\(\\alpha\\)-stable noise induced millennial climate changes from an ice-core record, _Geophys. Res. Lett._, pp. 446-457.
* (9) Farrell, B. F., and P. J. Ioannou (2003), Structural stability of turbulent jets, _JAS_, _60_(17), 2101-2118.
* (11) Fleitmann, D., S. Burns, M. Mudelsee, and U. Neff (2003), Holocene forcing of the Indian monsoon recorded in a stalagmite from southern Oman, _Science_, pp. 1367-1369.
* (14) Gildor, H., and E. Tziperman (2003), Sea-ice switches and abrupt climate change, _Philosophical Transactions: Mathematical_, _361A_, 1935-1944.
* (15) Guthrie, R. D. (2001), Origin and causes of the mammoth steppe: a story of cloud cover, woolly mammal tooth pits, buckles, and inside-out Beringia, _Quaternary Science Reviews_, _20_, 549-574.
* (17) Mayewski, P., L. Meeker, S. Whitlow, M. Twickler, M. Morrison, R. Alley, P. Bloomfield, and K. Taylor (1993), The atmosphere during the Younger Dryas, _Science_, _261_, 195-197.
* (19) Ruth, U., M. Bigler, R. Rothlisberger, M.-L. Siggaard-Andersen, S. Kipfstuhl, K. Goto-Azuma, M. E. Hansson, S. J. Johnsen, H. Lu, and J. P. Steffensen (2007), Ice core evidence for a very tight link between North Atlantic and east Asian glacial climate, _Geophys. Res. Lett._, _34_(L03706).
* (20) Schepanski, K., I. Tegen, M. C. Todd, B. Heinold, G. Bonisch, B. Laurent, and A. Macke (2009), Meteorological processes forcing saharan dust emission inferred from msg-seviri observations of subdaily dust source activation and numerical models, _J. Geophys. Res._, _114_(D10), 1-18.
* (21) Schubert, S., M. Suarez, P. Pegion, R. Koster, and J. Bacmeister (2004), On the cause of the 1930s dust bowl, _Science_, _303_(5665), 1855-1859.
* (23) Sun, Y., J. Chen, S. C. Clemens, Q. Liu, J. Ji, and R. Tada (2006), East Asian monsoon variability over the last seven glacial cycles recorded by a loess sequence from the northwestern Chinese Loess Plateau, _Geochem. Geophys. Geosyst._, _7_(12), 1-16, doi:10.1029/2006GC001287.
* (24) Thompson, L., E. Mosley-Thompson, M. Davis, J. Bolzan, J. Dai, L. Klein, T. Yao, X. Wu, Z. Xie, and N. Gundestrup (1989), Holocene-late Pleistocene climatic ice core records from Qinghai-Tibetan Plateau, _Science_, _246_, 474-477.
* (25) Tziperman, E., M. E. Raymo, P. Huybers, and C. Wunsch (2006), Consequences of pacing the Pleistocene 100 kyr ice ages by nonlinear phase locking to Milankovitch forcing, _Paleoceanography_, _21_(4).
* (26) Wang, Y., H. Cheng, R. Edwards, Z. An, and J. Wu (2001), A high-resolution absolute-dated late Pleistocene monsoon record from Hulu Cave, China, _Science_, pp. 2345-2348.
* (28) Winckler, G., R. Anderson, M. Fleisher, and D. McGee (2008), Covariant glacial-interglacial dust fluxes in the equatorial Pacific and Antarctica, _Science_, _320_, 93-96.
* (31) Yung, Y., T. Lee, C. Wang, and Y. Shieh (1996), Dust: A diagnostic of the hydrologic cycle during the Last Glacial Maximum, _Science_, _271_, 962.
Figure 1: **Regimes in tropical stability and precipitation determined by atmospheric dust level.** (a) The precipitation (blue circles), tropical surface temperature (red x’s), and the difference between mid-tropospheric (515 mb) and surface potential temperature (red diamonds) as a function of the factor by which current tropical dust levels are increased (dust factor). (b) The absorption of solar radiation by the surface (blue circles), the atmosphere (green +’s), and the sum of the surface and atmosphere (red x’s) as a function of the dust factor, the factor by which modern dust levels are multiplied. In State 1 convection maintains the marginal convective stability of the troposphere. In State 2 absorption of solar energy by midtropospheric dust partially replaces convection in maintaining marginal convective stability. In State 3 solar absorption by dust causes an inversion.
Figure 2: **Atmospheric stability and deep convection in each of the dust-mediated states.** (a) Vertical profile of the annual-mean tropical dust mixing ratio in the current climate (dust factor of one). (b) Vertical profiles of atmospheric temperature for different dust factors. Moist adiabats with surface temperatures of \\(12.5^{\\circ}\\), \\(27.5^{\\circ}\\), and \\(22.5^{\\circ}\\) are plotted as dashed black lines. (c) Net precipitation production (precipitation minus evaporation) by deep convection. A dust factor of 1 is used to demonstrate State 1, 10 for State 2, and 100 for State 3.
Figure 3: **Langevin Model of Dust Transitions.** (a) Pseudo-potential corresponding to the stadial and interstadial equilibria and to the switch between them at \\(x=0.5\\), where \\(x=log_{10}(dust)\\). (b) Probability of occurrence of states. (c) Time series of transitions between states with waiting time between transitions of 1600 years. | The climate record preserved in polar glaciers, mountain glaciers, and widespread cave deposits shows repeated occurrence of abrupt global transitions between cold/dry stadial and warm/wet interstadial states during glacial periods. These abrupt transitions occur on millennial time scale and in the absence of any known global-scale forcing. Here a theory is advanced implicating a feedback between atmospheric dust and the hydrological cycle in producing these abrupt transitions. Calculations are performed using a radiative-convective model that includes the interaction of aerosols with radiation to reveal the mechanism of the dust/precipitation interaction feedback process and a Langevin equation is used to model glacial climate destabilization by this mechanism. This theory explains the observed bimodal, stochastic, and abrupt nature of the transitions as well as their intrinsic connection with the hydrological cycle.
pacs: 47., 47.27.-i, 47.20.Ft, 47.20.Ky
Forestry Application
\\({}^{1}\\) Yawen Lu
e-mail: [email protected]
\\({}^{1}\\) Zhuoyang Sun
e-mail: [email protected]
\\({}^{2}\\) Linyuan Shao
e-mail: [email protected]
\\({}^{3}\\) Qianyu Guo
e-mail: [email protected]
\\({}^{1}\\) Yunhan Huang
e-mail: [email protected]
\\({}^{2}\\) Songlin Fei
e-mail: [email protected]
\\({}^{1}\\) Victor Chen
e-mail: [email protected]
## 1 Introduction
On a cold winter night, you carry a backpack with a 64-bit LiDAR sensor to collect forest data such as LiDAR points and IMU. Because the wild forest is so dense and large, with extremely rugged ground full of shrubs, it is easy to get scratched or lost and not know where you are or where you have already recorded. But this is just a small area, and your goal is to cover as much diversity, weather, and area as possible, which could take half a year. You complain about whatever and go back to sleep, hoping everything will be fine tomorrow.
This scene is a miniature of what forestry researchers and professionals face when collecting point clouds in wild forests. When comparing with the most recent equipment for collecting LiDAR points in forest scenarios (see Figure 2), we naturally raise three questions: **i)** Is it possible to use simulation data as an efficient complement when evaluating point cloud algorithms? **ii)** How can we generate simulation data that are close to the data collected by a real LiDAR sensor? **iii)** What applications can this simulation dataset be used for that will benefit forestry professionals and education?
Real-world collection in a structured environment, such as a tract of planted forest, allows end-to-end testing of the developed system, but it is limited to a very small number of test cases because it is very expensive and time-consuming to cover diverse tree species and regions. In addition, manual measurement and data annotation rely on post-processing by human labelers, which inevitably introduces subjective errors and noise into the training labels, especially for trees that are out of range of the LiDAR (either too close or too far away). Given these limitations, a simulation system for LiDAR scanning is necessary and important, to automatically generate high-quality, accurate data with error-free labels.
There have already been some attempts to recognize the importance of sensor simulation, dating back to the early robot simulators, such as Gazebo and OpenRave [3, 14], which support sensor simulation through physics and graphics engines. More recently, advanced real-time rendering techniques have been used in autonomous driving simulators such as CARLA and AirSim [4, 22]. However, these existing simulators primarily focus on urban driving scenarios, relying on manually created 3D assets and simplified physics, resulting in simulations that look fake and are very limited in variety. This limits the wide use of existing methods to be applied in wild forests.
Bridging the gap between simulation and the real world requires us to design a better model of the real-world environment and the distribution of the LiDAR point cloud. In this work, to address the three questions raised earlier, we propose the _LiDAR-Forest_ dataset, which focuses on LiDAR point cloud simulation for challenging wild forest scenes. As a new, efficient, and realistic simulation system, it simulates an actual LiDAR in a realistic manner. The entire system consists of _three stages and five novel modules_: The asset creation stage creates each core component, such as the LiDAR, trees of different species, and a customizable landscape. The scene generation stage combines the individual components together with a random selector for the number and location distribution. At the simulation stage, our approach combines the physics-based scene with the LiDAR sensor model and records the generated point cloud to disk. The five modules (data generation module, rotation module, error simulation module, human motion module, and scan path module) contribute to a more realistic simulation by introducing motion and sensor noise, narrowing the gap between the simulation and real-world backpack collection.
In the remainder of the paper, we describe the background of LiDAR simulation and the relevant LiDAR-based applications in forestry in Sec. 2, the design and creation of our dataset and metrics in Sec. 3, the extensibility and potential applications in Sec. 4, a discussion of future work in Sec. 5, and a conclusion summarizing the work in Sec. 6. We hope the simulation system and data can catalyze a transformation in simulation systems and inspire new insights in the digital forestry community.
## 2 Background
LiDAR point clouds have been widely used for various tasks in forestry, including tree counting, species identification, DBH (diameter at breast height) measurements, volume estimation, and structural diversity quantification. In this section, we review recently released LiDAR point cloud datasets in forestry, followed by an introduction to recent simulation systems. Finally, we summarize LiDAR-based applications.
**LiDAR Dataset for Forestry.** A LiDAR sensor is capable of measuring the distance to a set of adjacent points using laser beams and generating a point cloud in 3D. Considering its high measurement accuracy, data collected by LiDAR sensors have been widely used in forestry research. Weiss and Biber [23] proposed a LiDAR-based plant detection and segmentation approach, while demonstrating the advantages of using LiDAR under different weather conditions. To explore routes in a GPS-free environment, Malavazi et al. [20] introduced a method to extract lines from point cloud data. The approach is robust even when the plant condition is less than ideal. Furthermore, a LiDAR-based forest management strategy was proposed to monitor and assess various forest attributes, such as tree height growth, canopy volume, and cover estimates [25]. According to [2], combined with machine learning algorithms, data collected using LiDAR sensors can be applied to classify plant species. In addition, LiDAR is also used for forest fire prevention, soil erosion detection, yield prediction, etc.
**LiDAR Simulation.** LiDAR simulation [5, 21] simulates the operation and performance of LiDAR systems that use laser beams to measure the distance and surface characteristics of targets. It is a popular technique in many fields such as autonomous driving, robotics, and virtual reality. LiDARsim [21] uses real-world data, physical principles, and machine learning algorithms to simulate authentic LiDAR sensor data. Fang et al. [5] presented an augmented LiDAR simulation system that automatically produces annotated point clouds for 3D obstacle perception in autonomous driving. Li et al. [15] proposed a scalable simulation pipeline to transfer annotated point clouds across LiDAR embodiments, rendering synthetic data streams that emulate alternative densities and placements to generalize models.
**LiDAR-based Applications.** In addition to forestry, point clouds generated from LiDAR have huge potential in various tasks and domains [6, 9, 10, 11, 13, 18, 24], especially in computer vision, augmented reality (AR), and virtual reality (VR). Hasan et al. [9] indicate that LiDAR can be a powerful tool in computer vision tasks, such as object detection, person tracking, and property estimation. Ghallabi et al. [6] used multi-layer LiDAR data to detect lane markings, which were matched to a prior map using particle filtering to achieve improvements over standard GPS solutions. Jacobsen and Teizer [12] proposed a novel worker safety monitoring system using LiDAR for precise real-time presence detection near hazards, demonstrably improving over GPS solutions when tested in a virtual environment. Furthermore, in [8], the researchers introduced a mobile augmented reality platform which utilized LiDAR point cloud data to visually render real-world object dimensions on a phone.
## 3 LiDAR-Forest Dataset
In this part, we present the methodology and specifics of the _LiDAR-Forest_ dataset. We developed a procedure-based LiDAR simulator to generate an authentic dataset. It is versatile and can be applied to various LiDAR sensors and configurations (e.g., rotation speed) with just a few key settings, such as scan pattern, beam quantity, and field of view (FOV). This procedure-based simulator faithfully follows the steps and accuracy characteristics of real scanning.
### _Dataset Simulation_
We chose the Velodyne VLP-16 as the prototype for the simulated LiDAR; it is a widely used model in the current backpack LiDAR field, whose stability and reliability have been validated by [7, 17]. An overview of the framework is shown in Figure 1, which consists of 5 modules: the _Data Generation Module_, the _Rotation Module_, the _Error Simulation Module_, the _Human Movement Module_, and the _Scanning Path Module_. We detail each module's function and the dataset format below.
#### 3.1.1 Data Generation Module
The Data Generation Module involves simulating LiDAR beams for generating data in a virtual environment. Instead of directly replicating LiDAR beams, our approach is to create a prototype model and affix thin rectangles to it. These rectangles act as simulated beams and move in tandem with the base model, evenly distributed within a sector corresponding to the LiDAR's field of view (FOV). The length of each rectangle matches the maximum distance covered by a real LiDAR.
This strategy makes effective use of Unreal Engine 5's collision detection system. By employing this system, collision status between the rectangles and scene elements (such as trees or the ground) can be queried. When a collision is detected, the coordinates of the collision point are recorded, mirroring the process of a physical LiDAR.
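The collision-query logic can be sketched outside the engine as follows. This is a minimal Python stand-in for Unreal Engine 5's collision system: `scene_intersect` is a hypothetical callback representing the engine's collision query, illustrated here with a flat ground plane rather than a full forest scene.

```python
import numpy as np

def simulate_beam_hits(origin, beam_dirs, max_range, scene_intersect):
    """Record collision points for each simulated beam.

    scene_intersect(origin, direction) returns the hit distance
    (or None), standing in for the engine's collision query.
    """
    points = []
    for d in beam_dirs:
        dist = scene_intersect(origin, d)
        if dist is not None and dist <= max_range:
            points.append(origin + dist * d)  # collision coordinates
    return np.array(points)

# Toy scene: flat ground plane at z = 0.
def ground_intersect(origin, direction):
    if direction[2] >= 0:            # beam never reaches the ground
        return None
    return -origin[2] / direction[2]

origin = np.array([0.0, 0.0, 1.8])   # sensor carried ~1.8 m above ground
beam = np.array([0.0, 0.0, -1.0])    # one beam pointing straight down
pts = simulate_beam_hits(origin, [beam], 100.0, ground_intersect)
```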
Figure 2: Conceptual difference between the real-world LiDAR backpack scanner (left) and our simulated LiDAR scanner in wild forests (right).

The resulting data format encapsulates various key elements, including (x, y, z, leaf-wood labels, semantic labels, and instance labels), where (x, y, z) represents the coordinates of the collision point; leaf-wood labels indicate the type of object, i.e., whether it is a leaf or a trunk; semantic labels refer to the category of the object, used to differentiate between different kinds of objects, e.g., ground, tree, stone, etc.; and instance labels refer to a specific individual of a semantic type, e.g., Tree A, Tree B, and Tree C. In addition, two types of data are generated: relative and absolute. The relative data type reflects the LiDAR's perspective, detailing coordinates relative to its position in the virtual space, while the absolute data type provides a broader context, offering real coordinates in the virtual space.
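The two coordinate conventions can be illustrated with a small sketch. The record fields and the yaw-only sensor pose below are simplifications for illustration, not the dataset's exact serialization format.

```python
import numpy as np

def to_relative(p_abs, sensor_pos, sensor_yaw):
    """Convert an absolute collision point into the sensor's frame."""
    c, s = np.cos(-sensor_yaw), np.sin(-sensor_yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return R @ (np.asarray(p_abs) - np.asarray(sensor_pos))

record = {
    "xyz_abs": [10.0, 5.0, 0.3],   # absolute coordinates in the scene
    "leaf_wood": 1,                # 0 = leaf, 1 = trunk (illustrative coding)
    "semantic": "tree",            # ground / tree / stone / ...
    "instance": "tree_A",          # individual within the semantic type
}
# Sensor at (8, 5, 1.5) with zero yaw: relative view of the same point.
record["xyz_rel"] = to_relative(record["xyz_abs"], [8.0, 5.0, 1.5], 0.0).tolist()
```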
#### 3.1.2 Rotation Module
To accommodate various scanning patterns, we introduce a versatile rotation configuration employing three key parameters: azimuth resolution, vertical angles, and spin rate.
Starting with azimuth resolution, this parameter defines the level of detail in the horizontal scanning dimension. By adjusting the azimuth resolution, we can customize the precision of scans based on the specific demands of the environment or application. The second parameter, vertical angles, plays a crucial role in determining the extent of coverage in the vertical dimension. This flexibility allows us to tailor the scanning process to different spatial requirements. Whether it's capturing data from specific heights or obtaining a comprehensive vertical profile, adjusting vertical angles enables us to align the scanning methodology with the objectives of the given task. Lastly, the spin rate parameter governs the speed at which the scanning device rotates. This dynamic control over the spin rate is essential for optimizing scanning efficiency. Depending on the application, a faster or slower spin rate can be employed to strike a balance between swift data acquisition and the precision required for accurate analysis.
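A minimal sketch of how the first two parameters could define the beam directions of one full sweep. The VLP-16-like values used below (16 channels spanning ±15°, 0.2° azimuth steps) are illustrative assumptions, not necessarily the dataset's exact configuration.

```python
import numpy as np

def beam_directions(azimuth_res_deg, vertical_angles_deg):
    """Unit direction vectors for one full 360-degree sweep."""
    azimuths = np.arange(0.0, 360.0, azimuth_res_deg)
    dirs = []
    for va in vertical_angles_deg:
        v = np.deg2rad(va)
        for az in np.deg2rad(azimuths):
            dirs.append([np.cos(v) * np.cos(az),
                         np.cos(v) * np.sin(az),
                         np.sin(v)])
    return np.array(dirs)

# 16 channels from -15 to +15 degrees, 0.2-degree azimuth resolution.
vert = np.linspace(-15.0, 15.0, 16)
dirs = beam_directions(0.2, vert)   # 16 * 1800 = 28800 directions
```

A finer azimuth resolution or more vertical channels directly increases the number of rays cast per sweep, which is how the scan-density trade-offs described above materialize.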
#### 3.1.3 Error Simulation Module
The Error Simulation Module (see Fig. 3) is crucial for emulating the Velodyne VLP-16 LiDAR sensor. We refer to the VLP-16 User Manual for insights into beam divergence. However, data phasing challenges prompt the need for a segmentation function. This function, proposed within our module, accurately simulates beam divergence despite data phasing.
Addressing the utilization of the error queried from the segmentation function involves two proposed methods. One approach is to calculate the error at the conclusion of the process, incorporating it by adding an offset to the collision point's coordinates. It assumes the true collision point lies inside an ellipsoidal region centered at the detected collision point, whose semi-major axis length is the horizontal divergence and whose semi-minor axis length is the vertical divergence. Under this hypothesis, the offset should be a random value pair (x, y) constrained to lie within that ellipsoidal region.
Alternatively, an initial adjustment to the beam's angle based on the error is suggested, leading to the determination of coordinates with the incorporated error. The angle has two parts: a vertical angle and a horizontal angle. To calculate this angle, we first obtain the distance between the starting position (the coordinates of the prototype model) and the collision point; from this distance, the segmentation function gives the maximum divergence at the current stage. Dividing the divergence by the distance yields the tangent value, and applying the inverse trigonometric function gives the required angle.
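Both options can be sketched as follows, assuming the divergence values are supplied by the segmentation function and that, for small divergence, tan(angle) = divergence / distance. The rejection sampling for the ellipse and the numeric values are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def offset_in_ellipse(h_div, v_div):
    """Option 2: sample a random offset inside the divergence ellipse
    centered on the detected collision point (rejection sampling)."""
    while True:
        x = rng.uniform(-h_div, h_div)
        y = rng.uniform(-v_div, v_div)
        if (x / h_div) ** 2 + (y / v_div) ** 2 <= 1.0:
            return x, y

def divergence_angle(distance, divergence):
    """Option 1: angular correction, with tan(angle) = divergence / distance."""
    return np.arctan(divergence / distance)

dx, dy = offset_in_ellipse(0.1, 0.05)   # horizontal / vertical divergence (m)
ang = divergence_angle(10.0, 0.1)       # hit 10 m away, 0.1 m divergence
```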
#### 3.1.4 Human Movement Module
Mobile mapping systems (e.g., backpack or hand-held LiDAR systems) have been used in real forest scenes for faster data acquisition [17]. Our study simulates the scenario where a person walks through a forest with a LiDAR device on their back, so the simulation needs to take into account the vertical and horizontal swaying that occurs during a person's walk, even if it is only slight. To achieve a more realistic dataset in this context, we incorporate a human movement simulation into the system.
For simplicity, we divide human movement into two primary components: vertical and horizontal movement. The vertical movement, resembling an up-and-down motion, and the horizontal movement, capturing side-to-side swaying, can both be effectively represented using sine functions. This simplification allows us to inject a level of realism into our simulations, mirroring the intricacies of human motion as it influences LiDAR data acquisition in a forest environment.
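A minimal sketch of the sine-based sway; the amplitudes and frequencies below are illustrative assumptions, not values measured from a real walker.

```python
import numpy as np

def sway_offset(t, amp_v=0.03, freq_v=2.0, amp_h=0.02, freq_h=1.0):
    """Backpack sway at time t (s): vertical bobbing and horizontal
    side-to-side motion, both modeled as sine waves. Amplitudes (m)
    and frequencies (Hz) are illustrative placeholders."""
    dz = amp_v * np.sin(2.0 * np.pi * freq_v * t)  # up-and-down motion
    dy = amp_h * np.sin(2.0 * np.pi * freq_h * t)  # side-to-side sway
    return dy, dz

dy0, dz0 = sway_offset(0.0)   # no offset at the start of the walk
```

At each simulation frame, the returned offsets would be added to the sensor's nominal position along the scanning path.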
#### 3.1.5 Scanning Path Module
This module is designed with two primary functionalities: assigning the scanning route and adjusting the running time (total frames) in accordance with the spinning rate and azimuth resolution.
In the process of assigning the scanning route, we leverage the intuitive capabilities of the spline component in UE5. This component facilitates a user-friendly approach, allowing for the seamless adjustment of the scanning path through a drag-and-drop interface. On the other hand, the adjustment of running time poses a unique challenge due to the frames per second (FPS) limitations inherent in computer systems. Unlike real LiDAR devices that can achieve Rotation per Minute (RPM) speeds as high as 1200 rpm, computer-based FPS data acquisition is constrained. Assuming a fixed 30 frames per second, and considering the Velodyne VLP-16's User Manual specification of 8 data points collected per circle per beam, totaling 160 data points per second per beam, the required adjustment factor is derived from the ratio 160 / 30. This adjustment ensures that our simulated scanning time aligns proportionally with the operational characteristics of the Velodyne VLP-16, offering a realistic representation within the computational constraints of the simulation environment.
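The adjustment factor described above can be reproduced in a few lines; the numbers (1200 rpm, 8 points per circle per beam, 30 fps) are the ones given in the text.

```python
def frames_adjustment(rpm=1200, pts_per_circle=8, sim_fps=30):
    """Ratio between the real per-beam data rate and the simulator FPS."""
    circles_per_sec = rpm / 60.0                    # 1200 rpm -> 20 circles/s
    pts_per_sec = circles_per_sec * pts_per_circle  # 160 points/s per beam
    return pts_per_sec / sim_fps                    # ~5.33 points per frame

factor = frames_adjustment()
```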
### _Metrics and Evaluations_
**Metrics.** To evaluate the LiDAR simulation (e.g., LiDAR types, placement, and scene complexity), we plan to follow [1] and use two point cloud distribution metrics: Infrastructure Density (InfraD) and Infrastructure Normalized Uniformity Coefficient (InfraNUC).
InfraD describes the density of the point cloud within the specific region of interest (called InfraLOB), defined as:
\\[InfraD=\\frac{N}{S} \\tag{1}\\]
where \\(N\\) is the number of points inside the InfraLoB regions and \\(S\\) is the area of InfraLoB in the Unreal Engine.
Figure 3: Specific embodiment of the two error simulation methods: _Option 1_ corresponds to the scheme that modifies the angle, and _Option 2_ corresponds to the scheme modifying the coordinates directly.
As a variant of the Normalized Uniformity Coefficient (NUC) [16], InfraNUC is a measure of the overall uniformity of the point set across all target objects. It is calculated as the standard deviation of the normalized point counts in randomly selected disk regions:
\[\textit{InfraNUC}=\sqrt{\frac{1}{D}\sum_{i=1}^{D}\left(\frac{n_{i}}{N\cdot p}-\frac{1}{D}\sum_{j=1}^{D}\frac{n_{j}}{N\cdot p}\right)^{2}} \tag{2}\]
where \\(n_{i}\\) is the number of points inside the disk regions, \\(N\\) is the number of points within the InfraLOB region, \\(D\\) is the total number of disks, and \\(p\\) is the ratio of disk area to InfraLOB area.
Beyond the metrics for the point cloud itself, we will consider additional metrics for relevant applications of the dataset, such as mean average precision (mAP), which measures each detected object's overlap over the specified categories for the task of 3D tree detection, and Chamfer distance [16], which measures the similarity between the estimated point cloud \(P_{\textit{est}}\in R^{N\times 3}\) and the ground truth \(P_{\textit{gt}}\in R^{N\times 3}\) in point cloud reconstruction and interpolation.
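One common (symmetric, unsquared) form of the Chamfer distance between two point sets can be sketched as:

```python
import numpy as np

def chamfer_distance(p_est, p_gt):
    """Symmetric (unsquared) Chamfer distance between two N x 3 point sets:
    mean nearest-neighbor distance in both directions."""
    d = np.linalg.norm(p_est[:, None, :] - p_gt[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

p_est = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
p_gt = p_est + np.array([0.0, 0.0, 1.0])  # same points shifted 1 m in z
cd = chamfer_distance(p_est, p_gt)        # 1.0 + 1.0 = 2.0
```

Note that some works use squared distances or average the two directions instead of summing them; the convention should match whichever variant [16] adopts.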
**Evaluations.** In Figure 4, we illustrate qualitative results from different types of LiDAR sensor data (VLD-8 bit, VLD-16 bit, VLD-64 bit, and VLD-256 bit), where the background is from the same scenario and the foreground objects are placed at the same locations. It can be observed that as the number of bits increases, the point cloud becomes denser and more detailed. For better visualization, we crop a small part and zoom in to show the detailed point structure of the object.
## 4 Potential Applications
The _LiDAR-Forest_ dataset provides non-error labels, which allows us to train data-intensive algorithms for various forestry tasks and evaluate them using the high-quality ground truth [17, 19, 26].
**Tree Species Identification.** In field scanning, species information requires a forestry professional to identify trees one by one, which hinders fast ground-truth collection. Tree models in our simulation process can be customized for any species, so the species information is directly known without identification. This advantage is helpful for species-ID dataset creation and algorithm development.
**Stem Mapping and Measuring.** Forest inventory is a basic but important task in forestry practice, and it includes stem mapping (counting and localization) and attribute measurement (DBH and tree height). Since instance-level labels are provided in our dataset, we can easily conduct algorithm development and evaluation for individual tree segmentation that can be used for stem mapping. With a simulated dataset, one can generate forest point clouds with each tree's location, DBH, and height. Furthermore, key data attributes such as point density, noise level, and occlusion can be controlled as well, which helps evaluate how different scanning conditions and point cloud quality affect the accuracy of tree diameter estimation.
**Volume and Biomass Estimation.** Derivation of tree volume and biomass from a point cloud requires wood points, but it is hard to separate wood points from the tree point cloud, as small branch points and foliage points are often stuck together. Therefore, obtaining accurate leaf-wood labels from real point clouds is almost impossible. Thanks to our simulator, we can generate error-free leaf-wood labels, as wood and leaf are known. Accurate leaf-wood labels help build more reliable algorithms for tree volume estimation. With species information from our dataset, biomass can also be calculated.
**Digital Twinning and Simulation for Virtual Experiences.** Other than ecological attribute measurement, our simulated dataset can also be used for forest scene reconstruction of digital twins. The reconstructed forest provides valuable insights into ecological processes, forest dynamics, and environmental changes. The detailed reconstructions can be used for VR and AR applications, offering immersive experiences for education, training, or entertainment.
## 5 Future works
For future work, we plan to use the simulated dataset in multimodal sensor fusion. This will combine the simulated point cloud with additional synthesized RGB imagery, depth maps, and other modalities for multi-sensor perception research.
## 6 Conclusion
This paper introduces the _LiDAR-Forest Dataset_, which addresses the lack of good datasets for LiDAR point cloud simulation in wild forests. To establish the simulation platform, we present asset-scene-simulation stages for setting up a complex forest simulator. To replicate real-world LiDAR sensor effects, we propose five modules, including sensor error and human movement simulations. We will release the dataset to the public, hoping to promote related research in 3D point cloud technology for the forestry, engineering, and education communities. In the future, our dataset will support additional types of LiDAR sensors and more learning-based perception tasks such as point cloud segmentation, reconstruction, and interpolation.
## 7 Acknowledgments
This ongoing work is supported by the U.S. Department of Agriculture (USDA) under grant No. 20236801238992.
Figure 4: Illustration of the simulation results from different types of LiDAR sensors in our _LiDAR-Forest_ dataset. Top left: simulation result from 8-bit LiDAR; Top right: simulation result from 16-bit LiDAR; Bottom left: simulation from 64-bit LiDAR; Bottom right: simulation from 256-bit LiDAR. All the given point clouds are generated from the same scenario and the same viewpoint using different sensor patterns.
## References
* [1] Xinyu Cai, Wentao Jiang, Runsheng Xu, Wenquan Zhao, Jiaqi Ma, Si Liu, and Yikang Li. Analyzing infrastructure lidar placement with realistic lidar simulation library. In _2023 IEEE International Conference on Robotics and Automation (ICRA)_, pages 5581-5587. IEEE, 2023.
* [2] Udit Debangshi. Lidar sensor: Applications in agriculture. _Chronicle of Bioresource Management_, 6(Sep, 3):066-071, 2022.
* [3] Rosen Diankov and James Kuffner. Openrave: A planning architecture for autonomous robotics. _Robotics Institute, Pittsburgh, PA, Tech. Rep. CMU-RI-TR-08-34_, 79, 2008.
* [4] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. Carla: An open urban driving simulator. In _Conference on robot learning_, pages 1-16. PMLR, 2017.
* [5] Jin Fang, Dingfu Zhou, Feilong Yan, Tongtong Zhao, Feihu Zhang, Yu Ma, Liang Wang, and Ruigang Yang. Augmented lidar simulator for autonomous driving. _IEEE Robotics and Automation Letters_, 5(2):1931-1938, 2020.
* [6] Farouk Ghallabi, Fawzi Nashashibi, Ghayath El-Haj-Shhade, and Marie-Anne Mittet. Lidar-based lane marking detection for vehicle positioning in an hd map. In _2018 21st International Conference on Intelligent Transportation Systems (ITSC)_, pages 2209-2214. IEEE, 2018.
* [7] Craig L Glennie, Arpan Kusari, and Aldo Facchin. Calibration and stability analysis of the vlp-16 laser scanner. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, 40:55-60, 2016.
* [8] S Gupta and B Lohani. Augmented reality system using lidar point cloud data for displaying dimensional information of objects on mobile phones. _ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, 2:153-159, 2014.
* [9] Mahmudul Hasan, Junichi Hanawa, Riku Goto, Ryota Suzuki, Hisato Fukuda, Yoshinori Kuno, and Yoshinori Kobayashi. Lidar-based detection, tracking, and property estimation: A contemporary review. _Neurocomputing_, 2022.
* [10] Wenchong He, Zhe Jiang, Tingsong Xiao, Zelin Xu, Shigang Chen, Ronald Fick, Miles D. Medina, and Christine Angelini. A hierarchical spatial transformer for massive point samples in continuous space. In _Thirty-seventh Conference on Neural Information Processing Systems_, 2023.
* [11] Jing Hu, Xincheng Wang, Ziheng Liao, and Tingsong Xiao. M-gcn: Multi-scale graph convolutional network for 3d point cloud classification. In _2023 IEEE International Conference on Multimedia and Expo (ICME)_, pages 924-929, 2023.
* [12] Emil L Jacobsen and Jochen Teizer. Real-time lidar for monitoring construction worker presence near hazards and in work areas in a virtual reality environment. In _Proceedings of the EG-ICE 2021 Workshop on Intelligent Computing in Engineering, Berlin, Germany_, pages 592-602, 2021.
* [13] Elise Kaartinen, Kyle Dunphy, and Ayan Sadhu. Lidar-based structural health monitoring: Applications in civil infrastructure systems. _Sensors_, 22(12):4610, 2022.
* [14] Nathan Koenig and Andrew Howard. Design and use paradigms for gazebo, an open-source multi-robot simulator. In _2004 IEEE/RSJ international conference on intelligent robots and systems (IROS)(IEEE Cat. No. 04CH37566)_, volume 3, pages 2149-2154. IEEE, 2004.
* [15] Chenqi Li, Yuan Ren, and Bingbing Liu. Pcgen: Point cloud generator for lidar simulation. In _2023 IEEE International Conference on Robotics and Automation (ICRA)_, pages 11676-11682. IEEE, 2023.
* [16] Ruihui Li, Xianzhi Li, Chi-Wing Fu, Daniel Cohen-Or, and Pheng-Ann Heng. Pu-gan: a point cloud upsampling adversarial network. In _Proceedings of the IEEE/CVF international conference on computer vision_, pages 7203-7212, 2019.
* [17] Yi-Chun Lin, Jinyuan Shao, Sang-Yeop Shin, Zainab Saka, Mina Joseph, Raja Manish, Songlin Fei, and Ayman Habib. Comparative analysis of multi-platform, multi-resolution, multi-temporal lidar data for forest inventory. _Remote Sensing_, 14(3):649, 2022.
* [18] Yawen Lu, Yuxing Wang, Dearth Parikh, Yuan Xin, and Guoyu Lu. Extending single beam lidar to full resolution by fusing with single image depth estimation. In _2020 25th International Conference on Pattern Recognition (ICPR)_, pages 6343-6350. IEEE, 2021.
* [19] Yawen Lu, Jie Zhang, Su Sun, Qianyu Guo, Zhiwen Cao, Songlin Fei, Baijian Yang, and Yingjie Chen. Label-efficient video object segmentation with motion clues. _IEEE Transactions on Circuits and Systems for Video Technology_, 2023.
* [20] Flavio BP Malavazi, Remy Guyonneau, Jean-Baptiste Fasquel, Sebastien Lagrange, and Franck Mercier. Lidar-only based navigation algorithm for an autonomous agricultural robot. _Computers and electronics in agriculture_, 154:71-79, 2018.
* [21] Sivabalan Manivasagam, Shenlong Wang, Kelvin Wong, Wenyuan Zeng, Mikita Sazanovich, Shuhan Tan, Bin Yang, Wei-Chiu Ma, and Raquel Urtasun. Lidarsim: Realistic lidar simulation by leveraging the real world. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 11167-11176, 2020.
* [22] Shital Shah, Debadeepta Dey, Chris Lovett, and Ashish Kapoor. Airsim: High-fidelity visual and physical simulation for autonomous vehicles. In _Field and Service Robotics: Results of the 11th International Conference_, pages 621-635. Springer, 2018.
* [23] Ulrich Weiss and Peter Biber. Plant detection and mapping for agricultural robots using a 3d lidar sensor. _Robotics and autonomous systems_, 59(5):265-273, 2011.
* [24] Yutian Wu, Yueyu Wang, Shuwei Zhang, and Harutoshi Ogai. Deep 3d object detection networks using lidar data: A review. _IEEE Sensors Journal_, 21(2):1152-1171, 2020.
* [25] Michael A Wulder, Christopher W Bater, Nicholas C Coops, Thomas Hilker, and Joanne C White. The role of lidar in sustainable forest management. _The forestry chronicle_, 84(6):807-826, 2008.
* [26] Zelin Xu, Tingsong Xiao, Wenchong He, Yu Wang, and Zhe Jiang. Spatial knowledge-infused hierarchical learning: An application in flood mapping on earth imagery. In _Proceedings of the 31st ACM International Conference on Advances in Geographic Information Systems_, pages 1-10, 2023. | The popularity of LiDAR devices and sensor technology has gradually empowered users from autonomous driving to forest monitoring, and research on 3D LiDAR has made remarkable progress over the years. Unlike 2D images, whose focused area is visible and rich in texture information, understanding the point distribution can help companies and researchers find better ways to develop point-based 3D applications. In this work, we contribute an unreal-based LiDAR simulation tool and a 3D simulation dataset named _LiDAR-Forest_, which can be used by various studies to evaluate forest reconstruction, tree DBH estimation, and point cloud compression for easy visualization. The simulation is customizable in tree species, LiDAR types and scene generation, with low cost and high efficiency.
**Index Terms:** Computing methodologies--Modeling and simulation--Simulation support systems--Simulation environment; Computing methodologies--Computer graphics--Shape modeling--Point-based models. | Provide a brief summary of the text. | 186 |
for Single-Photon Lidar Systems
Jakeoung Koo, Abderrahim Halimi, and Stephen McLaughlin,
This work was supported by the UK Royal Academy of Engineering under the Research Fellowship Scheme (RF/201718/17128) and EPSRC Grants EP/T000907X/1, EP/S000633/1, EP/S026428/1. (_Corresponding author: Abderrahim Halimi._) The authors are with the School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, United Kingdom (e-mail: [email protected]; [email protected]; [email protected]).
## I Introduction
Single-photon light detection and ranging (Lidar) is an emerging technique for reconstructing and analyzing 3D scenes and has a wide range of applications [1, 2]. Using time correlated single-photon counting (TCSPC) technology [3], a single-photon Lidar system builds a histogram of photon counts with respect to their time-of-flights (ToF). Detecting reflected photons relies on a single-photon sensitive detector known as solid-state single-photon avalanche diode (SPAD), while ToFs are obtained by measuring the time difference between the emission of laser pulses and the detection of reflected photons. The acquired histogram contains depth and reflectivity information about the observed objects, and reconstructing such 3D information from single-photon Lidar data has been a subject of very active research [1, 2].
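The per-pixel histogram construction described above can be sketched as follows: each detected photon's time-of-flight is quantized into a TCSPC time bin and counted. The bin width and ToF values used here are illustrative.

```python
import numpy as np

def build_histogram(tofs_ns, n_bins, bin_width_ns):
    """Per-pixel histogram of photon counts y_{n,t}: each detected
    photon's time-of-flight (ns) is quantized into a time bin."""
    bins = np.floor(np.asarray(tofs_ns) / bin_width_ns).astype(int)
    bins = bins[(bins >= 0) & (bins < n_bins)]  # discard out-of-range ToFs
    return np.bincount(bins, minlength=n_bins)

# Three photon detections around 10 ns (illustrative), 1 ns bins.
y = build_histogram([10.5, 10.9, 10.2], n_bins=64, bin_width_ns=1.0)
```

The peak bin of `y` encodes the target's range, while its height relates to the target's reflectivity, which is why depth and reflectivity can be recovered from such histograms.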
Two reconstruction approaches of 3D scenes from single-photon Lidar data have been widely studied: a statistical approach and a data-driven approach. Statistical methods design a statistical model with some prior information and reconstruct 3D scenes, using strategies such as optimization with spatial regularization [4, 5, 6, 7, 8, 9], Markov chain Monte-Carlo (MCMC) [10, 11, 12], expectation-maximization [13, 14] or Plug-and-Play methods [15, 16]. Such methods provide good interpretability in the sense that we can predict the results, depending on the considered observation model and imposed priors, but they often require user-defined parameters and hand-crafted priors. Data-driven approaches using deep learning have recently gained in popularity for single-photon Lidar systems. Existing deep learning algorithms [17, 18, 19, 20] train neural networks on simulated data and aim to generalize to unseen data. Lindell et al. [17] first proposed an end-to-end deep learning model which infers depth profiles from Lidar data. Peng et al. [18] suggested a non-local network and showed a clear benefit for low photon and high noise cases. Despite their excellent performance on challenging data, these deep learning methods lack interpretability, require long running times for high dimensional Lidar data and can present over-smoothing artifacts around 3D surface boundaries.
In this paper, taking advantage of statistical and deep learning approaches, we propose an interpretable and efficient deep learning architecture for high dimensional Lidar data. We design a neural network architecture by unfolding an iterative Bayesian algorithm [21] in the sense that we replace some of its internal operations with neural network blocks. The proposed method is in line with an emerging technique called algorithm unrolling [22, 23]. This technique replaces the steps of a conventional iterative method by neural network blocks, hence exploiting the domain knowledge when designing the network. Following the algorithm unrolling, the proposed neural network is made interpretable via the connection to the underlying Bayesian algorithm [21] and is efficient in terms of the number of network parameters and running time.
Using multiscale information to process single-photon data is an essential component of several state-of-the-art 3D Lidar reconstruction algorithms. This was exploited in the proposed network, which requires an initial estimate of a few multiscale depths as input, instead of the large histogram data cube as in [17, 18]. In this way, the knowledge of the system impulse response function (IRF) is exploited to generate the multiscale depths and the high dimensional data is compressed to the essential information. The network layers mimic the iterative steps of the Bayesian algorithm in [21], which alternates between a weighted median step to choose the best depth scale to represent a pixel, and a soft-thresholding step to account for spatial correlations between pixels. Our conversion relies on a popular tool called attention [24, 25, 26, 27, 28], which computes weights highlighting the features or areas of interest (i.e., areas requiring attention). An attention layer is said to be hard attention [29] if the attention weights are sparse (one-hot encoding), or soft otherwise. In this paper, inspired by the weighted median filter used in [21] and promoting sharp surface boundaries, we consider hard attention to select the best depth scale per pixel, i.e., the one showing the highest attention weight. The proposed network also includes soft attention to improve the 3D object reconstruction by considering local spatial correlations. Results on simulated and real data show the benefit of this model when compared to the state-of-the-art learning-based algorithms [17, 18], as it preserves surface edges, has a lower computational cost (in terms of memory or computational time) and provides uncertainty maps on the predicted depth.
The uncertainty maps are obtained by connection to the underlying Bayesian method [21] without additional complexity, while some previous works [30, 31] require multiple passes of inference and averaging steps to predict the uncertainty of the network's outputs.
In summary, the contributions of this paper are:
* an efficient deep learning model suitable for high-dimensional single-photon Lidar data,
* interpretable neural network blocks, providing uncertainty information on the final depth map,
* a scale selection strategy through a combination of hard and soft attention, showing competitive results when compared to state-of-the-art methods, i.e., fewer artifacts on surface boundaries, and improved robustness to mis-modelling effects.
The remainder of the paper is organized as follows. Section II describes the multiscale observation model for single-photon Lidar measurements. Section III reviews the underlying iterative method [21] resulting from a Bayesian hierarchical model. In Section IV, we present the proposed unrolling model with details on the training procedures. In Section V, we analyze the proposed network and evaluate the performance of our method on simulated data as well as real data. Section VI presents the conclusions and future work.
## II Multiscale observation model
This section presents the considered Poisson-based observation model for single-photon Lidar systems, which is required to derive the underlying Bayesian algorithm in Section III. Akin to [21], we include multiscale information in the observation model. Single-photon Lidar systems provide range information about the scene by measuring the time difference between emission of light pulses and detection of photons. Collecting such time delays, the Lidar system builds a histogram of counts denoted by \(y_{n,t}\in\{0,1,2,\cdots\}\) where \(n\) represents the pixel index and \(t\) the time bin index. The observed photon counts are commonly assumed to follow the Poisson distribution with mean value \(s_{n,t}\), i.e., \(y_{n,t}\sim\mathcal{P}\left(s_{n,t}\right)\)[4, 7]. Assuming one target per pixel \(n\), the observation model for \(s_{n,t}\) can be written as
\\[s_{n,t}=r_{n}g\\left(t-d_{n}\\right)+b_{n}, \\tag{1}\\]
where \\(r_{n}\\) is the reflectivity of the target, \\(d_{n}\\) the depth information of the target, \\(b_{n}\\) the background photons due to ambient light and detector dark counts and \\(g\\) is the system IRF. We approximate the system IRF \\(g\\) by the Gaussian function \\(\\mathcal{N}(t\\,;\\,\\mu,\\sigma^{2})\\) with the mean \\(\\mu\\) and the standard deviation \\(\\sigma\\) and consider that \\(\\sum_{t=1}^{T}g\\left(t-d_{n}\\right)=1\\) for all \\(n\\), for all possible depths of the scene [5, 21]. By assuming independent observations between \\(y_{n,t},\\forall n,t\\), the joint likelihood for \\(\\mathbf{Y}=\\{y_{n,t}\\}\\) can be written as
\\[p\\left(\\mathbf{Y}\\mid\\mathbf{d},\\mathbf{r},\\mathbf{b}\\right)=\\prod_{n=1}^{N}\\prod_{t=1}^{T} \\frac{s_{n,t}^{y_{n,t}}}{y_{n,t}!}\\exp^{-s_{n,t}}, \\tag{2}\\]
where \\(\\mathbf{d}\\), \\(\\mathbf{r}\\), \\(\\mathbf{b}\\) represent the column vectors of size \\(N\\) gathering depth, reflectivity and background parameters, respectively. Without background photons, the maximum likelihood estimate of the reflectivity can be computed as \\(r_{n}^{\\rm ML}=\\bar{s}_{n}=\\sum_{t=1}^{T}y_{n,t}\\) and the depth as
\\[d_{n}^{\\rm ML}=\\arg\\max_{d}\\sum_{t}y_{n,t}\\log g(t-d). \\tag{3}\\]
In this case, the likelihood can be written to be proportional to the following (See Appendix of [21] for the details)
\\[\\begin{split} p\\left(\\mathbf{y}_{n}\\mid r_{n},d_{n}\\right)& \\propto\\mathcal{G}\\left(r_{n};1+\\bar{s}_{n},1\\right)Q\\left(\\mathbf{y}_ {n}\\right)\\\\ &\\times\\mathcal{N}(d_{n};d_{n}^{\\rm ML},\\bar{\\sigma}^{2}),\\end{split} \\tag{4}\\]
where \\(\\mathcal{G}(x\\,;\\,\\cdot,\\cdot)\\) is the gamma distribution with shape and scale parameters, \\(Q\\) is a function of \\(\\mathbf{y}_{n}\\) and \\(\\bar{\\sigma}^{2}:=\\sigma^{2}/\\bar{s}_{n}\\). To handle high noise in Lidar data, it is common to incorporate multiscale information, as is done in statistical methods [7, 21] as well as deep learning works [17, 18, 20]. We employ a similar multiscale approach, using the fact that low-pass filtered histograms (resulting in summing neighbouring pixels) still follow a Poisson distribution. We generate \\(L\\) downsampled histograms \\(\\mathbf{y}_{n}^{(\\ell)}\\) with \\(\\ell\\in\\{2,\\cdots,L\\}\\), by spatially downsampling the original histogram data \\(\\mathbf{y}_{n}^{(l)}:=\\mathbf{y}_{n}\\) with uniform filters. This multiscale data can be efficiently computed using convolution with different uniform kernel sizes. Assuming the same observation model in (4), the likelihood for each downsampled histogram \\(\\mathbf{y}_{n}^{(\\ell)}\\) can be written as
\\[\\begin{split} p\\left(\\mathbf{y}_{n}^{(\\ell)}\\mid r_{n}^{(\\ell)},d_{ n}^{(\\ell)}\\right)&\\propto\\mathcal{G}\\left(r_{n}^{(\\ell)};1+\\bar{s}_{n}^{( \\ell)},1\\right)Q\\left(\\mathbf{y}_{n}^{(\\ell)}\\right)\\\\ &\\times\\mathcal{N}\\left(d_{n}^{(\\ell)};d_{n}^{\\rm ML(\\ell)},\\bar{ \\sigma}^{2(\\ell)}\\right),\\end{split} \\tag{5}\\]
where \\(\\bar{s}_{n}^{(\\ell)}=\\sum_{t=1}^{T}y_{n,t}^{(\\ell)}\\) and \\(\\bar{\\sigma}^{2(\\ell)}=\\sigma^{2}/\\bar{s}_{n}^{(\\ell)}\\). For example, we can consider \\(L=4\\) scales with different kernel sizes such as \\(1\\times 1\\), \\(3\\times 3\\), \\(7\\times 7\\) and \\(13\\times 13\\).
## III Underlying Bayesian algorithm
In this section, we review an underlying Bayesian algorithm proposed by Halimi et al. [21] which inspired the design of our deep learning method in Section IV. This method [21] follows a Bayesian approach, by considering prior distributions on the unknown depth as well as their uncertainty information. The prior distributions will be combined with the observation model in (5) to derive the posterior distribution, which contains rich information regarding the parameters of interest. To exploit this distribution, the method in [21] approximated the parameter's maximum-a-posteriori (MAP) estimator using a coordinate descent method. Although this method can estimate both depth and reflectivity, for the purpose of this paper, we only consider estimating depth profiles.
### _Prior and posterior distribution_
The observation model for multiscale depths \\(\\mathbf{d}^{(\\ell)}\\) is derived in (5). From this multiscale information, the goal now is to estimate the true depth denoted by a latent variable \\(\\mathbf{x}\\). On this latent variable a prior is imposed, requiring spatial smoothness within a homogeneous surface while preserving the discontinuity around the boundaries of the surfaces. To satisfy this requirement and estimate a robust depth map, Halimi et al. [21] introduced some pre-defined weights called _guidance weights_ between local pixels for each scale. A high value of \\(w_{n^{\\prime},n}^{(\\ell)}\\) encourages the latent variable \\(x_{n}\\) to be similar to \\(d_{n^{\\prime}}^{(\\ell)}\\). Using the guidance weights, the latent variable \\(\\mathbf{x}\\) is assigned the conditional Laplace distribution
\[x_{n}\mid d_{\nu_{n}}^{(1,\cdots,L)},w_{\nu_{n},n}^{(1,\cdots,L)},\epsilon_{n}\sim\prod_{n^{\prime}\in\nu_{n}}\left[\prod_{\ell=1}^{L}\mathcal{L}\left(x_{n};d_{n^{\prime}}^{(\ell)},\frac{\epsilon_{n}}{w_{n^{\prime},n}^{(\ell)}}\right)\right] \tag{6}\]
where \\(\\mathcal{L}(\\,\\cdot\\,;\\mu,\\psi)\\) is the Laplace distribution with the mean \\(\\mu\\) and the scale parameter \\(\\psi\\), \\(\
u_{n}\\) represents the local neighbourhood around the \\(n\\)th pixel and \\(\\epsilon_{n}\\) is the variance of the depth \\(x_{n}\\). To ensure the positivity of the variance \\(\\mathbf{\\epsilon}\\), it is assigned a conjugate inverse gamma distribution as
\\[\\mathbf{\\epsilon}\\sim\\prod_{n}\\mathcal{IG}\\left(\\epsilon_{n};\\alpha_{d},\\beta_{d}\\right) \\tag{7}\\]
where \\(\\alpha_{d}\\) and \\(\\beta_{d}\\) are user set positive hyperparameters. Combining the prior distributions in (6) and (7) and the likelihood in (5), the posterior distribution reduces to
\\[p\\left(\\mathbf{x},\\mathbf{\\epsilon},\\mathbf{D}\\mid\\mathbf{Y},\\mathbf{W}\\right)\\propto p\\left(\\bm {Y}\\mid\\mathbf{D}\\right)p\\left(\\mathbf{x},\\mathbf{D}\\mid\\mathbf{\\epsilon},\\mathbf{W}\\right)p\\left( \\mathbf{\\epsilon}\\right) \\tag{8}\\]
where \\(\\mathbf{W}\\) represents the guidance weights and \\(\\mathbf{D}\\) represents the multiscale depths \\(\\mathbf{d}^{(1,\\cdots,L)}\\).
### _Iterative algorithm_
To approximate the parameter's MAP estimates, a coordinate descent method is employed to minimize the negative log-posterior of (8). The algorithm proposed in [21] updates one variable at a time while fixing other variables and is summarized in Algorithm 1. The updates of unknown variables can be divided into three parts. Firstly, the latent variable \\(x_{n}\\) is updated using a weighted median filtering as follows
\\[x_{n}\\leftarrow\\operatorname*{argmin}_{x}\\mathcal{C}(x)=\\sum_{l,n^{\\prime}\\in \
u_{n}}w_{n^{\\prime},n}^{(\\ell)}\\left|x-d_{n^{\\prime}}^{(\\ell)}\\right|. \\tag{9}\\]
This operation will be replaced by attention mechanisms in the proposed deep learning model in Section IV. Secondly, the multiscale depths \\(\\mathbf{d}^{(1,\\cdots,L)}\\) are updated by minimizing the negative log-conditional distributions of \\(\\mathbf{D}\\) in (8) as follows:
\\[d_{n}^{(\\ell)}\\leftarrow\\operatorname*{argmin}_{d}\\frac{\\left[d-d_{n}^{\\text{ ML}(\\ell)}\\right]^{2}}{2\\bar{\\sigma}^{2(\\ell)}}+\\sum_{n^{\\prime}\\in\
u_{n}} \\frac{w_{n,n^{\\prime}\\in\
u_{n}}^{(\\ell)}\\left|d-x_{n^{\\prime}}\\right|}{ \\epsilon_{n^{\\prime}}}. \\tag{10}\\]
The solution is known as a generalized soft-thresholding operator and can be computed analytically [32]. Lastly, given the estimations of \(\mathbf{x}\) and \(\mathbf{d}\), the depth uncertainty information can be evaluated by considering the depth variance. The conditional distribution of \(\mathbf{\epsilon}\) is given by
\\[\\epsilon_{n}\\mid\\mathbf{x},\\mathbf{D},\\mathbf{W}\\sim\\mathcal{IG}\\left[L\\bar{N}+\\alpha_{d },\\mathcal{C}\\left(x_{n}\\right)+\\beta_{d}\\right], \\tag{11}\\]
where \\(\\bar{N}=|\
u_{n}|\\) is the number of neighbors considered. The mode of this distribution represents the MAP estimator of \\(\\epsilon_{n}\\) and is given by
\\[\\hat{\\epsilon}_{n}\\leftarrow(\\mathcal{C}\\left(x_{n}\\right)+\\beta_{d})/(L\\bar{ N}+\\alpha_{d}+1). \\tag{12}\\]
This formula will subsequently provide a basis to estimate the uncertainty of the depth map estimated by the neural network. As mentioned in the previous subsection, the guidance weights \(\mathbf{W}\) connect the latent variable \(\mathbf{x}\) to the multiscale depths. These weights play an important role in the performance of the algorithm. Halimi et al. [21] determine the weights \(\mathbf{W}\) based on the deviation of \(d^{\text{ML}(\ell)},\forall\ell\), from a given reference depth map, whereas this paper proposes to learn them from the data, as described in the following section.
```
1:Input: Lidar data \\(Y\\), the number of scales \\(L\\)
2:Construct downsampled histograms \(\mathbf{Y}^{(\ell)}\), \(\ell=1,\cdots,L\)
3:Compute the multiscale depths \\(\\mathbf{d}^{\\text{ML}(\\ell)},\\forall\\ell\\)
4:Compute the guidance weights \\(\\mathbf{W}\\)
5:while not converged do
6: Update the variable \\(\\mathbf{x}\\) by (9)
7: Update the multiscale depths \\(\\mathbf{d}^{(1,\\cdots,L)}\\) by (10)
8: Update the uncertainty information by (12)
9:break if the convergence criteria are satisfied
10:endwhile
11:Output: \\(\\mathbf{x},\\mathbf{\\epsilon}\\)
```
**Algorithm 1** Iterative Bayesian algorithm [21]
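The weighted median update in line 6 of Algorithm 1 admits a simple closed-form computation; a minimal single-pixel sketch (gathering the neighbourhood depths and guidance weights is assumed to be done beforehand):

```python
import numpy as np

def weighted_median(values, weights):
    # Minimiser of C(x) = sum_i w_i |x - v_i|, i.e., the weighted median:
    # the first value, in sorted order, whose cumulative weight reaches
    # half of the total weight.
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * w.sum())]
```

With uniform weights this reduces to the ordinary median; a single heavily weighted candidate dominates the minimiser, which is the behaviour the guidance weights exploit.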
## IV Proposed unrolling method
Motivated by the Bayesian method in Algorithm 1, we propose an interpretable neural network model by algorithm unrolling. As mentioned in Section I, the main idea of algorithm unrolling is to unfold an underlying iterative method and mimic its operations with neural network blocks. Here, we replace the operations of Algorithm 1 by neural network layers. The major components of the proposed network use attention modules, which allow learning the weights \\(\\mathbf{W}\\), i.e., the correlations between local pixels at the multiscale depths.
### _Network_
Fig. 1 gives an overview of the proposed neural network. The network takes as input an initial estimate of the multiscale depths \(\mathbf{d}^{\text{ML}(1,2,\cdots,L)}\) (see Section IV-C) and outputs the estimated depth \(\mathbf{x}\). The network consists of \(K\) stages, where each stage has the same structure (except for the last stage) with different network parameters and is designed to resemble one iteration of Algorithm 1. Each stage begins with a feature extraction step consisting of three consecutive convolution layers. After that, each stage has two main blocks: _squeeze_ and _expansion_. In this subsection, for simplicity of notation, we use the variable symbols for the first stage and omit the dependency on the stage \(k\) unless explicitly mentioned.
#### IV-A1 The squeeze block
This is a key element in the network as it estimates a single depth by using the multiscale depths and their features, as shown in Fig. 2. This block is inspired by the weighted median filtering step (9) in the underlying Bayesian algorithm. It considers hard attention [29] to select the scale with the highest attention weight for each pixel, and takes the single depth value on that scale. Specifically, the squeeze block first computes attention weights in a module named _PAConv (cube)_, a variant of the so-called pixel attention [28]. Pixel attention considers attention weights at a pixel level, so that the attention weights have the same size as the input. The module _PAConv (cube)_ computes internal weights that are multiplied by the depth features, yielding the attention weights \(\mathbf{w}^{(1,2,\cdots,L)}\). The latter weights indicate the importance of each scale, and only one scale is chosen by the argmax operation, yielding the squeezed depth. Formally, the squeezed depth for the \(n\)th pixel is computed by
\\[x_{n}=d_{n}^{(\\ell^{\\prime})},\\quad\\ell^{\\prime}=\\operatorname*{ argmax}_{\\ell\\in\\{1,\\cdots,L\\}}w_{n}^{(\\ell)}, \\tag{13}\\]
where \\(w_{n}^{(\\ell)}\\) denotes the attention weight for the \\(\\ell\\)th scale. Since the argmax operation is not differentiable, we replace it with the alternative differentiable Gumbel-SoftMax [33, 34].
#### IV-A2 The expansion block
This block refines multiscale depths to obtain \(\overline{\mathbf{d}}\) as the weighted average between multiscale depths \(\mathbf{d}\) and the squeezed depth \(\mathbf{x}\). This expansion block corresponds to the soft-thresholding step (10) in the Bayesian algorithm, as it updates the multiscale depths based on the obtained squeezed depth. However, we design the expansion block to exploit the attention framework, rather than mimicking exactly the soft-thresholding operator (10). As shown in Fig. 3, the expansion block has three inputs: the multiscale depths, their features \(\mathbf{d}_{feat}\) and the squeezed depth \(\mathbf{x}\). As indicated by the weighted average, the weights will help combine values from either the outlier-free squeezed depth \(\mathbf{x}\) or the multiscale depths \(\mathbf{d}^{(\ell)}\), where \(\mathbf{x}\) values will be promoted if the two depths are significantly different. This highlights the importance of the absolute difference between the multiscale depths and the squeezed depth for each scale (i.e., \(\forall\ell\), \(|\mathbf{d}^{(\ell)}-\mathbf{x}|\)). The latter difference is fed together with the multiscale depth features \(\mathbf{d}_{feat}\) into a module named _Group PAConv_ to compute the weights. This module consists of \(L\) independent sub-modules, where the \(\ell\)th sub-module inputs one scale depth feature \(\mathbf{d}_{feat}^{(\ell)}\) and the difference feature \(|\mathbf{d}^{(\ell)}-\mathbf{x}|\). From such input, an attention module named _PAConv (slice)_ estimates internal weights for each scale. A softmax operator is then applied to these internal weights after multiplying them by a coefficient \(\rho\) (note that \(\rho\) is introduced to enforce weight sparsity and is fixed to \(\rho=2\) throughout the paper).
The softmax operator outputs two normalized channels per scale, we only consider the first channel related to the multiscale depth parts and denoted by \\(\\overline{\\mathbf{w}}\\) (See the blue rectangles in the bottom part of Fig. 3). Then, the expanded
Fig. 1: Overview of the proposed network. The network consists of \\(K\\) stages and each stage inputs a tuple of multiscale depths (\\(\\mathbf{d}^{\\text{ML}}\\)), estimates a squeezed depth (\\(\\mathbf{x}\\)) and refines the multiscale depths. The final stage’s output of the network is a squeezed depth (\\(\\mathbf{x}^{K}\\)). For the illustration, the network is shown for the case of three multiscales \\(L=3\\).
Fig. 3: The expansion block refines the previous multiscale depths (\\(\\mathbf{d}\\)), by comparing their features (\\(\\mathbf{d}_{feat}\\)) with the feature of the absolute difference between multiscale depths and the squeezed depth (\\(|\\mathbf{d}-\\mathbf{x}|\\)). This block outputs the refined multiscale depths (\\(\\mathbf{d}\\)).
Fig. 2: The squeeze block estimates a squeezed depth (\\(\\mathbf{x}\\)) from a tuple of multiscale depths (\\(\\mathbf{d}\\)) and their features (\\(\\mathbf{d}_{feat}\\)). The symbol \\(\\otimes\\) denotes elementwise multiplication.
depth \\(\\overline{d}_{n}^{(\\ell)}\\) for the \\(\\ell\\)th scale and the \\(n\\)th pixel is obtained by the convex combination of \\(\\bar{d}_{n}^{(\\ell)}\\) and \\(x_{n}\\) as follows:
\\[\\overline{d}_{n}^{(\\ell)}=\\overline{w}_{n}^{(\\ell)}d_{n}^{(\\ell)}+(1-\\overline{ w}_{n}^{(\\ell)})x_{n},\\quad 0\\leq\\overline{w}_{n}^{(\\ell)}\\leq 1. \\tag{14}\\]
We have described one stage of the network corresponding to one iteration in Algorithm 1. All stages have the same structure, except the last which only has the squeeze block to produce the final output of the network.
#### IV-A3 Network learnable parameters
Throughout the network, all the convolution layers use the \(3{\times}3\) kernel with the LeakyReLU activation [35] without bias and have the same number of output channels as input channels. For example, if an input of the convolution layer is of size 12 \(\times\) Height \(\times\) Width, the output size will be the same and the number of learnable parameters in this layer is \(3\times 3\times 12\times 12=1296\), following the structure of standard convolutional layers [36]. The module _Group PAConv_ consists of 12 independent sub-modules where each has 144 learnable parameters from 4 convolutions with 2 input and 2 output channels each (i.e., \(4\times 3\times 3\times 2\times 2=144\) parameters). Therefore, each stage contains \(14688\) learnable parameters, except for the last one which has \(9072\) parameters. Table I summarizes the network operations together with the corresponding number of learnable parameters, when considering \(K=4\) stages.
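These counts follow the standard convolution parameter formula; a quick sanity check of the figures quoted above:

```python
def conv2d_params(k, c_in, c_out, bias=False):
    # Parameter count of a standard 2-D convolution: k*k*c_in*c_out (+ c_out bias).
    return k * k * c_in * c_out + (c_out if bias else 0)

# Bias-free 3x3 convolution with 12 input and 12 output channels
assert conv2d_params(3, 12, 12) == 1296
# One Group PAConv sub-module: 4 convolutions with 2 input/output channels
assert 4 * conv2d_params(3, 2, 2) == 144
# 12 independent sub-modules in Group PAConv
assert 12 * 4 * conv2d_params(3, 2, 2) == 1728
```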
#### IV-A4 Property of the network
Interestingly, the final depth value of the proposed network is bounded pixelwise by the initial estimates of the multiscale depths. To state formally, consider the values of the multiscale depths \\(d_{n}^{(\\ell)},\\forall\\ell\\) and the squeezed depth \\(x_{n}\\) at the first stage. Since the squeeze block chooses an element among \\(L\\) elements of \\(d_{n}^{(\\ell)}\\) with \\(\\ell\\in\\{1,\\cdots,L\\}\\), it holds that
\\[\\min\\{d_{n}^{(1)},\\cdots,d_{n}^{(\\ell)}\\}\\leq x_{n}\\leq\\max\\{d_{n}^{(1)}, \\cdots,d_{n}^{(\\ell)}\\}. \\tag{15}\\]
Meanwhile, the expanded depths denoted by \\(\\overline{d}_{n}^{(\\ell)}\\) are a convex combination of \\(d_{n}^{(\\ell)}\\) and \\(x_{n}\\) with normalized weights in Eq. (14), so we have
\\[\\min\\{d_{n}^{(\\ell)},x_{n}\\}\\leq\\overline{d}_{n}^{(\\ell)}\\leq\\max\\{d_{n}^{( \\ell)},x_{n}\\}. \\tag{16}\\]
Combining (15) and (16), the expanded depths are bounded pixelwise by the initial multiscale depths
\\[\\min\\{d_{n}^{(1)},\\cdots,d_{n}^{(\\ell)}\\}\\leq\\overline{d}_{n}^{(\\ell)}\\leq \\max\\{d_{n}^{(1)},\\cdots,d_{n}^{(\\ell)}\\}. \\tag{17}\\]
Since each stage has the same structure, this relation holds for the next stages and the final squeezed depth value has the same bound as in (15). This property has pros and cons. We can predict the behaviour of the network, so that it will not produce some extreme depth values. On the other hand, the proposed network requires the range of initial multiscale depths to cover the underlying true depth for each pixel.
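The bound (17) is easy to verify numerically: repeated squeeze (scale selection) and expansion (convex combination (14)) steps never leave the range of the initial multiscale depths. A minimal single-pixel sketch, where random weights stand in for the learned attention weights:

```python
import numpy as np

rng = np.random.default_rng(1)
d = rng.uniform(0.0, 10.0, size=4)     # initial multiscale depths, one pixel
lo, hi = d.min(), d.max()
for _ in range(100):
    x = rng.choice(d)                  # squeeze: select one scale, cf. (15)
    w = rng.uniform(0.0, 1.0, size=4)  # normalised weights, cf. (14)
    d = w * d + (1.0 - w) * x          # expansion: convex combination
    # bound (17) holds at every stage (small tolerance for rounding)
    assert lo - 1e-9 <= d.min() and d.max() <= hi + 1e-9
```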
### _Loss_
Motivated by the Laplace prior in (6), we define the training loss for depths as the \\(\\ell_{1}\\)-norm distance between the predicted depth and ground-truth depth. We additionally impose a constraint that the intermediate squeezed depths should be similar to the ground-truth depth \\(\\mathbf{x}^{*}\\) during training. The motivation for this constraint is twofold. It can prevent the neural network from losing key information in the initial stages and it can help avoid the vanishing gradient problem, by providing more paths in computational graphs for backpropagation. With the additional constraint, the training loss function \\(\\mathcal{L}\\) is defined as
\\[\\mathcal{L}(\\theta)=\\sum_{k=1}^{K}\\|\\mathbf{x}^{k}(\\theta)-\\mathbf{x}^{*}\\|_{1}, \\tag{18}\\]
where \\(\\theta\\) denotes the neural network parameters, \\(\\mathbf{x}^{k}\\) represents the intermediate squeezed depth in the \\(k\\) stage and \\(K\\) is the total number of stages.
### _Estimation of initial multiscale depths_
As a reminder, the input of the proposed network is a tuple of multiscale depths, rather than the large-volume histogram data. From the histogram data, we aim to extract initial multiscale depths without losing important information, while providing several depth values to cover the true one. For this goal, we consider several 3D low-pass filtered Lidar histograms as summarized in Table II. We first apply the cross correlation to the original Lidar data with the system IRF. To this cross correlated data, we apply the 3D convolution with uniform filters of sizes 7\(\times\)7\(\times\)7 and 13\(\times\)13\(\times\)13, generating two additional histograms. Each of the three histograms is then spatially downsampled with 4 different kernel sizes. This results in 12 filtered histograms in total, where we locate, for each pixel, the main peak's position in each filtered histogram to obtain the initial multiscale depths. Note that we can exploit the separability of uniform filters for efficient computation. Note also that the actual IRF could be used and that we do not impose any constraints on the IRF shape. It is worth mentioning that previous deep learning models [17, 18] do not account for a known system IRF in their architectures, but might learn it implicitly during training.
### _Training procedures_
To train the neural network, we generate synthetic data by simulating SPAD measurements with \(T=1024\) time bins, using the Poisson observation model in (1). We choose 9 scenes from the Middlebury stereo dataset [37] (with image sizes \(555{\times}650\)) and 21 scenes from the Sintel stereo dataset [38] (with image sizes \(436{\times}1024\)) for the training dataset and 2 scenes from [38] for the validation set. To make our network robust to different noise levels, we consider different scenarios based on the average number of Photons-Per-Pixel (PPP) and the average Signal-to-Background Ratio (SBR), defined as
\\[\\text{PPP}=\\frac{1}{N}\\sum_{n=1}^{N}\\left(r_{n}+b_{n}T\\right),\\text{ SBR}=\\frac{\\sum_{n=1}^{N}r_{n}}{\\sum_{n=1}^{N}b_{n}T}.\\]
We consider 4 cases: (PPP=1, SBR=1), (PPP=1, SBR=64), (PPP=64, SBR=1), (PPP=64, SBR=64). To save GPU memory during training, we extract patches of size \(256{\times}256\) with stride \(48\), rather than processing the original images. We implement our model in PyTorch and use ADAM [39] as the optimizer with the default hyperparameters (\(\beta_{1}\)=\(0.9\), \(\beta_{2}\)=\(0.999\)) and a batch size of 16. We train the model for 200 epochs with an initial learning rate of 0.0001, which is halved at epoch 100. The training was performed on a Linux server with an NVIDIA RTX 3090 GPU and takes about 9 hours.
## V Experimental results
In this section, we perform the experiments to analyze our model and show the relative advantages over other reconstruction methods on synthetic datasets as well as real datasets.
### _Analysis of the network_
**Test dataset.** For the test data, we simulated Lidar data with \\(T=1024\\) time bins, from two scenes of _Art_ (\\(555{\\times}695\\)) and _Reindeer_ (\\(555{\\times}671\\)) in the Middlebury stereo dataset [37] which did not belong to our training sets. The reference depth and reflectivity maps of these two scenes are visualized in the first column of Figs. 6 and 8, respectively. In particular, the Reindeer scene contains extremely low-photon regions which will challenge the reconstruction algorithms. Note that these test data are larger than the data reported in previous deep learning works [17, 18].
**Interpretability.** Thanks to our unrolling strategy, we can interpret our neural network via the connection to the underlying Bayesian method. We first inspect whether the first stage can successfully discard outliers. Fig. 4 visualizes the outputs of the internal blocks in the first stage. Each pixel
Fig. 4: Visualization of the internal outputs in the first stage. For the purpose of presentation, only eight scales (\(\ell=1,\cdots,8\)) are visualized. [1st column] shows the initial multiscale depths (\(\mathbf{d}\)) and [2nd column] their corresponding attention weights (\(\mathbf{w}\)) in the squeeze block. [3rd column] shows the squeezed depth (\(\mathbf{x}\)), [4th column] the attention weights in the expansion block and [5th column] the expanded multiscale depths computed by Eq. (14).
Fig. 5: Intermediate squeezed depths along the four stages (top), the difference between the multiscale depths and the corresponding squeezed depth in each stage (middle) and the errors between the squeezed depths and the ground-truth (bottom). The last column shows our estimated uncertainty map.
of the squeezed depth \\(\\mathbf{x}\\) (3rd column) is obtained using (13), where the multiscale depths and weights are represented in the 1st and 2nd columns, respectively. As shown in the first row, the first scale depth \\(\\mathbf{d}^{(1)}\\) shows many outliers, which leads to zero values in \\(\\mathbf{w}^{(1)}\\). On the other hand, the second scale depth \\(\\mathbf{d}^{(2)}\\) contains important features with less noise and its attention weight \\(\\mathbf{w}^{(2)}\\) contains many high values. We still observe noise in \\(\\mathbf{d}^{(2)}\\) especially around the low-photon regions. In such areas, \\(\\mathbf{d}^{(4)}\\) and \\(\\mathbf{d}^{(7)}\\) show more smoothed depth values and \\(\\mathbf{w}^{(4)}\\), \\(\\mathbf{w}^{(7)}\\) receive higher attention weights. In this way, the proposed network can successfully remove noise and discard many outliers in the first stage. The 4th column shows the attention weights for the expansion block where \\(\\overline{\\mathbf{w}}^{(1)}\\) has only zero values, which indicates that the first scale depth \\(\\mathbf{d}^{(1)}\\) will be discarded in the next stage.
We now investigate how the squeezed depths improve along the stages. To quantify the change within each stage, we define the difference \\(\\mathbf{\\delta}^{k}\\) in the \\(k\\)th stage between the multiscale depths and the corresponding squeezed depth, for each pixel \\(n\\), as follows:
\\[\\delta_{n}^{k}=\\frac{1}{L+2}\\sum_{\\ell=1}^{L}|x_{n}^{k}-d_{n}^{k,(\\ell)}|, \\tag{19}\\]
where \\(d_{n}^{k,(\\ell)}\\) is the multiscale depth value in the \\(k\\)th stage. A small value of \\(\\mathbf{\\delta}\\) would indicate a small improvement so that we may not need a further stage. We also define the uncertainty \\(\\mathbf{\\epsilon}\\) of our final depth map \\(\\mathbf{x}^{K}\\) via the connection to the mode of the depth variance (11) in the underlying method with \\(\\widetilde{N}=1\\), as follows:
\\[\\epsilon_{n}=\\frac{1}{K-1}\\sum_{k=1}^{K-1}\\frac{\\mathcal{C}_{n}^{k}+\\beta_{d} }{L+2+\\alpha_{d}},\\ \\mathcal{C}_{n}^{k}=\\sum_{\\ell=1}^{L}\\overline{\\overline{\\mathbf{w}}}_{n}^{k,( \\ell)}|d_{n}^{k,(\\ell)}-x_{n}^{K}|, \\tag{20}\\]
where \\(\\overline{\\overline{\\mathbf{w}}}^{k}\\) is the softmax-normalized version of \\(1-\\overline{\\mathbf{w}}^{k}\\), ensuring \\(\\sum_{\\ell=1}^{L}\\overline{\\overline{\\mathbf{w}}}_{n}^{k,(\\ell)}=1\\). The weights \\(\\overline{\\overline{\\mathbf{w}}}\\) play a similar role to the guidance weights in (10), and we set the hyperparameters \\(\\alpha_{d},\\beta_{d}\\) to small values to obtain a non-informative prior. Fig. 5 shows the intermediate squeezed depths \\(\\mathbf{x}^{k}\\) for each stage \\(k\\) and \\(\\mathbf{\\delta}^{k}\\) which decreases along the stages. The last row shows the errors between the squeezed depths and the ground-truth depth map \\(\\mathbf{x}^{*}\\). In the first stage, the errors appear on the background due to outliers, but such errors decrease along the stages. The last column shows the estimated uncertainty map \\(\\mathbf{\\epsilon}\\). It indicates high uncertainty around object edges and areas with low reflectivity.
**Evaluation metrics.** To analyze our model quantitatively, we use three evaluation metrics. We employ a standard metric, Depth Absolute Error (DAE) defined as \\(\\mathrm{DAE}\\left(\\mathbf{x},\\mathbf{x}^{*}\\right)=\\frac{1}{N}\\|\\mathbf{x}-\\mathbf{x}^{*}\\|_{1}\\), where \\(N\\) is the number of pixels, which is useful
Fig. 6: Reconstructed depth maps with different PPP and SBR levels on the Art scene. The first column shows the reference depth map (middle) and the reflectivity image (bottom). The last two columns show the estimated uncertainty maps by Halimi and the proposed method.
Fig. 7: Point cloud representation of reconstruction results on the Art scene. The first column shows the reference point cloud.
for measuring the overall disparity quality. To better evaluate surface boundaries, we use an additional metric called Soft Edge Error (SEE) [40], which measures the local error only at the edges. Formally, it is defined as
\\[\\mathrm{SEE}\\left(\\mathbf{x},\\mathbf{x}^{*}\\right)=\\gamma\\sum_{n\\in Edge(\\mathbf{x}^{*})} \\min_{j\\in\
u_{n}}|x_{j}-x_{j}^{*}|,\\]
where \\(\
u_{n}\\) is a \\(3\\times 3\\) local window around the \\(n\\)th pixel, \\(\\gamma:=10/|Edge(\\mathbf{x}^{*})|\\) is a scale factor, and \\(Edge(\\mathbf{x}^{*})\\) represents a set of edge locations in the ground-truth depth map \\(\\mathbf{x}^{*}\\) obtained using the Canny edge detector [41]. We also report the root mean square error: \\(\\mathrm{RMSE}(\\mathbf{x},\\mathbf{x}^{*})=\\sqrt{\\|\\mathbf{x}-\\mathbf{x}^{*}\\|_{2}^{2}/N}\\), as previously used in [17, 18].
**Ablation study.** We study the effect of the number of stages \(K\) and scales \(L\). As shown in Table III, we first fix \(L=12\) and vary the number of stages \(K\) from 2 to 5. We evaluate the performance on 98 different Lidar datasets with levels of PPP and SBR both ranging from 0.25 to 1024. We note a decreasing error for an increasing number of stages, but for \(K=5\) the error increases, possibly due to overfitting to our training data. For example, we observed that the case \(K=5\) gives a worse performance than \(K=4\) when SBR is less than 1, a regime that did not belong to our training set. The number of stages affects the running time only by a small margin, because most of the computational cost comes from generating the initial multiscale depths. Next, we test the effect of the total number of scales explained in Section IV-C. The number of scales \(L\) affects the error, the number of parameters and the running time. To balance the trade-off between performance and network size, we choose \(K=4\) and \(L=12\) throughout the rest of the experiments.
### _Results on simulated data_
In this experiment, we use the same simulated dataset and the evaluation metrics described in the previous subsection.
Fig. 8: Reconstructed depth maps with different PPP and SBR levels on the Reindeer scene. The first column shows the reference depth map (middle) and the reflectivity image (bottom). The last two columns show the estimated uncertainty maps by Halimi and the proposed method.
Fig. 9: Point cloud representation of reconstruction results on the Reindeer scene. The first column shows the reference point cloud.
**Comparison methods.** We compare the proposed model to existing reconstruction methods that do not use additional sensor fusion. We consider a state-of-the-art statistical method called Manipop [12] and the underlying iterative Bayesian method in Algorithm 1 by Halimi et al. [21]. For Algorithm 1, we use the same filter size as in the case \\(L=4\\) of Table II. We also compare to two state-of-the-art deep learning models: Lindell et al. [17] and Peng et al. [18]. We use the publicly available pre-trained model for [17]; as no pre-trained model is available for [18], we train this model using the authors' publicly available code. We also report the result of the classical algorithm obtained by matched filtering the Lidar data with the system IRF.
**Qualitative comparison.** Fig. 6 shows the reconstructed depth maps on the Art scene. In the case of high PPP and SBR, all the methods reconstruct well except for Lindell et al. [17], which loses some details. In the challenging data case, we first notice that Manipop is conservative and leaves many zero pixels to indicate the absence of a target. The other algorithms detect more targets, with the proposed algorithm showing the best robustness to outliers. These results are confirmed in Fig. 7, which shows the point cloud representation of the reconstruction results. When PPP=4 and SBR=1, the previous deep learning methods suffer from so-called flying pixel artifacts (also called bleeding effects) [42] around the surface boundaries, while fewer artifacts are observed in Halimi et al. [21] and in our method. When PPP=1 and SBR=0.25, Halimi et al. [21] yields many outliers compared to the proposed method. These two methods estimate similar uncertainty maps; however, the proposed method indicates higher uncertainty in noisy regions (e.g., see the region behind the cone for PPP=4, SBR=1). Consistent results are observed for the Reindeer scene in Fig. 8 and Fig. 9.
**Quantitative comparison.** In Table IV, we quantitatively evaluate our model at different levels of PPP and SBR. Peng et al. [18] overall outperforms the other methods in terms of RMSE, but it yields high errors in DAE and SEE. One reason is oversmoothing artifacts around the boundaries of surfaces. On the other hand, our method yields the lowest errors in terms of DAE and SEE when PPP=1. When PPP is 16, both Halimi [21] and the proposed method show an overall good performance in terms of DAE and SEE. Manipop performs well on clean data at PPP=16 and SBR=4, but its errors are higher in challenging cases because it sets non-target regions to zero. Although the used metrics are not fair to Manipop, we report its errors for reference and leave the corresponding entries empty when they are not meaningful.
Fig. 10: Errors in terms of different levels of SBR and PPP on the Art scene by three methods: Halimi, Peng and the proposed (Top-to-Bottom). Three evaluation metrics of DAE, SEE and RMSE are used (Left-to-Right) and the error values are presented in a base-10 log scale.
As shown in Fig. 10, we further conduct an extensive experiment over a wide range of PPP and SBR levels, comparing with Halimi et al. [21] and Peng et al. [18]. The underlying Bayesian method [21] gives an excellent performance on clean data, but its performance degrades rapidly in the low-photon and high-noise cases. In such cases, Peng [18] and our method show comparable results, but the errors of Peng's method begin to increase when PPP is higher than 16 and SBR is higher than 512. Compared to Peng's result, the proposed method offers a more consistent performance even when PPP and SBR are high.
**Generalizability to different system IRFs.** Unlike previous work [17, 18], the proposed method incorporates the system IRF, as explained in Section IV-C. Here, we test how robust our method is to changes affecting the system IRF. We consider two types of baseline IRFs with 15 non-zero time bins: a symmetric IRF given by a Gaussian function and a realistic asymmetric IRF. To these baseline IRFs, we apply Gaussian smoothing with different standard deviations \\(\\sigma_{\\rm IRF}\\) and use the resulting IRFs to generate test data on the Art scene with PPP=4 and SBR=4. The first row of Fig. 11 shows the shapes of the different IRFs, where a larger \\(\\sigma_{\\rm IRF}\\) increases the IRF's width. The second row shows the errors with respect to \\(\\sigma_{\\rm IRF}\\) for the compared networks (without retraining with the modified IRFs). The performance of our method is less affected by the modified IRFs than that of [17, 18] in both the symmetric and asymmetric cases. This result highlights the robustness of our method to mismodelling of the system IRF.
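The \\(\\sigma_{\\rm IRF}\\) sweep is simple to reproduce. The snippet below builds a symmetric Gaussian baseline IRF with 15 non-zero bins and widens it by Gaussian convolution; the baseline width and the kernel radius are our own choices for illustration, not values taken from the text.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized zero-mean Gaussian kernel on an integer grid."""
    t = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()

def widen_irf(irf, sigma):
    """Convolve a baseline IRF with a Gaussian of std sigma and renormalize,
    mimicking the sigma_IRF smoothing of the generalization test."""
    if sigma <= 0:
        return irf / irf.sum()
    k = gaussian_kernel(sigma, radius=3 * int(np.ceil(sigma)))
    out = np.convolve(irf, k, mode="same")
    return out / out.sum()

# baseline: symmetric Gaussian IRF over 15 time bins (assumed width)
bins = np.arange(15)
base = np.exp(-0.5 * ((bins - 7) / 2.0) ** 2)
base /= base.sum()
```

Sweeping `sigma` from zero upward yields progressively wider IRFs of the same length, matching the behavior shown in the first row of Fig. 11.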
**Efficiency of the network.** Table V compares the number of parameters of the considered deep learning methods and their running times on the Art scene. The proposed method shows the fastest running time when using a GPU device. The previous deep learning models [17, 18] cannot take the full Lidar data as input due to the GPU memory limit, so they process small patches and stitch the resulting depths together to obtain the final estimate, which is why their running time is large on high-dimensional data. Meanwhile, in the proposed method, most of the computational cost comes from the estimation of the initial multiscale depths, which takes 317 seconds on a CPU device and 5.07 seconds on a GPU device. Our method has an order of magnitude fewer parameters than [17, 18], and hence requires a shorter training time. Fig. 12 provides a visual comparison of testing time in terms of the number of network parameters on two different sizes of data.
It is worth mentioning that during training, Lindell and Peng use 13,800 patches of SPAD measurements with the size 32\\(\\times\\)32\\(\\times\\)1024 simulated from NYU v2 dataset [43], while the proposed method uses 7,860 patches of SPAD measurements with the size 256\\(\\times\\)256\\(\\times\\)1024 simulated from [37, 38] where each patch is compressed into multiscale depths of size 256\\(\\times\\)256\\(\\times\\)12 to serve as an input to our network.
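The multiscale compression step (Section IV-C) is not reproduced in this excerpt, so the sketch below shows one plausible reading purely for illustration: photon counts are pooled over a spatial window that grows with the scale index, matched-filtered with the system IRF along the time axis, and the per-pixel argmax gives one depth map per scale. The pooling schedule and window sizes are assumptions, not the paper's exact construction.

```python
import numpy as np

def multiscale_depths(hist, irf, num_scales):
    """Compress an H x W x T photon histogram into H x W x L initial depths.
    At scale l, counts are pooled over a (2l+1) x (2l+1) spatial window
    (an assumed schedule), matched-filtered with the IRF in time, and the
    per-pixel argmax of the filtered histogram is taken as the depth bin."""
    H, W, T = hist.shape
    out = np.zeros((H, W, num_scales), dtype=int)
    for l in range(num_scales):
        r = l  # pooling radius grows with the scale index
        pooled = np.zeros_like(hist, dtype=float)
        for i in range(H):
            for j in range(W):
                pooled[i, j] = hist[max(i - r, 0):i + r + 1,
                                    max(j - r, 0):j + r + 1].sum(axis=(0, 1))
        # matched filter along the time axis
        mf = np.apply_along_axis(lambda h: np.correlate(h, irf, mode="same"),
                                 2, pooled)
        out[..., l] = mf.argmax(axis=2)
    return out
```

Stacking the per-scale argmax maps turns a 256\\(\\times\\)256\\(\\times\\)1024 cube into a 256\\(\\times\\)256\\(\\times\\)\\(L\\) input, which is the kind of compression described above.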
### _Results on real data_
We evaluate the proposed method on a real dataset provided in [17] which captures real scenes under challenging scenarios. The Lidar data cubes have the resolution of 256\\(\\times\\)256\\(\\times\\)1536 and the first column of Fig. 13 shows the reference intensity images of 4 indoor scenes (1st to 4th row) and 1 outdoor scene (the last row). In the figure, we report the PPP and SBR levels
Fig. 11: Generalization test on different system IRFs. When generating test data, different system IRFs are considered by applying Gaussian smoothing with varying standard deviations \\(\\sigma_{\\rm IRF}\\) on the two baseline IRFs: a Gaussian IRF (top-left) and a realistic asymmetric IRF (top-right). The bottom row shows the DAE by Lindell, Peng and the proposed method with varying \\(\\sigma_{\\rm IRF}\\) from the Gaussian IRF (bottom-left) and the asymmetric IRF (bottom-right).
Fig. 12: The proposed model requires fewer parameters and lower running time on high dimensional Lidar data, compared to the state-of-the-art deep learning methods by Lindell [17] and Peng [18]. The running time is tested on two Lidar data cubes: one with a size of \\(555\\times 695\\times 1024\\) (circles) and the other with \\(278\\times 348\\times 1024\\) (diamonds).
which are approximately estimated. Due to the high noise in the real data, Manipop does not yield meaningful surfaces, so we omit its results. As shown in the checkerboard scene (1st row), Peng [18] and our method yield flat depth maps on the checkerboard, while the other methods produce inaccurate depth maps affected by the textures of the checkerboard. Compared to Peng's result, the proposed method gives a flatter depth map within the checkerboard and has fewer artifacts around the top and left borders. In the elephant scene (2nd row), compared to the other methods, the proposed method better reconstructs the boundary of the elephant. In the lamp scene (3rd row), Halimi [21] reconstructs the structures well but suffers from outliers, while the deep learning methods overall obtain less noisy results. Both Lindell [17] and Peng [18] lose some details on the top of the lamp, whereas the proposed method obtains a better reconstruction. In the 4th row, Peng's method fails to capture the hand in the middle-left region, while our method still captures it. Fig. 14 shows the reconstruction results represented as point clouds. The last row shows the reconstruction of the stair scene, which has a very low SBR level due to strong sunlight. The previous deep learning methods often produce bleeding artifacts between the steps of the stair, while Halimi [21] and our method show fewer such artifacts. Finally, Table VI compares the running times, where the fastest results are obtained by the proposed method, confirming its efficiency.
## VI Conclusion and discussion
In this paper, we have proposed a new deep learning model to reconstruct depth profiles from single-photon Lidar
Fig. 14: Point cloud representation of reconstruction results on the real dataset.
Fig. 13: Reconstructed depth maps on the real dataset. The first column shows a reference intensity image and the last two columns show the uncertainty maps estimated by Halimi and the proposed method.
data, taking advantage of statistical models and data-driven approaches. We design our neural network by unrolling a previous iterative Bayesian method [21], exploiting domain knowledge of the single-photon Lidar system. This unrolling strategy improves the interpretability and efficiency of the proposed network in terms of network size and of training and testing times. The resulting network is also more robust than classical architectures to mismodelling effects due to differences between training and testing data. The numerical experiments show that the proposed model can reconstruct high quality depth maps in challenging scenarios with fewer artifacts around the surface boundaries. Extending the model to also account for reflectivity maps as input is interesting and will be studied in the future.
## References
* [1] A. M. Wallace, A. Halimi, and G. S. Buller, \"Full waveform lidar for adverse weather conditions,\" _IEEE Transactions on Vehicular Technology_, vol. 69, no. 7, pp. 7064-7077, 2020.
* [2] J. Rapp, J. Tachella, Y. Altmann, S. McLaughlin, and V. K. Goyal, \"Advances in single-photon lidar for autonomous vehicles: Working principles, challenges, and recent advances,\" _IEEE Signal Process. Mag._, vol. 37, no. 4, pp. 62-71, 2020.
* [3] G. Buller and A. Wallace, \"Ranging and three-dimensional imaging using time-correlated single-photon counting and point-by-point acquisition,\" _IEEE J. Sel. Top. Quantum Electron._, vol. 13, no. 4, pp. 1006-1015, 2007.
* [4] D. Shin, A. Kirmani, V. K. Goyal, and J. H. Shapiro, \"Photon-Efficient Computational 3-D and Reflectivity Imaging with Single-Photon Detectors,\" _IEEE Trans. Comput. Imaging_, vol. 1, no. 2, pp. 112-125, 2015.
* [5] A. Halimi, Y. Altmann, A. McCarthy, X. Ren, R. Tobin, G. S. Buller, and S. McLaughlin, \"Restoration of intensity and depth images constructed using sparse single-photon data,\" in _European Signal Processing Conference (EUSIPCO)_, 2016.
* [6] A. M. Pawlikowska, A. Halimi, R. A. Lamb, and G. S. Buller, "Single-photon three-dimensional imaging at up to 10 kilometers range," _Opt. Express_, vol. 25, no. 10, pp. 11919-11931, May 2017.
* [7] J. Rapp and V. K. Goyal, "A Few Photons Among Many: Unmixing Signal and Noise for Photon-Efficient Active Imaging," _IEEE Trans. Comput. Imaging_, vol. 3, no. 3, pp. 445-459, 2017.
* [8] A. Halimi, R. Tobin, A. McCarthy, J. Bioucas-Dias, S. McLaughlin, and G. S. Buller, \"Robust Restoration of Sparse Multidimensional Single-Photon LiDAR Images,\" _IEEE Trans. Comput. Imaging_, vol. 6, pp. 138-152, 2020.
* [9] R. Tobin, A. Halimi, A. McCarthy, P. Soan, and G. Buller, \"Robust real-time 3d imaging of moving scenes through atmospheric obscurants using single-photon lidar,\" _Scientific Report_, 2021.
* [10] S. Hernandez-Marin, A. M. Wallace, and G. J. Gibson, \"Multilayered 3D LiDAR Image Construction Using Spatial Models in a Bayesian Framework,\" _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 30, no. 6, pp. 1028-1040, 2008.
* [11] A. Halimi, A. Maccarone, A. McCarthy, S. McLaughlin, and G. S. Buller, \"Object depth profile and reflectivity restoration from sparse single-photon data acquired in underwater environments,\" _IEEE Trans. Comput. Imaging_, vol. 3, no. 3, pp. 472-484, 2017.
* [12] J. Tachella, Y. Altmann, X. Ren, A. McCarthy, G. S. Buller, S. McLaughlin, and J.-Y. Tourneret, \"Bayesian 3D Reconstruction of Complex Scenes from Single-Photon Lidar Data,\" _SIAM J. Imaging Sci._, vol. 12, no. 1, pp. 521-550, 2019.
* [13] Y. Altmann and S. McLaughlin, \"Range Estimation from Single-Photon Lidar Data Using a Stochastic Em Approach,\" in _European Signal Processing Conference (EUSIPCO)_, 2018.
* [14] Q. Legros, S. Meignen, S. McLaughlin, and Y. Altmann, "Expectation-Maximization Based Approach to 3D Reconstruction From Single-Waveform Multispectral Lidar Data," _IEEE Trans. on Comput. Imaging_, vol. 6, pp. 1033-1043, 2020.
* [15] S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, "Plug-and-Play priors for model based reconstruction," in _IEEE Global Conference on Signal and Information Processing_, 2013.
* [16] J. Tachella, Y. Altmann, N. Mellado, A. McCarthy, R. Tobin, G. S. Buller, J.-Y. Tourneret, and S. McLaughlin, \"Real-time 3D reconstruction from single-photon lidar data using plug-and-play point cloud denoisers,\" _Nat. Commun._, vol. 10, no. 1, 2019.
* [17] D. B. Lindell, M. O'Toole, and G. Wetzstein, \"Single-photon 3D imaging with deep sensor fusion,\" _ACM Trans. Graph._, vol. 37, no. 4, 2018.
* [18] J. Peng, Z. Xiong, X. Huang, Z.-P. Li, D. Liu, and F. Xu, \"Photon-Efficient 3D Imaging with A Non-local Neural Network,\" in _European Conference on Computer Vision (ECCV)_, 2020.
* [19] Z. Sun, D. B. Lindell, O. Solgaard, and G. Wetzstein, \"SPADnet: Deep RGB-SPAD sensor fusion assisted by monocular depth estimation,\" _Opt. Express_, vol. 28, no. 10, p. 14948, 2020.
* [20] A. Ruget, S. McLaughlin, R. K. Henderson, I. Gyongy, A. Halimi, and J. Leach, \"Robust super-resolution depth imaging via a multi-feature fusion deep network,\" _Opt. Express_, vol. 29, no. 8, p. 11917, 2021.
* [21] A. Halimi, A. Maccarone, R. Lamb, G. S. Buller, and S. McLaughlin, \"Robust and Guided Bayesian Reconstruction of Single-Photon 3D Lidar Data: Application to Multispectral and Underwater Imaging,\" _IEEE Trans. on Comput. Imaging_, vol. 7, pp. 961-974, 2021.
* [22] K. Gregor and Y. LeCun, \"Learning fast approximations of sparse coding,\" in _International Conference on Machine Learning_, 2010.
* [23] V. Monga, Y. Li, and Y. C. Eldar, \"Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing,\" _IEEE Signal Process. Mag._, vol. 38, no. 2, pp. 18-44, 2021.
* [24] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention Is All You Need," in _Neural Information Processing Systems_, 2017.
* [25] J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, \"Squeeze-and-Excitation Networks,\" _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 42, no. 8, pp. 2011-2023, 2020.
* [26] X. Wang, R. Girshick, A. Gupta, and K. He, "Non-local Neural Networks," in _IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2018, pp. 7794-7803.
* [27] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, \"CBAM: Convolutional Block Attention Module,\" in _European Conference on Computer Vision (ECCV)_, 2018.
* [28] H. Zhao, X. Kong, J. He, Y. Qiao, and C. Dong, \"Efficient Image Super-Resolution Using Pixel Attention,\" in _European Conference on Computer Vision (ECCV) Workshop_, 2020.
* [29] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio, \"Show, Attend and Tell: Neural Image Caption Generation with Visual Attention,\" in _International Conference on Machine Learning (ICML)_, 2015.
* [30] Y. Gal and Z. Ghahramani, \"Dropout as a bayesian approximation: Representing model uncertainty in deep learning,\" in _International Conference on Machine Learning (ICML)_, 2016, pp. 1050-1059.
* [31] B. Lakshminarayanan, A. Pritzel, and C. Blundell, \"Simple and scalable predictive uncertainty estimation using deep ensembles,\" in _Neural Information Processing Systems_, 2016.
* [32] N. Parikh and S. Boyd, \"Proximal algorithms,\" _Foundations and Trends in optimization_, vol. 1, no. 3, pp. 127-239, 2014.
* [33] C. J. Maddison, A. Mnih, and Y. W. Teh, "The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables," in _International Conference on Learning Representations (ICLR)_, 2017.
* [34] E. Jang, S. Gu, and B. Poole, \"Categorical Reparameterization with Gumbel-Softmax,\" in _International Conference on Learning Representations (ICLR)_, 2017.
* [35] B. Xu, N. Wang, T. Chen, and M. Li, "Empirical evaluation of rectified activations in convolutional network," _arXiv preprint arXiv:1505.00853_, 2015.
* [36] K. Simonyan and A. Zisserman, \"Very Deep Convolutional Networks for Large-Scale Image Recognition,\" in _International Conference on Learning Representations (ICLR)_, 2015.
* [37] H. Hirschmuller and D. Scharstein, \"Evaluation of cost functions for stereo matching,\" in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2007.
* [38] D. J. Butler, J. Wulff, G. B. Stanley, and M. J. Black, \"A naturalistic open source movie for optical flow evaluation,\" in _European Conference on Computer Vision (ECCV)_, 2012.
* [39] D. P. Kingma and J. Ba, \"Adam: A Method for Stochastic Optimization,\" in _International Conference on Learning Representations (ICLR)_, 2015.
* [40] C. Chen, X. Chen, and H. Cheng, \"On the Over-Smoothing Problem of CNN Based Disparity Estimation,\" in _International Conference on Computer Vision (ICCV)_, 2019.
* [41] J. Canny, \"A computational approach to edge detection,\" _IEEE Trans. Pattern Anal. Mach. Intell._, no. 6, pp. 679-698, 1986.
* [42] F. Tosi, Y. Liao, C. Schmitt, and A. Geiger, "SMD-Nets: Stereo Mixture Density Networks," in _IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, 2021.
* [43] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, "Indoor segmentation and support inference from RGBD images," in _European Conference on Computer Vision (ECCV)_, 2012.
Speed Estimation using Vehicle's Headlamps
Hisham Abuella, Farshad Miramirkhani,
Sabit Ekin, Murat Uysal,, and Samir Ahmed
This paper was presented in part at the IEEE 38th Sarnoff Symposium in Newark, NJ, Sept. 2017.H. Abuella and S. Ekin are with the School of Electrical and Computer Engineering, Oklahoma State University, Oklahoma, USA (e-mail: [email protected], [email protected]).F. Miramirkhani and M. Uysal are with the Department of Electrical and Electronics Engineering, Ozyegin University, Istanbul, Turkey, (e-mail: [email protected], [email protected]).S. Ahmed is with the School of Civil and Environmental Engineering, Oklahoma State University, Oklahoma, USA (e-mail: [email protected]).
A provisional patent (US#62/541,913) has been obtained for this work.

Index Terms: Intelligent transportation systems, vehicle safety, speed estimation, RADAR, LiDAR, visible light sensing, ray tracing.
## I Introduction
In recent years, most vehicle manufacturers have started equipping their vehicles with daytime running lights (DRLs) using light emitting diode (LED) headlamps because of advantages such as long lifetime, energy efficiency, and short rise time. Research results show that DRLs can decrease the crash rate by up to 28% for multi-vehicle and pedestrian crashes [1, 2, 3]. LEDs can be modulated at very high frequencies without any adverse effect on illumination level. Therefore, the idea of using visible light communication (VLC) in vehicle-to-X (V2X) communication to replace radio frequency (RF) based standards (i.e., 802.11p) has already been proposed in [4, 5, 6, 7, 8, 9, 10].
Another potential application of LED headlamps is visible light sensing (VLS) for speed estimation which will be the focus of this paper. Transportation agencies around the world measure vehicle speed for a number of reasons. Average speed is used to measure the quality of traffic operations and the reliability of travel time. The distribution of vehicle speeds is used to determine the traffic performance, examination of highway consistency and safety. Speed enforcement programs aim to lower the number and severity of traffic crashes. Therefore, transportation traffic professionals are always looking for technological advancements to enhance speed measurement systems.
**RA**dio **D**etection **A**nd **R**anging (RADAR) systems are a popular method for vehicle speed estimation. Depending on the application and scenario, a RADAR can be used either for detecting a moving object or for estimating its speed [11]. A similar system that uses the same principle as the RADAR but operates in a different part of the electromagnetic spectrum is **L**ight **D**etection **A**nd **R**anging (LiDAR). RADAR and LiDAR systems estimate the speed by detecting the change in frequency and travel time of the reflected signal. Among various challenges, the beam-width and angle-of-arrival are the most critical ones and have a high impact on the estimation accuracy of these systems [12]. Indeed, a narrow beam-width is required for accurate speed estimation. Furthermore, RADAR and LiDAR warning devices are commercially available and can be easily acquired by drivers [13]. These devices detect the presence of a RADAR/LiDAR (also known as a speed gun or RADAR gun) and warn the driver, helping to avoid traffic fines for speeding. More details on the disadvantages and limitations of state-of-the-art RADARs and LiDARs are provided in [14].
In this paper, we introduce a VLS based speed estimation system that uses the received light power (intensity) variations of an approaching vehicle's headlamps. We term the system **V**isible **L**ight **D**etection **A**nd **R**anging (ViLDAR)1, where the vehicle's headlamp acts as a transmitter. As long as the vehicle's light is in the ViLDAR's field of view (FOV) (similar to the beam-width in RADAR), the irradiance angle (angle of arrival) has a low impact on estimation accuracy. To the extent of our knowledge, the concept of utilizing the headlamps' received power level for vehicle speed estimation has not been discussed in the literature previously.
Footnote 1: The patent is pending: H. Abuella, S. Ekin, and M. Uysal, System and method for speed estimation, in vehicles, US Patent App. No. 62/541,913.
The main contributions of this study are given as follows:
* A VLS-based detection and ranging system using vehicle's headlamp is presented.
* The ViLDAR system performance is simulated based on channel coefficients generated from an advanced ray tracing tool to mimic realistic, physics-based visible light channel models.
* The performance of the ViLDAR system in both curved and straight road scenarios is investigated and compared with the theoretical performance of a RADAR system under ideal environment conditions.
The advantages of the ViLDAR system are summarized as follows:
* It is a low-power ranging and detection system, which translates into good battery efficiency.
* It has less noise and pathloss compared to two-way (reflected-signal) models, since the headlamp signal travels only one way.
* It performs better at large incident angles and in scenarios where the incident angle varies quickly, i.e., curved road scenarios.
* The size and weight of the ViLDAR system will be smaller and lighter than current handheld law enforcement RADAR guns.
* Unlike RADAR guns, the ViLDAR system cannot be detected by drivers when used by law enforcement officers.
* Since ViLDAR uses the visible light, it is not affected by RF interference and does not cause any interference to other RF systems.
The rest of the paper is organized as follows. First, we present the system model in Section II. Channel models under considerations are discussed in Section III. The speed estimation methods are presented in Section IV. The numerical results are presented in Section V, while conclusions are drawn in Section VI.
## II System Model
The system model is depicted in Fig. 1, where both straight and curved road deployments are illustrated. In Fig. 1, \\(\\theta\\) and \\(d\\) are the incidence angle and the vertically projected distance between the vehicle and the photodetector (PD), respectively. Likewise, \\(R_{i}\\) and \\(D_{i}\\) are the horizontally projected distance and the actual distance between the vehicle and the PD at time \\(t_{i}\\), respectively. To avoid confusion, \\(D\\) and \\(R\\) are referred to as the distance and the range in the rest of the paper. Moreover, the subscripts \\(i\\) and \\(i+1\\) stand for the values of the corresponding parameter at time instances \\(t_{i}\\) and \\(t_{i+1}\\), respectively.
In Fig. 1(b), \\(r_{c}\\) is the curvature radius of the road, \\(\\beta\\) is the angle that changes at the angular velocity of the car \\(w=\\frac{V}{r_{c}}\\), and \\(V\\) is the vehicle speed. In this scenario, both the horizontal distance \\(d_{c}\\) from the ViLDAR to the vehicle and the vertical distance \\(R\\) change with respect to \\(\\beta\\). In addition, \\(d_{o}\\) and \\(R_{o}\\) are the horizontal and vertical distances between the ViLDAR and the end of the curved road, respectively.
The PD in the ViLDAR system takes new measurements with a certain sampling time (i.e., at different \\(D\\) and \\(R\\) from the PD). The received power increases as the vehicle approaches the PD. Given the channel model, the speed estimation problem can be interpreted as designing an estimator to obtain the slope of the received power (see Fig. 5), and hence estimate the speed of the vehicle. Lastly, we assume that the vehicle moves with constant velocity. It is further assumed that 1) the vehicle's LED transmits a constant power (luminance), 2) only a single2 vehicle approaches during the measurements, i.e., a certain distance between vehicles is assumed, 3) the ViLDAR's FOV3 is 70\\({}^{\\circ}\\), and 4) the difference between the two headlamps of the vehicle is small, hence the vehicle is assumed to have a single transmitter.
Footnote 2: Depending on application, if needed, the position of the PD can be changed to focus on a single lane to perform speed estimation of a single vehicle.
Footnote 3: This assumption can be realized by using some optical lens with the PD.
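The slope-of-received-power idea can be illustrated with a toy estimator. Assuming a bare inverse-square power law (the detailed channel models follow in Section III) and the straight-road geometry \\(D(t)=\\sqrt{d^{2}+(R_{0}-Vt)^{2}}\\), a grid search over candidate speeds with a closed-form scale fit recovers \\(V\\) from the power samples; the constants below (\\(d\\), \\(R_{0}\\), grid bounds) are illustrative, and this is not the estimator of Section IV.

```python
import numpy as np

def distance(t, d, R0, V):
    """Straight-road distance between the PD and the vehicle at time t."""
    return np.sqrt(d ** 2 + (R0 - V * t) ** 2)

def estimate_speed(times, powers, d, R0, grid):
    """Pick the candidate speed whose predicted power trace (up to a common
    scale factor, fitted in closed form) best matches the measurements in
    the least-squares sense, assuming P_r is proportional to 1/D^2."""
    best_v, best_err = None, np.inf
    for V in grid:
        model = 1.0 / distance(times, d, R0, V) ** 2
        a = (powers @ model) / (model @ model)  # optimal scale for this V
        err = np.sum((powers - a * model) ** 2)
        if err < best_err:
            best_v, best_err = V, err
    return best_v
```

On noiseless samples the true speed is recovered exactly (up to the grid resolution); with noise, the least-squares fit trades off measurement error against the grid step.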
## III Channel Modeling
While VLC has been studied intensively in the context of indoor communications [15, 16], its application to vehicular networking is relatively new [17, 18, 19]. Earlier works on infrastructure-to-vehicle (I2V) links (i.e., from traffic light to vehicle) [20, 21] build upon the line-of-sight (LOS) channel model originally proposed for the indoor LED light sources with Lambertian pattern. However, such a model is
Fig. 1: ViLDAR system model (a) straight road and (b) curved road.
not applicable to automotive low-beam and high-beam headlamps with asymmetrical intensity distributions. To address this, a piecewise Lambertian channel model was proposed in [9] to reflect the asymmetrical intensity distribution of scooter taillight. Measured intensity distribution patterns were further used in [22, 23, 24] to accurately reflect the asymmetrical structure of automotive lighting in vehicle-to-vehicle (V2V) channel modeling.
The reflections from the road surface might impact vehicular VLC system performance. The reflectance of a road surface depends on its nature and physical state. In [22], Lee _et al._ utilized Monte Carlo ray tracing to obtain channel delay profiles for V2V, I2V and vehicle-to-infrastructure (V2I) links for a road surface with a fixed reflectance value. In [23, 24], Luo _et al._ proposed a geometric V2V VLC channel model based on measured headlamp beam patterns and a road surface reflection model. The link BER performance was investigated for clean and dirty headlamps on wet and dry road surfaces. In [25], Elamassie _et al._ carried out a comprehensive channel modeling study to quantify the effect of rain and fog on a V2V link with a high-beam headlamp acting as the transmitter. Taking advantage of advanced ray tracing features, they developed a path loss model for the V2V link as a function of distance under different weather conditions.
In this section, we explore two path loss models for the ViLDAR system under consideration. As a hypothetical case, we first consider the Lambertian channel model. Then, we adopt the ray-tracing approach of [25] to propose a more realistic channel model taking into account several practical constraints such as the asymmetrical pattern of headlamps and street lights, reflections from road surfaces, and weather conditions.
### _Lambertian Channel Model_
In this model, the power-distance relation is given by [26]
\\[P_{r}(t)=\\frac{(n+1)A_{R}P_{t}}{2\\pi[D(t)]^{\\gamma}}\\cos^{n}(\\phi(t))\\cos( \\theta(t)),\\forall\\theta(t)<\\phi_{1/2}, \\tag{1}\\]
where \\(P_{t}\\) is the transmitted power, \\(A_{R}\\) is the optical detection area of the PD, and \\(\\gamma\\) is the path loss exponent. \\(\\phi(t)\\) and \\(\\theta(t)\\) are the irradiance and incidence angles at time \\(t\\), respectively. In addition, \\(\\phi_{1/2}\\) is the semi-angle at half power of the LED, and \\(n\\) is the order of the Lambertian model, given by
\\[n=-\\frac{\\ln(2)}{\\ln(\\cos\\phi_{1/2})}. \\tag{2}\\]
Assuming same heights for both transmitter and receiver, we have
\\[\\theta(t)=\\phi(t),\\forall t, \\tag{3}\\]
where \\(0<\\theta(t)<\\phi_{1/2}\\). Using (3), (1) can be further simplified as
\\[P_{r}(t)=\\frac{(n+1)A_{R}P_{t}}{2\\pi[D(t)]^{\\gamma}}\\cos^{n+1}(\\theta(t)). \\tag{4}\\]
Finally, in order to derive \\(P_{r}(t)\\) in terms of \\(D(t)\\), we further simplify (4) by defining a constant as
\\[C=\\frac{(n+1)A_{R}P_{t}}{2\\pi}. \\tag{5}\\]
Using the fact that \\(\\cos(\\theta)=\\frac{\\sqrt{[D(t)]^{2}-d^{2}}}{D(t)}\\), the resulting expression yields
\\[P_{r}(t)=\\frac{C}{[D(t)]^{\\gamma}}\\left(\\frac{\\sqrt{[D(t)]^{2}-d^{2}}}{D(t)} \\right)^{n+1}. \\tag{6}\\]
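For concreteness, the power-distance relation of (6) can be sketched as below. The parameter values (\\(A_{R}\\), \\(P_{t}\\), \\(\\gamma\\)) are illustrative defaults, not the fitted values used later in the paper.

```python
import math

def lambertian_power(D, d, phi_half_deg, A_R=1e-4, P_t=1.0, gamma=2.0):
    """Received power P_r from Eq. (6); A_R, P_t and gamma are
    illustrative values, not the coefficients of Table I."""
    # Lambertian order n from the LED semi-angle at half power, Eq. (2)
    n = -math.log(2) / math.log(math.cos(math.radians(phi_half_deg)))
    C = (n + 1) * A_R * P_t / (2 * math.pi)   # constant of Eq. (5)
    cos_theta = math.sqrt(D**2 - d**2) / D    # equal tx/rx heights, Eq. (3)
    return C * cos_theta**(n + 1) / D**gamma
```

As expected, the received power grows monotonically as the distance shrinks, which is the property the speed estimator exploits.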
### _Simulated Channel Model_
We use a channel modeling methodology based on Zemax(r), similar to [25]. We first construct the simulation platform of the outdoor environment, integrating the CAD models of buildings, vehicles and any other objects within it. We further specify the surface (coating) materials of the objects and the types of reflections, i.e., purely diffuse, specular and mixed. The specific type of reflection is defined by the scatter fraction parameter. Mie scattering is further used to model clear weather conditions [27]. The "bulk scatter" method in the software takes as input the "particle index" (the refractive index of the particles), "size" (the radius of the spherical particles) and "density" (the density of the particles).
After creating the simulation environment, we use the built-in non-sequential ray tracing function to determine the CIR. The ray tracer generates an output file, which includes the detected power and path length from the source to the PD for each ray. We import this file into Matlab(r) and, using this information, express the CIR as
\\[h(t)=\\sum_{i=1}^{N_{r}}P_{i}\\delta(t-\\tau_{i}), \\tag{7}\\]
where \\(P_{i}\\) is the power of the \\(i\\)th ray, \\(\\tau_{i}\\) is the propagation time of the \\(i\\)th ray, \\(\\delta(t)\\) is the Dirac delta function and \\(N_{r}\\) is the number of rays received at the PD.
The received optical power (in dB) is given as \\(P_{r}(t)=P_{t}-PL\\), where \\(P_{t}\\) is the transmitted optical power and the path loss \\(PL\\) is expressed as [28]
\\[PL=-10\\log_{10}\\left(\\int_{0}^{\\infty}h(t)dt\\right). \\tag{8}\\]
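As a sketch, since \\(h(t)\\) in (7) is a train of weighted Dirac deltas, the integral in (8) collapses to the sum of the received ray powers. The routine below assumes transmit power normalized to unity and uses the sign convention that makes \\(PL\\) a positive loss in dB:

```python
import math

def path_loss_db(ray_powers):
    """Path loss from Eqs. (7)-(8): integrating h(t) over all time
    reduces to summing the ray powers collected at the PD.
    Sign convention: PL is a positive dB loss when the collected
    fraction of the (unit) transmit power is below one."""
    total = sum(ray_powers)          # = integral of h(t) dt
    return -10 * math.log10(total)
```

For example, two rays each carrying 0.5% of the transmit power give a 20 dB path loss.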
In the simulated channel model (similar to the RF path loss model [29]), the power-distance relation can be given by
\\[P_{r}(t)=K[D(t)]^{-\\gamma},\\forall\\ D(t)>1, \\tag{9}\\]
where \\(P_{r}(t)\\) is the power level received from the vehicle at time \\(t\\), and \\(K\\) is a constant that represents all the gains and the transmitted power. \\(\\gamma\\) is the channel path-loss exponent, which usually depends on the channel environment, and \\(D(t)\\) is the distance between the vehicle and the ViLDAR at time \\(t\\).
We consider the scenario shown in Fig. 3. We assume that the coating materials of the buildings, traffic light poles and street lamp poles are concrete, aluminum and galvanized steel, respectively. The coating material of the cars is black and olive green gloss paint. The road is assumed to be of type R3 with an asphalt coating [30].
We use a Philips Luxeon Rebel automotive white LED as the low-beam headlamp, with the spatial distribution shown in Fig. 3.a. Due to the asymmetrical intensity distribution of the luminaire, the cross sections along the C0\\({}^{\\circ}\\)-C180\\({}^{\\circ}\\), C90\\({}^{\\circ}\\)-C270\\({}^{\\circ}\\) and C135\\({}^{\\circ}\\)-C315\\({}^{\\circ}\\) planes are shown in Fig. 3.b.
The headlamp, with total power normalized to unity, is placed on the front side of the vehicle as the transmitter, and a PD with 1 cm\\({}^{2}\\) area and a FOV of 70\\({}^{\\circ}\\) is placed on the sidewalk (see Fig. 2.b).
We assume that the vehicle moves toward the PD. We obtain the CIRs at points with 1 m spacing along the driving direction of the car over a range of 10 m. To obtain the \\(K\\) and \\(\\gamma\\) values in (9), we apply curve fitting to the path loss calculated from (8), minimizing the root mean square error. The resulting coefficients \\(K_{dB}\\) and \\(\\gamma\\) are presented in Table I. The Lambertian channel model is also included as a benchmark (i.e., the hypothetical case).
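The fit of (9) can be reproduced with an ordinary least-squares line in the log domain, since \\(P_{r}[\\mathrm{dB}]=K_{dB}-10\\gamma\\log_{10}D\\). A minimal sketch (the distances and power values below are synthetic, not the traced data):

```python
import math

def fit_power_law(distances, p_r_db):
    """Least-squares fit of P_r = K * D^(-gamma) in the log domain:
    P_r[dB] = K_dB - 10*gamma*log10(D)."""
    xs = [10 * math.log10(D) for D in distances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(p_r_db) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, p_r_db))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, -slope    # (K_dB, gamma)
```

Applied to samples drawn exactly from the model, the routine returns the generating \\(K_{dB}\\) and \\(\\gamma\\).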
In Fig. 4, we present the path loss versus distance for the channel models under consideration. It is observed from Fig. 4 that the Lambertian channel model underestimates the path loss with respect to the simulated channel model. This is because the simulated channel model additionally accounts for rays reflected from the road surface.
In Fig. 5, the Lambertian and simulated channel models are compared. To have a fair comparison, we use the same parameter values in both models; for instance, the constant \\(C\\) in (5) equals the constant \\(K\\) in (9), and the same path-loss exponent \\(\\gamma\\) is used4. In order to show the impact of the noise level, different _initial_ signal-to-noise ratios (SNR\\({}_{o}\\)) of 20 dB and 30 dB are used in the simulations.
Footnote 4: Both \\(K_{dB}\\) and \\(\\gamma\\) values are estimated using a ray tracing simulation explained in Section III. They are environment dependent.
The measurements start at time 0, and as the vehicle advances toward the PD, the distance decreases and the ViLDAR takes new measurements. As predicted, the received power increases as the vehicle approaches the PD in both models. Furthermore, although the estimation can be performed from all the received power levels, high speed estimation accuracy can only be obtained in a certain region, which is shown as the reliable region in Fig. 5.
\\begin{table}
\\begin{tabular}{||c|c c||} \\hline & \\(K_{dB}\\) & \\(\\gamma\\) \\\\ \\hline \\hline Lambertian Channel Model & -41.39 & 1.673 \\\\ \\hline Simulated Channel Model & -49.32 & 1.210 \\\\ \\hline \\end{tabular}
\\end{table} TABLE I: Channel parameters for Lambertian and simulated channel models.
Fig. 3: (a) Spatial distribution and (b) relative intensity distribution (C0\\({}^{\\circ}\\)-C180\\({}^{\\circ}\\), C90\\({}^{\\circ}\\)-C270\\({}^{\\circ}\\) and C135\\({}^{\\circ}\\)-C315\\({}^{\\circ}\\) planes are respectively indicated by red, blue and green color) of low-beam headlamp under consideration.
Fig. 2: (a) Scenario under consideration and (b) transmitter and receiver close-up in the 3-D ray tracer.
## IV Speed Estimation
In this section, we present the speed estimation algorithms that use the channel models of the preceding section, for two different road scenarios.
### _Estimation in Straight Road Scenario_
First, we consider the simulated channel model given in (9). Under the assumption of constant speed during the estimation period, \\(D(t)\\) can be expressed in terms of the speed and time as
\\[D(t)=\\sqrt{d^{2}+\\left(R_{o}-Vt\\right)^{2}}, \\tag{10}\\]
where \\(R_{o}\\) is the initial range between the vehicle and the ViLDAR. Then, replacing (10) in (9) yields
\\[P_{r}(t)=K(d^{2}+\\left(R_{o}-Vt\\right)^{2})^{-\\frac{\\gamma}{2}}. \\tag{11}\\]
Rearranging (11) into a linear model (\\(\\textbf{y}=V\\textbf{x}+R_{o}\\)) yields
\\[\\sqrt{\\left(\\frac{P_{r}(t)}{K}\\right)^{-\\frac{2}{\\gamma}}-d^{2}}=-Vt+R_{o}, \\tag{12}\\]
where \\(\\textbf{y}\\) can be considered as the \\(n\\)-sample vector of \\(\\sqrt{\\left(\\frac{P_{r}(t)}{K}\\right)^{-\\frac{2}{\\gamma}}-d^{2}}\\) values, each of which corresponds to a value in the vector \\(\\textbf{x}\\) (the \\(n\\)-sample vector of the negated time instants). Then, (12) can be expressed in vector form as
\\[\\textbf{y}=[\\textbf{x},\\textbf{1}][V,R_{o}]^{T}, \\tag{13}\\]
where \\(\\textbf{1}\\) is a vector of \\(1\\)'s of size \\((n,1)\\). Then, letting \\(\\textbf{b}=[V,R_{o}]^{T}\\) of size \\((2,1)\\) and \\(\\textbf{A}=[\\textbf{x},\\textbf{1}]\\) of size \\((n,2)\\), we have
\\[\\textbf{y}=\\textbf{A}\\textbf{b}. \\tag{14}\\]
\\(V\\) and \\(R_{o}\\) can be readily estimated by using the least square (LS) inverse formula as
\\[\\textbf{b}=[\\textbf{A}^{T}\\textbf{A}]^{-1}\\textbf{A}^{T}\\textbf{y}. \\tag{15}\\]
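Steps (10)-(15) can be condensed into a short routine; for a two-parameter model, the LS inverse in (15) reduces to the familiar slope/intercept formulas. A sketch with synthetic noiseless measurements (the values of \\(K\\), \\(\\gamma\\), \\(d\\) below are illustrative):

```python
import math

def estimate_speed_ls(times, powers, K, gamma, d):
    """Linear LS estimate of (V, R_o) from Eqs. (12)-(15):
    y_i = sqrt((P_i/K)^(-2/gamma) - d^2) is fitted to y = -V*t + R_o."""
    ys = [math.sqrt((p / K) ** (-2 / gamma) - d ** 2) for p in powers]
    xs = [-t for t in times]                 # x = -t, so the slope is V
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    V = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    R_o = my - V * mx
    return V, R_o
```

Applied to samples generated from (11) with \\(V=20\\) m/s and \\(R_{o}=15\\) m, the routine recovers both parameters.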
On the other hand, the speed can be estimated with the Lambertian channel model as follows. First, we rewrite the Lambertian channel model as:
\\[P_{r}(t)=\\frac{(n+1)A_{R}P_{t}}{2\\pi[D(t)]^{\\gamma}}\\cos^{(n+1)}(\\theta(t)), \\tag{16}\\]
Setting \\(K=\\frac{(n+1)A_{R}P_{t}}{2\\pi}\\), which is a constant, one can get
\\[P_{r}(t)=K[D(t)]^{-\\gamma}\\cos^{(n+1)}(\\theta(t)). \\tag{17}\\]
For \\(\\cos(\\theta(t))=1\\), the expression in (17) reduces to (9). To obtain the speed, a methodology similar to that of the simulated channel model is followed; to avoid redundancy, we do not repeat the derivation. The main difference is the constant parameter \\(K\\). Hence, the same formula in (15) is applied to estimate the speed when using the Lambertian channel model.
### _Estimation in Curved Road Scenario_
As shown in Fig. 1.b, the curved road scenario has a different setup and parameters than the straight road scenario. Hence, we use a different method to estimate the speed. First, we express all the variables in (17) in terms of \\(\\beta\\). Then, we estimate the angle \\(\\beta\\) for each received power sample by minimizing the squared error. Finally, we use the linear LS method to estimate the angular velocity.
We assume \\(R_{o}\\) and \\(d_{o}\\) are zero, i.e., the ViLDAR is at the end of the curvature of the road. Using basic trigonometric identities, we have
\\[R=r_{c}\\sin(\\beta), \\tag{18}\\]
\\[d_{c}=r_{c}(1-\\cos(\\beta)), \\tag{19}\\]
\\[D=\\sqrt{(d_{c}+d_{o})^{2}+(R+R_{o})^{2}}. \\tag{20}\\]
Substituting (18) and (19) into (20) with \\(R_{o}=d_{o}=0\\), we have:
\\[D=\\sqrt{r_{c}^{2}(1-\\cos(\\beta))^{2}+r_{c}^{2}\\sin(\\beta)^{2}}, \\tag{21}\\]
Then,
\\[D=r_{c}\\sqrt{2-2\\cos(\\beta)}=2r_{c}\\sin(\\beta/2). \\tag{22}\\]
Fig. 4: Path Loss versus distance for Lambertian and simulated channel models.
Fig. 5: Reliable region of operation in ViLDAR for Lambertian and simulated channel models.
Given that \\(\\cos(\\theta)=\\frac{R}{D}\\), then,
\\[\\cos(\\theta)=\\frac{r_{c}\\sin(\\beta)}{2r_{c}\\sin(\\beta/2)}=\\cos(\\beta/2). \\tag{23}\\]
Substituting \\(D(t)=2r_{c}\\sin(\\beta/2)\\) and \\(\\cos(\\theta(t))=\\cos(\\beta/2)\\) into (17), where \\(\\beta\\) also changes with time, we get
\\[P_{r}(\\beta)=\\frac{K\\left(\\cos(\\beta/2)\\right)^{n+1}}{\\left(2r_{c}\\sin(\\beta/ 2)\\right)^{\\gamma}}. \\tag{24}\\]
To estimate the \\(\\beta(t)\\) for each measurement of \\(P_{r}(t)\\), we minimize the cost function \\(g(\\beta)\\) where we define \\(g(\\beta)\\) as
\\[g(\\beta)=\\left(P_{r,sim}-P_{r}(\\beta)\\right)^{2}. \\tag{25}\\]
Substituting \\(P_{r}(\\beta)\\) in (24), we get
\\[g(\\beta)=\\left(P_{r,sim}-\\frac{K\\left(\\cos(\\beta/2)\\right)^{n+1}}{\\left(2r_{c }\\sin(\\beta/2)\\right)^{\\gamma}}\\right)^{2}. \\tag{26}\\]
The next step is to estimate the angular velocity \\(w\\), given that \\(\\beta=\\beta_{o}-wt\\). We estimate \\(w\\) and \\(\\beta_{o}\\) by using the linear LS equation (15) from the straight road case. Once the angular velocity is estimated, the vehicle speed follows directly from the radius of curvature as \\(V=wr_{c}\\).
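A sketch of this two-stage procedure, with a simple grid search standing in for the minimization of (26) (the grid resolution and parameter values are illustrative):

```python
import math

def beta_from_power(p_meas, K, gamma, n, r_c, grid=10000):
    """Stage 1: invert Eq. (24) for beta in (0, pi) by grid search
    over the squared-error cost g(beta) of Eq. (26)."""
    def p_model(b):
        return (K * math.cos(b / 2) ** (n + 1)
                / (2 * r_c * math.sin(b / 2)) ** gamma)
    betas = (i * math.pi / grid for i in range(1, grid))
    return min(betas, key=lambda b: (p_meas - p_model(b)) ** 2)

def estimate_angular_velocity(times, powers, K, gamma, n, r_c):
    """Stage 2: linear LS fit of beta(t) = beta_o - w*t, as in (15).
    The vehicle speed then follows as V = w * r_c."""
    bs = [beta_from_power(p, K, gamma, n, r_c) for p in powers]
    xs = [-t for t in times]
    m = len(xs)
    mx, mb = sum(xs) / m, sum(bs) / m
    w = (sum((x - mx) * (b - mb) for x, b in zip(xs, bs))
         / sum((x - mx) ** 2 for x in xs))
    return w
```

Because the right-hand side of (24) decreases monotonically in \\(\\beta\\), the grid search recovers a unique angle for each power sample.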
## V Simulation Results
In this section, the simulation results are presented to confirm the analytical results and investigate the impact of various system parameters on the performance of ViLDAR system.
### _Straight Road Scenario_
Initially, the vehicle is so far away that the range and the distance are almost equal; that is, the angle of incidence (\\(\\theta\\)) is approximately zero, as shown in Fig. 1.a.
The following parameters are used in simulating linear LS speed estimation algorithm given by (15):
* Estimation duration (\\(\\Delta t_{est}\\)) is 0.3 s unless otherwise stated (duration during which the PD is taking measurements from the approaching vehicle for the speed estimation process).
* The distance between the ViLDAR and the vehicle line of motion (\\(d\\)) is 0.5 m.
* Half power angle \\(\\phi_{1/2}\\) of the vehicle's headlamp5 is 40\\({}^{\\circ}\\).
Footnote 5: Although the incident angle of current headlamps ranges between 0\\({}^{\\circ}\\) and 40\\({}^{\\circ}\\), the simulation results are provided for a wider range to observe the trend of the performance improvement.
* The starting range of the simulation, where the ViLDAR starts taking measurements (\\(R_{o}\\)), is 15 m.
* The ViLDAR takes a new measurement every 1 ms.
* The channel path loss exponent (\\(\\gamma\\)) and channel gain (\\(K_{dB}\\)) are respectively \\(1.21\\) and -49.32 dB, as found for the simulated channel model.
In Fig. 6, it is shown how the incidence angle (\\(\\theta(t)\\)) and range (\\(R(t)\\)) vary as the vehicle approaches the PD. As predicted, the range decreases and \\(\\theta(t)\\) increases as the vehicle advances toward the ViLDAR. In Fig. 7, the speed estimation accuracy of the ViLDAR is compared with the theoretical limit of RADAR/LiDAR (see [12, 31]). As can be observed, the proposed ViLDAR system provides better performance over a wider range of incidence angles, e.g., more than 90% estimation accuracy for up to 80% of the incidence angle range. This improvement is attributed to two factors: first, the ViLDAR has a lower dependency on the incidence angle compared to RADAR/LiDAR systems; second, the proposed system uses a one-way signal model, which is less susceptible to noise and path loss. Furthermore, as the estimation duration increases, better estimation accuracy is achieved across the incidence angles.
Fig. 8 shows the impact of the SNR level (i.e., received power) at the initial measurement point on the estimation accuracy. As expected, even at low incidence angles, there is performance degradation due to a low SNR level. In addition, to further validate the ViLDAR speed estimation method, the estimation accuracy for different speed levels and estimation durations is provided in Fig. 9. Similar to the observations in Fig. 7, the estimation algorithm works at different speed levels, while the performance is impacted only by the estimation duration. Moreover, as the speed of the vehicle decreases, the ViLDAR needs a longer estimation duration (i.e., more samples) to keep the same speed estimation accuracy.
Fig. 7: Speed estimation accuracy of ViLDAR compared to RADAR/LiDAR for different estimation duration in straight road scenario.
Fig. 6: The incidence angle and range of the vehicle in straight road scenario assuming \\(R_{o}\\) = 15 m, \\(V\\) = 72 km/hr and \\(d\\) = 0.5 m.
Finally, in Fig. 10, the impact of the half-power angle (\\(\\phi_{1/2}\\)) on the performance is given. As expected, larger angles improve the estimation accuracy. These results show that the performance of the system strongly depends on the incidence angle, the number of samples used in the estimation and the noise level (accuracy) of the received power.
In Fig. 11, we present the effect of the channel model (simulated versus Lambertian) and of the estimation duration on the speed estimation accuracy. As expected, the performance is impacted more in the simulated channel model, since its channel gain (\\(K_{dB}\\)) is lower than that of the Lambertian channel model.
### _Curved Road Scenario_
Initially, the vehicle is at the beginning of the curved road as shown in Fig. 1.b. The following parameters are used in the simulations:
* Angular vehicle speed \\(w\\) is 1 rad/s unless otherwise stated, and the curvature radius \\(r_{c}\\) is 40 m.
* \\(R_{o}\\) and \\(d_{o}\\) are zero, i.e., the ViLDAR is at the end of the curvature of the road.
* Half power angle \\(\\phi_{1/2}\\) of the vehicle's headlamp is 40\\({}^{\\circ}\\).
* The ViLDAR measures the power starting at \\(\\beta=\\pi/2\\).
* Every 1 ms, the ViLDAR captures a new measurement.
* The channel path loss exponent (\\(\\gamma\\)) and channel gain (\\(K_{dB}\\)) are respectively \\(1.21\\) and -49.32 dB.
In Fig. 12, the incidence angle (\\(\\theta(t)\\)), range (\\(R(t)\\)) and vertical distance (\\(d_{c}(t)\\)) change non-linearly with simulation time, unlike in the straight road scenario. As time increases, the vehicle approaches the ViLDAR; therefore, \\(R(t)\\) and \\(d_{c}(t)\\) decrease. \\(\\theta(t)\\) decreases initially, and then at a certain point it increases sharply until it reaches almost \\(\\pi/2\\) due to the road curvature.
The speed estimation accuracy is impacted by the SNR level (i.e., received power) at the initial measurement point (see Fig. 14) and by the estimation duration (see Fig. 13). Notice that the lower estimation accuracy for the initial SNR value of 30 dB in Fig. 14 is attributed to the higher noise level. Moreover, as the estimation duration (\\(\\Delta t_{est}\\)) increases, the speed estimation accuracy improves, which can be related to the benefit of having a higher number of samples in the estimation process. In addition, Fig. 13 presents the gain in estimation accuracy of the ViLDAR system compared to the theoretical error limit of the RADAR and LiDAR systems in the curved road scenario (see (28) in the Appendix).
Fig. 11: Speed estimation accuracy of Lambertian and simulated channel models for different estimation duration in straight road scenario.
Fig. 8: Speed estimation accuracy for different initial SNR values in straight road scenario.
Fig. 10: Speed estimation accuracy for headlamp with different half viewing angles in straight road scenario.
Fig. 9: Speed estimation accuracy of linear LS method for different actual speed values in straight road scenario.
### _Comparison_
In this section, based on our simulations and the discussions given in [32, 33], a comparison between the ViLDAR and the RADAR gun is provided in Table II.
As shown in Table II, the ViLDAR gives more flexibility in terms of the angle of incidence and beam-width while maintaining the same high accuracy. In terms of size, the ViLDAR is expected to be much smaller, as it only needs a PD, which can be very small, similar to the PDs used in [5] and [34]. On the other hand, a RADAR gun system needs a transceiver module and an antenna whose size depends on the frequency of operation. One of the main advantages of the ViLDAR is that its presence cannot be detected by malicious drivers, since it uses a one-way signal model. Because light waves have a much higher frequency than the RF waves used in RADARs, the operating distance range of the ViLDAR is expected to be smaller. Additional advantages that the ViLDAR shares with VLC and VLS systems in general include immunity to electromagnetic interference (EMI), the use of unlicensed bands and low power consumption, as discussed in [32].
## VI Conclusions
In this paper, a visible light sensing based speed estimation system, termed ViLDAR, was proposed. Given that the received power increases as the vehicle approaches the PD, the ViLDAR utilizes the linear LS method to estimate the slope of the received power with respect to time (i.e., the speed). We evaluated the performance of the proposed ViLDAR system in different road scenarios. Our results demonstrated that, by using the received light intensity of the vehicle's LED headlamp, the vehicle's speed can be accurately estimated for a wide range of incidence angles. In fact, more than 90% estimation accuracy is observed for up to 70-80% of the simulation time. The comparison of the results obtained for the ViLDAR and RADAR also reveals that RADAR detectors perform poorly in scenarios with fast incidence angle changes, while promising performance is observed for the ViLDAR system. The impact of different system parameters on the speed estimation accuracy of the ViLDAR system was further investigated. It is observed that the half viewing angle of the vehicle's headlamp is of crucial importance for the speed estimation accuracy.
## Acknowledgment
The authors thank Amit Kachroo for his valuable comments and suggestions to improve this paper.
## Appendix
In this appendix, the principles of RADAR/LiDAR and the factors affecting the performance and estimation accuracy of these systems are presented.
\\begin{table}
\\begin{tabular}{|c|c|c|} \\hline & ViLDAR & RADAR \\\\ \\hline Range of incidence angle accuracy & High & Low \\\\ \\hline Range & Low (up to 100 m) & High (up to 1 km) \\\\ \\hline Beam-width & Wide & Narrow \\\\ \\hline Environment dependency & Sensitive & Moderate \\\\ \\hline Ambient light & Sensitive & Not affected \\\\ \\hline EMI (Electromagnetic Interference) & No & Yes \\\\ \\hline Band license & Unlicensed & Licensed/Unlicensed \\\\ \\hline Detectable by drivers & No & Yes \\\\ \\hline Cost & Low & High \\\\ \\hline Size & Small & Large \\\\ \\hline Power consumption & Low & High \\\\ \\hline \\end{tabular}
\\end{table} TABLE II: Comparison of ViLDAR and RADAR.
Fig. 14: Speed estimation accuracy versus time for different initial SNR values in curved road scenario.
Fig. 12: The incidence angle, vertical distance and range of the vehicle in curved road scenario assuming \\(r_{c}=40\\) m and \\(w=1\\) rad/sec.
Fig. 13: Speed estimation accuracy of ViLDAR compared to RADAR/LiDAR for different estimation duration in curved road scenario.
### Principles
The fundamental idea of RADAR/LiDAR is to measure the difference in frequency and time between the transmitted and received (reflected) signals. While LiDAR systems utilize laser (light) bands, RADAR systems use RF signals. Commonly, RADAR systems are used for speed estimation in traffic control and regulation. Although there are numerous studies that discuss how to improve the detection and estimation accuracy of RADARs [35], there are still many limitations that affect the estimation accuracy of these systems. One of the most important is the requirement of LOS and a narrow beam-width (i.e., a small angle between the device and the target). We present a brief discussion of this challenge in the next section. Thus, speed measurements by RADAR systems are only reliable for a certain distance, angle and availability of LOS [12, 36].
### Cosine Effect
RF- and laser-based speed detectors estimate the speed of a vehicle that is moving towards the detector. A simplified relation between the _measured_ speed (\\(V_{m}\\)) and the _actual_ speed (\\(V_{a}\\)) is given as [12, 31]:
\\[V_{m}=V_{a}\\cos(\\theta), \\tag{27}\\]
where the \\(\\theta\\), as defined in Fig. 1.a, is the angle between the detector and the direction of motion of the vehicle.
Moreover, for the case of curved road scenarios [31], the relation can be expressed as:
\\[V_{m,curved}=V_{a}\\sin\\left\\{\\frac{\\pi}{2}-\\beta+\\tan^{-1}\\frac{d_{o}+r_{c}(1- \\cos\\beta)}{R_{o}+r_{c}\\sin\\beta}\\right\\}, \\tag{28}\\]
where \\(\\beta\\) and \\(r_{c}\\) are the angle and the radius of the curved road, respectively. Since the angle \\(\\theta\\) changes quickly on curved roads, microwave and laser radars cannot measure the speed accurately: the fast angle changes cause the relative speed to change too fast for the RADAR or LiDAR to track.
If the vehicle were traveling directly towards the radar (\\(\\theta=0^{\\circ}\\)), the measured speed would be the actual speed. However, in practical scenarios, as shown in Fig. 1, the vehicle does not travel directly towards the detector. Hence, the angle changes, which, in turn, results in an estimated speed that differs from the actual speed. This phenomenon is called the _Cosine Effect_: the measured speed is related to the actual speed by the cosine of the angle between the vehicle's direction of motion and the radar. As expected, as the angle increases, the estimation error increases, i.e., the detector provides less accurate results.
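The cosine effect in (27) and its curved-road counterpart (28) can be illustrated numerically (the speed, angle and radius values below are for illustration only):

```python
import math

def measured_speed(v_actual, theta):
    """Straight road, Eq. (27): the detector under-reads the speed
    by the cosine of the incidence angle theta (radians)."""
    return v_actual * math.cos(theta)

def measured_speed_curved(v_actual, beta, r_c, R_o=0.0, d_o=0.0):
    """Curved road, Eq. (28), with beta in radians."""
    ang = (math.pi / 2 - beta
           + math.atan((d_o + r_c * (1 - math.cos(beta)))
                       / (R_o + r_c * math.sin(beta))))
    return v_actual * math.sin(ang)
```

For \\(R_{o}=d_{o}=0\\), (28) collapses to \\(V_{a}\\cos(\\beta/2)\\), consistent with (23): e.g. a 100 km/h vehicle at \\(\\beta=1\\) rad reads about 87.8 km/h.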
## References
* [1] M. Koornstra, F. D. Bijleveld, M. Hagenzieker _et al._, _The safety effects of daytime running lights_. SWOV Institute for Road Safety Research, The Netherlands, 1997.
* [2] \"Effects of 24-Hour Headlight use on Traffic Safety,\" Transportation Research Synthesis, Minnesota Department of Transportation, Tech. Rep., Nov. 2010.
* [3] M. Paine _et al._, "A review of daytime running lights," _Vehicle Design and Research Pty Limited, for NRMA Motoring & Services and RACV_, 2003.
* [4] S. Ucar, S. C. Ergen, and O. Ozkasap, \"IEEE 802.11p and Visible Light Hybrid Communication based secure autonomous platoon,\" _IEEE Transactions on Vehicular Technology_, pp. 1-1, 2018.
* [5] B. Turan, O. Narmalogio, S. C. Ergen, and M. Uysal, \"Physical Layer Implementation of Standard Compliant Vehicular VLC,\" in _2016 IEEE 84th Vehicular Technology Conference (VTC-Fall)_, Sept 2016, pp. 1-5.
* [6] A. J. Suzuki and K. Mizui, "Laser radar and visible light in a bidirectional V2V communication and ranging system," in _2015 IEEE International Conference on Vehicular Electronics and Safety (ICVES)_, Nov 2015, pp. 19-24.
* [7] A. M. Cailean, B. Cagneau, L. Chassagne, M. Dimian, and V. Popa, \"Novel receiver sensor for Visible Light Communications in Automotive Applications,\" _IEEE Sensors Journal_, vol. 15, no. 8, pp. 4632-4639, Aug 2015.
* 2008 IEEE International Conference on Communications Workshops_, May 2008, pp. 410-414.
* [9] W. Viriyasitavat, S. H. Yu, and H. M. Tsai, "Short paper: Channel model for visible light communications using off-the-shelf scooter taillight," in _2013 IEEE Vehicular Networking Conference_, Dec 2013, pp. 170-173.
* [10] N. Kumar, N. Lourenco, D. Terra, L. N. Alves, and R. L. Aguiar, \"Visible light communications in intelligent transportation systems,\" in _2012 IEEE Intelligent Vehicles Symposium_, June 2012, pp. 748-753.
* [11] M. Richards, W. Holm, and J. Scheer, _Principles of Modern Radar: Basic Principles_, ser. Electromagnetics and Radar. Institution of Engineering and Technology, 2010.
* [12] P. D. Fisher, \"Law enforcement: Shortcomings of radar speed measurement: It's based on sound principles, but present systems have practical limitations and may be misused,\" _IEEE Spectrum_, vol. 17, no. 12, pp. 28-31, Dec 1980.
* [13] D. Mennie, \"Consumer electronics: Helping speeders beat the radar rap: Moving radar and dual-band detectors constitute a miniature ECM race between enforcement-minded authorities and motoring outlaws,\" _IEEE Spectrum_, vol. 15, no. 8, pp. 38-42, Aug 1978.
* [14] P.D.Fisher, \"Improving on police radar,\" _IEEE Spectrum_, vol. 29, no. 7, pp. 38-43, July 1992.
* [15] D. Karunitalaka, F. Zafar, V. Kalavally, and R. Parthiban, \"LED based indoor visible light communications: State of the art,\" _IEEE Communications Surveys Tutorials_, vol. 17, no. 3, pp. 1649-1678, thirdquarter 2015.
* [16] P. H. Pathak, X. Feng, P. Hu, and P. Mohapatra, \"Visible light communication, networking, and sensing: A survey, potential and challenges,\" _IEEE communications surveys & tutorials_, vol. 17, no. 4, pp. 2047-2077, 2015.
* [17] S. H. Yu, O. Shih, H. M. Tsai, N. Wisitpongphan, and R. D. Roberts, "Smart automotive lighting for vehicle safety," _IEEE Communications Magazine_, vol. 51, no. 12, pp. 50-59, December 2013.
* [18] A. M. Cailean and M. Dimian, "Current challenges for visible light communications usage in vehicle applications: A survey," _IEEE Communications Surveys Tutorials_, vol. 19, no. 4, pp. 2681-2703, Fourthquarter 2017.
* [19] A. M. Cailean and M. Dimian, \"Impact of IEEE 802.15.7 standard on Visible Light Communications usage in automotive applications,\" _IEEE Communications Magazine_, vol. 55, no. 4, pp. 169-175, April 2017.
* [20] M. Akanegawa, Y. Tanaka, and M. Nakagawa, "Basic study on traffic information system using LED traffic lights," _IEEE Transactions on Intelligent Transportation Systems_, vol. 2, no. 4, pp. 197-203, Dec 2001.
* [21] N. Kumar, D. Terra, N. Lourenco, L. N. Alves, and R. L. Aguiar, \"Visible light communication for intelligent transportation in road safety applications,\" in _2011 7th International Wireless Communications and Mobile Computing Conference_, July 2011, pp. 1513-1518.
* [22] S. Lee, J. K. Kwon, S.-Y. Jung, and Y.-H. Kwon, \"Evaluation of visible light communication channel delay profiles for automotive applications,\" _EURASIP Journal on Wireless Communications and Networking_, vol. 2012, no. 1, p. 370, Dec 2012.
* [23] H. Q. Nguyen, J. H. Choi, M. Kang, Z. Ghassemlooy, D. H. Kim, S. K. Lim, T. G. Kang, and C. G. Lee, \"A MATLAB-based simulation program for indoor visible light communication system,\" in _2010 7th International Symposium on Communication Systems, Networks Digital Signal Processing (CSNDSP 2010)_, July 2010, pp. 537-541.
* [24] P. Luo, Z. Ghassemlooy, H. L. Min, E. Bentley, A. Burton, and X. Tang, \"Performance analysis of a car-to-car visible light communication system,\" _Appl. Opt._, vol. 54, no. 7, pp. 1696-1706, Mar 2015.
* [25] M. Elamassie, M. Karbalayghareh, F. Miramirkhani, R. C. Kizilirmak, and M. Uysal, "Effect of fog and rain on the performance of vehicular visible light communications," in _IEEE 87th Vehicular Technology Conference (VTC2018-Spring)_, 2018.
* [26] F. R. Gfeller and U. Bapst, "Wireless in-house data communication via diffuse infrared radiation," _Proceedings of the IEEE_, vol. 67, no. 11, pp. 1474-1486, Nov 1979.
* [27] M. Uysal, C. Capsoni, Z. Ghassemlooy, A. Boucouvalas, and E. Udvary, _Optical wireless communications: an emerging technology_. Springer, 2016.
* [28] F. Miramirkhani, O. Narmanlioglu, M. Uysal, and E. Panayirci, "A mobile channel model for VLC and application to adaptive system design," _IEEE Communications Letters_, vol. 21, no. 5, pp. 1035-1038, May 2017.
* [29] T. Rappaport, R. Heath, R. Daniels, and J. Murdock, _Millimeter Wave Wireless Communications_, ser. Prentice Hall Communications Engineering and Emerging Technologies Series from Ted Rappaport. Pearson Education, 2014.
* [30] R. E. Stark, \"Road surface's reflectance influences lighting design.\" _Lighting Design Applications_, vol. 16, pp. 42-64, 04 1986.
* [31] D. Sawicki, _Traffic Radar Handbook: A Comprehensive Guide to Speed Measuring Systems_. AuthorHouse, 2002.
* [32] M. Uysal, Z. Ghassemlooy, A. Bekkali, A. Kadri, and H. Menouar, "Visible light communication for vehicular networking: Performance study of a V2V system using a measured headlamp beam pattern model," _IEEE Vehicular Technology Magazine_, vol. 10, no. 4, pp. 45-53, Dec 2015.
* [33] L. Cheng, W. Viriyasitavat, M. Boban, and H. M. Tsai, \"Comparison of radio frequency and visible light propagation channels for vehicular communication,\" _IEEE Access_, vol. 6, pp. 2634-2644, 2018.
* [34] C.-H. Yeh, Y.-L. Liu, and C.-W. Chow, \"Real-time white-light phosphor-LED visible light communication (VLC) with compact size,\" _Optics express_, vol. 21, no. 22, pp. 26 192-26 197, 2013.
* [35] M. Vyas, H. Sarath, K. Smitha, and A. Bagubali, \"Modern automotive embedded systems with special mention to radars,\" in _2017 2nd IEEE International Conference on Recent Trends in Electronics, Information Communication Technology (RTEICT)_, May 2017, pp. 1618-1625.
* [36] K. Tomiyasu, \"Conceptual performance of bistatic Doppler radar for vehicle speed determination,\" _IEEE Transactions on Vehicular Technology_, vol. 30, no. 3, pp. 130-134, Aug 1981.
**Hisham Abuella** received the B.Sc. degree in Communications and Electronics Engineering from Ain Shams University, Cairo, Egypt, in 2013. He worked as a Digital System Design Engineer at Surdon Semiconductor Company, Cairo, Egypt, for one year. In Fall 2014, he joined Istanbul Sehir University, Istanbul, Turkey, as a Research Assistant for his M.Sc. degree in Electronics and Computer Engineering. He then joined Oklahoma State University as a Graduate Research Assistant to pursue his Ph.D. at the School of Electrical and Computer Engineering in Spring 2017. He is currently working with Dr. Sabit Ekin at the Wireless Communications Research Lab (WCRL). His current research interests include visible light communication, wireless communication system design using SDRs, visible light sensing applications, and hybrid RF/VLC system performance analysis.
**Farshad Miramirkhani** received the B.Sc. and M.Sc. degrees with high honors in electronics and communication engineering from the University of Isfahan, Isfahan, Iran, in 2011 and 2014, respectively. In 2014, he joined the Communication Theory and Technologies (CT&T) Research Group as a research assistant, working toward his doctorate under the supervision of Prof. Murat Uysal at Ozyegin University, Istanbul, Turkey. He will subsequently be employed by Gebze Technical University as an Assistant Professor in 2018. He has contributed to the standardization work of IEEE 802.15.7r1 (Short Range Optical Wireless Communications). The LiFi channels developed by Prof. Murat Uysal and Mr. Miramirkhani were selected as the LiFi Reference Channel Models by the IEEE 802.15.7r1 Task Group during the IEEE meeting held in Bangkok, Thailand, in September 2015. He has served as a reviewer for several prestigious IEEE and OSA journals and conferences, as well as an Editorial Board Member of Optical Communications, Clausius Scientific Press. His current research interests include optical wireless communications, indoor visible light communications, underwater visible light communications, vehicular visible light communications and channel modeling.

**Sabit Ekin** joined the School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK, as an Assistant Professor in Fall 2016. He has four years of industrial experience at Qualcomm, Inc. as a Senior Modem Systems Engineer in the Department of Qualcomm Mobile Computing. He received his Ph.D. in Electrical and Computer Engineering from Texas A&M University, College Station, TX, in 2012, his M.Sc. in Electrical Engineering from New Mexico Tech, NM, in 2008, and his B.Sc. in Electrical and Electronics Engineering from Eskişehir Osmangazi University, Turkey, in 2006. He was a visiting research assistant in the Electrical and Computer Engineering Program at Texas A&M University at Qatar (2008-2009). During the summer of 2012, he worked with the femtocell interference management team in Corporate R&D at the New Jersey Research Center, Qualcomm Inc. After his Ph.D., he joined Qualcomm Inc., San Diego, CA, where he received numerous Qualstar awards for his achievements and contributions to cellular modem receiver design. His research interests are in the areas of design and performance analysis of communication systems from both theoretical and practical points of view, particularly interference management and statistical modeling of interference in next-generation wireless systems, e.g., 5G, mmWave, HetNets, visible light communications, and cognitive radio networks.
**Murat Uysal** received the B.Sc. and M.Sc. degrees in electronics and communication engineering from Istanbul Technical University, Istanbul, Turkey, in 1995 and 1998, respectively, and the Ph.D. degree in electrical engineering from Texas A&M University, College Station, TX, USA, in 2001. He is currently a Full Professor and the Chair of the Department of Electrical and Electronics Engineering at Ozyegin University, Istanbul. He also serves as the Founding Director of the Center of Excellence in Optical Wireless Communication Technologies. Prior to joining Ozyegin University, he was a tenured Associate Professor with the University of Waterloo, Canada, where he still holds an adjunct faculty position. He has authored some 290 journal and conference papers in his research topics and received more than 7500 citations. His research interests are in the broad areas of communication theory and signal processing, with a particular emphasis on the physical-layer aspects of wireless communication systems in radio and optical frequency bands. His distinctions include the Marsland Faculty Fellowship in 2004, the NSERC Discovery Accelerator Supplement Award in 2008, the University of Waterloo Engineering Research Excellence Award in 2010, the Turkish Academy of Sciences Distinguished Young Scientist Award in 2011, and the Ozyegin University Best Researcher Award in 2014. He currently serves on the editorial board of the IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS. In the past, he was an Editor of the IEEE TRANSACTIONS ON COMMUNICATIONS, the IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, the IEEE COMMUNICATIONS LETTERS, the Wireless Communications and Mobile Computing Journal, and the Transactions on Emerging Telecommunications Technologies, and a Guest Editor of the IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS Special Issues on Optical Wireless Communication (2009 and 2015).

He was involved in the organization of several IEEE conferences at various levels. He served as the Chair of the Communication Theory Symposium of IEEE ICC 2007, the Chair of the Communications and Networking Symposium of IEEE CCECE 2008, the Chair of the Communication and Information Theory Symposium of IWCMC 2011, a TPC Co-Chair of IEEE WCNC 2014, and the General Chair of IEEE IWOW 2015. Over the years, he has served on the technical program committees of more than 100 international conferences and workshops in the communications area.
**Samir Ahmed** is a Professor of Transportation Systems & Engineering at Oklahoma State University. He received his B.Sc. in Civil Engineering from Cairo University, Egypt, in 1968; his M.Sc. in Civil Engineering from McGill University, Canada; and his Ph.D. from The University of Oklahoma, USA. Dr. Ahmed has more than thirty-six years of experience in Transportation Engineering research. He has been involved in numerous research projects sponsored by the Federal Highway Administration, the Federal Transit Administration, the Oklahoma Department of Transportation and various transportation agencies. His research interests include: design, planning, and management of transportation systems and facilities; highway traffic operations and control; intelligent transportation/infrastructure systems; transportation safety; systems modeling, simulation, and optimization; and statistical quality assurance and quality control of highway construction.

The introduction of light emitting diodes (LED) in automotive exterior lighting systems provides opportunities to develop viable alternatives to conventional communication and sensing technologies. Most of the advanced driver-assist and autonomous vehicle technologies are based on Radio Detection and Ranging (RADAR) or Light Detection and Ranging (LiDAR) systems that use radio frequency or laser signals, respectively. While reliable and real-time information on vehicle speeds is critical for traffic operations management and autonomous vehicle safety, RADAR or LiDAR systems have some deficiencies, especially in curved road scenarios where the incidence angle is rapidly varying. In this paper, we propose a novel speed estimation system, the so-called Visible Light Detection and Ranging (ViLDAR), that builds upon sensing the visible light variation of the vehicle's headlamp. We determine the accuracy of the proposed speed estimator in straight and curved road scenarios. We further present how the algorithm design parameters and the channel noise level affect the speed estimation accuracy. For wide incidence angles, the simulation results show that ViLDAR outperforms RADAR/LiDAR systems in both straight and curved road scenarios.
arxiv-format/2006_14285v2.md | # Mobile smartphone tracing can detect almost all SARS-CoV-2 infections
Bastian Prasse and Piet Van Mieghem

Delft University of Technology

Faculty of Electrical Engineering, Mathematics and Computer Science, P.O. Box 5031, 2600 GA Delft, The Netherlands

_email_: [email protected], [email protected]
November 4, 2021
## 1 Introduction
The COVID-19 pandemic triggered firm lockdowns of societies and economies around the world. Lockdown measures must be released gently and, if necessary, retightened to avoid a dramatic second wave of COVID-19. To trace the pandemic, smartphone apps have recently received a lot of attention [1, 2, 3]. A particular challenge to estimating the prevalence of COVID-19 are the asymptomatic infections. Recent _contact apps_ aim to alarm the user of a potential infection, if the user has been close to another user with a confirmed SARS-CoV-2 infection. Alarming individuals by contact apps is a particular method of _social alertness_ [4, 5, 6, 7, 8]. If alerted, individuals are more cautious and less likely to become infected. For a comparison of the effect of social alertness and social distancing, we refer the reader to [9]. The awareness of potential infections may lead to suppression of the virus [10].
The intended use of some smartphone apps goes beyond alarming individuals. For instance, in the _COVID Symptom Study_ [3], smartphone users provide their health status as a self-report via an app on a daily basis. The self-reports include user information, such as age and location, and potential COVID-19 symptoms, such as fever or loss of smell and taste. The self-reports aid in identifying emerging geographical hotspots of SARS-CoV-2 infections.
Previous studies [11, 12, 13, 2] consider _aggregated_ location information, in the form of mobility flow or population density. Here, we explore the full potential of location information for tracing the spread of COVID-19. More precisely, given the locations of the app users, our algorithm called BETIS, Bayesian Estimation for Tracing Infection States, finds nearly all infected users. Furthermore, BETIS traces the total number of infections in the _whole_ population, consisting of users and non-users. Hence, complementing BETIS with border control, medical testing and quarantine enforcement is a second potential pillar, besides vaccine development, to eradicate the coronavirus. Since society seems convinced that the only hope to abandon the destructive impact of COVID-19 is a vaccine, we believe that BETIS is a worthy second horse in the race.
## 2 Epidemic model
We consider the spread of SARS-CoV-2 among \(N\) individuals. The individuals \(i=1,\ldots,N_{\rm u}\), with \(N_{\rm u}\leq N\), are users of the smartphone app. Thus, the fraction of smartphone users equals \(c_{0}=N_{\rm u}/N\), while the remaining individuals \(N_{\rm u}+1,\ldots,N\) do not use the app. Every user \(i=1,\ldots,N_{\rm u}\) reports COVID-19 related symptoms through the app, e.g., via a questionnaire [3]. At any discrete time \(k\in\mathbb{N}\), every individual \(i\) has a viral state \(X_{i}[k]\in\mathcal{C}\). The set of compartments equals \(\mathcal{C}=\{\mathcal{S},\mathcal{S}_{\rm fa},\mathcal{E},\mathcal{I},\mathcal{I}_{\rm a},\mathcal{R}\}\). The state \(X_{i}[k]=\mathcal{S}\) denotes that individual \(i\) is _susceptible_ (healthy). There are other diseases with symptoms similar to COVID-19, for instance influenza. Thus, the self-reports via the app might produce _false alarms_, which point erroneously to a SARS-CoV-2 infection while the individual suffers from another disease. The viral state \(X_{i}[k]=\mathcal{S}_{\rm fa}\) indicates that individual \(i\) is infected by a disease other than COVID-19 with similar symptoms. The _exposed_ state \(X_{i}[k]=\mathcal{E}\) denotes that individual \(i\) is infected by SARS-CoV-2 but not contagious yet. After the exposed state \(\mathcal{E}\), an individual becomes either _infectious symptomatic_ \(\mathcal{I}\) or _infectious asymptomatic_ \(\mathcal{I}_{\rm a}\). Individuals in either infectious state \(\mathcal{I}\) and \(\mathcal{I}_{\rm a}\) are contagious to susceptible individuals in their vicinity. After some time, symptomatic infected individuals in \(\mathcal{I}\) transition to the _symptomatic removed_ state \(\mathcal{R}\), due to recovery, quarantine, hospitalisation or death. Removed individuals in \(\mathcal{R}\) cannot infect susceptible individuals any longer. We assume that a recovered individual is immune.
Hence, multiple infections do not occur.
The BETIS algorithm estimates the viral state \(X_{i}[k]\) of each app user \(i\). In addition to the health self-reports, BETIS uses the neighbourhood \(\mathcal{N}_{{\rm u},i}[k]\subset\{1,\ldots,N_{\rm u}\}\) for each user \(i=1,\ldots,N_{\rm u}\). The neighbourhood \(\mathcal{N}_{{\rm u},i}[k]\) consists of the contacts of user \(i\) to other users at time \(k\). Two users are "in contact" with each other, if the users are physically close for a sufficiently long time period. For instance, the NHS Test and Trace service defines a contact when users are within 2 meters of each other for more than 15 minutes [14]. The neighbourhood \(\mathcal{N}_{{\rm u},i}[k]\) can be obtained in two ways: the mobile app can perform direct measurements of the neighbourhood \(\mathcal{N}_{{\rm u},i}[k]\), e.g., by Bluetooth. Alternatively, the app can use a \(2\times 1\) location vector \(z_{i}[k]\in\mathbb{R}^{2}\), which specifies the latitude and longitude of user \(i\) at time \(k\) and can be obtained, for instance, by GPS. The neighbourhood of user \(i\) is obtained from the location vector \(z_{i}[k]\) by
\\[\\mathcal{N}_{{\\rm u},i}[k]=\\big{\\{}j=1, ,N_{\\rm u},j\
eq i\\big{\\|}\\|z_{i}[k]- z_{j}[k]\\|_{2}\\leq d_{\\rm inf}\\big{\\}}\\]
for some distance \(d_{\rm inf}\). The sole location information in the BETIS estimation algorithm is the set of neighbourhoods \(\mathcal{N}_{{\rm u},i}[k]\). We do not distinguish between neighbourhoods \(\mathcal{N}_{{\rm u},i}[k]\) that were measured directly, by Bluetooth, or indirectly, by GPS coordinates. For the individuals \(i=N_{\rm u}+1,\ldots,N\), who do not use the app, neither location information nor health self-reports are available. Since location information for non-users \(i=N_{\rm u}+1,\ldots,N\) is not available, non-users are not registered in the neighbourhood \(\mathcal{N}_{{\rm u},j}[k]\) of a user \(j=1,\ldots,N_{\rm u}\). The complete neighbourhood of an individual \(i\), consisting of both users and non-users, is denoted by
\[\mathcal{N}_{i}[k]=\left\{j=1,\ldots,N,\,j\neq i\,\big{|}\,\|z_{i}[k]-z_{j}[k]\|_{2}\leq d_{\rm inf}\right\}. \tag{1}\]
In contrast to the neighbourhood \\(\\mathcal{N}_{\\rm u,i}[k]\\) of users, the neighbourhood \\(\\mathcal{N}_{i}[k]\\) is not measured. The number of contacts with non-users is denoted by
\\[N_{\\rm nonuser,i}[k]=\\left|\\mathcal{N}_{i}[k]\\right|-\\left|\\mathcal{N}_{\\rm u,i}[k]\\right|.\\]
We assume that the distribution of the number of neighbours \\(N_{\\rm nonuser,i}[k]\\),
\\[f(m)=\\mathrm{E}_{i,k}\\left[\\Pr\\left[N_{\\rm nonuser,i}[k]=m\\right]\\right],\\]
is known, where the expectation is computed with respect to every user \(i\) and all times \(k\). The average distribution \(f(m)\) of contacts with non-users can be obtained from a representative subgroup of the population.
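For instance, given the contact counts with non-users recorded for a representative subgroup, the empirical distribution \(f(m)\) can be estimated by simple counting (a sketch; the function name is ours):

```python
from collections import Counter

def contact_distribution(counts):
    """Empirical estimate of f(m) from observed numbers of non-user contacts.

    counts: observed values of N_nonuser,i[k], pooled over users i and times k.
    Returns a dict mapping m to the relative frequency of m.
    """
    freq = Counter(counts)
    total = len(counts)
    return {m: freq[m] / total for m in sorted(freq)}
```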
We model the spread of COVID-19 by a hidden Markov model, which consists of two parts: first, the dynamics of the viral state \(X_{i}[k]\); second, the users' behaviour of reporting their viral state \(X_{i}[k]\).
### Dynamics of the viral state \\(X_{i}[k]\\)
Consider the infection of a susceptible individual \\(i\\), with \\(X_{i}[k]=\\mathcal{S}\\) or \\(X_{i}[k]=\\mathcal{S}_{\\rm fa}\\). Then, individual \\(i\\) traverses the viral states \\(\\mathcal{E}\\rightarrow\\mathcal{I}\\rightarrow\\mathcal{R}\\) for a _symptomatic_ infection. Analogously, the course of an _asymptomatic_ infection is \\(\\mathcal{E}\\rightarrow\\mathcal{I}_{\\rm a}\\rightarrow\\mathcal{R}\\). The dynamics of the hidden Markov model are determined by the transition probabilities between the viral states. A susceptible individual \\(i\\) without symptoms, \\(X_{i}[k]=\\mathcal{S}\\), contracts a disease with similar symptoms to COVID-19 with the probability \\(\\vartheta\\),
\\[\\Pr\\left[X_{i}[k+1]=\\mathcal{S}_{\\rm fa}\\big{|}X_{i}[k]=\\mathcal{S}\\right]=\\vartheta,\\]
and cures with the curing probability \\(\\delta\\),
\\[\\Pr\\left[X_{i}[k+1]=\\mathcal{S}\\big{|}X_{i}[k]=\\mathcal{S}_{\\rm fa}\\right]=\\delta.\\]
An infectious individual \\(j\\), with \\(X_{j}[k]=\\mathcal{I}\\) or \\(X_{j}[k]=\\mathcal{I}_{\\rm a}\\), infects a susceptible individual \\(i\\) with the infection probability \\(\\beta\\), if individual \\(j\\) is in the neighbourhood \\(\\mathcal{N}_{i}[k]\\) of individual \\(i\\). The infection probability \\(\\beta\\) depends on the contagiousness of SARS-CoV-2 and on the prevalence of facemasks and other spread reduction measures. The set
\\[\\mathcal{N}_{\\rm inf,}[k]=\\left\\{j\\in\\mathcal{N}_{i}[k]\\big{|}X_{j}[k]= \\mathcal{I}\\text{ or }X_{j}[k]=\\mathcal{I}_{\\rm a}\\right\\}\\]
consists of all _infectious_ individuals \(j\), users and non-users, that are close to individual \(i\) at time \(k\). The number of infectious neighbours of individual \(i\) at time \(k\) is denoted by \(|\mathcal{N}_{{\rm inf},i}[k]|\). The probability of an infection of individual \(i\) follows from potential infections by any individual \(j\) in the set
\\[\\Pr\\left[X_{i}[k+1]=\\mathcal{E}\\big{|}X_{i}[k]\\in\\{\\mathcal{S}, \\mathcal{S}_{\\mathrm{fa}}\\},\\mathcal{N}_{\\mathrm{inf},i}[k]\\right]=1-(1-\\beta) ^{\\left|\\mathcal{N}_{\\mathrm{inf},i}[k]\\right|}\\,. \\tag{2}\\]
Individuals leave the exposed state \\(\\mathcal{E}\\) with the incubation probability \\(\\gamma\\) to an infectious state,
\\[\\Pr\\big{[}X_{i}[k+1]=c\\big{|}X_{i}[k]=\\mathcal{E}\\big{]}=\\begin{cases}\\gamma \\alpha&\\text{if}\\quad c=\\mathcal{I}_{\\mathrm{a}},\\\\ \\gamma\\left(1-\\alpha\\right)&\\text{if}\\quad c=\\mathcal{I},\\\\ \\left(1-\\gamma\\right)&\\text{if}\\quad c=\\mathcal{E}.\\end{cases}\\]
Here, \\(\\alpha\\) denotes the probability of an asymptomatic infection. Any symptomatic infected individual is removed with the removal probability \\(\\delta\\). In other words,
\\[\\Pr\\big{[}X_{i}[k+1]=\\mathcal{R}\\big{|}X_{i}[k]=\\mathcal{I}\\big{]}=\\delta. \\tag{3}\\]
Denote the first time that individual \\(i\\) is infected by \\(k_{\\mathcal{I},i}\\), \\(X_{i}\\left[k_{\\mathcal{I},i}\\right]=\\mathcal{I}\\) and \\(X_{i}\\left[k_{\\mathcal{I},i}-1\\right]=\\mathcal{E}\\). Similarly, denote the first time that individual \\(i\\) is removed by \\(k_{\\mathcal{R},i}\\). Since the viral state compartments are in the order \\(\\mathcal{E}\\rightarrow\\mathcal{I}\\rightarrow\\mathcal{R}\\), it holds that \\(k_{\\mathcal{R},i}>k_{\\mathcal{I},i}\\). The _sojourn time_\\(k_{\\mathcal{R},i}-k_{\\mathcal{I},i}\\) of state \\(\\mathcal{I}\\) is the number of discrete times \\(k\\) that individual \\(i\\) has been infected. By (3), we implicitly assume that the sojourn time follows a geometric distribution with mean \\(1/\\delta\\).
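The transition rules above can be collected into a single sampling routine. The sketch below is our own: it evaluates a possible infection first and, in its absence, the \(\mathcal{S}\leftrightarrow\mathcal{S}_{\rm fa}\) transitions (the excerpt does not spell out the joint transition), and it leaves the asymptomatic state \(\mathcal{I}_{\rm a}\) unchanged, since its removal is not specified here:

```python
import random

def next_state(state, n_inf, beta, theta, gamma, alpha, delta, rng=random):
    """Sample the viral state at time k+1 from the state at time k.

    n_inf: number of infectious neighbours |N_inf,i[k]| of the individual.
    """
    if state in ("S", "S_fa"):
        # eq. (2): infection by any of the n_inf infectious neighbours
        if rng.random() < 1.0 - (1.0 - beta) ** n_inf:
            return "E"
        if state == "S":        # contract a COVID-like disease with prob. theta
            return "S_fa" if rng.random() < theta else "S"
        return "S" if rng.random() < delta else "S_fa"  # cure with prob. delta
    if state == "E":            # leave E with incubation probability gamma
        if rng.random() < gamma:
            return "I_a" if rng.random() < alpha else "I"
        return "E"
    if state == "I":            # eq. (3): removal with probability delta
        return "R" if rng.random() < delta else "I"
    return state                # I_a and R: unchanged in this sketch
```

With these rules, the sojourn time in \(\mathcal{I}\) is indeed geometric with mean \(1/\delta\), as stated above.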
### Reporting the viral state \\(X_{i}[k]\\)
If a user experiences COVID-19 related symptoms at time \(k\), then the user submits a health report. We denote the _reported_ viral state of user \(i\) as \(X_{\mathrm{rep},i}[k]\). Since the users themselves report their health status, the reported viral state \(X_{\mathrm{rep},i}[k]\) might be inaccurate. At every time \(k\), the reported state \(X_{\mathrm{rep},i}[k]\) equals either: healthy \(\mathcal{S}\); contracted a disease other than COVID-19, \(\mathcal{S}_{\mathrm{fa}}\); or infected by COVID-19, \(\mathcal{I}\). A user \(i\) without symptoms, \(X_{i}[k]\in\{\mathcal{S},\mathcal{E},\mathcal{I}_{\mathrm{a}},\mathcal{R}\}\), reports a healthy viral state \(X_{\mathrm{rep},i}[k]=\mathcal{S}\). Thus, BETIS considers that asymptomatic infections in \(\mathcal{I}_{\mathrm{a}}\) cannot be detected by self-reports. If user \(i\) experiences symptoms that are related to COVID-19, \(X_{i}[k]=\mathcal{S}_{\mathrm{fa}}\) or \(X_{i}[k]=\mathcal{I}\), then user \(i\) specifies the symptoms via a health report in the app. Based on the health report, a user \(i\) with symptoms is classified either as suffering from COVID-19, \(X_{\mathrm{rep},i}[k]=\mathcal{I}\), or from another disease, \(X_{\mathrm{rep},i}[k]=\mathcal{S}_{\mathrm{fa}}\). Since the symptoms of COVID-19 overlap with symptoms of other diseases, the reported viral states \(X_{\mathrm{rep},i}[k]=\mathcal{I}\) and \(X_{\mathrm{rep},i}[k]=\mathcal{S}_{\mathrm{fa}}\) can be erroneous. The errors in the reported viral state \(X_{\mathrm{rep},i}[k]\) are described by the test statistics
\\[\\Pr\\big{[}X_{\\mathrm{rep},i}[k]=c\\big{|}X_{i}[k]=\\mathcal{S}_{ \\mathrm{fa}}\\big{]}=\\begin{cases}p_{\\mathrm{fa}}&\\text{if}\\quad c=\\mathcal{I},\\\\ 1-p_{\\mathrm{fa}}&\\text{if}\\quad c=\\mathcal{S}_{\\mathrm{fa}},\\end{cases}\\]
and
\\[\\Pr\\big{[}X_{\\mathrm{rep},i}[k]=c\\big{|}X_{i}[k]=\\mathcal{I}\\big{]}= \\begin{cases}p_{\\mathrm{tp}}&\\text{if}\\quad c=\\mathcal{I},\\\\ 1-p_{\\mathrm{tp}}&\\text{if}\\quad c=\\mathcal{S}_{\\mathrm{fa}}.\\end{cases}\\]
Hence, the accuracy of the health report is given by the _false alarm probability_ \(p_{\mathrm{fa}}\) and the _true positive rate_ \(p_{\mathrm{tp}}\).
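The reporting model can likewise be sketched directly from the two test statistics (the function name and state encoding are our own):

```python
import random

def reported_state(state, p_fa, p_tp, rng=random):
    """Sample the self-reported state X_rep,i[k] given the true state X_i[k]."""
    if state in ("S", "E", "I_a", "R"):
        return "S"                  # no symptoms: the user reports healthy
    if state == "S_fa":             # false alarm with probability p_fa
        return "I" if rng.random() < p_fa else "S_fa"
    return "I" if rng.random() < p_tp else "S_fa"   # true infection in I
```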
## 3 Who is infected?
At time \\(k\\), we would like to know who is infected by COVID-19. In other words, for every user \\(i\\), BETIS computes the _symptomatic infection risk_
\\[\\Pr\\left[X_{i}[k]=\\mathcal{I}\\left|\\mathcal{M}[k]\\right.\\right] \\tag{4}\\]
and the _asymptomatic infection risk_
\\[\\Pr\\left[X_{i}[k]=\\mathcal{I}_{\\mathrm{a}}\\left|\\mathcal{M}[k]\\right.\\right].\\]
Here, we formally define all observations, or measurements, up until time \\(k\\) as \\(\\mathcal{M}[k]\\). More specifically, the set \\(\\mathcal{M}[k]\\) specifies the reported viral state \\(X_{\\mathrm{rep},i}[l]\\) and the measured neighbourhood \\(\\mathcal{N}_{\\mathrm{u},i}[l]\\) of every user \\(i=1, ,N_{\\mathrm{u}}\\) at every time \\(l\\leq k\\). In Appendix A, we propose a recursive Bayesian filtering method to (approximately) compute the infection risks \\(\\Pr\\left[X_{i}[k]=\\mathcal{I}\\left|\\mathcal{M}[k]\\right.\\right]\\) and \\(\\Pr\\left[X_{i}[k]=\\mathcal{I}_{\\mathrm{a}}\\left|\\mathcal{M}[k]\\right.\\right]\\). As a side product, we obtain the probabilities \\(\\Pr\\left[X_{i}[k]=c\\left|\\mathcal{M}[k]\\right.\\right]\\) for the other viral states \\(c=\\mathcal{S},\\mathcal{S}_{\\mathrm{fa}},\\mathcal{E},\\mathcal{R}\\). The computation time is polynomial in the number of individuals \\(N\\) and the number of observations \\(k\\).
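The recursion of Appendix A is not reproduced in this excerpt, but its predict-correct structure can be illustrated for a single user with a fixed transition matrix. In BETIS itself, the transition rows for \(\mathcal{S}\) and \(\mathcal{S}_{\rm fa}\) depend on the estimated infection risks of the user's neighbours, so the sketch below deliberately ignores that coupling:

```python
def forward_step(prior, transition, likelihood):
    """One recursive Bayesian (hidden-Markov forward) update for one user.

    prior: Pr[X[k-1] = c | M[k-1]] for each compartment c.
    transition: transition[c][d] = Pr[X[k] = d | X[k-1] = c].
    likelihood: Pr[observation at k | X[k] = c] for each compartment c.
    """
    n = len(prior)
    # predict: propagate the posterior one step through the Markov chain
    predicted = [sum(prior[c] * transition[c][d] for c in range(n))
                 for d in range(n)]
    # correct: weight by the observation likelihood and normalise
    unnorm = [predicted[d] * likelihood[d] for d in range(n)]
    total = sum(unnorm)
    return [p / total for p in unnorm]
```

The prediction step uses the epidemic dynamics and the correction step uses the test statistics of the self-reports; iterating the update over \(k\) yields the posterior given all measurements \(\mathcal{M}[k]\).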
We perform simulations of the epidemic model (Section 2) with \(N=10,000\) moving individuals and vary the fraction of app users \(c_{0}\). The false alarm probability is set to \(p_{\mathrm{fa}}=0.1\) and the true positive rate to \(p_{\mathrm{tp}}=0.9\). _We assume that none of the initial viral states \(X_{1}[1],\ldots,X_{N}[1]\) is known to the BETIS estimation method._ Instead, we solely assume that the prior distribution of the viral state \(X_{i}[1]\) is known. For further details on the parameter settings, we refer to Appendix B.
### Tracing the number of infections
Can BETIS estimate the evolution of the total number of infections in the population? First, we define \(\mathcal{I}_{\mathrm{all}}[k]\) as the _true_ number of individuals, users and non-users, whose viral state \(X_{i}[k]=\mathcal{I}\). BETIS computes the infection risks \(\Pr\left[X_{i}[k]=\mathcal{I}\left|\mathcal{M}[k]\right.\right]\) of the users \(i=1,\ldots,N_{\mathrm{u}}\). Thus, we obtain an estimate of the number of infected individuals, users and non-users, as
\\[\\hat{\\mathcal{I}}_{\\mathrm{all}}[k]=\\frac{N}{N_{\\mathrm{u}}}\\sum_{i=1}^{N_{ \\mathrm{u}}}\\Pr\\left[X_{i}[k]=\\mathcal{I}\\left|\\mathcal{M}[k]\\right.\\right].\\]
For the asymptomatic infections, the quantities \\(\\mathcal{I}_{\\mathrm{a,all}}[k]\\) and \\(\\hat{\\mathcal{I}}_{\\mathrm{a,all}}[k]\\) are defined analogously.
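The upscaling from the users to the whole population is a one-liner (a sketch with our own naming):

```python
def estimate_total(risks, n_population):
    """Estimate of the number of infections among all N individuals.

    risks: the posterior infection risks Pr[X_i[k] = I | M[k]] of the N_u users.
    """
    return n_population / len(risks) * sum(risks)
```

For example, three users with risks 0.5, 0.25 and 0.25 in a population of 30 individuals give an estimate of 10 infections.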
Figure 1 demonstrates the accuracy of the estimated number of symptomatic infections \(\hat{\mathcal{I}}_{\mathrm{all}}[k]\) and asymptomatic infections \(\hat{\mathcal{I}}_{\mathrm{a,all}}[k]\), for different fractions \(c_{0}\) of individuals that use the app. Unsurprisingly, the symptomatic infections \(\mathcal{I}_{\mathrm{all}}[k]\) are traced more accurately than the asymptomatic infections \(\mathcal{I}_{\mathrm{a,all}}[k]\). For all fractions \(c_{0}\), the simulations indicate that the BETIS estimates \(\hat{\mathcal{I}}_{\mathrm{all}}[k]\) and \(\hat{\mathcal{I}}_{\mathrm{a,all}}[k]\) are greater than\({}^{2}\) the true number of infections \(\mathcal{I}_{\mathrm{all}}[k]\), \(\mathcal{I}_{\mathrm{a,all}}[k]\). From a societal point of view, overestimations give safe-side warnings, which is a desirable property of BETIS. Overall, even if only \(c_{0}=20\%\) of the individuals are users, the epidemic outbreak is traced reasonably well.
Footnote 2: It is an open challenge to rigorously show that the BETIS overestimates the true number of infections, \\(\\hat{\\mathcal{I}}_{\\mathrm{all}}[k]>\\mathcal{I}_{\\mathrm{all}}[k]\\) and \\(\\hat{\\mathcal{I}}_{\\mathrm{a,all}}[k]>\\mathcal{I}_{\\mathrm{a,all}}[k]\\), respectively. In [16, 17] for the \\(N\\)-intertwined mean-field approximation (NIMFA) of the susceptible-infected-susceptible (SIS) epidemic process, it is shown that infection states are positively correlated, implying that an infection somewhere in the network cannot lower the probability of infection somewhere else. BETIS assumes in (6) stochastic independence of infection states of different users, and ignoring correlations may explain the overestimations of BETIS.
### Identifying infected individuals
Beyond tracing the total number of SARS-CoV-2 infections, a tremendous challenge is to identify which users are infected. BETIS approximates the posterior probability \\(\\Pr\\left[X_{i}[k]=c\\left|\\mathcal{M}[k]\\right.\\right]\\) for every compartment \\(c\\in\\mathcal{C}\\). Thus, we obtain the Bayesian estimate of the viral state \\(X_{i}[k]\\) at any time \\(k\\) as
\\[\\hat{X}_{i}[k]=\\underset{c\\in\\mathcal{C}}{\\operatorname{argmax}}\\;\\Pr\\left[X_{ i}[k]=c\\left|\\mathcal{M}[k]\\right.\\right]. \\tag{5}\\]
At any time \\(k\\), the number of _true positive_ estimates of symptomatic infections equals the number of users \\(i\\) for which both \\(X_{i}[k]=\\mathcal{I}\\) and \\(\\hat{X}_{i}[k]=\\mathcal{I}\\). Similarly, the _false positive_ estimates equals the number of users \\(i\\) for which \\(X_{i}[k]\
eq\\mathcal{I}\\) but \\(\\hat{X}_{i}[k]=\\mathcal{I}\\). The number of true and false positive estimates for asymptomatic infections is defined analogously.
In the following, we assume that a fraction of \(c_{0}=0.6\) individuals use the app. Figure 2 demonstrates the accuracy of identifying infectious individuals by the BETIS estimation algorithm. BETIS performs well for identifying symptomatic infections: almost every symptomatic infection is correctly identified (true positives), with relatively few false positives. On the other hand, Figure 2 shows that _asymptomatic_ infections cannot be directly identified by (5): there is no user \(i\) whose most likely state is asymptomatic infectious, \(\hat{X}_{i}[k]=\mathcal{I}_{\text{a}}\), which contrasts with the accuracy of BETIS in tracing the _total_ number of asymptomatic infections \(\mathcal{I}_{\text{a,all}}[k]\), see Figure 1.

Figure 1: **Tracing the number of infections.** The total number of symptomatic infections \(\mathcal{I}_{\text{all}}[k]\) and asymptomatic infections \(\mathcal{I}_{\text{a,all}}[k]\) of all individuals versus time \(k\), following the SIR epidemic model. The fraction \(c_{0}\) of individuals, who are contact app users and report COVID-19 related symptoms, is varied. Based on the self-reports, our BETIS algorithm produces estimates \(\hat{\mathcal{I}}_{\text{all}}[k]\), \(\hat{\mathcal{I}}_{\text{a,all}}[k]\) for the total number of infections.
Nonetheless, we show in Figure 3 that BETIS is valuable for identifying asymptomatic infections. Health agencies rely on reverse transcription polymerase chain reaction (RT-PCR) test methods to accurately determine whether an individual is infected by SARS-CoV-2. In an ideal, utopian scenario, there would be sufficient RT-PCR testing capacities to check every individual regularly, such that every asymptomatic infection would be detected timely. However, the testing capacities are insufficient, and only a limited number of people can be tested by RT-PCR methods. Specifically, suppose that only \\(N_{\\text{test}}<<N\\) individuals can be tested for the identifying asymptomatic infections. Which \\(N_{\\text{test}}\\) individuals are the most likely to suffer from an asymptomatic infection and return a positive test result? Our approach is to select those \\(N_{\\text{test}}\\) users who have the greatest probability of an asymptomatic infection, \\(\\Pr\\left[X_{i}[k]=\\mathcal{I}_{\\text{a}}\\left|\\mathcal{M}[k]\\right.\\right]\\), which is computed by BETIS.
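Selecting the \(N_{\text{test}}\) users to be tested then amounts to sorting the asymptomatic risks computed by BETIS (a sketch):

```python
def select_for_testing(asymp_risks, n_test):
    """Indices of the n_test users with the largest asymptomatic risk."""
    ranked = sorted(range(len(asymp_risks)),
                    key=asymp_risks.__getitem__, reverse=True)
    return ranked[:n_test]
```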
Figure 3 shows that the contact app indeed helps in identifying users with asymptomatic infections. We emphasise that \(N_{\text{test}}=100\) tests corresponds to testing less than \(2\%\) of the users. Furthermore, _group testing_ methods [18] are able to identify all infections within a group of \(N_{\text{test}}\) individuals, by using significantly less than \(N_{\text{test}}\) tests. In particular, the combination of the group testing method for SARS-CoV-2 by Shental _et al._ [19] with BETIS is a promising approach to detect the majority of asymptomatic users.

Figure 2: **Identifying infected users.** The solid line in the subplots depicts the number of users with symptomatic and asymptomatic infections, respectively. The marks correspond to the number of users that BETIS correctly (true positive) and incorrectly (false positive) identifies as infectious.
### Performance limits
The value of BETIS lies in jointly processing the location information and health reports of the users. Thus, the accuracy of BETIS depends on the testing statistic of the self-reports. We deteriorate the test statistics by increasing the false alarm probability to \\(p_{\\text{fa}}=0.2\\) and decreasing the true positive rate to \\(p_{\\text{tp}}=0.75\\).
Figures 4-6, in comparison with Figures 1-3, show that inaccurate health reports directly affect the accuracy of tracing the number of infections and identifying infectious users. Hence, the development of accurate methods for assessing the user's health status is important. Nonetheless, even for inaccurate health reports, BETIS constitutes a major tool for detecting infections in any pandemic.

Figure 4: **Tracing the number of infections (inaccurate health reports).** The total number of symptomatic infections \(\mathcal{I}_{\text{all}}[k]\), asymptomatic infections \(\mathcal{I}_{\text{a,all}}[k]\) and the respective BETIS estimates \(\hat{\mathcal{I}}_{\text{all}}[k]\), \(\hat{\mathcal{I}}_{\text{a,all}}[k]\). In comparison to Figure 1, the health report by the users is less reliable.

Figure 5: **Identifying infected users (inaccurate health reports).** The solid line depicts the number of users with a symptomatic infection. The marks correspond to the number of users that BETIS correctly (true positive) and incorrectly (false positive) identifies as infectious. In comparison to Figure 2, the health report by the users is less reliable.
We emphasise that there is a twofold benefit for every person who installs the app. First, every single user actively contributes to tracing and eradicating SARS-CoV-2, which is advantageous to the whole society. Second, there is an immediate personal benefit for every app user: am I infected or not? The combination of contributing to society and gaining information on one's personal health is a great incentive to install the app.
The algorithmic framework of BETIS can be used as basis for further improvements. Of particular interest are human mobility patterns, to obtain a more accurate estimate of the interactions between users and non-users. Another direction is the use of measurements additional to the health self reports, such as randomised COVID-19 tests of the whole population.
## Acknowledgements
This work has been supported by the Universiteitsfonds Delft in the program TU Delft COVID-19 Response Fund.
## References
* [1] L. Ferretti, C. Wymant, M. Kendall, L. Zhao, A. Nurtay, L. Abeler-Dorner, M. Parker, D. Bonsall, and C. Fraser, \"Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing,\" _Science_, vol. 368, no. 6491, 2020.
* [2] N. Oliver, B. Lepri, H. Sterly, R. Lambiotte, S. Deletaille, M. De Nadai, E. Letouze, A. A. Salah, R. Benjamins, C. Cattuto, V. Colizza, N. de Cordes, S. P. Fraiberger, T. Koebe, S. Lehmann, J. Murillo, A. Pentland, P. N. Pham, F. Pivetta, J. Saramäki, S. V. Scarpino, M. Tizzoni, S. Verhulst, and P. Vinck, \"Mobile phone data for informing public health actions across the COVID-19 pandemic life cycle,\" _Science Advances_, 2020.
Figure 6: **Testing users without symptoms to identify asymptomatic infections (inaccurate health reports).** The solid line depicts the number of users with an asymptomatic infection, where the total number of users equals \\(N_{\\text{u}}=6,000\\). The marks correspond to the number of positively tested users, when \\(N_{\\text{test}}\\) users are tested. In comparison to Figure 3, the health report by the users is less reliable.
* [3] D. A. Drew, L. H. Nguyen, C. J. Steves, C. Menni, M. Freydin, T. Varsavsky, C. H. Sudre, M. J. Cardoso, S. Ourselin, J. Wolf, T. D. Spector, A. T. Chan, and COPE Consortium, \"Rapid implementation of mobile technology for real-time epidemiology of COVID-19,\" _Science_, 2020.
* [4] S. Funk, M. Salathe, and V. A. Jansen, \"Modelling the influence of human behaviour on the spread of infectious diseases: a review,\" _Journal of the Royal Society Interface_, vol. 7, no. 50, pp. 1247-1256, 2010.
* [5] I. Z. Kiss, J. Cassell, M. Recker, and P. L. Simon, \"The impact of information transmission on epidemic outbreaks,\" _Mathematical Biosciences_, vol. 225, no. 1, pp. 1-10, 2010.
* [6] F. D. Sahneh and C. Scoglio, \"Epidemic spread in human networks,\" _Proc. IEEE Conf. Decision Control_, pp. 3008-3013, 2011.
* [7] F. D. Sahneh, F. N. Chowdhury, and C. M. Scoglio, \"On the existence of a threshold for preventive behavioral responses to suppress epidemic spreading,\" _Scientific Reports_, vol. 2, p. 632, 2012.
* [8] G. Theodorakopoulos, J.-Y. Le Boudec, and J. S. Baras, \"Selfish response to epidemic propagation,\" _IEEE Transactions on Automatic Control_, vol. 58, no. 2, pp. 363-376, 2012.
* [9] P. Schumm, W. Schumm, and C. Scoglio, \"Impact of preventive behavioral responses to epidemics in rural regions,\" _Procedia Computer Science_, vol. 18, pp. 631-640, 2013.
* [10] S. Funk, E. Gilad, C. Watkins, and V. A. Jansen, \"The spread of awareness and its impact on epidemic outbreaks,\" _Proceedings of the National Academy of Sciences_, vol. 106, no. 16, pp. 6872-6877, 2009.
* [11] M. Tizzoni, P. Bajardi, A. Decuyper, G. K. K. King, C. M. Schneider, V. Blondel, Z. Smoreda, M. C. Gonzalez, and V. Colizza, \"On the use of human mobility proxies for modeling epidemics,\" _PLoS Computational Biology_, vol. 10, no. 7, p. e1003716, 2014.
* [12] L. Bengtsson, J. Gaudart, X. Lu, S. Moore, E. Wetter, K. Sallah, S. Rebaudet, and R. Piarroux, \"Using mobile phone data to predict the spatial spread of cholera,\" _Scientific reports_, vol. 5, p. 8923, 2015.
* [13] F. Finger, T. Genolet, L. Mari, G. C. de Magny, N. M. Manga, A. Rinaldo, and E. Bertuzzo, \"Mobile phone data highlights the role of mass gatherings in the spreading of cholera outbreaks,\" _Proceedings of the National Academy of Sciences_, vol. 113, no. 23, pp. 6421-6426, 2016.
* [14] \"Guidance for contacts of people with confirmed coronavirus (COVID-19) infection who do not live with the person,\" https://www.gov.uk/government/publications/guidance-for-contacts-of-people-with-possible-or-, accessed: 2020-08-25.
* [15] R. Trevethan, \"Sensitivity, specificity, and predictive values: foundations, pliabilities, and pitfalls in research and practice,\" _Frontiers in Public Health_, vol. 5, p. 307, 2017.
* [16] P. Donnelly, \"The correlation structure of epidemic models,\" _Mathematical Biosciences_, vol. 117, pp. 49-75, 1993.
* [17] E. Cator and P. Van Mieghem, \"Nodal infection in Markovian susceptible-infected-susceptible and susceptible-infected-removed epidemics on networks are non-negatively correlated,\" _Physical Review E_, vol. 89, p. 052802, 2014.
* [18] D. Du and F. Hwang, _Combinatorial Group Testing and Its Applications_. World Scientific, 2020.
* [19] N. Shental, S. Levy, V. Wuvshet, S. Skorniakov, B. Shalem, A. Ottolenghi, Y. Greenshpan, R. Steinberg, A. Edri, R. Gillis, M. Goldhirsh, K. Moscovici, S. Sachren, L. M. Friedman, L. Nesher, Y. Shemer-Avni, A. Porgador, and T. Hertz, \"Efficient high-throughput SARS-CoV-2 testing to detect asymptomatic carriers,\" _Science Advances_, p. eabc5961, 2020.
* [20] P. Van Mieghem, _Performance Analysis of Complex Networks and Systems_. Cambridge University Press, 2014.
* [21] Y. Hong, \"On computing the distribution function for the Poisson binomial distribution,\" _Computational Statistics & Data Analysis_, vol. 59, pp. 41-51, 2013.
## Appendix A The BETIS algorithm
### Assumptions in the computations
We define the \\(N_{\\mathrm{u}}\\times 1\\) viral state vector as \\(X[k]=\\left(X_{1}[k],\\ldots,X_{N_{\\mathrm{u}}}[k]\\right)^{T}\\). The reported viral state vector \\(X_{\\mathrm{rep}}[k]\\) is defined analogously. We rely on three assumptions to compute the infection risk (4). First, we assume the conditional stochastic independence
\\[\\Pr\\left[X[k]\\big{|}X_{\\mathrm{rep}}[k],\\mathcal{M}[k-1]\\right] \\approx\\prod_{i=1}^{N_{\\mathrm{u}}}\\Pr\\left[X_{i}[k]\\big{|}X_{\\mathrm{rep},i} [k],\\mathcal{M}[k-1]\\right]. \\tag{6}\\]
There are \\(6^{N_{\\mathrm{u}}}\\) possible combinations of the entries of the viral state vector \\(X[k]\\). Thus, it is practically impossible to state the full distribution of the vector \\(X[k]\\). The assumption (6) instead implies that the distribution of the vector \\(X[k]\\) can be decomposed into the marginal distribution of the entries \\(X_{1}[k]\\), \\(X_{2}[k]\\), \\(\\ldots\\), \\(X_{N_{\\mathrm{u}}}[k]\\), which can be computed separately. Furthermore, assumption (6) might be of relevance to privacy: The full distribution \\(\\Pr\\left[X[k]\\big{|}X_{\\mathrm{rep}}[k],\\mathcal{M}[k-1]\\right]\\) is sensitive data. In contrast, the single factors \\(\\Pr\\left[X_{i}[k]\\big{|}X_{\\mathrm{rep},i}[k],\\mathcal{M}[k-1]\\right]\\) might _in parts_ be made accessible to some individuals.
Furthermore, we make the assumption that the viral state \\(X_{i}[k]\\) does not depend on the measured neighbourhoods \\(\\mathcal{N}_{\\mathrm{u},1}[k],\\ldots,\\mathcal{N}_{\\mathrm{u},N_{\\mathrm{u}}}[k]\\) at time \\(k\\). More precisely,
\\[\\Pr\\left[X_{i}[k]\\big{|}X_{\\mathrm{rep},i}[k],\\mathcal{N}_{\\mathrm{u},1}[k],\\ldots,\\mathcal{N}_{\\mathrm{u},N_{\\mathrm{u}}}[k],\\mathcal{M}[k-1]\\right]=\\Pr\\left[X_{i}[k]\\big{|}X_{\\mathrm{rep},i}[k],\\mathcal{M}[k-1]\\right]. \\tag{7}\\]
The viral state \\(X_{i}[k]\\) does depend on the neighbourhoods \\(\\mathcal{N}_{\\mathrm{u},i}[k-1]\\) at the _previous_ time step \\(k-1\\), due to the infection probability (2). Thus, the impact of the location on the infection dynamics is delayed by one time step, and we consider assumption (7) rather technical. Third, we assume the analogue to (7) for the _joint_ distribution of the random variables \\(X_{1}[k],\\ldots,X_{N_{\\mathrm{u}}}[k]\\),
\\[\\Pr\\left[X[k]\\big{|}X_{\\mathrm{rep}}[k],\\mathcal{N}_{\\mathrm{u},1}[k],\\ldots,\\mathcal{N}_{\\mathrm{u},N_{\\mathrm{u}}}[k],\\mathcal{M}[k-1]\\right]=\\Pr\\left[X[k]\\big{|}X_{\\mathrm{rep}}[k],\\mathcal{M}[k-1]\\right]. \\tag{8}\\]
### Approximation of the infection probability
BETIS computes the infection risk (4) based on the hidden Markov epidemic model in Section 2. However, the location of non-users is unknown. Hence, the set \\(\\mathcal{N}_{\\mathrm{inf},i}[k]\\) of infectious neighbours is not known, and the infection probability (2) cannot be computed directly. Instead, we resort to approximating the infection probability (2), based on the neighbourhood of infected _users_ as
\\[\\mathcal{N}_{\\mathrm{inf},\\mathrm{u},i}[k]=\\left\\{j\\in\\mathcal{N}_{\\mathrm{u },i}[k]\\big{|}X_{j}[k]=\\mathcal{I}\\text{ or }X_{j}[k]=\\mathcal{I}_{\\mathrm{a}}\\right\\}.\\]
In contrast to the complete infectious neighbourhood \\(\\mathcal{N}_{\\mathrm{inf},i}[k]\\), the subset \\(\\mathcal{N}_{\\mathrm{inf},\\mathrm{u},i}[k]\\) can be inferred from the measured neighbourhood \\(\\mathcal{N}_{\\mathrm{u},i}[k]\\), as detailed in Subsection A.3.
With the set \\(\\mathcal{N}_{\\mathrm{inf},\\mathrm{u},i}[k]\\), we approximate the infection probability (2) in two steps. First, at any time \\(k\\), we approximate the probability that a randomly chosen non-user is infected (symptomatically or asymptomatically) by averaging over the infection probability of the users as
\\[p_{\\mathrm{inf}}[k]=\\frac{1}{N_{\\mathrm{u}}}\\sum_{i=1}^{N_{\\mathrm{u}}}\\left( \\Pr\\left[X_{i}[k]=\\mathcal{I}\\left|\\mathcal{M}[k]\\right.\\right]+\\Pr\\left[X_{i} [k]=\\mathcal{I}_{\\mathrm{a}}\\left|\\mathcal{M}[k]\\right.\\right]\\right).\\]
Then, the probability that, out of \\(m\\) randomly chosen non-users, \\(l\\) individuals are infected follows as
\\[p_{\\mathrm{inf},l}[k]=\\binom{m}{l}p_{\\mathrm{inf}}^{l}[k]\\left(1-p_{\\mathrm{ inf}}[k]\\right)^{m-l}.\\]
Thus, given that a user has \\(m\\) contacts with non-users, the probability of an infection by a non-user equals
\\[\\epsilon[k,m]=\\sum_{l=0}^{m}p_{\\mathrm{inf},l}[k]\\left(1-\\left(1-\\beta\\right)^ {l}\\right).\\]
The distribution \\(f(m)\\) of the number of contacts with non-users is known. Hence, the probability that a user is infected by a non-user is approximated by
\\[\\epsilon[k]=\\sum_{m=0}^{\\infty}f(m)\\epsilon[k,m]. \\tag{9}\\]
Second, we use (9) to approximate the infection probability (2). More precisely, BETIS replaces the exact probability (2) by
\\[\\Pr\\left[X_{i}[k+1]=\\mathcal{E}\\big{|}X_{i}[k]\\in\\{\\mathcal{S}, \\mathcal{S}_{\\mathrm{fa}}\\},|\\mathcal{N}_{\\mathrm{inf},\\mathrm{u},i}[k]|\\right] \\approx 1-\\left(1-\\beta\\right)^{|\\mathcal{N}_{\\mathrm{inf},\\mathrm{u},i}[k]|} \\left(1-\\epsilon[k]\\right). \\tag{10}\\]
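The chain of approximations (9)-(10) can be made concrete with a short Python sketch. All function and variable names below are ours, not part of BETIS, and the truncation `m_max` of the outer sum is an assumption that is adequate whenever \\(f(m)\\) has negligible mass beyond the cutoff:

```python
import math

def epsilon_k(p_user_inf, beta, f, m_max=50):
    """Probability that a user is infected by a non-user at time k, Eq. (9).

    p_user_inf : list with P[X_i[k] in {I, I_a} | M[k]] for each user i
    beta       : per-contact infection probability
    f          : f(m), distribution of the number of contacts with non-users
    """
    # average infection probability of a randomly chosen non-user
    p_inf = sum(p_user_inf) / len(p_user_inf)
    eps = 0.0
    for m in range(m_max + 1):
        # P[l infected among m contacts] * P[infection | l infectious contacts]
        eps_km = sum(
            math.comb(m, l) * p_inf**l * (1 - p_inf)**(m - l)
            * (1 - (1 - beta)**l)
            for l in range(m + 1)
        )
        eps += f(m) * eps_km
    return eps

def infection_prob(n_inf_user_neighbours, beta, eps):
    """Approximate transition probability S -> E, Eq. (10)."""
    return 1 - (1 - beta)**n_inf_user_neighbours * (1 - eps)
```

With no infectious user neighbours and no non-user correction, the infection probability is zero, as expected.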
### Recursive Bayesian filtering
The infection risk (4) can be computed by iterating over time:
#### Initialisation
At time \\(k=1\\), we assume that the probability distribution
\\[\\Pr\\left[X_{i}[1]\\right]\\]
is given for every user \\(i=1, ,N_{\\mathrm{u}}\\). Formally, we can write
\\[\\Pr\\left[X_{i}[1]\\right]=\\Pr\\left[X_{i}[1]\\big{|}\\mathcal{M}[0]\\right], \\tag{11}\\]
since there are no observations at time \\(k=0\\). (Or, the set of observations \\(\\mathcal{M}[0]\\) at time \\(k=0\\) is empty, because we start measuring at \\(k=1\\).)
**Measurement update**: We are given the distribution \\(\\Pr\\left[X_{i}[k]\\big{|}\\mathcal{M}[k-1]\\right]\\) for every user \\(i\\). (Starting with (11) at time \\(k=1\\).) For every user \\(i\\), the measurement update incorporates the reported viral state \\(X_{\\mathrm{rep},i}[k]\\) to obtain a more accurate distribution of the viral state \\(X_{i}[k]\\). More precisely, we compute the probability \\(\\Pr\\left[X_{i}[k]\\big{|}X_{\\mathrm{rep},i}[k],\\mathcal{M}[k-1]\\right]\\) with Bayes' Theorem [20] as
\\[\\Pr\\left[X_{i}[k]\\big{|}X_{\\mathrm{rep},i}[k],\\mathcal{M}[k-1]\\right]=\\frac{ \\Pr\\left[X_{\\mathrm{rep},i}[k]\\big{|}X_{i}[k],\\mathcal{M}[k-1]\\right]\\Pr\\left[ X_{i}[k]\\big{|}\\mathcal{M}[k-1]\\right]}{\\Pr\\left[X_{\\mathrm{rep},i}[k]\\big{|} \\mathcal{M}[k-1]\\right]}.\\]
Given the viral state \\(X_{i}[k]\\), the reported viral state \\(X_{\\mathrm{rep},i}[k]\\) does not depend on past measurements \\(\\mathcal{M}[k-1]\\), and hence
\\[\\Pr\\left[X_{i}[k]\\big{|}X_{\\mathrm{rep},i}[k],\\mathcal{M}[k-1]\\right]=\\frac{ \\Pr\\left[X_{\\mathrm{rep},i}[k]\\big{|}X_{i}[k]\\right]\\Pr\\left[X_{i}[k]\\big{|} \\mathcal{M}[k-1]\\right]}{\\Pr\\left[X_{\\mathrm{rep},i}[k]\\big{|}\\mathcal{M}[k-1 ]\\right]}. \\tag{12}\\]
The distribution \\(\\Pr\\left[X_{\\mathrm{rep},i}[k]\\big{|}X_{i}[k]\\right]\\) is specified by the observation model in Subsection 2.2. In particular, for \\(X_{\\mathrm{rep},i}[k]=\\mathcal{R}\\), it holds that
\\[\\Pr\\left[X_{\\mathrm{rep},i}[k]=\\mathcal{R}\\big{|}X_{i}[k]=c,\\mathcal{M}[k-1]\\right]=\\begin{cases}1&\\text{if}\\quad c=\\mathcal{R},\\\\ 0&\\text{if}\\quad c\\neq\\mathcal{R}.\\end{cases}\\]
If user \\(i\\) reports to be healthy, \\(X_{\\mathrm{rep},i}[k]=\\mathcal{S}\\), then we obtain that
\\[\\Pr\\left[X_{\\mathrm{rep},i}[k]=\\mathcal{S}\\big{|}X_{i}[k]=c,\\mathcal{M}[k-1] \\right]=\\begin{cases}1&\\text{if}\\quad c\\in\\{\\mathcal{E},\\mathcal{I}_{\\mathrm{ a}},\\mathcal{R}_{\\mathrm{a}}\\},\\\\ 1-p_{\\mathrm{fa}}&\\text{if}\\quad c=\\mathcal{S},\\\\ 0&\\text{if}\\quad c\\in\\{\\mathcal{I},\\mathcal{R}\\}.\\end{cases}\\]
Similarly, if user \\(i\\) reports to be infected, \\(X_{\\mathrm{rep},i}[k]=\\mathcal{I}\\), then it holds that
\\[\\Pr\\left[X_{\\mathrm{rep},i}[k]=\\mathcal{I}\\big{|}X_{i}[k]=c,\\mathcal{M}[k-1] \\right]=\\begin{cases}1&\\text{if}\\quad c=\\mathcal{I},\\\\ p_{\\mathrm{fa}}&\\text{if}\\quad c=\\mathcal{S},\\\\ 0&\\text{if}\\quad c\\in\\{\\mathcal{E},\\mathcal{R},\\mathcal{I}_{\\mathrm{a}}, \\mathcal{R}_{\\mathrm{a}}\\}.\\end{cases}\\]
The denominator in (12) follows from the law of total probability [20] as
\\[\\Pr\\left[X_{\\mathrm{rep},i}[k]\\big{|}\\mathcal{M}[k-1]\\right]=\\sum_{c\\in \\mathcal{C}}\\Pr\\left[X_{\\mathrm{rep},i}[k]\\big{|}X_{i}[k]=c\\right]\\Pr\\left[X_ {i}[k]=c\\big{|}\\mathcal{M}[k-1]\\right].\\]
**Time update**: The measurement update computes the distribution \\(\\Pr\\left[X_{i}[k]\\big{|}X_{\\mathrm{rep},i}[k],\\mathcal{M}[k-1]\\right]\\), from which the time update obtains the distribution \\(\\Pr\\left[X_{i}[k+1]\\big{|}\\mathcal{M}[k]\\right]\\). The law of total probability yields that
\\[\\Pr\\left[X_{i}[k+1]\\big{|}\\mathcal{M}[k]\\right] =\\sum_{c\\in\\mathcal{C}}\\Pr\\left[X_{i}[k+1],X_{i}[k]=c\\big{|} \\mathcal{M}[k]\\right]\\] \\[=\\sum_{c\\in\\mathcal{C}}\\Pr\\left[X_{i}[k+1]\\big{|}X_{i}[k]=c, \\mathcal{M}[k]\\right]\\Pr\\left[X_{i}[k]=c\\big{|}\\mathcal{M}[k]\\right], \\tag{13}\\]
where the last equation follows from the definition of the conditional probability. First, we consider the term \\(\\Pr\\left[X_{i}[k]=c\\big{|}\\mathcal{M}[k]\\right]\\) in (13). With the definition of the set of all observations \\(\\mathcal{M}[k]\\), it holds that
\\[\\Pr\\left[X_{i}[k]=c\\big{|}\\mathcal{M}[k]\\right]=\\Pr\\left[X_{i}[k]=c\\big{|}X_{\\mathrm{rep}}[k],\\mathcal{N}_{\\mathrm{u},1}[k],\\ldots,\\mathcal{N}_{\\mathrm{u},N_{\\mathrm{u}}}[k],\\mathcal{M}[k-1]\\right].\\]
Assumption (6) implies that
\\[\\Pr\\left[X_{i}[k]=c\\big{|}\\mathcal{M}[k]\\right]=\\Pr\\left[X_{i}[k]=c\\big{|}X_{\\mathrm{rep},i}[k],\\mathcal{N}_{\\mathrm{u},1}[k],\\ldots,\\mathcal{N}_{\\mathrm{u},N_{\\mathrm{u}}}[k],\\mathcal{M}[k-1]\\right].\\]
Then, with assumption (7), we obtain that
\\[\\Pr\\left[X_{i}[k]=c\\big{|}\\mathcal{M}[k]\\right]=\\Pr\\left[X_{i}[k]=c\\big{|}X_{ \\mathrm{rep},i}[k],\\mathcal{M}[k-1]\\right], \\tag{14}\\]
which has been calculated by the previous measurement update. Second, we consider the term \\(\\Pr\\left[X_{i}[k+1]\\big{|}X_{i}[k]=c,\\mathcal{M}[k]\\right]\\) in (13). The exact transition probabilities of the viral state \\(X_{i}[k]\\) from time \\(k\\) to \\(k+1\\) depend on the infectious neighbourhood \\(\\mathcal{N}_{\\mathrm{inf},i}[k]\\), as specified by the Markov epidemic model. The complete neighbourhood \\(\\mathcal{N}_{\\mathrm{inf},i}[k]\\) of infectious individuals is not measured. Thus, BETIS makes use of the transition probability approximation (10), which is based on the neighbourhood \\(\\mathcal{N}_{\\mathrm{inf},\\mathrm{u},i}[k]\\) of infectious users. However, we do not directly observe the set \\(\\mathcal{N}_{\\mathrm{inf},\\mathrm{u},i}[k]\\) but instead the set \\(\\mathcal{N}_{\\mathrm{u},i}[k]\\) of all users, infectious and non-infectious, that were close to user \\(i\\) at time \\(k\\). Since \\(\\mathcal{N}_{\\mathrm{inf},\\mathrm{u},i}[k]\\subset\\mathcal{N}_{\\mathrm{u},i}[k]\\), it holds that
\\[0\\leq\\left|\\mathcal{N}_{\\mathrm{inf},\\mathrm{u},i}[k]\\right|\\leq\\left| \\mathcal{N}_{\\mathrm{u},i}[k]\\right|.\\]
Thus, we can apply the law of total probability to obtain that
\\[\\Pr\\left[X_{i}[k+1]\\big{|}X_{i}[k]=c,\\mathcal{M}[k]\\right]=\\sum_{ m=0}^{\\left|\\mathcal{N}_{\\mathrm{u},i}[k]\\right|} \\Pr\\left[X_{i}[k+1]\\big{|}X_{i}[k]=c,\\mathcal{M}[k],\\left|\\mathcal{N}_{ \\mathrm{inf},\\mathrm{u},i}[k]\\right|=m\\right]\\] \\[\\cdot\\Pr\\left[\\left|\\mathcal{N}_{\\mathrm{inf},\\mathrm{u},i}[k] \\right|=m\\big{|}X_{i}[k]=c,\\mathcal{M}[k]\\right],\\]
which simplifies to
\\[\\Pr\\left[X_{i}[k+1]\\big{|}X_{i}[k]=c,\\mathcal{M}[k]\\right]=\\sum_{ m=0}^{\\left|\\mathcal{N}_{\\mathrm{u},i}[k]\\right|}\\Pr\\left[X_{i}[k+1]\\big{|}X_{i}[k]=c,\\left|\\mathcal{N}_{\\mathrm{inf},\\mathrm{u},i}[k]\\right|=m\\right] \\tag{15}\\] \\[\\cdot\\Pr\\left[\\left|\\mathcal{N}_{\\mathrm{inf},\\mathrm{u},i}[k] \\right|=m\\big{|}\\mathcal{M}[k]\\right].\\]
The set \\(\\mathcal{N}_{\\mathrm{inf},\\mathrm{u},i}[k]\\) only consists of users \\(j\\) with \\(X_{j}[k]=\\mathcal{I}\\) or \\(X_{j}[k]=\\mathcal{I}_{\\mathrm{a}}\\). For \\(j=1,\\ldots,M\\), we define the Bernoulli random variable \\(\\psi_{j}\\) as
\\[\\psi_{j}=\\begin{cases}1&\\quad\\text{with probability}\\quad p_{j},\\\\ 0&\\quad\\text{with probability}\\quad 1-p_{j},\\end{cases} \\tag{16}\\]
with the success probability
\\[p_{j}=\\Pr\\left[X_{j}[k]=\\mathcal{I}\\big{|}X_{\\mathrm{rep},j}[k],\\mathcal{M} [k-1]\\right]+\\Pr\\left[X_{j}[k]=\\mathcal{I}_{\\mathrm{a}}\\big{|}X_{\\mathrm{rep},j}[k],\\mathcal{M}[k-1]\\right].\\]
From (16) it follows that the cardinality \\(|\\mathcal{N}_{\\mathrm{inf},\\mathrm{u},i}[k]|\\) is the sum of \\(M\\) Bernoulli random variables \\(\\psi_{j}\\in\\{0,1\\}\\) with different success probabilities \\(p_{j}\\). Hence, the cardinality \\(|\\mathcal{N}_{\\mathrm{inf},\\mathrm{u},i}[k]|\\) follows a _Poisson binomial distribution_ [21]. We obtain the distribution of \\(|\\mathcal{N}_{\\mathrm{inf},\\mathrm{u},i}[k]|\\) by convolution of the distributions of the random variables \\(\\psi_{1},\\ldots,\\psi_{M}\\). If the number \\(M\\) is large, then the convolution might take long. For large \\(M\\), there are more efficient algorithms [21] for computing the distribution of the cardinality \\(|\\mathcal{N}_{\\mathrm{inf},\\mathrm{u},i}[k]|\\) (based on the discrete Fourier transform).
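A minimal Python sketch of the convolution computation of the Poisson binomial distribution (illustrative only; for large \\(M\\), the Fourier-based algorithms of [21] are preferable):

```python
def poisson_binomial_pmf(p):
    """pmf of the sum of independent Bernoulli(p_j) variables.

    Returns pmf with pmf[m] = P[sum equals m], computed by convolving
    in one Bernoulli trial at a time; O(M^2) overall.
    """
    pmf = [1.0]
    for pj in p:
        nxt = [0.0] * (len(pmf) + 1)
        for m, q in enumerate(pmf):
            nxt[m] += q * (1.0 - pj)   # trial j fails
            nxt[m + 1] += q * pj       # trial j succeeds
        pmf = nxt
    return pmf
```

With equal success probabilities, the result reduces to the ordinary binomial distribution, which is a convenient sanity check.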
After the initialisation, the measurement update and the time update are alternated for every time \\(k\\). Finally, the risk factor (4) is obtained from (14) at the last time step \\(k\\).
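The time update that alternates with the measurement update combines (13) and (15). The following generic Python sketch takes the transition probabilities as a callable and the distribution of the number of infectious user neighbours as a list; all names are ours:

```python
def time_update(posterior, transition, pmf_m):
    """One time update, Eqs. (13) and (15).

    posterior  : dict, posterior[c] = P[X_i[k] = c | M[k]]
    transition : callable (c_next, c, m) -> P[X_i[k+1] = c_next | X_i[k] = c,
                 m infectious user neighbours]
    pmf_m      : list, pmf_m[m] = P[|N_inf,u,i[k]| = m | M[k]]
    """
    states = list(posterior)
    nxt = {c2: 0.0 for c2 in states}
    for c, pc in posterior.items():
        for m, pm in enumerate(pmf_m):
            for c2 in states:
                nxt[c2] += transition(c2, c, m) * pm * pc
    return nxt
```

Passing the transition probabilities as a callable keeps the sketch independent of the particular compartmental model.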
## Appendix B Simulation parameters
Here we give the details of the parameter values used in the simulations. To generate the locations \\(z_{i}[k]\\) at every time \\(k\\), we employ a simple movement model: For every individual \\(i\\), both entries of the initial \\(2\\times 1\\) location vector \\(z_{i}[1]\\) are set to a uniform random number in \\([0,1]\\). Given the location vector \\(z_{i}[k]\\) at any time \\(k\\), we obtain the location vector at the next time \\(k+1\\ | Currently, many countries are considering the introduction of tracing software on mobile smartphones with the main purpose to inform and alarm the mobile app user. Here, we demonstrate that, in addition to alarming and informing, mobile tracing _can detect nearly all users that are infected by SARS-CoV-2_. Our algorithm BETIS (Bayesian Estimation for Tracing Infection States) makes use of self-reports of the user's health status. Then, BETIS guarantees that almost all SARS-CoV-2 infections of the group of users can be detected. Furthermore, BETIS estimates the virus prevalence in the _whole_ population, consisting of users and non-users. BETIS is based on a hidden Markov epidemic model and recursive Bayesian filtering. The potential that mobile tracing apps, in addition to medical testing and quarantining, can eradicate COVID-19 may persuade citizens to trade-off privacy against public health. | Provide a brief summary of the text. | 184 |
Kira Adaricheva\\({}^{1,3}\\), Jase E. Bernhardt\\({}^{2,3}\\), Wenxin Liu\\({}^{3}\\), and Briana Schmidt\\({}^{3}\\)
\\({}^{1}\\)Department of Mathematics
\\({}^{2}\\)Department of Geology, Environment and Sustainability
\\({}^{3}\\)Hofstra University
Footnote 1: Key Words: sea breeze, weather forecasting, Long Island, \\(D\\)-basis algorithm, association rules analysis
###### Contents
* 1 Introduction
* 2 Data
* 3 Detailing the definition of sea breeze
* 4 Approach
\t* 4.1 Parameter Selection
\t* 4.2 \\(D\\)-basis algorithm and ranking of attributes by the relevance
\t* 4.3 Code development for the data conversion
* 5 Data conversion
* 6 Results
\t* 6.1 Initial stage of the project
\t* 6.2 Second stage of the project
\t* 6.3 Computation of the Forecasting Value
\t* 6.4 Computation of Forecasting Values on the random subset of the data
\t* 6.5 Comparison with synthetic data
* 7 Conclusions and Discussion
* A Appendix: The description of the \\(D\\)-basis algorithm
* A.1 \\(D\\)-basis algorithm
* A.2 Ranking attributes relevant to a target attribute
* A.3 Formula for the parameter of relevance
## Impact Statement
The sea breeze is a phenomenon frequently influencing Long Island, especially during warm season months, due to differences in land and ocean surface temperatures. Local weather impacts from the sea breeze can include a wind shift and decreasing air temperature. Given its importance to Long Island's climatology, a tool to forecast the presence or absence of the sea breeze in near real time is desirable. To do so, we used a novel algorithm, the \\(D\\)-basis, to predict sea breeze occurrence during June 2017-2020, based on weather and climate data from the previous 24 hours. The algorithm was able to predict the sea breeze with relatively high accuracy and can be adapted as a tool for operational forecasters in the future.
## 1 Introduction
Due to its location situated between the Atlantic Ocean and Long Island Sound, a sea breeze is common on Long Island during warm season months. The sea breeze originating from the south shore of Long Island, which faces the Atlantic Ocean, can be especially potent, moving several miles inland and sometimes reducing the near-surface air temperature by 5-10 degrees Celsius, see Novak and Colle 2006.
For example, Colle, Olson, and Tongue 2003 describes a representative event occurring on 7 June 2001, in which a sea breeze boundary moved into the middle of Long Island by the late afternoon, with a temperature gradient of 3-5 degrees Celsius across the immediate coastline. Further, that study objectively cataloged Long Island sea breeze instances during the 2000 and 2001 warm seasons, with an event required to have an approximately 6 degree Celsius temperature gradient between a near offshore buoy and a surface observing station near the coast, along with light winds during the morning. That climatology indicated that sea breeze events were by far most common in the month of June during those years, owing to a large difference between land surface and ocean temperatures typical during the late spring and early summer. More recently, McCabe and Freedman 2023 objectively identified Long Island sea breeze events between 2010 and 2020 using a variety of surface, near-surface, and lower atmospheric weather observations. Those variables included near-surface temperature and moisture, sea surface temperature, station pressure, and wind speed and direction at both 10 meters and 100 meters above the surface. That study determined an average of 32 sea breeze days annually, with a maximum in July and the surrounding warm season months.
Given the high frequency of the sea breeze on the south shore of Long Island during the early portion of the warm season, and its ability to reduce air temperature and increase wind speed in the lower atmosphere, improved advance prediction of this phenomenon could be advantageous to weather forecasters. The impact of the sea breeze is further magnified by Long Island's high population density, so the ability to predict for a given day whether the sea breeze will reach a certain point inland would help with projecting energy demand, outdoor work and recreation conditions, and other societal impacts. Thus, the goal of this study is to apply the \\(D\\)-basis algorithm to the problem of short term sea breeze prediction for the south shore of Long Island. We hypothesize that by using antecedent conditions the night before a potential sea breeze event, the \\(D\\)-basis algorithm can forecast whether or not the sea breeze will reach a certain weather station on Long Island with a high degree of accuracy.
An important advance of this project compared to earlier applications of the \\(D\\)-basis, or other analysis tools involving association rules, is the development of the new methodology of processing of retrieved rules. Given the multiple runs of the algorithm and the ranking of the attributes associated with the particular outcome (in our case: either a sea breeze or non-sea breeze day), we aggregate the results and produce a unique numerical value, which determines the forecast.
## 2 Data
To objectively determine the presence of the sea breeze, 5-minute near-surface weather observations were acquired from the Hofstra University WeatherSTEM network in Nassau County, New York, specifically the station at Hofstra's Soccer Stadium (Figures 1 and 2), for the months of June 2017, June 2018, June 2019, and June 2020. Weather data at the station are recorded every minute; however, some weather observations, such as wind speed and direction, are highly variable or occasionally unavailable at that temporal resolution. Thus, all one-minute observations were averaged into 5-minute intervals so that time intervals with more readings would not weigh unequally in later groupings.
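The averaging of one-minute readings into 5-minute bins can be sketched in plain Python (illustrative only; the actual WeatherSTEM processing may differ):

```python
from collections import defaultdict
from datetime import datetime, timedelta

def to_five_minute(obs):
    """Average 1-minute readings into 5-minute bins.

    obs: list of (timestamp, value) pairs for one variable.
    Returns a dict mapping each bin start time to the mean of its readings,
    so bins with more readings do not weigh unequally in later groupings.
    """
    bins = defaultdict(list)
    for t, v in obs:
        start = t - timedelta(minutes=t.minute % 5, seconds=t.second,
                              microseconds=t.microsecond)
        bins[start].append(v)
    return {start: sum(vals) / len(vals) for start, vals in bins.items()}
```

Note that wind direction is a circular quantity, so a vector (circular) mean would be more appropriate near the 0/360 degree wrap-around; the sketch above shows a simple arithmetic mean.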
Of the weather data considered, temperature and wind direction were used to objectively determine whether a particular day was a sea breeze event, while the remaining variables were included for later analysis as potential predictors in the algorithm. A candidate day was classified as a sea breeze day if the maximum temperature occurred between 10 AM and 3:30 PM local time and was followed by a period with at least 70% of the 5-minute wind direction observations coming from an onshore direction (i.e., between 70 and 250 degrees, perpendicular to the south shore of Long Island) for the subsequent two hours.
Those criteria were chosen based on the previous literature showing that the sea breeze boundary frequently reaches Hofstra University's location in the middle of Long Island by late afternoon. Further, once the boundary passed through Hofstra, it could be expected that the wind would come from a direction mainly perpendicular to the shore. 120 days (four months) of data were checked for the occurrence of the sea breeze, with 54 days being classified as sea breeze days, while eight were so-called 'very moist' days, as discussed later in this section, and the remaining 58 days were categorized as non-sea breeze days.
Some days were automatically categorized as non-sea breeze days, based on the Spatial Synoptic Classification (SSC), an air mass climatology index, see Sheridan 2002. The SSC analyzes surface observations to classify the _weather type_ of a given day for a weather station. The nearest weather station to Hofstra University for which the SSC is available, JFK Airport in Queens, New York City, was used, with certain _weather type_ days automatically being considered non-sea breeze days. Specifically, days with an SSC number of \"66\" (Moist Tropical Plus) or \"67\" (Moist Tropical Double Plus) were classified as _very moist days_ and were not eligible to be categorized as sea breeze. This was due to the fact that when very tropical air masses are present over Long Island, they are almost always accompanied by deep synoptic flow from the south or southwest, the same direction as the sea breeze. Thus, it is rare for a meaningful sea breeze boundary to develop on those days, since the larger scale flow is already coming from a similar direction.
Figure 4: 5-minute interval data
Figure 3: Raw data
After the classification of _very moist days_ was completed, the remaining sea breeze and non-sea breeze days were investigated to determine whether the \\(D\\)-basis algorithm and chosen predictors could successfully forecast the presence or absence of the sea breeze using data from the preceding night.
## 3 Detailing the definition of sea breeze
We began with the aforementioned definition of a sea breeze day being a day where _the highest temperature occurs between 10 am and 3:30 pm_ in locations near the coast. The goal of the algorithm development was to specify which other weather variables could provide advanced notice of an impending sea breeze day. Doing so could allow those weather parameters to be leveraged using the computer algorithm to forecast the sea breeze in advance. Indeed, while there is no universally accepted numerical definition of the sea breeze for a given locality, having a binary definition (i.e., yes sea breeze or no sea breeze) was necessary to test the algorithm developed. Moreover, it is easier to remove false positives than to account for false negatives in developing the algorithm, so the definition of the sea breeze was slightly expanded after initial testing, to allow for more days to be classified as potential sea breeze days. For example, the high temperature cutoff was initially set at 3 PM, but later moved back slightly to 3:30 PM.
One method used to filter out false positives was the implementation of a wind direction requirement into the sea breeze definition. Since the Atlantic Ocean is roughly due south of Long Island, the five-minute averaged wind direction in the two hours following the maximum temperature being recorded was required to be from the southern half of the wind rose (i.e., between 70 and 250 degrees) for at least 70% of the time. That requirement helped to eliminate false positives stemming from instances such as a strong cold frontal passage, which could result in the high temperature occurring much earlier than normal during the day, but due to a reason other than the sea breeze. On the other hand, the definition should also not be too specific to a location or time of year so that it does not work as accurately in other settings. One way to mitigate against that issue was to use changes in weather variables, instead of absolute values, though the use of the latter was still necessary in relation to air pressure. Thus, the following was used as the basic definition for a sea breeze:
**Definition**.: Sea Breeze
1. Occurs in a coastal area.
2. Occurs in the warm season, when the land temperature is sufficiently warmer than the water temperature.
3. Occurs in absence of larger scale weather phenomena.
4. The highest near-surface temperature of the day occurs between 10 AM and 3:30 PM (local time; not precisely generalizable to all locations).
5. For two hours after the highest temperature, the wind direction must be coming from the direction of the body of water for at least 70% of the time (i.e., roughly perpendicular to shore).
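This definition translates directly into a binary test on a day's 5-minute observations; the Python sketch below is illustrative, with field names of our own choosing:

```python
from datetime import datetime, time, timedelta

def is_sea_breeze_day(five_min_obs):
    """Binary sea breeze test for one day (items 4 and 5 of the definition).

    five_min_obs: list of dicts with keys 'time' (datetime),
    'temp' (deg C) and 'wind_dir' (degrees).
    """
    # item 4: daily maximum temperature between 10 AM and 3:30 PM local time
    peak = max(five_min_obs, key=lambda o: o["temp"])
    if not (time(10, 0) <= peak["time"].time() <= time(15, 30)):
        return False
    # item 5: >= 70% onshore wind (70-250 degrees) in the following two hours
    window = [o for o in five_min_obs
              if peak["time"] < o["time"] <= peak["time"] + timedelta(hours=2)]
    if not window:
        return False
    onshore = sum(70 <= o["wind_dir"] <= 250 for o in window)
    return onshore / len(window) >= 0.7
```

Items 1-3 of the definition (coastal location, warm season, absence of larger scale phenomena) are screened separately, e.g. via the SSC _very moist day_ filter described above.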
## 4 Approach
### Parameter Selection
According to a formula presented in Biggs and Graves 1962, the lake breeze forecasting index \\(\\epsilon\\) is proportional to the square of the near-surface wind speed \\(|U|\\) and inversely proportional to the land-sea temperature contrast \\(\\Delta T\\):
\\[\\epsilon=\\frac{|U|^{2}}{C_{p}\\Delta T},\\]
where \\(C_{p}\\) is the specific heat of dry air at constant pressure. \\(\\Delta T=T_{land}-T_{sea}\\) measures the temperature difference, with \\(T_{land}\\) taken at a point sufficiently far inland. Many other forecasting mechanisms are reported in Miller et al. 2003, where wind and temperature measurements were of central importance. Most of their forecasting relied on morning measurements to predict a sea breeze occurring on the same day.
In our study, we relied solely on measurements produced by a Hofstra University weather station, located considerably inland relative to the south shore of Long Island, and not on sea surface temperature observations. For one of the years we compared the data with similar measurements from another weather station near Wantagh, New York, much closer to the shore, but the difference was insignificant.
The prediction method we developed involved examining station pressure, dew point, and wind direction during the prior night, from 7pm to 7am on the morning of the predicted day. The initial focus was on the dynamics of pressure and dew point (rising, dropping, or constant); at a later stage of the project, the absolute values of pressure were added to the list of attributes.
According to Laird and Kristovich 2004, three quarters of sea breeze events happened with the cross-shore wind component \\(|U_{x}|\\) less than 2 m/sec; thus, wind speed might not have a considerable effect on the sea breeze. This attribute was not considered in the current study, especially given the distance of the weather station from the shore. Pressure, pressure dynamics, dew point dynamics, and wind direction were chosen as predictor variables because they are the weather variables most closely associated with sea breeze days on Long Island. It was also important to consider the development of those variables overnight, prior to sea breeze initiation, so we used 2-hour intervals when observing the dynamics of changes in parameters. Those overnight parameters, one for each 2-hour interval between 7pm and 7am prior to the day of prediction, were classified as follows:
* pressure rising or dropping during the interval;
* dew point rising or dropping during the interval;
* onshore wind direction (70-250 degree) dominating (\\(>60\\%\\)) during the interval;
* station pressure above or below the climatological average sea-level pressure of 29.92 inches mercury (1013 mb) during the interval (given Long Island's elevation very close to sea level).
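The per-interval classification above can be sketched in a few lines of Python. This is an illustrative sketch, not the original conversion code; the function name and trend thresholds are ours, following the rules spelled out in the Data conversion section (trend from end-minus-start difference, 60% onshore dominance, all readings at or above 29.92" Hg for high pressure).

```python
def interval_flags(pressure, dew_point, wind_dir, threshold=29.92):
    """Binarize one 2-hour interval of 5-minute readings (24 per interval)."""
    p_diff = pressure[-1] - pressure[0]   # end minus start
    d_diff = dew_point[-1] - dew_point[0]
    onshore = [70 <= w <= 250 for w in wind_dir]
    return {
        "pressure_rising":  p_diff > 0,
        "pressure_falling": p_diff < 0,
        "dew_rising":       d_diff > 1,    # more than 1 degree F
        "dew_falling":      d_diff < -1,
        "dew_consistent":   abs(d_diff) <= 1,
        "onshore_dominant": sum(onshore) / len(onshore) > 0.60,
        "high_pressure":    all(p >= threshold for p in pressure),
    }
```

Each of the six overnight 2-hour intervals would yield one such set of flags, which become the 0/1 entries of the corresponding columns.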
Generally, the sea breeze occurs in June in the absence of a strong synoptic low pressure system (pressure not dropping) or a cold frontal passage (dew point not dropping). On the mesoscale, persistence was also selected as an indicator for the sea breeze through the onshore wind requirement. At the synoptic scale, the SSC weather type was used to remove days when a sea breeze would not be expected to occur (i.e., due to persistent onshore flow caused by a synoptic weather system).
### \\(D\\)-basis algorithm and ranking of attributes by the relevance
In this work we explored an approach to weather forecasting that leverages the discovery of association rules in binary data. One recent exploration of this sort was done in Coulibaly, Kamsu-Foguem, and Tangara 2021.
The \\(D\\)-basis is a new algorithm described in Adaricheva and Nation 2017 that discovers the _implications_\\(S\\to d\\) in a table with entries 0 and 1. This algorithm belongs to the family of analytic tools based on _association rules_ and/or _implications_, but it also involves secondary statistical analysis of the retrieved rules.
In the current study, \\(S\\) is a subset of attributes/columns (weather observation attributes) and \\(d\\) is another column (e.g., an indicator of sea breeze day). Note that non-binary attributes can be expressed as a combination of binary ones. For example, attributes with five values can be given by five binary columns. Attributes that have values of real numbers within some interval can be discretized by splitting the range into a few sub-intervals and assigning a new attribute to each subinterval of values. The rows of the table represent the days of the observation. Implications are association rules that hold in all rows of the table. In our context, for every day/row, if all attributes in set \\(S\\) occur (marked by 1 in that row), the attribute \\(d\\) occurs as well.
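The conversion of non-binary attributes into 0/1 columns described above can be illustrated with two small helpers (hypothetical names, not the code used in the study): one-hot expansion for categorical values and interval-based discretization for real-valued ones.

```python
def one_hot(value, categories):
    """Expand a categorical value into one binary column per category."""
    return [1 if value == c else 0 for c in categories]

def discretize(x, cut_points):
    """Assign a real value to one of len(cut_points)+1 sub-intervals."""
    bins = len(cut_points) + 1
    idx = sum(x >= c for c in cut_points)   # index of the sub-interval
    return [1 if i == idx else 0 for i in range(bins)]
```

For example, splitting station pressure at the 29.92" Hg threshold produces two binary columns, one for each side of the threshold.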
The algorithm is controlled by several parameters, which filter the retrieved rules \\(S\\to d\\) with the fixed target attribute \\(d\\). The most important are the _minimal support_ and _row deletion_. We say that an observation (a row of the table) _validates_ rule \\(S\\to d\\) when all attributes in \\(S\\) as well as \\(d\\) are present in the observation, that is, the entries in the row corresponding to columns in \\(S\\) and \\(d\\) are marked by 1. The _support of the rule_ is the number of observations/rows where the rule is validated. When the test runs with parameter _minimum support_ \\(=k\\), only the rules with support of at least \\(k\\) are returned. Further discussion of the parameters of the test is given in appendix section A.2.
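The support count defined above is straightforward to express over a 0/1 table (rows are days, columns are attributes); a minimal sketch with illustrative names:

```python
def support(table, S, d):
    """Count rows where every attribute in S and the target d equal 1."""
    return sum(1 for row in table
               if all(row[a] == 1 for a in S) and row[d] == 1)
```

A rule \\(S\\to d\\) is then retrieved only when this count reaches the minimal support \\(k\\).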
Using the sets of rules retrieved in two runs of the algorithm, one on the target attribute \\(d\\) and another on its negation \\(\\neg d\\) (which is not necessarily present in the data, so it needs to be created), permits the computation of a real non-negative number assigned to each attribute \\(a\\) different from \\(d\\). It is called the _relevance_ of \\(a\\) with respect to \\(d\\) and denoted \\(rel_{d}(a)\\). The higher \\(rel_{d}(a)\\), the more frequently attribute \\(a\\) appears in set \\(S\\) for rules \\(S\\to d\\) compared to rules \\(S\\to\\neg d\\). All attributes different from \\(d\\) may therefore be ranked by their relevance with respect to \\(d\\), and our method investigates the attributes with the highest ranks with respect to \\(d=\\)_sea breeze day_ or \\(d=\\)_non-sea breeze day_, which we call a _normal day_ for the rest of the paper.

Note that the relevance is computed within the sets of retrieved rules, one for target \\(d\\) and another for target \\(\\neg d\\). Changing the parameters of the tests will change the sets of rules, which may change the relevance. For this reason we run multiple tests and then aggregate the results. A more detailed description of the \\(D\\)-basis algorithm is given in appendix section A.1, and the definition and computation of the relevance in section A.3.
### Code development for the data conversion
The \\(D\\)-basis code was developed on GitLab at [https://gitlab.com/npar/dbasis](https://gitlab.com/npar/dbasis) and was first made publicly available at the time of publication of Nation et al. 2021. The weather data conversion to the \\(D\\)-basis input format was performed in the R programming language following rules (3)-(5) of the Sea Breeze Definition in section 3. The files related to this project are located at [https://gitlab.com/npar/seabreeze](https://gitlab.com/npar/seabreeze)
## 5 Data conversion
Since the \\(D\\)-basis algorithm acts on binary data, the weather variables collected from the Hofstra weather station were converted into binary form. Each day in the converted data was represented by a row of the table, while each column represented a weather attribute, marked by 1 when it was observed on a particular day/row and by 0 when it was not. Several columns were target attributes representing whether or not a sea breeze was observed, or whether the day was excluded due to the synoptic climatology (i.e., a very moist day):
71: Next Day Sea Breeze (1=SB, 0=Normal or Very Moist)
72: Complement of 71 (1=Normal or Very Moist, 0=SB)
73: Next Day Normal Day (1=Normal, 0=SB or Very Moist)
74: Complement of 73 (1=SB or Very Moist, 0=Normal)
For example, the first row of the table has entry 1 in column 71 because a sea breeze was observed on June 1, 2017 (it also has 0 in columns 72 and 73 and 1 in column 74). Note that "Next Day" applies to June 1 because the observations of some weather attributes were made on May 31, 2017.
The rows in the converted data were sorted as follows:
Rows 1-30: May 31 2017-June 29 2017
Rows 31-60: May 31 2018-June 29 2018
Rows 61-90: May 31 2019-June 29 2019
Rows 91-120: May 31 2020-June 29 2020
The first 6 attributes given in Table 1 were possible descriptors of the day prior to prediction, based on the SSC categorization of that day.
For example, the first row had entry 1 in column 2 and 0 entry in columns 1 and 3-6, because May 31, 2017, was classified as a Normal Day (i.e., no sea breeze).
The attributes 7-16, as in Table 2, measured weather parameters for the time interval 7-9 pm the previous evening, with entry value 1=yes or 0=no.
The pressure trend and dew point trend of each 2-hour interval were determined simply by subtracting the start value from the end value. For pressure, a difference of 0.00 designated the interval as "consistent", while for dew point, the interval was classified as "consistent" when the difference was within the interval (-1, 1).
An interval was categorized as "Pressure above 29.92 in Hg" when all 5-minute readings during the 2-hour period were at or above that climatological threshold; otherwise, when any 5-minute reading during the 2-hour period dropped below the threshold, the interval was categorized as "Pressure below 29.92 in Hg."
Similar attributes, in the same sequence of 10, described the measurements during the following two-hour time intervals:
Columns 17-26 between 9 - 11 pm,
Columns 27-36 between 11 pm - 1 am,
Columns 37-46 between 1 - 3 am,
Columns 47-56 between 3-5 am, and
Columns 57-66 between 5-7 am.
Columns 67-70, as described in Table 3, were added to further confirm the initial hypothesis that high pressure, together with non-decreasing pressure, may be a significant factor in sea breeze prediction, given that a sea breeze is likely to form in the absence of strong fronts or cyclones, which may be indicated by low or falling pressure. Hence, some combinations of the six 2-hour intervals were introduced. If 4 or more intervals possessed non-falling pressure, column 67 of that day was marked as 1. If 4 or more intervals had no time instance with pressure lower than 29.92 inches Hg, column 68 was marked as 1.
These observations associated with anticyclonic conditions were selected for inclusion because they imply weak flow at the surface and aloft. Such conditions have been shown to be a key indicator of sea breezes along the US East Coast in previous studies, such as Hughes and Veron 2018 and Cetola 1997.
\\begin{table}
\\begin{tabular}{|c|c|c|} \\hline Column \\# & Column description & Entry values \\\\ \\hline
1 & Sea Breeze & 1=yes, 0=no \\\\ \\hline
2 & Normal Day & 1=yes, 0=no \\\\ \\hline
3 & Very Moist Days & 1=yes, 0=no \\\\ \\hline
4 & SSC Dry Day & 1=yes, 0=no \\\\ \\hline
5 & SSC Moist Day & 1=yes, 0=no \\\\ \\hline
6 & SSC Transition Day & 1=yes, 0=no \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: Attributes for Day Categories
\\begin{table}
\\begin{tabular}{|c|c|c|} \\hline Col \\# & Column description & Details \\\\ \\hline
7 & Pressure rising & Pressure increased by the end of interval \\\\ \\hline
8 & Pressure falling & Pressure decreased by the end of interval \\\\ \\hline
9 & Pressure consistent & No change in pressure \\\\ \\hline
10 & Wind North & at least 60\\% of the time interval \\\\ \\hline
11 & Wind South & at least 60\\% of the time interval \\\\ \\hline
12 & Dew point rising & Dew point increased by more than 1 degree F \\\\ \\hline
13 & Dew point falling & Dew point decreased by more than 1 degree F \\\\ \\hline
14 & Dew point consistent & Dew point changed by less than 1 degree F \\\\ \\hline
15 & High pressure & Pressure above 29.92 in Hg \\\\ \\hline
16 & Low pressure & Pressure below 29.92 in Hg \\\\ \\hline \\end{tabular}
\\end{table}
Table 2: Attributes of weather observations
## 6 Results
### Initial stage of the project
The initial phase of this project was to conduct \\(D\\)-basis testing on 3 years of June data ranging from 2017-2019 to verify the effectiveness of the initial determinants in predicting the presence or absence of the sea breeze. Three attributes were tested: the change in pressure, the change in dew point, and the dominant wind direction.
After computing the relevance, the variables that were most important for predicting a _sea breeze_ day were _pressure rising, winds from the south (i.e., on-shore), and dew point either rising or staying consistent_. The attributes that were important for predicting _normal_ days were _pressure falling, wind from the north (offshore), and dew point falling_.
The following tables collect the results for each weather variable with respect to the 6 time intervals of 2 hours between 7pm and 7am. Table 4 shows, for example, that all 6 time intervals are important for the pressure rising attribute because they are included among the top 17 (out of 58 total) attributes ranked by relevance, listed from highest to lowest.
The time intervals for southerly wind that fall outside the top ranks always occur earlier in the night. This implies that southerly winds in the later hours of the night prior to sea breeze development are more consistently important. Between them, the dew point rising and dew point consistent attributes covered all time intervals for the prediction of the sea breeze.
Results of predicting a normal day are in Table 5. The earlier hours of the prediction interval were more important for pressure and dew point, while the later hours were more important for the wind when predicting a normal day.
### Second stage of the project
The strong separation of patterns of the winds from the south (north), pressure rising (falling) and dew point not falling (falling) when occurring prior to a sea breeze (normal) day served as the basis to formulate the prediction method in the second stage of the project.
The main new features in the second phase of the project were:
1. adding absolute values of pressure above/below the climatological threshold of 29.92" Hg (one standard atmosphere) during six 2-hour intervals;
2. correcting wind direction, given that the shoreline is not in a perfect west to east orientation;
3. adding 30 more observation days for June 2020;
\\begin{table}
\\begin{tabular}{|c|c|c|} \\hline Sea breeze forecast & Relevance & Rank \\\\ \\hline Attributes: & \\# of intervals & in the top \\\\ \\hline Pressure rising & 6 & 17 \\\\ \\hline Wind South & 6 & 24 \\\\ \\hline Dew point rising & 3 & 22 \\\\ \\hline Dew point consistent & 5 & 28 \\\\ \\hline \\end{tabular}
\\end{table}
Table 4: Number of time intervals in the sea breeze forecast
\\begin{table}
\\begin{tabular}{|c|c|} \\hline Col \\# & Column description \\\\ \\hline
67 & When \\(>3\\) of 6 intervals have rising or constant pressure \\\\ \\hline
68 & When \\(>3\\) of 6 intervals have pressure above 29.92 inches Hg \\\\ \\hline
69 & Pressure rising or high pressure \\\\ \\hline
70 & Pressure falling or low pressure \\\\ \\hline \\end{tabular}
\\end{table}
Table 3: High pressure attributes

4. running \\(D\\)-basis with minimum support = 6, 8, 11.
The decision to include the absolute pressure as an additional parameter came after careful observation of the graphs of pressure behavior over the 120 days. To examine the pressure trend for sea breeze days, one can observe the daily pressure charts. Each dot on a daily trend graph represents a 5-minute interval during the 9pm to 7am time frame, and the purple dashed line represents the 29.92" Hg standard that we later used to determine whether each 2-hour interval is considered _high pressure_; see Figures 5(a) and 5(b).

Two pressure trend diagrams were chosen to demonstrate the general trend for a sea breeze day versus a normal day. Figure 5(a) shows an example of an overnight pressure chart between June 25-26, 2018, with June 26 being a sea breeze day.

Figure 5(b) shows an example of an overnight pressure chart between June 20-21, 2018, with June 21 being a normal day.
The normal day trend is sometimes seen slightly decreasing/flat or increasing, but rarely remains completely above the purple dashed line.
June 2018 is a good example illustrating the 29.92" Hg pressure standard. In the graph in Figure 6, each dot represents the average pressure of a day in June from 9pm-7am; the sea breeze days (red dots/circles) are mostly above the purple dashed line, while the normal days (blue dots/squares) and very moist days (black dots/triangles) are mostly below the purple line, with few exceptions.
\\begin{table}
\\begin{tabular}{|c|c|c|} \\hline Normal day forecast & Relevance & Rank \\\\ \\hline Attributes: & \\# of intervals & in the top \\\\ \\hline Pressure falling & 5 & 20 \\\\ \\hline Wind North & 4 & 13 \\\\ \\hline Dew point falling & 5 & 24 \\\\ \\hline \\end{tabular}
\\end{table}
Table 5: Number of time intervals in the normal day forecast
Figure 5: Patterns of pressure measurement between 9 pm and 7am
As observed, the two distinctive trends for normal days and sea breeze days deserved more quantitative validation. Hence, we created additional columns combining the absolute pressure values and the trend. To consolidate the 6 intervals, we created column 68, which is marked 1 when more than half of the 6 intervals have high pressure. To capture both increasing and constant pressure, we created column 67, which is marked 1 if at least 4 intervals have rising or constant pressure. Lastly, column 69 was created as [67 OR 68] to consolidate high pressure and rising pressure as a possible predictor of a sea breeze day.
Similarly, column 70, for possible prediction of normal days, was defined as [\\(\\neg\\)67 OR \\(\\neg\\)68], so it marks a day with 1 when at least 4 intervals have low pressure or falling pressure on that day; see Table 3.
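Under these definitions, columns 67-70 can be derived from the six per-interval flags as follows (a sketch with illustrative names):

```python
def combined_pressure_columns(rising_or_const, high_pressure):
    """Each argument: six booleans, one per overnight 2-hour interval."""
    col67 = sum(rising_or_const) >= 4   # >3 of 6 intervals non-falling
    col68 = sum(high_pressure) >= 4     # >3 of 6 intervals above 29.92" Hg
    col69 = col67 or col68              # [67 OR 68]: sea breeze indicator
    col70 = (not col67) or (not col68)  # [not 67 OR not 68]: normal day
    return int(col67), int(col68), int(col69), int(col70)
```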
In addition, a color-coding scheme for several groups of parameters was designed to better visually represent the relevance of the attributes. For each \\(D\\)-basis run, the top 15 attributes of highest relevance were included, with their relevance values listed on the side. The attribute numbers were then classified into ten colors, grouping the same attribute across different time periods. Blue shows attributes for pressure, green for wind direction, yellow for dew point, and orange for the absolute pressure. Figure 7 below indicates the mapping of the ten colors to column numbers, with the time periods specified. Conveniently, there are ten parameters for each 2-hour interval, so the column numbers for each parameter are exactly ten apart.
Figure 6: June 2018 average pressure dot graph
Figure 7: Color coding of attributes
The color scheme described above was then applied to the top 15 attributes with the highest relevance, computed at minimal support = 6, 8, and 11, with row removal. The left side of Figure 8 lists the top attributes, together with normalized relevance values, when the target is the sea breeze day; the right side shows the relevance when the target is the normal day. Similar results for the tests without row removal are shown in Figure 9.
The prevalence of dark colors on the left and light colors on the right demonstrates a clearly observable pattern. For the normal day prediction on the right side, most columns are of lighter colors except for one dark blue and several medium yellow. Generally, the color coded view confirms the potential predictors for both sea breeze days and normal days.
Figure 8: Color coded aggregated relevance from the test with row removal
It is also observed that the columns 69 and 70 both appear in Figures 8 and 9.
Column 69 even ranked in the top 3 relevant to sea breeze, when testing with minimal support of 11 in both graphs. On the other hand, column 70 appeared with the high rank for normal day, when testing with minimal support 8 and 11 in both graphs.
Since the color-coded relevance charts clearly demonstrate the importance of the dark colored attributes in predicting sea breeze, we attempted to incorporate those numbers into our prediction model to achieve better prediction results.
### Computation of the Forecasting Value
In this section we describe a novel approach that uses leading attributes and their \\(D\\)-basis relevances to compute a numerical Forecasting Value (FCV), which predicts either a sea breeze or a normal day using measurements from the Hofstra weather station taken between 7pm on the previous day and 7am of the forecast day. The approach is generalizable and could be developed for any coastal area by adjusting the coefficients of the formula.
If we choose attributes \\(a_{1},a_{2},\\ldots,a_{s}\\) as the most relevant for the prediction of the sea breeze, and use the relevances of these attributes when targeting \\(d=\\)'sea breeze day', we determine coefficients \\(k_{1},k_{2},\\ldots,k_{s}\\). Similarly, we can choose \\(a_{s+1},a_{s+2},\\ldots,a_{2s}\\) as the most relevant attributes for predicting a normal day, and their relevances when targeting \\(d=\\)'normal day' are converted into coefficients \\(k_{s+1},k_{s+2},\\ldots,k_{2s}\\). This yields a formula for the computation of FCV:
\\[\\text{FCV}(\\overline{x})=k_{1}*x_{1}+k_{2}*x_{2}+\\cdots+k_{s}*x_{s}-k_{s+1}* x_{s+1}-k_{s+2}*x_{s+2}-\\cdots-k_{2s}*x_{2s}\\]
Figure 9: Color coded aggregated relevance from the test without row removal
Here \\(\\overline{x}=\\langle x_{1},x_{2},\\ldots,x_{2s}\\rangle\\) is a vector describing attributes for day \\(x\\): \\(x_{i}=1\\), if attribute \\(a_{i}\\) is observed on the day \\(x\\), and \\(x_{i}=0\\) otherwise. We forecast that \\(x\\) is a sea breeze day, when \\(\\text{FCV}(\\overline{x})>0\\), and a normal day otherwise.
What follows is the proposed technique for choosing the attributes and computing the corresponding coefficients \\(k_{1},\\ldots,k_{2s}\\). We chose to compute these coefficients from the relevance values given by the \\(D\\)-basis.
We varied the minimum support over several runs of the \\(D\\)-basis: when the minimal support is lower, more implications are returned; when the minimal support is increased, there are fewer implications, but they are more valuable, as they are manifested in more observations.
The minimal support was chosen at levels 6, 8 and 11; thus, the highest support corresponded to about 10% of the days of observation. We also found that running the \\(D\\)-basis with row removal, that is, partly removing the rows corresponding to the very moist days, provides better prediction values.
After all attributes were ranked by relevance in each run, the relevance values were normalized by setting the largest relevance value to 1, so that all normalized relevance values fell between 0 and 1. This provided a standard measure for comparison across \\(D\\)-basis runs with different target columns and minimal supports. Then, for each attribute, the normalized relevance values were averaged over the three runs with different minimum supports, with more weight given to the values from the runs with larger minimum support. To demonstrate the effectiveness of the normalized relevance values, we composed a dot graph with their weighted averages shown as the yellow line graph. All dots in Figure 10 are within the 0 to 1 range on the vertical axis. The yellow dots at the peaks of the line graph represent the 6:8:11 weighted averages of the relevance numbers from the three runs of the algorithm with different minimal supports.
More precisely, if \\(Rel(6,a)\\), \\(Rel(8,a)\\) and \\(Rel(11,a)\\) are normalized relevance values for attribute \\(a\\), when targeting, for example, the sea breeze day in the \\(D\\)-basis run with minimum support 6, 8 and 11, respectively, then the average value of relevance for this attribute would be computed with the following formula:
\\[rel(a)=\\frac{6*Rel(6,a)+8*Rel(8,a)+11*Rel(11,a)}{25}\\]
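In code, this support-weighted average amounts to a one-line function (a minimal sketch):

```python
def averaged_relevance(rel6, rel8, rel11):
    """Weight normalized relevances by their minimum-support levels 6:8:11."""
    return (6 * rel6 + 8 * rel8 + 11 * rel11) / 25
```

An attribute that tops the ranking in all three runs keeps the maximal value 1, while attributes that rank highly only at low support are discounted.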
These values would produce a new ranking of the attributes, one for sea breeze target, and another for the normal day target. Then subset \\(a_{1},\\ldots,a_{s}\\) of the top \\(s\\) attributes was chosen from the sea breeze rank, and another subset \\(a_{s+1},\\ldots,a_{2s}\\) was chosen from the rank of the normal day. The initial choice was \\(s=10\\), but we also considered smaller values of \\(s\\).
Figure 10: 0-1 normalized relevance numbers for three minimal support levels
As a result, the top 10 attributes with aggregated relevance values associated with the sea breeze are given in Figure 11. Per the color code described in section 6.2, these all have either dark or medium colors, and half of the attributes are either _high pressure_ or _constant dew point_ attributes.

The top 10 attributes associated with the normal day are all non-dark colors, as seen in Figure 12. _Offshore wind_ appears 4 times, for time periods between 11pm and 7am; two others are _low pressure_.
To illustrate the computation, we switch to a smaller number of attributes, \\(s=5\\), and show how the formula for \\(\\text{FCV}(x)\\) is used to predict the sea breeze on June 4, 2017.
The top 5 attributes \\(a_{i}\\) for the sea breeze and normal day are listed in Tables 6 and 7, respectively, and the corresponding coefficients \\(k_{i}\\), are listed in Figures 11 and 12.
Vector \\(\\overline{x}\\) corresponding to this entry of the data contains two sub-vectors: \\(\\overline{x_{sb}}\\) for the components corresponding to the attributes of the sea breeze
\\[\\overline{x_{sb}}=\\langle x_{35},x_{44},x_{17},x_{34},x_{51}\\rangle=\\langle 1,0,1,1,0\\rangle\\]
and sub-vector \\(\\overline{x_{nor}}\\) for the attributes of the normal day:
\\[\\overline{x_{nor}}=\\langle x_{30},x_{60},x_{53},x_{8},x_{26}\\rangle=\\langle 1,1,0,0,0\\rangle\\]
\\begin{table}
\\begin{tabular}{|c|c|c|} \\hline \\(a_{i}\\) & attribute \\# & Description \\\\ \\hline \\(a_{1}\\) & 35 & high pressure 11 pm - 1 am \\\\ \\hline \\(a_{2}\\) & 44 & dew point constant 1-3 am \\\\ \\hline \\(a_{3}\\) & 17 & pressure rising 9 pm - 11 pm \\\\ \\hline \\(a_{4}\\) & 34 & dew point constant 11 pm - 1 am \\\\ \\hline \\(a_{5}\\) & 51 & onshore wind 3-5 am \\\\ \\hline \\end{tabular}
\\end{table}
Table 6: Sea breeze attributes for an example in FCV computation
Figure 11: Top 10 attributes for a Sea Breeze day
Figure 12: Top 10 attributes for a Normal day
Thus \\(\\overline{x}=\\langle 1,0,1,1,0,1,1,0,0,0\\rangle\\). Observations on June 4, 2017, show the presence of three attributes associated with the sea breeze and two associated with the normal day prediction; addition and subtraction of the corresponding coefficients \\(k_{i}\\) produce a positive number:
\\[\\text{FCV}(\\overline{x})=0.663478*1+0.596512*0+0.460862*1+0.406937*1+0.393462*0\\]
\\[-0.781797*1-0.645949*1-0.527335*0-0.527244*0-0.448329*0\\]
\\[=1.531277-1.427746=0.103531>0\\]
Thus, we predict this day to be a sea breeze day. It turns out to be a sea breeze day, indeed.
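The computation above can be reproduced in a few lines of Python (the coefficients \\(k_{i}\\) are the aggregated relevance values listed in Figures 11 and 12):

```python
k_sb  = [0.663478, 0.596512, 0.460862, 0.406937, 0.393462]  # sea breeze
k_nor = [0.781797, 0.645949, 0.527335, 0.527244, 0.448329]  # normal day
x_sb  = [1, 0, 1, 1, 0]   # attributes 35, 44, 17, 34, 51 on June 4, 2017
x_nor = [1, 1, 0, 0, 0]   # attributes 30, 60, 53, 8, 26

fcv = (sum(k * x for k, x in zip(k_sb, x_sb))
       - sum(k * x for k, x in zip(k_nor, x_nor)))
print(round(fcv, 6))   # positive, so a sea breeze day is forecast
```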
Changing the number of top attributes used in prediction may result in various success rates. When using all the attributes, we correctly predict 78 days out of the 112 non-moist days.
Since using all attributes gives the overall best result, we focus more on this choice. The false negatives and false positives are further analyzed through Table 8.
Other success measurements can also be computed:

\\[\\text{Sensitivity/Recall}=40/(40{+}14)=40/54=74.1\\%\\]

\\[\\text{F1 score}=40/(40+\\frac{1}{2}\\cdot(14+20))=70.2\\%\\]

\\[\\text{Precision rate}=40/(40{+}20)=40/60=66.7\\%\\]
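These measures follow directly from the counts in the contingency table (Table 8: TP = 40, FN = 14, FP = 20, TN = 38); a minimal computation:

```python
tp, fn, fp, tn = 40, 14, 20, 38   # from the contingency table (Table 8)

recall    = tp / (tp + fn)                  # sensitivity
precision = tp / (tp + fp)
f1        = tp / (tp + 0.5 * (fn + fp))
accuracy  = (tp + tn) / (tp + fn + fp + tn)
```

The overall accuracy, \\(78/112\\approx 69.6\\%\\), matches the success rate reported above for the run using all attributes.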
Furthermore, the success rate is further broken down into each year in Table 9.
Table 10 shows the successful prediction rate on non-moist days for various choices of the number of top attributes.
### Computation of Forecasting Values on the random subset of the data
To explore the prediction method of Forecasting Values demonstrated above, the process was repeated after 30 rows were removed from the data. The rankings of the Forecasting Values were compared between the two runs to validate the overall consistency and usefulness of the method. The expectation was that the rankings might be slightly different, but the overall important attributes would be similar.
The 30 rows were chosen using the _random.sample_ function in Python:
\\[\\text{sorted(random.sample(range(3,123),30)).}\\]
It selected 30 non-repeating random numbers in the range 3-122, which are the numbers of the rows of the \\(D\\)-basis entries corresponding to the 120 days of observation.
\\begin{table}
\\begin{tabular}{|c|c|c|} \\hline \\(a_{i}\\) & attribute \\# & Description \\\\ \\hline \\(a_{8}\\) & 30 & offshore wind 11 pm - 1 am \\\\ \\hline \\(a_{9}\\) & 60 & offshore wind 5-7 am \\\\ \\hline \\(a_{10}\\) & 53 & dew point falling 3 am - 5 am \\\\ \\hline \\(a_{11}\\) & 8 & pressure falling 7-9 pm \\\\ \\hline \\(a_{12}\\) & 26 & low pressure 9 pm- 11 pm \\\\ \\hline \\end{tabular}
\\end{table}
Table 7: Normal day attributes for an example in FCV computation
\\begin{table}
\\begin{tabular}{|c|c|c|} \\hline & Predicted SB & Predicted Normal \\\\ \\hline Actual SB & 40 & 14 \\\\ \\hline Actual Normal & 20 & 38 \\\\ \\hline \\end{tabular}
\\end{table}
Table 8: Contingency table of sea breeze forecast

This set of 30 random days included 15 sea breeze days, 2 very moist days and 13 normal days. The distribution resembled the overall distribution of those day categories in the entire data set, which made this randomly selected subset reasonable for random subset testing.
The entries for those 30 rows were changed to 0, so that they were not taken into consideration in the subsequent \\(D\\)-basis runs. This is similar to the procedure of row removal built into the \\(D\\)-basis functionality.

Since the total number of entries dropped from 120 down to 90, the parameter of minimal support in the \\(D\\)-basis was also scaled down in order to preserve the consistency of the testing. The scaling from minimal support values of 6, 8, 11 for 120 effective rows down to 90 effective rows was as follows:
\\[\\frac{6}{120}=\\frac{m_{1}}{90}\\to m_{1}=4.5\\to m_{1}\\approx 4\\]
\\[\\frac{8}{120}=\\frac{m_{2}}{90}\\to m_{2}=6\\]
\\[\\frac{11}{120}=\\frac{m_{3}}{90}\\to m_{3}=8.25\\to m_{3}\\approx 9\\]
For \\(m_{1}\\), we decided to use the floor of 4.5, as its ceiling 5 is too close to \\(m_{2}\\). Thus, the values of minimal support were finalized to \\(4,6,9\\).
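The scaling and rounding above amount to (a sketch; rounding choices follow the text):

```python
import math

def scale_support(m, n_old=120, n_new=90):
    """Rescale a minimum-support level to a smaller number of rows."""
    return m / n_old * n_new

scaled = [scale_support(m) for m in (6, 8, 11)]          # 4.5, 6.0, 8.25
chosen = [math.floor(scaled[0]), round(scaled[1]), math.ceil(scaled[2])]
```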
Since in the testing with the full data the row removal of the very moist days brought better results, this removal was done on the subset of 90 days as well.

Then the same process was repeated to compute the Forecasting Values for the modified input matrix. The success rates on non-moist days when using the modified list of top attributes are given in Table 11.
The attributes associated with normal days were still light-colored, see Figure 14, and low pressure became more dominant than offshore wind compared to the full data results in Figure 12.

It is also observed that 'high pressure' tends to come up at the top of the list for the sea breeze, as in Figure 13. Four 'high pressure' attributes are now in the top 10 list, including the previous 3 'high pressure' attributes: 35, 15, 45. Meanwhile, the normal day list still contains the 'offshore wind' attributes, and more of the 'low pressure' attributes, including those appearing in the full data: 26 and 46.
\\begin{table}
\\begin{tabular}{|c|c|c|} \\hline Top \\# of attributes utilized & \\# of days predicted correctly & Success rate out of 112 non-moist days \\\\ \\hline All attributes & 78 & 69.6\\% \\\\ \\hline Top 10 & 77 & 68.8\\% \\\\ \\hline Top 9 & 75 & 67.0\\% \\\\ \\hline Top 8 & 75 & 67.0\\% \\\\ \\hline Top 7 & 76 & 67.9\\% \\\\ \\hline Top 6 & 75 & 67.0\\% \\\\ \\hline Top 5 & 76 & 67.9\\% \\\\ \\hline Top 4 & 75 & 67.0\\% \\\\ \\hline Top 3 & 75 & 67.0\\% \\\\ \\hline Top 2 & 74 & 66.1\\% \\\\ \\hline \\end{tabular}
\\end{table}
Table 10: Successful prediction rate on non-moist days for various choices of the number of top attributes
### Comparison with synthetic data
An earlier study, Segal et al. 2018, compared the performance of the \\(D\\)-basis algorithm on real data with random data of similar characteristics. It showed that rules of high support are much less likely to occur in random data of the same size and density than in real data. In particular, random data showed a more uniform distribution of total support figures among all attributes. That result brought the average of relevances across all attributes close to 1, for a wide range of densities of the random data.
For example, the testing of several thousand tables of size \\(20\\times 32\\) in Segal et al. 2018 showed average relevance values between 1 and 2, i.e., close to insignificant, for densities of entry 1 in the tables between 0.3 and 0.6. We note that the majority of real data densities do fit into this range, given the rules of conversion between raw data and its binary representation.
Figure 14: Top 10 attributes for a Normal day on a random subset
Figure 13: Top 10 attributes for a Sea Breeze day on a random subset
\\begin{table}
\\begin{tabular}{|c|c|c|} \\hline Top \\# of attributes utilized & \\# of days predicted correctly & Success rate out of 84 non-moist days \\\\ \\hline All rows & 54 & 64.3\\% \\\\ \\hline Top 10 & 54 & 64.3\\% \\\\ \\hline Top 9 & 54 & 64.3\\% \\\\ \\hline Top 8 & 54 & 64.3\\% \\\\ \\hline Top 7 & 55 & 65.5\\% \\\\ \\hline Top 6 & 55 & 65.5\\% \\\\ \\hline Top 5 & 54 & 64.3\\% \\\\ \\hline Top 4 & 52 & 61.9\\% \\\\ \\hline Top 3 & 55 & 65.5\\% \\\\ \\hline Top 2 & 52 & 61.9\\% \\\\ \\hline Top 1 & 55 & 65.5\\% \\\\ \\hline \\end{tabular}
\\end{table}
Table 11: Success rate on a random subset of observations

To confirm the actual signal from the weather data in our analysis, we ran a similar comparison with random data of similar parameters. The synthetic data consisted of the unchanged target columns, while the rest of the columns were randomized based on the original density of entry 1, which was 0.402738. The results of three runs of synthetic data on a table of size \\(120\\times 72\\), targeting column 71 (which was not changed compared to the real data), are given in Table 12.
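A minimal sketch of this randomization, assuming NumPy and a 0/1 matrix with days as rows: every column except the target is replaced by independent Bernoulli draws at the stated density of 0.402738, while the target column is copied unchanged. The seed and the example table here are illustrative.

```python
import numpy as np

def make_synthetic(table, target_col, density, seed=0):
    """Randomize every column except the target with Bernoulli(density)
    entries; the target column is kept unchanged."""
    rng = np.random.default_rng(seed)
    synth = (rng.random(table.shape) < density).astype(int)
    synth[:, target_col] = table[:, target_col]
    return synth

real = np.random.default_rng(1).integers(0, 2, size=(120, 72))
synth = make_synthetic(real, target_col=70, density=0.402738)  # column 71, 0-indexed
```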
Since only one rule was generated for minimal support 11, relevance values were not computed in that random data test. Thus, only the relevance values for minimal supports 6 and 8 were computed and compared with the real data.
The histograms in Figures 15 and 16 show the resulting distribution of relevance values \\(rel_{d}(a)\\) across all attributes \\(a\\) in the data.
The values of \\(a\\) are placed along the \\(x\\)-axis, and the component along the \\(y\\)-axis is the relevance value computed in the synthetic data for the target column \\(d=\\)71.
For the \\(D\\)-basis test with minimal support \\(=6\\), the majority of the relevance values were insignificant and only a few attributes stood out. But as the minimal support increased to 8 and then 11, the number of rules dwindled considerably and most attributes no longer appeared in any rules.
In the real data, since each 2-hour time interval has 10 attributes, peaks are observed about every 10 attributes; those attributes correspond to the important weather observations occurring in each 2-hour interval. Conversely, the pattern for the synthetic data exhibits no visible trend, and its values are significantly smaller in magnitude as well.
In the test with minimal support \\(=8\\), significantly fewer fluctuations are observed in the synthetic data. More importantly, the grey line representing the synthetic data shows considerably smaller total support values, and only 9 attributes have non-zero relevance values; see Figure 16.
\\begin{table}
\\begin{tabular}{|c|c|c|} \\hline Minimal Support & Synthetic Data (\\# of rules) & Real Data (\\# of rules) \\\\ \\hline
6 & 203 & 1035 \\\\ \\hline
8 & 5 & 266 \\\\ \\hline
11 & 1 & 33 \\\\ \\hline \\end{tabular}
\\end{table}
Table 12: Comparison between the number of rules in synthetic data and real data
Figure 15: Relevance values in real data vs synthetic data in test with minimal support=6
In conclusion, the two figures show that the real data demonstrates a considerably strong signal, reflected in the high frequency with which certain attributes appear in the rules, compared to the flat and uniform frequencies of most attributes in the rules when the ones in the table are randomly permuted.
## 7 Conclusions and Discussion
In this study we relied on data collected by weather stations located relatively far inland instead of using measurements of the sea temperature. This distinguishes our approach from the standard reliance on the temperature gradient between sea and land when predicting the sea breeze. In the present analysis, we included measurements during the 12 hours overnight prior to the day of prediction, averaging the behavior over two-hour intervals. Thus, our methods could be useful in developing a prediction algorithm for other locations where sea surface temperatures are unavailable.
To determine the predicted presence or absence of the sea breeze, we used the \\(D\\)-basis algorithm. That algorithm retrieves specific implications and association rules between the columns of a binary table, namely, those whose conclusion is a selected column. In our case it was either the column marking sea breeze days occurring during the months of June between 2017-2020 on Long Island, New York, or normal days of the same period. The third category, very moist days, comprises days when synoptic-scale features typically control the onshore wind flow rather than the smaller-scale sea breeze. The \\(D\\)-basis algorithm allows for analysis in the presence of entries for which implications partially fail; thus, very moist days were partially and randomly omitted on multiple runs of the algorithm.
Among the weather observations included in the initial analysis were dynamic parameters: increasing, decreasing, or constant pressure and dew point, as well as the direction of the wind in relation to the coastline of the south shore of Long Island. Later we added the absolute value of station pressure, distinguishing high pressure and low pressure using a threshold of 29.92\" Hg. Using that definition, high pressure, especially during the first half of the overnight observations between 7 pm-1 am, appeared as a prominent predictor of a sea breeze, as did a constant dew point and early morning (3-5 am) onshore winds. In contrast, a combination of early morning offshore winds, falling or low pressure, and rising dew point appeared as predictors of a normal, non-sea-breeze day. The strongest association was found with observations made between 7 pm and 1 am the previous evening. The weather variables most closely corresponding with sea breeze formation match the climatological expectations. High pressure typically implies a lack of larger scale synoptic controls on wind direction, such as fronts and cyclones, while a constant dew point also indicates a stable air mass and a lack of surface boundaries.
We also determined that, by selecting all weather attributes and converting their frequencies in the rules retrieved by the \\(D\\)-basis algorithm into coefficients, we could set up a linear combination of binary variables, one per column, with predetermined coefficients. Each day's observation resulted in a value of either 1 or 0 for each binary variable \\(x_{i}\\), depending on the presence or absence of the particular weather attribute on that day. The total value of this linear combination, when positive, indicates the
Figure 16: Relevance values in real data vs synthetic data in test with minimal support=8
presence of most sea breeze predictors, and therefore forecasts a sea breeze day; otherwise, it forecasts a non-sea-breeze day. Prediction of sea breeze days with this formula was about 70% accurate, with only 14 wrong predictions over a total of 54 sea breeze days. Moreover, the success rate for the year 2020 alone was about 74%. This success rate is lower than the 85% rate in McCabe and Freedman 2023, which examined the same region and time period. However, that particular study included both 'classic' and 'hybrid' sea breeze days, the latter of which we attempted to eliminate from consideration in the present study through the synoptic climatology requirement, which means the success rates are not directly comparable. This is because the McCabe and Freedman 2023 study was less restrictive, in the sense that days where synoptic influences impacted the sea breeze were counted as sea breeze days and thus as accurate predictions made by their algorithm. These contrasting definitions of a Long Island sea breeze event reflect the notion that there is no universally accepted definition of what constitutes a sea breeze event, and further, that the definition and associated identification algorithm can vary based on the application. For example, when attempting to forecast the effect of enhanced wind speeds from the sea breeze on wind energy potential, the broader definition employed in McCabe and Freedman 2023 would be more appropriate. Conversely, if attempting to isolate the mesoscale influence of the sea breeze on near-surface weather conditions, a more restrictive definition, such as the one used in the present study, would be necessary.
It should also be noted that different combinations of predicting attributes are possible. For example, using only two attributes resulted in a similar success rate in prediction. Moreover, the coefficients in the forecasting formula could be adjusted as more observations are collected over time, potentially making the formula more accurate. Lastly, in our study, we did not distinguish between sea breeze days with respect to the classification of the prior day. Indeed, a simple persistence forecast (i.e., today will be the same as yesterday) can also be somewhat effective when predicting whether a sea breeze will occur. Nevertheless, the algorithm developed here was still more accurate than a persistence forecast would have been, and future iterations are likely to be even more accurate when a larger sample of data can be ingested, allowing this tool to have value to weather forecasters in the region.
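The forecasting formula discussed above can be sketched as a simple thresholded linear combination. The coefficients and attribute ids below are hypothetical placeholders for illustration, not the fitted values from this study.

```python
def forecast_sea_breeze(day_attrs, coeffs):
    """Score a day by a linear combination of its binary attributes;
    a positive total predicts a sea breeze day."""
    score = sum(coeffs[a] * day_attrs.get(a, 0) for a in coeffs)
    return score > 0

coeffs = {35: 1.2, 15: 0.9, 45: 0.7, 26: -1.1, 46: -0.8}  # hypothetical weights
day = {35: 1, 15: 1, 26: 1}  # attributes present on the day (x_i = 1)
print(forecast_sea_breeze(day, coeffs))  # True: 1.2 + 0.9 - 1.1 = 1.0 > 0
```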
AcknowledgmentsWe appreciate the technical support of Dr. Oren Segal in the Hofstra University Department of Computer Science for his involvement in various projects related to the \\(D\\)-basis algorithm. All tests and computations were performed using the Virtual Machine at the Hofstra University Fred DeMatteis School of Engineering and Applied Science's Computing Center. We also appreciate the initiative and support of Justin Cabot-Miller, who helped establish this project while an undergraduate research assistant at Hofstra.
## References
* Adaricheva and Nation (2017) Adaricheva, K. and J.B. Nation (2017). Discovery of the \\(D\\)-basis in binary tables based on hypergraph dualization. _Theor. Comp. Sci., Part B_ 658, 307-315.
* Adaricheva et al. (2015) Adaricheva, K., J.B. Nation, G. Okimoto, et al. (2015). Measuring the Implications of the \\(D\\)-basis in Analysis of Data in Biomedical Studies. _Proceedings of ICFCA-15, Nerja, Spain_. Springer, 39-57.
* Adaricheva et al. (2013) Adaricheva, K., J.B. Nation, and R. Rand (2013). Ordered direct implicational basis of a finite closure system. _Disc. Appl. Math._ 161, 707-723.
* Agrawal et al. (1993) Agrawal, R., T. Imielinski, and A. Swami (1993). Mining associations between sets of items in large databases. _Proceedings of ACM SIGMOD, Washington D.C._ 207-16.
* Balcazar (2010) Balcazar, J.L. (2010). Redundancy, deduction schemes, and minimum-size bases for association rules. _Log. Meth. Comput. Sci._ 6, 1-33.
* Barbut and Monjardet (1970) Barbut, M. and B. Monjardet (1970). Ordre et classification: Algebre et combinatoire. Paris: Hachette.
* Biggs and Graves (1962) Biggs, W. G. and M. E. Graves (1962). A lake breeze index. _J. Appl. Meteorol._ 1, 474-480.
* Birkhoff (1940) Birkhoff, G. (1940). Lattice theory. Providence, RI: AMS Colloquium Publications.
* Cetola (1997) Cetola, J. D. (1997). A climatology of the sea breeze at Cape Canaveral, Florida. _Air Force Institute of Technology Master's Thesis_.
* Colle et al. (2003) Colle, Brian A., Joseph B. Olson, and Jeffrey S. Tongue (2003). Multiseason verification of the MM5. Part I: Comparison with the Eta model over the central and eastern United States and impact of MM5 resolution. _Weather and Forecasting_ 18.3, 431-457.
* Coulibaly et al. (2021) Coulibaly, L., B. Kamsu-Foguem, and F. Tangara (2021). Explainability with Association Rule Learning for Weather Forecast. _SN Computer Science_ 2.116, 17.
* Ganter and Wille (1999) Ganter, B. and R. Wille (1999). Formal concept Analysis: Mathematical Foundations. Springer.
* Hughes and Veron (2018) Hughes, C. P. and D. E. Veron (2018). A characterization of the Delaware sea breeze using observations and modeling. _Journal of Applied Meteorology and Climatology_ 57, 1405-1421.
* Laird and Kristovich (2004) Laird, Neil F and David A R Kristovich (2004). Comparison of observations with idealized model results for a method to resolve winter lake-effect mesoscale morphology. _Monthly Weather Review_, 1093-1103.
* McCabe et al. (2023) McCabe, Elizabeth and Jeffrey Freedman (2023). Development of an Objective Methodology for Identifying the Sea Breeze Circulation and Associated Low-Level Jet in the New York Bight. _Weather and Forecasting_.
* Miller et al. (2003) Miller, S.T.K. et al. (2003). Sea breeze: structure, forecasting, and impacts. _Reviews of Geophysics_ 41, 1011-2003.
* Nation et al. (2021) Nation, J.B. et al. (2021). Combining algorithms to find signatures that predict risk in early-stage stomach cancer. _Journal of Computational Biology_ 28, 985-1006.
* Novak and Colle (2006) Novak, David R. and Brian A. Colle (2006). Observations of multiple sea breeze boundaries during an unseasonably warm day in metropolitan New York City. _Bulletin of the American Meteorological Society_ 87.2, 169-174.
* Segal et al. (2018) Segal, O. et al. (2018). The \\(D\\)-basis algorithm for association rules of high confidence. _IT in Industry_ 6 (N3).
* Sheridan (2002) Sheridan, Scott C (2002). The redevelopment of a weather-type classification scheme for North America. _International Journal of Climatology: A Journal of the Royal Meteorological Society_ 22.1, 51-68.
## Appendix A Appendix: The description of the \\(D\\)-basis algorithm
### \\(D\\)-basis algorithm
In this work we explore an approach to weather forecasting that utilizes the discovery of association rules in binary data. Association rules took off as a premier tool of data mining analysis of transaction data after the introduction of the _Apriori_ algorithm in Agrawal, Imielinski, and Swami 1993. One of the more recent surveys on association rules is Balcazar 2010.
_Apriori_ remains a main tool for the extraction of association rules from binary data and has been included in the libraries of R and Microsoft Office.
In our work we employ a novel algorithm with a background in Formal Concept Analysis (FCA); see Ganter and Wille 1999. It is based on a non-trivial mathematical theorem that associates a finite binary table with a uniquely defined _Galois lattice_; see Birkhoff 1940; Barbut and Monjardet 1970. The structure of that mathematical object is fully described by some set of _implications_ defined on the set of columns (or rows) of the given table. Implications can be understood in the framework of data mining as association rules of confidence = 1.
The \\(D\\)-basis was introduced in Adaricheva, Nation, and Rand 2013, and the theoretical background of the algorithm of the \\(D\\)-basis extraction from a binary table is described in Adaricheva and Nation 2017. It discovers the implications \\(S\\to d\\) in a table with entries 0 and 1. Here \\(S\\) is a subset of attributes/columns (weather observation attributes) and \\(d\\) is another column (say, an indicator of sea breeze day). The rows of the table represent the days of the observation.
The rule \\(S\\to d\\) is found in the table, if in each row (each day of observation in our case) the following is true: if all entries in columns (weather observations) of subset \\(S\\) are 1, then column \\(d\\) also has entry 1.
The _support_ of a rule \\(S\\to d\\) is the number of rows (days) where all entries in columns \\(S\\cup d\\) are 1, i.e., where the rule is _validated_.
In practice, the algorithm computes rules that hold in almost all rows of the table, that is, rules such that the probability of \\(d=1\\), when the entries in \\(S\\) are all 1, is above a given threshold. More precisely, the parameter _confidence of a rule_ \\(S\\to d\\) is computed as follows:
\\[conf(S\\to d)=\\frac{\\#rows\\text{ where all }S\\text{ and }d\\text{ have values }1}{\\#rows\\text{ where all }S\\text{ have values }1}\\]
Confidence of a rule \\(S\\to d\\) is 1 iff \\(d=1\\) in every row where all values of \\(S\\) are 1. In order to retrieve rules that may fail in some rows, and thus have \\(conf<1\\), one may remove those rows and retrieve the rules from a sub-table.
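A minimal sketch of the support and confidence computations on a 0/1 matrix, assuming NumPy; the toy table below is illustrative.

```python
import numpy as np

def support_and_confidence(table, S, d):
    """Support: # rows where all columns in S and column d are 1.
    Confidence: support divided by # rows where all columns in S are 1."""
    s_rows = np.all(table[:, S] == 1, axis=1)
    support = int(np.sum(s_rows & (table[:, d] == 1)))
    n_s = int(np.sum(s_rows))
    return support, (support / n_s if n_s else 0.0)

# Toy table: 5 days, attribute columns 0-2, target column 3.
t = np.array([[1, 1, 0, 1],
              [1, 1, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 0]])
sup, conf = support_and_confidence(t, S=[0, 1], d=3)
print(sup, conf)  # 2 0.666... : the rule {0,1} -> 3 fails in one of three rows
```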
This functionality of the application is described in publication Segal et al. 2018.
**Example 8-1.** In one of the earlier tests, when three years of data were used, there were 90 rows of the table representing the 90 days of the three months of June between 2017-2019, and the \\(D\\)-basis was applied with the following row-removal parameter:
_Row removal 3 of \\(\\{13,14,19,22,33,67,87,90\\}\\). These are the rows where very moist days are used as prediction._
The meaning of such a test is that the algorithm runs multiple times on sub-tables of the table, each obtained by removing 3 rows from the given list. Since very moist days may disturb the weather patterns and may not correctly predict what happens on the following day, some of these days are removed from the analysis.
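The multiple runs on sub-tables can be sketched as enumerating all 3-element subsets of the removable rows; the basis extraction itself would then be run on each resulting sub-table. This is an illustrative sketch of the bookkeeping only.

```python
from itertools import combinations

def subtables_with_removals(rows, removable, k):
    """Yield the row sets obtained by deleting each k-subset of removable rows."""
    all_rows = set(rows)
    for removed in combinations(removable, k):
        yield sorted(all_rows - set(removed))

removable = [13, 14, 19, 22, 33, 67, 87, 90]
subsets = list(subtables_with_removals(range(1, 91), removable, k=3))
print(len(subsets))  # C(8, 3) = 56 sub-tables, each with 87 of the 90 rows
```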
In addition to the test above, the list of removed rows was also shifted by one:
_Row removal 3 of \\(\\{12,13,18,21,32,66,86,89\\}\\). These are the rows that are predicting very moist days._
This is done because the days prior to very moist days may also have disturbed weather patterns.
### Ranking attributes relevant to a target attribute
Unlike most approaches to mining association rules, where rules are selected based on techniques measuring the rules themselves, we measure how _relevant_ each attribute is to some chosen _target_ attribute.
To this aim, the \\(D\\)-basis algorithm is most suitable, since it may retrieve only a sector of the total basis of implications describing the table, namely, the rules \\(S\\to d\\), when \\(d\\) is some fixed _target_ attribute.
Instead of looking at the rules themselves, we measure the frequency with which any other attribute \\(a\\) appears in the antecedents of the rules \\(S\\to d\\), together with other attributes. Note that the same idea is highlighted in Coulibaly, Kamsu-Foguem, and Tangara 2021, in relation to the 18 rules considered there, which connect several weather attributes with the same target = 'heavy rain'.
Our parameter of _relevance_, which appeared first in Adaricheva, Nation, Okimoto, et al. 2015, when applying the \\(D\\)-basis to medical data, requires the computation of the rules not only on target \\(d\\) but also on its complement \\(\\neg d\\), which may or may not appear in the original data.
Another advantage of our approach is that we do not shy away from a large number of retrieved rules, because they provide a better representation of all attributes and allow a better comparison of attributes related to a given target.
The top attributes were identified through testing with variation of the _minimal support_, which refers to the percentage of observations validating the rules connecting attributes with sea breeze/absence of sea breeze. In our testing, we varied minimal support between 5% and 15% of observations in the testing set.
### Formula for the parameter of relevance
For a fixed column \\(d\\) and any other column \\(a\\), one can compute the total support of all rules \\(S\\to d\\) such that \\(a\\) is in \\(S\\). This parameter shows the frequency with which \\(a\\) appears in implications targeting \\(d\\). The algorithm can also compute a similar frequency of \\(a\\) when targeting \\(\\neg d\\), i.e., an additional column where all entries in \\(d\\) are switched. The ratio of the two frequencies gives the _relevance_ of attribute \\(a\\) to \\(d\\). Thus, all attributes of the table can be ranked in their relevance to a fixed attribute \\(d\\).
The \\(D\\)-basis can be applied to the entry table formed by attributes available in objects/instances of observation, choosing \\(d\\) as a marker for a particular property/attribute (in our case, for a sea breeze or normal day). It was used in ovarian cancer analysis in Adaricheva, Nation, Okimoto, et al. 2015, and in stomach cancer analysis in Nation et al. 2021, where observations were represented by patients in the study, and the target attribute was an indicator of long survival after treatment.
Let us give a more precise definition of how the relevance of attribute \\(a\\) with respect to target attribute \\(d\\) is computed. For each attribute \\(a\\in S\\setminus d\\), the important parameter of relevance of this attribute to \\(d\\in X\\) is the parameter of _total support_, computed with respect to any set of rules/basis \\(\\beta\\) of association rules describing the table (in our case, it is the portion of the \\(D\\)-basis which only includes rules of requested minimum support at least \\(\\delta\\)):
\\[\\text{tsup}_{d}(a)=\\Sigma\\{\\frac{|sup(X)|}{|X|}*conf(X\\to d):a\\in X,(X \\to d)\\in\\beta\\text{ and }|sup(X)|\\geq\\delta\\}.\\]
Thus \\(\\text{tsup}_{d}(a)\\) shows the frequency of parameter \\(a\\) appearing together with some other attributes in implications \\(S\\to d\\) of the basis \\(\\beta\\). The contribution of each implication \\(S\\to d\\), where \\(a\\in S\\), into the computation of total support of \\(a\\) is higher when the support of \\(X\\) is higher, i.e., column \\(a\\) is marked by 1 in more rows of the table, together with other attributes from \\(X\\), but also when \\(X\\) has fewer other attributes besides \\(a\\).
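A minimal sketch of the total-support computation, with rules stored as (antecedent, support, confidence) triples; the first two rules mirror the ones in Example 8-2 below, and the third is illustrative.

```python
def total_support(rules, a, delta):
    """tsup_d(a): sum of (support / |X|) * confidence over rules (X -> d)
    in the basis with a in X and support >= delta."""
    return sum(sup / len(X) * conf
               for X, sup, conf in rules
               if a in X and sup >= delta)

rules = [({5, 16, 30, 52}, 7, 0.78),   # contributes 7/4 * 0.78
         ({5, 18, 26, 54}, 6, 1.00),   # contributes 6/4 * 1.00
         ({30, 40}, 4, 0.90)]          # dropped: support 4 < delta
print(total_support(rules, a=5, delta=6))  # ~2.865 (= 7/4*0.78 + 6/4*1.0)
```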
**Example 8-2.** In the earlier stage of testing, when we used 58 attributes (columns) on 120 days representing rows, we targeted column \\(d=55\\):
55: Next Day Sea Breeze (1=SB, 0=Normal or very moist)
and one of about 500 implications with minimum support \\(\\geq 6\\) that appeared in the output may look as follows:
5 16 30 52 \\(\\rightarrow\\) 55 ; RealSupport = 7; rows = 23, 62, 78, 80, 109, 110, 118,
conf=0.78, fail rows = 18, 66
Here the attributes on the left side of implication mean:
5: SSC Moist Day (1=yes, 0=no)
16: Pressure Falling between 9-11 pm (1=yes, 0=no)
30: Dew Point Consistent (1=yes, 0=no) within (-1,1) between 11 pm -1 am
52: Dew Point Rising (1=yes, 0=no) between 5-7 am
List of rows for the RealSupport means that on each of days 23, 62, 78, 80, 109, 110 and 118 in our data of 120 days it was a sea breeze day (1 in column 55) and all four attributes appeared. The confidence is \\(0.78=7/9\\), because all four attributes appeared on 9 days, but on days 18 and 66 attribute 55 did not appear, since it was not a sea breeze.
For each of attributes \\(a\\) = 5, 16, 30, 52 this implication produces the following contribution into \\(tsup_{55}(a):\\frac{7}{4}*0.78=1.36\\). The totals of \\(tsup_{55}(a)\\) are then combined across all implications, where \\(a\\) appears. For example, \\(tsup_{55}(5)=34.27\\) and \\(tsup_{55}(30)=136.36\\).
While the frequent appearance of a particular attribute \\(a\\) in implications \\(S\\to d\\) might indicate the relevance of \\(a\\) to \\(d\\), the same attribute may appear in implications \\(U\\rightarrow\\neg d\\). The attribute \\(\\neg d\\) may not be present in the table and can be obtained by converting the column of attribute \\(d\\) into its complement.
Let \\(\\beta(\\neg d)\\) be the basis of the closure system obtained after replacing the original column of attribute \\(d\\) by its complement column \\(\\neg d\\). Then the _total support_ of \\(\\neg d\\) can be computed, for each \\(a\\in X\\setminus d\\), as before:
\\[\\text{tsup}_{\\neg d}(a)=\\Sigma\\{\\frac{|sup(X)|}{|X|}*conf(X\\rightarrow\\neg d):a\\in X,(X\\rightarrow\\neg d)\\in\\beta(\\neg d)\\}.\\]
**Example 8-2.** (continued)
For the test described in Example above, we also run \\(D\\)-basis with the target column 56, which is marked by 1 for every _non-sea breeze_ day
56: Complement of 55 (1=Normal or very moist, 0=SB)
For example, the same attribute 5 appeared in one of implications in the output:
5 18 26 54 \\(\\rightarrow\\) 56 ; Real Support = 6; rows = 49, 55, 59, 60, 103, 120, conf=1.00
As a result, \\(tsup_{56}(5)\\) will have an addend for this implication: \\(\\frac{6}{4}*1=1.5\\). Computing the sum across all implications that contain attribute 5, we get \\(tsup_{56}(5)=114.88\\). According to this number, attribute 5 appears more frequently in association rules of minimal support \\(\\geq 6\\) on _non-sea breeze_ days.
Define now the parameter of relevance of attribute \\(a\\in X\\setminus d\\) to attribute \\(d\\), with respect to the basis \\(\\beta\\):
\\[\\text{rel}_{d}(a)=\\frac{\\text{tsup}_{d}(a)}{\\text{tsup}_{\\neg d}(a)+1}.\\]
The highest relevance of \\(a\\) is achieved by a combination of high total support of \\(a\\) in implications \\(S\\to d\\) and low total support in implications \\(U\\rightarrow\\neg d\\). This parameter provides a ranking of all attributes \\(a\\in X\\setminus d\\).
**Example 8-2.** (continued) Using the two numbers \\(tsup\\) computed for attribute \\(a=5\\) when targeting \\(d=55\\), we arrive at the relevance of attribute 5 for the sea breeze:
\\[\\text{rel}_{55}(5)=\\frac{\\text{tsup}_{55}(5)}{\\text{tsup}_{56}(5)+1}=\\frac{34.27}{114.88+1}\\approx 0.29\\]
A relevance value smaller than 1 indicates that the attribute appears more frequently on days that are _not sea breeze_ than on sea breeze days.
More precisely, the relevance is compared with number \\(\\alpha=\\frac{sup(55)}{sup(56)}\\), which is the ratio of ones to zeroes in the target column. In our case, it is approximately 1. | The sea breeze is a phenomenon frequently impacting Long Island, New York, especially during the spring and early summer, when land surface temperatures can exceed ocean temperatures considerably. The sea breeze influences daily weather conditions by causing a shift in wind direction and speed, limiting the maximum temperature, and occasionally serving as a trigger for precipitation and thunderstorms. Advance prediction of the presence or absence of the sea breeze for a certain location on a given day would therefore be beneficial to weather forecasters. To forecast sea breeze occurrence based on the previous night's weather conditions, we used a novel algorithm called the \\(D\\)-Basis. We analyzed sea breeze data from a recent four year period (2017-2020) at a single weather station several miles inland from the coast. High or constant station pressure, high or constant dew point, and onshore wind from the previous night were found to be strong predictors of sea breeze formation the following day. The accuracy of the prediction was around 74% for June 2020. Unlike other prediction methods which involve the comparison of sea surface and land surface temperatures in near real time, our prediction method is based on the parameters from the prior night, allowing it to potentially aid in advanced forecasting of the sea breeze. 1 | Summarize the following text. | 252 |
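The relevance computation fits in one line; plugging in the tsup values from Example 8-2 gives a value below 1, consistent with the discussion above. A minimal sketch:

```python
def relevance(tsup_d, tsup_not_d):
    """rel_d(a) = tsup_d(a) / (tsup_{not d}(a) + 1)."""
    return tsup_d / (tsup_not_d + 1)

# Numbers from Example 8-2 for attribute a = 5 and target d = 55.
rel = relevance(34.27, 114.88)
print(rel)  # ~0.296, below 1: attribute 5 favors non-sea-breeze days
```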
Khaled Elmadawi\\({}^{1}\\), Moemen Abdelrazek\\({}^{1}\\), Mohamed Elsobky\\({}^{1}\\), Hesham M. Eraqi\\({}^{1}\\), and Mohamed Zahran\\({}^{1}\\)
\\({}^{1}\\)Khaled Elmadawi, Moemen Abdelrazek, Mohamed Elsobky, Hesham M. Eraqi, and Mohamed Zahran are with Valeo AI Research, Cairo, Egypt [email protected], {moemen.abdelrazek.ext, mohamed.elsobky, hesham.eraqi, mohamed.zahran}@valeo.com
## I Introduction
In this paper we introduce a novel method of mimicking a LiDAR sensor from reality in simulation environments using deep neural networks, giving us a chance to generate as much data as possible at a limited cost, unlike the methods currently followed in the automotive field, as will be explained in the sections below. However, before explaining our problem and our proposed solution, let us establish some basics and definitions about the LiDAR sensor: how it works and what exactly we are modeling.
### _LiDAR Sensor_
A LiDAR sensor's basic components are an emitter, a receptor (an Avalanche Photodiode, known as an APD), and a processing unit. A LiDAR sensor measures distance by multiplying the speed of light by half the time of travel of the pulse from the emitter to the receiver APD. The LiDAR processing unit applies filters and thresholds to the received pulse to differentiate between noisy detections and meaningful detections. The received LiDAR pulse carries information about the material and the reflectivity of the detected object. This information is represented in one of two values, the detected reflection intensity or the detected reflection Echo Pulse Width (EPW), depending on what exactly the LiDAR processing unit calculates and returns in its byte stream, as shown in Fig. 1.
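The ranging principle in this paragraph is just time-of-flight arithmetic; a minimal illustrative sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def lidar_range(round_trip_time_s):
    """Range = speed of light times half the pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

print(lidar_range(1e-6))  # a 1 microsecond round trip corresponds to ~149.9 m
```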
A LiDAR sensor can produce multiple reflections from the same pulse due to the division of the ray profile, resulting in multiple echoes, as shown in Fig. 2.
In our work we mimic the EPW of a real LiDAR sensor, with its multiple reflections, as will be illustrated in the coming sections.
### _Motivation_
Data generation [2] in self-driving car applications is one of the biggest challenges that manufacturers are facing in the field, where it is either gathering data from the street and
Fig. 1: Basic Lidar Anatomy.
Fig. 2: The LiDAR multiple-echo phenomenon.
annotating it, then passing it through development and validation processes and cycles, which is too expensive, or generating synthetic data in an artificial environment, which will be very idealized and far from realistic effects.
### _Contribution_
We provide a measurably accurate sensor model that represents the LiDAR sensor's physical properties in its EPW and noise model. Our proposed sensor model can run in real time and is compatible with the famous simulation environments used in the automotive industry (like CarMaker [16], Unity [17] and ROS [18]). We conduct quantitative and qualitative benchmarks against comprehensive real LiDAR data, which demonstrate the effectiveness of our proposed sensor model and present a measurable way of assessing how close to or far from the real sensor model we are. To give you a closer image of what we are presenting, we challenge you to identify which of the two images in Fig. 3 comes from the simulation environment and which comes from the real environment.
## II Related Work
In this section we review the related work on real-to-sim and sim-to-real transformations. Simulation environments can be used to train a reinforcement learning DNN that is then applied in the real world, as shown in sim-to-real [4], or to generate data streams from a synthetic environment in order to provide more training data to neural networks, to provide the networks with scenarios that cannot be reproduced in real life, and to obtain automatic annotation of the generated perception [1]. All of these efforts require the synthetically generated environments to be mapped from the ideal simulated domain to the real domain, which contains imperfections and physical effects, or from domain A to domain B, as shown in [3][6]; otherwise self-driving vehicles will suffer a huge bias in performance when shifting from the simulation domain to the real domain.
This is what we propose in this paper: a method to transform LiDAR perception from the ideal simulated domain to the real domain using an end-to-end deep neural network stack that captures physical properties, environmental effects, and sensor artifacts, as shown in Fig. 3.
## III Our Approach
We hypothesize that the EPW value depends mainly on the object material, the distance from the sensor, and the laser beam inclination and yaw shooting angles. Hence we construct two kinds of histograms: one of EPW values per echo and object material, and another of the probability of echo occurrence over different yaw shooting angles. Such histograms, as shown in fig [4], represent a look-up table that captures the sensitivity of EPW to the aforementioned dependencies, relating it to signal strength or attenuation, to the noise model over different distances and objects, and to echo-occurrence distributions. Typically, as detailed further in the Experimental Setup section, the sensor we used in our analysis supports up to 3 echoes. The validity of our hypothesis is verified in the Experiments and Results section.
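As a rough sketch, the two histograms can be accumulated from annotated scan points as follows; the point fields (`material`, `echo`, `epw`, `yaw`) and the bin widths are illustrative assumptions, not the paper's exact interface:

```python
import collections

def build_epw_histograms(points, epw_bin=1.0, yaw_bin=5.0):
    """Accumulate the two look-up histograms described above.
    Each point is a dict with keys: material, echo (1..3), epw (ns), yaw (deg).
    Bin widths are illustrative assumptions."""
    # EPW histogram per (material, echo): counts of binned EPW values.
    epw_hist = collections.defaultdict(collections.Counter)
    # Echo-occurrence histogram over yaw shooting angle bins.
    echo_hist = collections.defaultdict(collections.Counter)
    for p in points:
        epw_hist[(p["material"], p["echo"])][int(p["epw"] // epw_bin)] += 1
        echo_hist[int(p["yaw"] // yaw_bin)][p["echo"]] += 1
    return epw_hist, echo_hist
```

Normalizing each counter by its total turns the counts into the occurrence probabilities used at inference time.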
In addition, we propose an end-to-end deep learning-based approach that learns the complex function mapping from physical environmental setup to LiDAR data physical properties.
Our architecture is a two-stage DNN: the first stage is a fully convolutional DNN that infers the EPW value based on a Polar Grid Map approach; the second stage is a one-out-of-many selection block, which selects one discrete representation of the ray profile from the many discrete signals of the simulated ray profile for each echo, and through this selection the noise model is represented, as shown in fig [5].

Fig. 3: A comparison between real LiDAR data (left) and synthetic data generated from our proposed sensor model (right). Each scan point's color represents its Echo Pulse Width (EPW) value. In both examples: 1- our approach clearly mimics EPW values from real data; 2- our approach mimics the noise model of the far perception in the synthetically generated data; 3- our model learns to represent lanes as learned from real traces.

Fig. 4: Multidimensional look-up table that the DNN needs to learn.
### _Architecture_
Our architecture consists of two stages: the first is a fully convolutional DNN that infers the EPW value based on a Polar Grid Map approach; the second is a one-out-of-many selection block, which selects one discrete representation of the ray profile from the many discrete signals of the simulated ray profile, and through this selection the noise model is applied.
### _Polar Grid Map_
Half of the solution in most deep learning problems lies in how the problem is framed: how the input is represented, what objective the DNN is required to learn, and what exactly the output layer represents.
The Polar Grid Map (PGM) is a representation for a LiDAR full scan in a 3D tensor. The full scan is composed of a cloud of scan points which is encoded in a PGM. Each channel of the PGM is a 2D grid map, where each row represents a horizontal LiDAR layer. Having the sensor as a reference point, each scan point is determined by a distance from such reference, and azimuth and altitude angles. Each PGM cell corresponds to a scan point, the row and column indices represent the scan point altitude and azimuth angles respectively. The cell value represent information about the corresponding scan point. In the first PGM channel, the value holds the scan point distance, while the second channel holds its class. The same representation can be extended to represent more information by adding extra channels, while in our study we only use the two mentioned channels. An example for PGM representation coming from a single LiDAR full scan is shown in fig [6].
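A minimal sketch of the angle-to-index mapping behind a PGM follows; the default resolutions mirror the sensor specs given in the Experimental Setup section (16 layers over ±5°, 145° horizontal FOV at 0.125°) and should be treated as assumptions here:

```python
def point_to_pgm_cell(altitude_deg, azimuth_deg,
                      alt_min=-5.0, alt_res=0.625,
                      az_min=-72.5, az_res=0.125):
    """Map a scan point's altitude/azimuth angles to (row, col) PGM indices.
    Defaults follow the sensor specs used in this paper; treat them as
    assumptions for other sensors."""
    row = int((altitude_deg - alt_min) / alt_res)
    col = int((azimuth_deg - az_min) / az_res)
    return row, col

def encode_pgm(points, rows=16, cols=1160):
    """Build the 2-channel PGM: channel 0 = distance, channel 1 = class id.
    points is an iterable of (distance, altitude_deg, azimuth_deg, class_id)."""
    pgm = [[[0.0, 0.0] for _ in range(cols)] for _ in range(rows)]
    for dist, alt, az, cls in points:
        r, c = point_to_pgm_cell(alt, az)
        if 0 <= r < rows and 0 <= c < cols:
            pgm[r][c] = [dist, float(cls)]
    return pgm
```

With these resolutions the PGM has 16 rows and 145°/0.125° = 1160 columns.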
The input coming from our simulation environments would be a dense annotated point cloud, the dense point cloud represents the discrete representation of the laser ray profile, as shown in fig [7].
### _Epw Dnn_
We adopt the U-Net architecture [5], adapted to input data of only two channels, and we reduce the number of blocks to meet reasonable run-time constraints. For each full scan, the corresponding PGM is used as the network input. It goes through an encoder composed of three down-sampling blocks, each with two convolutional layers with a variable number of 3x3 kernels, followed by a max-pooling layer. The output then goes through three up-sampling blocks, each with two convolutional layers with a variable number of 3x3 kernels, followed by a transpose convolution layer. The contracting paths, or skip connections as in [5], from U-Net are adopted to capture LiDAR data context. The network output is a 1-channel PGM that holds the EPW information for the input full scan. The network is shown in fig [8].
Fig. 5: DNN pipeline that encapsulates the sensor model's N-dimensional look-up table.
Fig. 8: Unet architecture. Each white box corresponds to a multi-channel feature map. The number of channels is denoted on top of the box. The x-y-size is provided at the middle of the box.
Fig. 6: Annotated Polar Grid Map point cloud, Upper PGM is depth representation, lower PGM is point level annotation.
Fig. 7: Dense Point level Annotated Point Cloud.
We use the mean squared error with L2 regularization as the EPW DNN loss. We chose this loss to penalize large errors more heavily.
\\[Loss=\\frac{1}{n}\\sum_{i=1}^{n}(y_{i}-y_{i}^{\\prime})^{2}+\\frac{\\lambda}{2}*\\sum W _{t} \\tag{1}\\]
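A minimal plain-Python sketch of this loss follows. Note the paper writes the regularizer as \\(\\sum W_{t}\\), while the \\(\\lambda/2\\) prefactor suggests standard L2 regularization (squared weights); the squared form and the \\(\\lambda\\) value are assumptions here:

```python
def epw_loss(y_true, y_pred, weights, lam=1e-4):
    """Mean squared EPW error plus an L2 weight penalty (Eq. 1).
    The squared-weight form of the regularizer and the value of lam
    are assumptions."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    reg = (lam / 2.0) * sum(w * w for w in weights)
    return mse + reg
```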
The EPW DNN output is the same point-level annotated point cloud with its inferred EPW values, as shown in fig [9].
The EPW DNN was trained for 350 epochs with early stopping based on the L1 score; we used a batch size of 8 samples and a learning rate of 1e-5, and the annotated classes were [None, Cars, Trucks, Pedestrians, Motorbikes, High reflective materials].
### _Second Stage Histogram Classifier_
We pass the dense point cloud to a histogram classifier to make a one-out-of-many point selection: it selects from the dense point cloud the points that make the scan look like the realistic scan points of the real LiDAR sensor model, as shown in fig [10].
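The one-out-of-many selection can be sketched as weighted sampling over the dense ray samples, with the weights taken from the learned echo-occurrence histograms; the interface below is an illustrative assumption:

```python
import random

def select_echo(ray_samples, echo_probs, rng=random.Random(0)):
    """Pick one point from the dense ray profile. ray_samples is a list of
    candidate (distance, epw) pairs, one per discrete ray position;
    echo_probs gives the occurrence probability of each candidate, taken
    from the learned histograms. The spread of the sampled selections is
    what realizes the noise model."""
    idx = rng.choices(range(len(ray_samples)), weights=echo_probs, k=1)[0]
    return ray_samples[idx]
```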
## IV Experiments and Results
### _Experimental Setup_
#### IV-A1 LiDAR Specs:
The LiDAR is a 16-layer sensor covering ±5° with a vertical resolution of 0.625°, and a 145° horizontal FOV with a resolution of 0.125°. The sensor provides three echoes for each reflected ray cast; each reflection may yield one, two or three scan point(s) depending on the reflectivity of the object, its geometrical shape, and reflections from other objects for divided LiDAR ray profiles. Each reflected scan point carries two pieces of information: the depth and the echo pulse width of the reflected scan point.
### _Database_
Our data set consists of 30k frames at a frame rate of 25Hz, divided into two main traces: the first recorded trace of 20k frames on one road was used for training our DNNs, and the second recorded trace of 10k frames on another road was used to validate our DNNs. For synthetic data, we used data coming from CarMaker, Gazebo, and Unity simulation environments.
### _Evaluation Metrics_
One of the main advantages of our proposed sensor model is that its output can be evaluated with a measurable metric, showing how close we are to the realistic sensor model. This is challenging because in simulated environments we have no ground truth for what the output should look like, which led us to use statistical evaluation metrics and to divide our evaluation KPIs into two sets: first Real vs. Real, second Real vs. Simulated.
The biggest problem in this evaluation is association: how do we associate our reference, the real data, with data coming from different simulation environments, when the nature of these two data types is different? The only way to overcome this problem is to use unsupervised evaluation techniques, where we evaluate distribution against distribution, and apply the evaluation at multiple levels: overall scenario evaluation (Real vs. Sim), class-per-class evaluation (Real vs. Sim), and box-to-box evaluation (Real vs. Sim).
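One possible realization of such a distribution-vs-distribution comparison, not necessarily the exact metric used here, is the total variation distance between normalized EPW histograms:

```python
def histogram_distance(hist_a, hist_b):
    """Compare two EPW histograms (dicts: bin -> count) without point-level
    association: normalize each to a probability distribution over the union
    of bins and return the total variation distance (0 = identical,
    1 = disjoint supports)."""
    bins = set(hist_a) | set(hist_b)
    na = sum(hist_a.values()) or 1
    nb = sum(hist_b.values()) or 1
    return 0.5 * sum(abs(hist_a.get(b, 0) / na - hist_b.get(b, 0) / nb)
                     for b in bins)
```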
First, Real vs. Real: we measure the average error of the EPWs inferred by the DNN, which was **2.3058474 ns** of pulse-width error. However, this number alone is not the best indicator of how well the DNN behaves, so we plotted the error histogram of the inferred EPWs, as shown in fig [11].
However, if most of the input matrix and most of the output matrix are zeros, this wouldn't be the best indication, so we also measured the histogram of the non-zero EPWs, real vs. inferred on real data, as shown in fig [12].
Second, Real vs. Simulated: we plot the histogram of EPWs inferred from the simulated environment against the real EPWs of real Scala gen2 data, for a scenario that is close to the real scenario; the results are shown in fig [13].

Fig. 9: Point cloud with its inferred EPWs.

Fig. 11: Error Histogram of inferred EPWs.
The fourth evaluation metric is the class-to-class metric, where we evaluate the EPW distribution of each class in the real sensor perception vs. the DNN inference, as shown in fig [14]. The mean squared error of this class-per-class metric is **1.51032 ns** of EPW error for the None class, **2.31995 ns** for the Car class, and **2.89777 ns** for the Truck class.
The fifth evaluation metric is box-to-box, where we evaluate the scan points within a given oriented bounding box in the real scan-point perception vs. the simulated scan-point perception, as shown in fig [15].
### _Results_
Different implementations based on different DNN architectures were made, and their execution times and performances were benchmarked; the results were as follows:
Here **Unet** is the architecture proposed above; **Unet LF** is the same architecture with half the number of kernels per convolution; **tiny Unet** is the Unet architecture with only one convolution per block instead of two; **tiny Unet LF** is tiny Unet with half the number of kernels per convolution; **CAE** is a simple convolutional auto-encoder whose encoder is 3 down-sampling convolutions and whose decoder is 3 up-sampling convolutions; and **CAE LF** is the CAE architecture with half the number of kernels per convolution.
Thus, if accuracy is needed, the Unet architecture would be our choice; if faster execution time is the target, the CAE LF architecture can be used; and for a trade-off between good execution time and good accuracy, tiny Unet LF would be our choice.
### _Deployment_
One of the best added values of our proposed approach is that it acts as a plugin to any simulation environment: it takes an annotated point cloud as input and returns a scan-point perception with its noise model and its different physical properties. Demos for CarMaker, Unity and Gazebo are shown in fig [16], fig [17] and fig [18] respectively.

Fig. 12: EPW evaluation on a real trace: real EPW from the trace vs. inferred EPW from the same trace.

Fig. 13: EPW evaluation between a real trace and a simulated trace that resembles it: real EPW from the real trace vs. inferred EPW from the simulated trace.

Fig. 14: EPW class-to-class evaluation KPI; left is the None class, middle the Car class, right the Truck class.

Fig. 15: EPW vehicle (real trace) to vehicle (simulated trace) evaluation.
## VI Future Work
As future work we plan to use other methods in the histogram classifier, such as time-resolved signal modeling, and to fuse their outputs instead of the one-out-of-all selection block. We will investigate more quantitative KPIs to evaluate our model's accuracy. Also, LiDAR-camera fusion can provide color information for scan points, as shown in Fig. 19; this can provide our EPW DNN and the histogram classifier with the relation between LiDAR physical properties and the object colors of each class and material. We also consider simulating other ADAS sensors such as radar.
## References
* [1] A. Gaidon, Q. Wang, Y. Cabon, and E. Vig, \"Virtual worlds as proxy for multi-object tracking analysis,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2016, pp. 4340-4349.
* [2] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, \"Vision meets robotics: The kitti dataset,\" _The International Journal of Robotics Research_, vol. 32, no. 11, pp. 1231-1237, 2013.
* [3] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, \"Image-to-image translation with conditional adversarial networks,\" _arXiv preprint_, 2017.
* [4] J. Matas, S. James, and A. J. Davison, \"Sim-to-real reinforcement learning for deformable object manipulation,\" _arXiv preprint arXiv:1806.07851_, 2018.
* [5] O. Ronneberger, P. Fischer, and T. Brox, \"U-net: Convolutional networks for biomedical image segmentation,\" in _International Conference on Medical image computing and computer-assisted intervention_. Springer, 2015, pp. 234-241.
* [6] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb, \"Learning from simulated and unsupervised images through adversarial training,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2017, pp. 2107-2116.
Fig. 16: CarMaker deployment, where colors refer to the EPW values of each scan point.
Fig. 19: Camera LiDAR fused Colored point cloud.
Fig. 18: Summary: we learn from real traces (left image) to transfer synthetic data (middle image) to be more realistic (right image).
In this work we propose a novel Deep Learning-based LiDAR sensor model. This method models the sensor echoes, using a Deep Neural Network to model echo pulse widths learned from real data using Polar Grid Maps (PGM). We benchmark our model performance against comprehensive real sensor data and very promising results are achieved that set a baseline for future works.
arxiv-format/1807_09358v2.md | # An Approximation Algorithm for Risk-averse Submodular Optimization
Lifeng Zhou
Department of Electrical & Computer Engineering, Virginia Tech, USA.

Pratap Tokekar
Department of Electrical & Computer Engineering, Virginia Tech, USA.

Email: {lfzhou, tokekar}@vt.edu
## 1 Introduction
Combinatorial optimization problems find a variety of applications in robotics. Typical examples include:
* _Sensor placement:_ Where to place sensors to maximally cover the environment [1] or reduce the uncertainty in the environment [2]?
* _Task allocation:_ How to allocate tasks to robots to maximize the overall utility gained by the robots [3]?
* _Combinatorial auction:_ How to choose a combination of items for each player to maximize the total rewards [4]?
Algorithms for solving such problems find use in sensor placement for environment monitoring [1, 2], robot-target assignment and tracking [5, 6, 7], and informative path planning [8]. The underlying optimization problem in most cases can be written as:
\\[\\underset{\\mathcal{S}\\in\\mathcal{I},\\mathcal{S}\\subseteq\\mathcal{X}}{\\max}f(\\mathcal{S}), \\tag{1}\\]
where \\(\\mathcal{X}\\) denotes a ground set from which a subset of elements \\(S\\) must be chosen. \\(f\\) is a monotone submodular utility function [9, 10]. Submodularity is the property of diminishing returns. Many information-theoretic measures, such as mutual information [2], and geometric measures such as the visible area [11], are known to be submodular. \\(\\mathcal{I}\\) denotes a matroidal constraint [9, 10]. Matroids are a powerful combinatorial tool that can represent constraints on the solution set, e.g., cardinality constraints ("place no more than \\(k\\) sensors") and connectivity constraints ("the communication graph of the robots must be connected") [12]. The objective of this problem is to find a set \\(\\mathcal{S}\\) satisfying a matroidal constraint \\(\\mathcal{I}\\) and maximizing the utility \\(f(\\mathcal{S})\\). The general form of this problem is NP-complete. However, a greedy algorithm yields a constant factor approximation guarantee [9, 10].
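As a concrete illustration of Eq. (1) under the simplest matroid, a cardinality constraint, the greedy algorithm of [9, 10] can be sketched as follows; the coverage utility and element names are illustrative:

```python
def greedy_max(ground_set, f, k):
    """Greedy maximization of a monotone submodular set function f under a
    cardinality constraint |S| <= k (the simplest matroid): repeatedly add
    the element with the largest marginal gain."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for x in ground_set - S:
            gain = f(S | {x}) - f(S)
            if gain > best_gain:
                best, best_gain = x, gain
        if best is None:  # no element adds positive value
            break
        S.add(best)
    return S

# Example: a coverage utility -- each element covers a set of targets.
cover = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
f = lambda S: len(set().union(*[cover[x] for x in S]))
```

For a cardinality constraint this greedy rule achieves the classic \\((1-1/e)\\) approximation; for a general matroid it gives a \\(1/2\\) guarantee.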
In practice, sensors can fail or get compromised [13] or robots may not know the exact positions of the targets [14]. Hence, the utility \\(f(\\mathcal{S})\\) is not necessarily deterministic but can have uncertainty. Our main contribution is to extend the traditional formulation given in Eq. 1 to also account for the uncertainty in the actual cost function. We model the uncertainty by assuming that the utility function is of the form \\(f(\\mathcal{S},y)\\) where \\(\\mathcal{S}\\in\\mathcal{X}\\) is the decision variable and \\(y\\in\\mathcal{Y}\\) represents a random variable which is independent of \\(\\mathcal{S}\\). We focus on the case where \\(f(\\mathcal{S},y)\\) is monotone submodular in \\(\\mathcal{S}\\in\\mathcal{X}\\) and integrable in \\(y\\).
The traditional way of stochastic optimization is to use the expected utility as the objective function: \\(\\max_{\\mathcal{S}\\in\\mathcal{I},\\mathcal{S}\\in\\mathcal{X}}\\mathbb{E}_{y}[f( \\mathcal{S},y)]\\). Since the sum of the monotone submodular functions is monotone submodular, \\(\\mathbb{E}_{y}[f(\\mathcal{S},y)]\\) is still monotone submodular in \\(\\mathcal{S}\\). Thus, the greedy algorithm still retains its constant-factor performance guarantee [9, 10]. Examples of this approach include influence maximization [15], moving target detection and tracking [14], and robot assignment with travel-time uncertainty [16].
While optimizing the expected utility has its uses, it also has its pitfalls. Consider the example of mobility-on-demand where two self-driving vehicles, \\(v_{1}\\) and \\(v_{2}\\), are available to pick up the passengers at a demand location (Fig. 1). \\(v_{1}\\) is closer to the demand location, but it needs to cross an intersection where it may need to stop and wait. \\(v_{2}\\) is further from the demand location but there is no intersection along the path. The travel time for \\(v_{1}\\) follows a bimodal distribution (with and without traffic stop) whereas that for \\(v_{2}\\) follows a unimodal distribution with a higher mean but lower uncertainty. Clearly, if the passenger uses the expected travel
Figure 1: Mobility on demand with travel time uncertainty of self-driving vehicles.
time as the objective, they would choose \\(v_{1}\\). However, they risk waiting a much longer time, i.e., \\(17\\sim 20\\,min\\), about half of the time. A more risk-averse passenger would choose \\(v_{2}\\), which has a higher expected waiting time of \\(16\\,min\\) but a lower risk of waiting longer.
Thus, in these scenarios, it is natural to go beyond expectation and focus on a risk-averse measure. One popular coherent risk measure is _Conditional-Value-at-Risk_ (CVaR) [17, 18]. CVaR takes a risk level \\(\\alpha\\) which is the probability of the worst \\(\\alpha\\)-tail cases. Loosely speaking, maximizing CVaR is equivalent to maximizing the expectation of the worst \\(\\alpha\\)-tail scenarios.1 This risk-averse decision is rational especially when the failures can lead to unrecoverable consequences, such as a sensor failure.
Footnote 1: We formally review CVaR and other related concepts in Section 2.1
**Related work**. Yang and Chakraborty studied a chance-constrained combinatorial optimization problem that takes into account the risk in multi-robot assignment [19]. They later extended this to knapsack problems [20]. They solved the problem by transforming it to a risk-averse problem with mean-variance measure [21]. Chance-constrained optimization is similar to optimizing the Value-at-Risk (VaR), which is another popular risk measure in finance [22]. However, Majumdar and Pavone argued that CVaR is a better measure to quantify risk than VaR or mean-variance based on six proposed axioms in the context of robotics [23].
Several works have focused on optimizing CVaR. In their seminal work [18], Rockafellar and Uryasev presented an algorithm for CVaR minimization for reducing the risk in financial _portfolio optimization_ with a large number of instruments. Note that, in portfolio optimization, we select a distribution over available decision variables, instead of selecting a single one. Later, they showed the advantage of optimizing CVaR for general loss distributions in finance [24].
When the utility is a discrete submodular set function, i.e., \\(f(\\mathcal{S},y)\\), Maehara presented a negative result for maximizing CVaR [25]-- there is no polynomial time multiplicative approximation algorithm for this problem under some reasonable assumptions in computational complexity. To avoid this difficulty, Ohsaka and Yoshida in [26] used the same idea from portfolio optimization and proposed a method of selecting a distribution over available sets rather than selecting a single set, and gave a provable guarantee. Following this line, Wilder considered a CVaR maximization of a continuous submodular function instead of the submodular set functions [27]. They gave a \\((1-1/e)\\)-approximation algorithm for continuous submodular functions. They also evaluated the algorithm for discrete submodular functions using portfolio optimization [26].
**Contributions**. We focus on the problem of selecting a single set, similar to [25], to maximize CVaR rather than portfolio optimization [26, 27]. This is because we are motivated by applications where a one-shot decision (placing sensors and assigning vehicles) must be taken. Our contributions are as follows:
* We propose the Sequential Greedy Algorithm (SGA) which uses the deterministic greedy algorithm [9, 10] as a subroutine to find the maximum value of CVaR (Algorithm 1).
* We prove that the solution found by SGA is within a constant factor of the optimal performance along with an additive term which depends on the optimal value. We also prove that SGA runs in polynomial time (Theorem 1) and the performance improves as the running time increases.
* We demonstrate the utility of the proposed CVaR maximization problem through two case studies (Section 3.2). We evaluate the performance of SGA through simulations (Section 5).
**Organization of rest of the paper**. We give the necessary background knowledge for the rest of the paper in Section 2. We formulate the CVaR submodular maximization problem with two case studies in Section 3. We present SGA along with the analysis of its computational complexity and approximation ratio in Section 4. We illustrate the performance of SGA on the two case studies in Section 5. We conclude the paper in Section 6.
## 2 Background and Preliminaries
We start by defining the conventions used in the paper.
Calligraphic font denotes a set (e.g., \\(\\mathcal{A}\\)). Given a set \\(\\mathcal{A}\\), \\(2^{\\mathcal{A}}\\) denotes its power set. \\(|\\mathcal{A}|\\) denotes the cardinality of \\(\\mathcal{A}\\). Given a set \\(\\mathcal{B}\\), \\(\\mathcal{A}\\setminus\\mathcal{B}\\) denotes the set of elements in \\(\\mathcal{A}\\) that are not in \\(\\mathcal{B}\\). \\(\\Pr[\\cdot]\\) denotes the probability of an event and \\(\\mathbb{E}[\\cdot]\\) denotes the expectation of a random variable. \\(\\lceil x\\rceil=\\min\\{n\\in\\mathbb{Z}|x\\leq n\\}\\) where \\(\\mathbb{Z}\\) denotes the set of integers.
Next, we give the background on set functions (in the appendix file) and risk measures.
### Risk measures
Let \\(f(\\mathcal{S},y)\\) be a utility function with decision set \\(\\mathcal{S}\\) and the random variable \\(y\\). For each \\(\\mathcal{S}\\), the utility \\(f(\\mathcal{S},y)\\) is also a random variable with a distribution induced by that of \\(y\\). First, we define the Value-at-Risk at risk level \\(\\alpha\\in(0,1]\\).
**Value at Risk:**
\\[\\text{VaR}_{\\alpha}(\\mathcal{S})=\\inf\\{\\tau\\in\\mathbb{R},\\Pr[f(\\mathcal{S},y) \\leq\\tau]\\geq\\alpha\\}. \\tag{2}\\]
Thus, \\(\\text{VaR}_{\\alpha}(\\mathcal{S})\\) denotes the left endpoint of the \\(\\alpha\\)-quantile(s) of the random variable \\(f(\\mathcal{S},y)\\). The Conditional-Value-at-Risk is the expectation of this set of \\(\\alpha\\)-worst cases of \\(f(\\mathcal{S},y)\\), defined as:
**Conditional Value at Risk:**

\\[\\text{CVaR}_{\\alpha}(\\mathcal{S})=\\operatorname*{\\mathbb{E}}_{y}[f(\\mathcal{S},y)|f(\\mathcal{S},y)\\leq\\text{VaR}_{\\alpha}(\\mathcal{S})]. \\tag{3}\\]

Figure 2: An illustration of risk measures: VaR and CVaR.
Fig. 2 shows an illustration of \\(\\text{VaR}_{\\alpha}(\\mathcal{S})\\) and \\(\\text{CVaR}_{\\alpha}(\\mathcal{S})\\). \\(\\text{CVaR}_{\\alpha}(\\mathcal{S})\\) is more popular than \\(\\text{VaR}_{\\alpha}(\\mathcal{S})\\) since it has better properties [18], such as _coherence_[28].
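A minimal numerical sketch of Eqs. (2)-(3): given Monte Carlo samples of \\(f(\\mathcal{S},y)\\) for a fixed \\(\\mathcal{S}\\), the empirical VaR is the left endpoint of the \\(\\alpha\\)-quantile and the empirical CVaR is the mean of the worst \\(\\alpha\\)-fraction of outcomes:

```python
import math

def empirical_var_cvar(samples, alpha):
    """Empirical VaR_alpha and CVaR_alpha of a utility from its samples.
    VaR is the largest value in the alpha-tail; CVaR is the tail's mean."""
    xs = sorted(samples)
    k = max(1, math.ceil(alpha * len(xs)))  # size of the alpha-tail
    tail = xs[:k]
    return tail[-1], sum(tail) / k  # (VaR, CVaR)
```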
When optimizing \\(\\text{CVaR}_{\\alpha}(\\mathcal{S})\\), we usually resort to an auxiliary function:
\\[H(\\mathcal{S},\\tau)=\\tau-\\frac{1}{\\alpha}\\mathbb{E}[(\\tau-f(\\mathcal{S},y))_{+ }].\\]
We know that optimizing \\(\\text{CVaR}_{\\alpha}(\\mathcal{S})\\) over \\(\\mathcal{S}\\) is equivalent to optimizing the auxiliary function \\(H(\\mathcal{S},\\tau)\\) over \\(\\mathcal{S}\\) and \\(\\tau\\)[18]. The following lemmas give useful properties of the auxiliary function \\(H(\\mathcal{S},\\tau)\\).
Lemma 1: _If \\(f(\\mathcal{S},y)\\) is normalized, monotone increasing and submodular in set \\(\\mathcal{S}\\) for any realization of \\(y\\), the auxiliary function \\(H(\\mathcal{S},\\tau)\\) is monotone increasing and submodular, but not necessarily normalized in set \\(\\mathcal{S}\\) for any given \\(\\tau\\)._
We provide the proofs for all the Lemmas and Theorem in the appendix file.
Lemma 2: _The auxiliary function \\(H(\\mathcal{S},\\tau)\\) is concave in \\(\\tau\\) for any given set \\(\\mathcal{S}\\)._
Lemma 3: _For any given set \\(\\mathcal{S}\\), the gradient of the auxiliary function \\(H(\\mathcal{S},\\tau)\\) with respect to \\(\\tau\\) fulfills: \\(-(\\frac{1}{\\alpha}-1)\\leq\\frac{\\partial H(\\mathcal{S},\\tau)}{\\partial\\tau}\\leq 1\\)._
## 3 Problem Formulation and Case Studies
We first formulate the CVaR submodular maximization problem and then present two applications which we use as case studies.
### Problem Formulation
**CVaR Submodular Maximization**: We consider the problem of maximizing \\(\\text{CVaR}_{\\alpha}(\\mathcal{S})\\) over a decision set \\(\\mathcal{S}\\subseteq\\mathcal{X}\\) under a matroid constraint \\(\\mathcal{S}\\in\\mathcal{I}\\). We know that maximizing \\(\\text{CVaR}_{\\alpha}(\\mathcal{S})\\) over \\(\\mathcal{S}\\) is equivalent to maximizing the auxiliary function \\(H(\\mathcal{S},\\tau)\\) over \\(\\mathcal{S}\\) and \\(\\tau\\)[18]. Thus, we propose the maximization problem as:
**Problem 1**.: \\[\\text{max}\\ \\ \\tau-\\frac{1}{\\alpha}\\mathbb{E}[(\\tau-f(\\mathcal{S},y))_{+ }]\\] \\[s.t.\\ \\ \\mathcal{S}\\in\\mathcal{I},\\mathcal{S}\\subseteq\\mathcal{X}, \\tau\\in[0,\\Gamma],\\] (4)
where \\(\\Gamma\\) is the upper bound of the parameter \\(\\tau\\). Problem 1 gives a risk-averse version of maximizing submodular set functions.
### Case Studies
Risk-averse submodular maximization has many applications, as discussed in Section 1. Here we describe two specific applications which we will use in the simulations.
#### 3.2.1 Resilient Mobility-on-Demand
Consider a mobility-on-demand problem where we assign \\(R\\) vehicles to \\(N\\) demand locations under arrival-time uncertainty. An example is shown in Fig. 3 where seven self-driving vehicles must be assigned to three demand locations to pick up passengers. We follow the same constraint setting as in [16]-- each vehicle can be assigned to at most one demand but multiple vehicles can be assigned to the same demand. Only the vehicle that arrives first is chosen for picking up the passengers. Note that the advantage of the redundant assignment to each demand is that it counters the effect of uncertainty and reduces the waiting time at demand locations [16]. This may be too conservative for consumer mobility-on-demand services but can be crucial for urgent and time-critical tasks such as delivering medical supplies [29].
Assume the arrival time for a robot to reach a demand location is a random variable whose distribution depends on the mean arrival time. For example, a shorter path may pass through many intersections, which leads to high uncertainty in arrival time, while a longer road (possibly a highway) has lower arrival-time uncertainty. Note that for each demand location there is a set of robots assigned to it, and the vehicle selected at the demand location is the one that arrives first. This problem is then a minimization problem, since we would like to minimize the arrival time at all demand locations. We convert it into a maximization problem by taking the reciprocal of the arrival time: specifically, we use the arrival _efficiency_, the reciprocal of the arrival time. Instead of selecting the vehicle with minimum arrival time at a demand location, we select the vehicle with maximum arrival efficiency. The arrival efficiency is also a random variable, with a distribution depending on the mean arrival efficiency. Denote the arrival efficiency for robot \\(j\\in\\{1,\\ldots,R\\}\\) arriving at demand location \\(i\\in\\{1,\\ldots,N\\}\\) as \\(e_{ij}\\). Denote the assignment utility as the arrival efficiency at all locations, that is,
\\[f(\\mathcal{S},y)=\\sum_{i\\in N}\\max_{j\\in\\mathcal{S}_{i}}e_{ij} \\tag{5}\\]
with \\(\\bigcup_{i=1}^{N}\\mathcal{S}_{i}=\\mathcal{S}\\) and \\(\\mathcal{S}_{i}\\cap\\mathcal{S}_{k}=\\emptyset,\\ i,k\\in\\{1,\\cdots,N\\}\\). \\(\\mathcal{S}_{i}\\cap\\mathcal{S}_{k}=\\emptyset\\) indicates that the selected set \\(\\mathcal{S}\\) satisfies a partition matroid constraint, \\(\\mathcal{S}\\in\\mathcal{I}\\), which represents that each robot can be assigned to at most one demand. The assignment utility \\(f(\\mathcal{S},y)\\) is monotone submodular in \\(\\mathcal{S}\\) due to the "max" function. \\(f(\\mathcal{S},y)\\) is normalized since \\(f(\\emptyset,y)=0\\). Here, we regard the uncertainty as a risk. Our risk-averse assignment problem is a trade-off between efficiency and uncertainty. Our goal is to maximize the total efficiency at the demand locations while accounting for the risk from uncertainty.

Figure 3: Mobility-on-demand with multiple demands and multiple self-driving vehicles.
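The utility of Eq. (5) and its partition-matroid constraint can be sketched as follows for one realization \\(y\\) of the efficiencies; the dictionary interface is an illustrative assumption:

```python
def assignment_utility(partition, efficiencies):
    """Assignment utility of Eq. (5): partition maps each demand i to the
    set S_i of vehicles assigned to it; efficiencies[(i, j)] is a sampled
    arrival efficiency e_ij. Only the fastest (highest-efficiency) assigned
    vehicle counts per demand."""
    # Partition matroid check: each vehicle appears in at most one S_i.
    assigned = [j for S_i in partition.values() for j in S_i]
    assert len(assigned) == len(set(assigned)), "each vehicle used at most once"
    return sum(max((efficiencies[(i, j)] for j in S_i), default=0.0)
               for i, S_i in partition.items())
```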
#### 4.2.2 Robust Environment Monitoring
Consider an environment monitoring problem where we monitor part of a campus with a group of ground sensors (Fig. 4).
Given a set of \\(N\\) candidate positions \\(\\mathcal{X}\\), we would like to choose a subset of \\(M\\) positions \\(\\mathcal{S}\\subseteq\\mathcal{X},M\\leq N\\), to place visibility-based sensors to maximally cover the environment. The visibility regions of the ground sensors are obstructed by the buildings in the environment (Fig. 4-(b)). Consider a scenario where the probability of failure of a sensor depends on the area it can cover. That is, a sensor covering a larger area has a larger risk of failure associated with it. This may be due to the fact that the same number of pixels are used to cover a larger area and therefore, each pixel covers proportionally a smaller footprint. As a result, the sensor risks missing out on detecting small objects.
Denote the probability of success and the visibility region for each sensor \(i,i\in\{1,\cdots,N\}\) as \(p_{i}\) and \(v_{i}\), respectively. Thus, the polygon each sensor \(i\) monitors is also a random variable. Denote this random polygon as \(A_{i}\) and denote the
Figure 4: Campus monitoring by using a set of sensors with visibility regions.
selection utility as the joint coverage area of a set of sensors, \\(\\mathcal{S}\\), that is,
\\[f(\\mathcal{S},y)=\\text{area}(\\bigcup_{i=1:M}A_{i}),\\ i\\in\\mathcal{S},\\mathcal{S} \\subseteq\\mathcal{I}. \\tag{6}\\]
The selection utility \\(f(\\mathcal{S},y)\\) is monotone submodular in \\(\\mathcal{S}\\) due to the overlapping area. \\(f(\\mathcal{S},y)\\) is normalized since \\(f(\\emptyset,y)=0\\). Here, we regard the sensor failure as a risk. Our robust environment monitoring problem is a trade-off between area coverage and sensor failure. Our goal is to maximize the joint-area covered while considering the risk from sensor failure.
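A minimal sketch of the coverage utility in Eq. (6), under the simplifying assumption that each sensor's visible polygon \(A_{i}\) is approximated by a set of grid cells (the paper itself uses exact visibility polygons):

```python
# Sketch of the coverage utility in Eq. (6). Each sensor's visible region
# is approximated by a set of grid cells; this discretization is our own
# assumption for illustration.

def coverage_utility(selected, cells):
    """selected: iterable of sensor indices (the set S).
    cells: cells[i] is the set of grid cells sensor i covers in this
    realization (an empty set if the sensor failed).
    The union makes the marginal gain of an additional sensor
    non-increasing (monotone submodular), and f(empty) = 0."""
    covered = set()
    for i in selected:
        covered |= cells[i]
    return len(covered)  # area in grid-cell units
```

Overlapping regions are counted once, which is exactly why adding a sensor to a larger set helps less: the utility is monotone submodular.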
## 4 Algorithm and Analysis
We present the Sequential Greedy Algorithm (SGA) for solving Problem 1 by leveraging the useful properties of the auxiliary function \(H(\mathcal{S},\tau)\). The pseudo-code is given in Algorithm 1. SGA mainly consists of searching for the appropriate value of \(\tau\) by solving a subproblem for each fixed \(\tau\) under a matroid constraint. Even for a fixed \(\tau\), the subproblem of optimizing the auxiliary function is NP-hard. Nevertheless, we can employ the greedy algorithm for the subproblem and apply it sequentially while searching over all values of \(\tau\). We explain each stage in detail next.
### Sequential Greedy Algorithm
There are four stages in SGA:
a) Initialization (line 1):Algorithm 1 defines a storage set \\(\\mathcal{M}\\) and initializes it to be the empty set. Note that, for each specific \\(\\tau\\), we can use the greedy algorithm to obtain a near-optimal solution \\(\\mathcal{S}^{G}\\) based on the monotonicity and submodularity of the auxiliary function \\(H(\\mathcal{S},\\tau)\\). \\(\\mathcal{M}\\) stores all the \\((\\mathcal{S}^{G},\\tau)\\) pairs when searching all the possible values of \\(\\tau\\).
b) Searching for \\(\\tau\\) (**for** loop in lines 2-10):We use a user-defined separation \\(\\Delta\\) (line 3) to sequentially search for all possible values of \\(\\tau\\) within \\([0,\\Gamma]\\). \\(\\Gamma\\) is an upper bound on \\(\\tau\\) and can be set by the user based on the specific problem at hand. We show how to find \\(\\Gamma\\) for the specific cases in Section 5.
c) Greedy algorithm (lines 4-8):For a specific \\(\\tau\\), say \\(\\tau_{i}\\), we use the greedy approach to choose set \\(\\mathcal{S}^{G}_{i}\\). We first initialize set \\(\\mathcal{S}^{G}_{i}\\) to be the empty set (line 4). Under a matroid constraint, \\(\\mathcal{S}^{G}_{i}\\in\\mathcal{I}\\) (line 5), we add a new element \\(s\\) which gives the maximum marginal gain of \\(H(\\mathcal{S}^{G}_{i},\\tau_{i})\\) (line 6) into set \\(\\mathcal{S}^{G}_{i}\\) (line 7) in each round.
d) Find the best pair (line 11):Based on the collection of pairs \(\mathcal{M}\) (line 9), we pick the pair \((\mathcal{S}^{G}_{i},\tau_{i})\in\mathcal{M}\) that maximizes \(H(\mathcal{S}^{G}_{i},\tau_{i})\) as the final solution \(\mathcal{S}^{G}\), and denote the corresponding value of \(\tau\) by \(\tau^{G}\).
Designing an Oracle:Note that an oracle \(\mathcal{O}\) is used to calculate the value of \(H(\mathcal{S},\tau)\). We use a sampling-based method to approximate this oracle. Specifically, we sample \(n_{s}\) realizations \(\tilde{y}(s)\) from the distribution of \(y\) and approximate \(H(\mathcal{S},\tau)\) as \(H(\mathcal{S},\tau)\approx\tau-\frac{1}{n_{s}\alpha}\sum_{\tilde{y}}[(\tau-f( \mathcal{S},\tilde{y}))_{+}]\). According to [26, Lemma 4.1], if the number of samples is \(n_{s}=O(\frac{1}{\epsilon^{2}}\log\frac{1}{\delta}),\delta,\epsilon\in(0,1)\), the CVaR approximation error is less than \(\epsilon\) with probability at least \(1-\delta\).
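The four stages above, together with the sampled oracle, can be sketched as follows. This is a minimal illustration assuming a generic monotone submodular utility `f(S, y)`, a sampler for `y`, and a matroid membership test; all names are our own assumptions, not the paper's implementation:

```python
def sga(ground_set, f, sample_y, is_independent, alpha, gamma, delta, n_s):
    """Sketch of SGA: greedily maximize the sampled auxiliary function
    H(S, tau) for each tau in {0, delta, 2*delta, ..., gamma} and keep
    the best (S, tau) pair found (stages a-d of Algorithm 1)."""
    samples = [sample_y() for _ in range(n_s)]

    def H(S, tau):  # sampled CVaR auxiliary function (the oracle O)
        return tau - sum(max(tau - f(S, y), 0.0) for y in samples) / (n_s * alpha)

    best_S, best_tau, best_val = None, None, float("-inf")
    tau = 0.0
    while tau <= gamma:                           # b) sweep over tau
        S = set()
        while True:                               # c) greedy under the matroid
            candidates = [s for s in ground_set - S if is_independent(S | {s})]
            if not candidates:
                break
            gain, s_star = max((H(S | {s}, tau) - H(S, tau), s) for s in candidates)
            if gain <= 0.0:
                # early stop on zero gain; harmless since H is monotone in S
                break
            S.add(s_star)
        if H(S, tau) > best_val:                  # d) keep the best pair
            best_S, best_tau, best_val = S, tau, H(S, tau)
        tau += delta
    return best_S, best_tau, best_val
```

With a deterministic utility and \(\alpha=1\), \(H(\mathcal{S},\tau)=\min(\tau,f(\mathcal{S}))\), so the sweep over \(\tau\) recovers plain greedy submodular maximization, which is a useful sanity check.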
### Performance Analysis of SGA
Theorem 4.1: _Let \(\mathcal{S}^{G}\) and \(\tau^{G}\) be the set and the scalar chosen by SGA, and let \(\mathcal{S}^{\star}\) and \(\tau^{\star}\) be the set and the scalar chosen by an optimal algorithm (OPT). We have_
\\[H(\\mathcal{S}^{G},\\tau^{G})\\geq\\frac{1}{1+k_{f}}(H(\\mathcal{S}^{\\star},\\tau^{ \\star})-\\Delta)-\\frac{k_{f}}{1+k_{f}}\\Gamma(\\frac{1}{\\alpha}-1), \\tag{7}\\]
_where \\(k_{f}\\in[0,1]\\) is the curvature of the \\(H(\\mathcal{S},\\tau)\\) in set \\(\\mathcal{S}\\). Please see the detailed definition of the curvature in the appendix. The computational time is \\(O(\\lceil\\frac{\\Gamma}{2}\\rceil|\\mathcal{X}|^{2}n_{s})\\) where \\(\\Gamma\\) and \\(\\Delta\\) are the upper bound on \\(\\tau\\) and searching separation parameter, \\(|\\mathcal{X}|\\) is the cardinality of the ground set \\(\\mathcal{X}\\) and \\(n_{s}\\) is the number of the samplings used by the oracle._
SGA gives \\(1/(1+k_{f})\\) approximation of the optimal with two approximation errors. One approximation error comes from the searching separation \\(\\Delta\\). We can make this error very small by setting \\(\\Delta\\) to be close to zero with the cost of increasing the computational time. The second approximation error comes from the additive term,
\\[H^{\\rm add}=\\frac{k_{f}}{1+k_{f}}\\Gamma(\\frac{1}{\\alpha}-1), \\tag{8}\\]
which depends on the curvature \(k_{f}\) and the risk level \(\alpha\). When the risk level \(\alpha\) is very small, this error can be large, which means SGA may not provide a good performance guarantee. However, if the function \(H(\mathcal{S},\tau)\) is close to modular in \(\mathcal{S}\) (\(k_{f}\to 0\)), this error is close to zero. Notably, when \(k_{f}\to 0\) and \(\Delta\to 0\), SGA gives a near-optimal solution (\(H(\mathcal{S}^{G},\tau^{G})\to H(\mathcal{S}^{\star},\tau^{\star})\)).
Next, we prove Theorem 4.1. We start with the proof of approximation ratio, then go to the analysis of the computational time. We first present the necessary lemmas for the proof of the approximation ratio.
Lemma 4: _Let \\(\\mathcal{S}_{i}^{\\star}\\) be the optimal set for a specific \\(\\tau_{i}\\) that maximizes \\(H(\\mathcal{S},\\tau=\\tau_{i})\\). By sequentially searching for \\(\\tau\\in[0,\\Gamma]\\) with a separation \\(\\Delta\\), we have_
\\[\\underset{i\\in\\{0,1,\\cdots,\\lceil\\frac{\\Gamma}{\\Delta}\\rceil\\}}{\\text{max}}H( \\mathcal{S}_{i}^{\\star},\\tau_{i})\\geq H(\\mathcal{S}^{\\star},\\tau^{\\star})-\\Delta. \\tag{9}\\]
Next, we build the relationship between the set selected by the greedy approach, \\(\\mathcal{S}_{i}^{G}\\), and the optimal set \\(\\mathcal{S}_{i}^{\\star}\\) for \\(\\tau_{i}\\).
Lemma 5: _Let \(\mathcal{S}_{i}^{G}\) and \(\mathcal{S}_{i}^{\star}\) be the sets selected by the greedy algorithm and by the optimal approach, respectively, for a fixed \(\tau_{i}\) when maximizing \(H(\mathcal{S},\tau=\tau_{i})\). We have_
\\[H(\\mathcal{S}_{i}^{G},\\tau_{i})\\geq\\frac{1}{1+k_{f}}H(\\mathcal{S}_{i}^{\\star},\\tau_{i})-\\frac{k_{f}}{1+k_{f}}\\Gamma(\\frac{1}{\\alpha}-1). \\tag{10}\\]
_where \\(k_{f}\\) is the curvature of the function \\(H(\\mathcal{S},\\tau)\\) in \\(\\mathcal{S}\\) with a matroid constraint \\(\\mathcal{I}\\). \\(\\Gamma\\) is the upper bound of parameter \\(\\tau\\)._
We leverage Lemma 4 and Lemma 5 to prove the approximation ratio in Theorem 4.1.
## 5 Simulations
We perform numerical simulations to verify the performance of SGA in resilient mobility-on-demand and robust environment monitoring. Our code is available online.2
### Resilient Mobility-on-Demand under Arrival Time Uncertainty
We consider assigning \(R=6\) supply vehicles to \(N=4\) demand locations in a 2D environment. The positions of the demand locations and the supply vehicles are randomly generated within a square environment of 10 units side length. Denote the Euclidean distance between demand location \(i\in\{1,\cdots,N\}\) and vehicle position \(j\in\{1,\cdots,R\}\) as \(d_{ij}\). Based on the discussion of the arrival efficiency distribution in Section 3.2, we assume each arrival efficiency \(e_{ij}\) has a uniform distribution with its mean proportional to the reciprocal of the distance between demand \(i\) and vehicle \(j\). Furthermore, the uncertainty is higher if the mean efficiency is higher. Note that the algorithm can handle other, more complex distributions of arrival times; we use a uniform distribution for ease of exposition. Specifically, denote the mean of \(e_{ij}\) as \(\bar{e}_{ij}\) and set \(\bar{e}_{ij}=10/d_{ij}\). We model the arrival efficiency as uniformly distributed as follows:
\\[e_{ij}=[\\bar{e}_{ij}-\\bar{e}_{ij}^{2.5}/\\max\\{\\bar{e}_{ij}\\},\\bar{e}_{ij}+\\bar {e}_{ij}^{2.5}/\\max\\{\\bar{e}_{ij}\\}],\\]
where \\(\\max\\{\\bar{e}_{ij}\\}=\\max_{i,j}e_{ij},i\\in\\{1, ,N\\},j\\in\\{1, ,R\\}\\).
From the assignment utility function (Eq. 5), for any realization of \\(y\\), say \\(\\tilde{y}\\),
\\[f(\\mathcal{S},\\tilde{y}):=\\sum_{i\\in N}\\max_{j\\in\\mathcal{S}_{i}}\\tilde{e}_{ij}\\]
where \\(\\tilde{e}_{ij}\\) indicates one realization of \\(e_{ij}\\). If all vehicle-demand pairs are independent from each other, \\(y\\) models a multi-independent uniform distribution. We sample \\(n_{s}\\) times from underlying multi-independent uniform distribution of \\(y\\) and approximate the auxiliary function \\(H(\\mathcal{S},\\tau)\\) as
\\[H(\\mathcal{S},\\tau)\\approx\\tau-\\frac{1}{n_{s}\\alpha}\\sum_{\\tilde{y}}[(\\tau- \\sum_{i\\in N}\\max_{j\\in\\mathcal{S}_{i}}\\tilde{e}_{ij})_{+}].\\]
We set the upper bound of the parameter \(\tau\) as \(\Gamma=N\max\{\tilde{e}_{ij}\},\ i\in\{1,\cdots,N\},j\in\{1,\cdots,R\}\), to make sure \(\Gamma-f(\mathcal{S},y)\geq 0\). We set the searching separation for \(\tau\) as \(\Delta=1\).
After receiving the pair \\((\\mathcal{S}^{G},\\tau^{G})\\) from SGA, we plot the value of \\(H(\\mathcal{S}^{G},\\tau^{G})\\) and \\(H(\\mathcal{S}^{G},\\tau)\\) with respect to different risk levels \\(\\alpha\\) in Fig. 5. Fig. 5-(a) shows that \\(H(\\mathcal{S}^{G},\\tau^{G})\\) increases when \\(\\alpha\\) increases. This suggests that SGA correctly maximizes \\(H(\\mathcal{S},\\tau)\\). Fig. 5-(b) shows that \\(H(\\mathcal{S}^{G},\\tau)\\) is concave or piecewise concave, which is consistent with the property of \\(H(\\mathcal{S},\\tau)\\).
We plot the distribution of assignment utility,
\\[f(\\mathcal{S}^{G},y)=\\sum_{i\\in N}\\max_{j\\in\\mathcal{S}^{G}_{i}}e_{ij}\\]
in Fig. 6 by sampling \(n_{s}=1000\) times from the underlying distribution of \(y\). \(\mathcal{S}^{G}_{i}\) is the set of vehicles assigned to demand \(i\) by SGA and \(\mathcal{S}^{G}=\cup_{i=1}^{N}\mathcal{S}^{G}_{i}\). When the risk level \(\alpha\) is small, vehicle-demand pairs with low efficiencies (equivalently, low uncertainties) are selected. This is because a small risk level indicates the assignment is conservative and only willing to take a little risk; thus, pairs with lower efficiency and lower uncertainty are assigned to avoid the risk induced by the uncertainty. In contrast, when \(\alpha\) is large, the assignment is allowed to take more risk to gain more assignment utility, and vehicle-demand pairs with high efficiencies (equivalently, high uncertainties) are selected. Note that, when the risk level is close to zero, SGA may not give a correct solution because of a large approximation error (Fig. 7). However, this error decreases quickly to zero when the risk level increases.
We also compare SGA using the CVaR measure with the greedy algorithm using the expectation, i.e., a risk-neutral measure [16], in Fig. 8. Note that the risk-neutral measure is a special case of the CVaR\({}_{\alpha}(\mathcal{S})\) measure with \(\alpha=1\). We give an illustrative example of the assignment by SGA for two extreme risk levels, \(\alpha=0.1\) and \(\alpha=1\). When \(\alpha\) is small (\(\alpha=0.1\)), the assignment is conservative and thus farther vehicles (with lower efficiency and lower uncertainty) are assigned to each demand (Fig. 8-(a)). In contrast, when \(\alpha=1\), nearby vehicles (with higher efficiency and higher uncertainty) are selected for the demands (Fig. 8-(b)). Even though the mean value of the assignment utility distribution is larger at \(\alpha=1\) than at \(\alpha=0.1\), it is exposed to the risk of receiving lower utility, since the mean-std bar at \(\alpha=1\) has a smaller left endpoint than the mean-std bar at \(\alpha=0.1\) (Fig. 8-(c)).
### Robust Environment Monitoring
We consider selecting \\(M=4\\) locations from \\(N=8\\) candidate locations to place sensors in the environment (Fig. 4). Denote the area of the free space as \\(v^{\\text{free}}\\). The positions of \\(N\\) candidate locations are randomly generated within the free
Figure 5: The value of \\(H(\\mathcal{S},\\tau)\\) by SGA with respect to different risk confidence levels.
space \\(v^{\\text{free}}\\). We calculate the visibility region for each sensor \\(v_{i}\\) by using the VisiLibity library [30]. Based on the sensor model discussed in Section 3.2, we set the probability of success for each sensor \\(i\\) as
\\[p_{i}=1-v_{i}/v^{\\text{free}},\\]
and model the working of each sensor as a Bernoulli distribution with \(p_{i}\) probability of success and \(1-p_{i}\) probability of failure. Thus the random polygon monitored by each sensor, \(A_{i}\), follows the distribution
\\[\\begin{cases}\\Pr[A_{i}=v_{i}]=p_{i},\\\\ \\Pr[A_{i}=0]=1-p_{i}.\\end{cases} \\tag{11}\\]
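One realization of the sensor model in Eq. (11) can be drawn as follows (a sketch under the stated Bernoulli model; the function name and example areas are our own assumptions):

```python
import random

# Sketch of the sensor failure model (Eq. 11): sensor i succeeds with
# probability p_i = 1 - v_i / v_free; its monitored region A_i has area
# v_i on success and 0 on failure. We draw areas rather than polygons
# for simplicity.

def sample_sensor_areas(v, v_free):
    """v: list of visibility areas; v_free: area of the free space.
    Returns one realization of the monitored areas A_1, ..., A_N."""
    return [v_i if random.random() < 1.0 - v_i / v_free else 0.0
            for v_i in v]
```

Over many draws, the empirical mean of \(A_{i}\) approaches \(p_{i}v_{i}\), which is the sanity check used below.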
From the selection utility function (Eq. 6), for any realization of \(y\), say \(\tilde{y}\),
\\[f(\\mathcal{S},y)=\\text{area}(\\bigcup_{i=1:M}\\tilde{A}_{i}),\\]
where \\(\\tilde{A}_{i}\\) indicates one realization of \\(A_{i}\\) by sampling \\(y\\). If all sensors are independent of each other, we can model the working of a set of sensors as a multi-independent Bernoulli distribution. We sample \\(n_{s}=1000\\) times from the underlying multi-independent Bernoulli distribution of \\(y\\) and approximate the auxiliary function \\(H(\\mathcal{S},\\tau)\\) as
\\[H(\\mathcal{S},\\tau)\\approx\\tau-\\frac{1}{n_{s}\\alpha}\\sum_{\\tilde{y}}[(\\tau- \\bigcup_{i=1:M}\\tilde{A}_{i})_{+}],\\]
where \\(\\tilde{A}_{i}\\) is one realization of \\(A_{i}\\) by sampling \\(y\\). We set the upper bound for the parameter \\(\\tau\\) as the area of all the free space \\(v^{\\text{free}}\\) and set the searching separation for \\(\\tau\\) as \\(\\Delta=1\\).
We use SGA to find the pair \\((\\mathcal{S}^{G},\\tau^{G})\\) with respect to several risk levels \\(\\alpha\\). We plot the value of \\(H(\\mathcal{S}^{G},\\tau^{G})\\) for several risk levels in Fig. 9-(a). A larger risk level gives a larger \\(H(\\mathcal{S}^{G},\\tau^{G})\\), which means the pair \\((\\mathcal{S}^{G},\\tau^{G})\\) found by SGA correctly maximizes \\(H(\\mathcal{S},\\tau)\\) with respect to the risk level \\(\\alpha\\). Moreover, we plot functions \\(H(\\mathcal{S}^{G},\\tau)\\) for several risk levels \\(\\alpha\\) in Fig. 9-(b). Note that \\(\\mathcal{S}^{G}\\) is computed by SGA at each \\(\\tau\\). For each \\(\\alpha\\), \\(H(\\mathcal{S}^{G},\\tau)\\) shows the concavity or piecewise concavity of function \\(H(\\mathcal{S},\\tau)\\).
Figure 8: Assignments and utility distributions by SGA with two extreme risk level values. The red solid star represents the demand location. The black solid square represents the vehicle position. The line between the vehicle and the demand represents an assignment.
Based on the \\(\\mathcal{S}^{G}\\) calculated by SGA, we sample \\(n_{s}=1000\\) times from the underlying distribution of \\(y\\) and plot the distribution of the selection utility,
\\[f(\\mathcal{S}^{G},y)=\\bigcup_{i=1:M}A_{i},i\\in\\mathcal{S}^{G}\\]
in Fig. 10. Note that, when the risk level \(\alpha\) is small, sensors with a smaller visibility region and a higher probability of success are selected: a lower risk level suggests a conservative selection, and sensors with a higher probability of success are chosen to avoid the risk induced by sensor failure. In contrast, when \(\alpha\) is large, the selection is allowed to take more risk to gain more monitoring utility, and sensors with a larger visibility region and a lower probability of success are selected. Fig. 10 demonstrates this behavior except between \(\alpha=0.001\) and \(\alpha=0.01\). This is because when \(\alpha\) is very small, the approximation error (Eq. 8) is very large as shown in Fig. 11, and thus SGA may not give a good solution.
We also compare SGA using the CVaR measure with the greedy algorithm using the expectation, i.e., the risk-neutral measure (mentioned in [2, Section 6.1]), in Fig. 12. In fact, the risk-neutral measure is equivalent to the case of CVaR\({}_{\alpha}(\mathcal{S})\) with \(\alpha=1\). We give an illustrative example of the sensor selection by SGA for two extreme risk levels, \(\alpha=0.1\) and \(\alpha=1\). When the risk level \(\alpha\) is small (\(\alpha=0.1\)), the selection is conservative and thus sensors with a small visibility region are selected (Fig. 12-(a)). In contrast, when \(\alpha=1\), the risk is neutral and the selection is more adventurous, and thus sensors with a large visibility region are selected (Fig. 12-(b)). The mean-std bars of the selection utility distributions in Fig. 12-(c) show that the selection at the expectation (\(\alpha=1\)) has a larger mean value than the selection at \(\alpha=0.1\). However, the selection at \(\alpha=1\) risks gaining lower utility, since the left endpoint of its mean-std bar is smaller than that of the mean-std bar at \(\alpha=0.1\).
## 6 Conclusion and Discussion
We studied a risk-averse discrete submodular maximization problem. We provided the first positive results for discrete CVaR submodular maximization for selecting a set under matroid constraints. In particular, we proposed the Sequential Greedy Algorithm and analyzed its approximation ratio and running time. We demonstrated two practical use-cases of the CVaR submodular maximization problem.
Notably, our Sequential Greedy Algorithm works for any matroid constraint. In particular, the multiplicative approximation ratio can be improved to \(\frac{1}{k_{f}}(1-e^{-k_{f}})\) if we know that the constraint is a uniform matroid [31, Theorem 5.4].
The additive term in our analysis depends on \\(\\alpha\\). This term can be large when the risk level \\(\\alpha\\) is very small. Our ongoing work is to remove this dependence on \\(\\alpha\\), perhaps by designing another algorithm specifically for low risk levels. We note that if we use an optimal algorithm instead of the greedy algorithm as a subroutine, then the additive term disappears from the approximation guarantee. The algorithm also requires knowing \\(\\Gamma\\). We showed how to find \\(\\Gamma\\) (or an upper bound for it) for the two case studies considered in this paper. Devising a general strategy for finding \\(\\Gamma\\) is part of our ongoing work.
Our second line of ongoing work focuses on applying the risk-averse strategy to multi-vehicle routing, patrolling, and informative path planning in dangerous environments [32] and mobility on demand with real-world data sets (2014 NYC Taxicab Factbook).3
Footnote 3: [http://www.nyc.gov/html/tlc/downloads/pdf/2014_taxicab_fact_book.pdf](http://www.nyc.gov/html/tlc/downloads/pdf/2014_taxicab_fact_book.pdf)
## 7 Acknowledgements
This work was supported by NSF award IIS-1637915 and ONR Award N00014-18-1-2829.
Figure 12: Sensor selection and utility distributions by SGA with two extreme risk level values. The red solid circle represents the sensor selected by SGA.
## 8 Appendix
Background on set functions:
### Monotonicity, Submodularity, Matroid and Curvature
We begin by reviewing useful properties of a set function \\(f(\\mathcal{S})\\) defined for a finite ground set \\(\\mathcal{X}\\) and matroid constraints.
**Monotonicity [9]:** A set function \\(f:2^{\\mathcal{X}}\\mapsto\\mathbb{R}\\) is monotone (non-decreasing) if and only if for any sets \\(\\mathcal{S}\\subseteq\\mathcal{S}^{\\prime}\\subseteq\\mathcal{X}\\), we have \\(f(\\mathcal{S})\\leq f(\\mathcal{S}^{\\prime})\\).
**Normalized Function [10]:** A set function \\(f:2^{\\mathcal{X}}\\mapsto\\mathbb{R}\\) is called normalized if and only if \\(f(\\emptyset)=0\\).
**Submodularity [9, Proposition 2.1]:** A set function \(f:2^{\mathcal{X}}\mapsto\mathbb{R}\) is submodular if and only if for any sets \(\mathcal{S}\subseteq\mathcal{S}^{\prime}\subseteq\mathcal{X}\) and any element \(s\in\mathcal{X}\setminus\mathcal{S}^{\prime}\), we have: \(f(\mathcal{S}\cup\{s\})-f(\mathcal{S})\geq f(\mathcal{S}^{\prime}\cup\{s\})-f(\mathcal{S}^{\prime})\). Therefore the marginal gain \(f(\mathcal{S}\cup\{s\})-f(\mathcal{S})\) is non-increasing.
**Matroid [33, Section 39.1]:** Denote a non-empty collection of subsets of \(\mathcal{X}\) as \(\mathcal{I}\). The pair \((\mathcal{X},\mathcal{I})\) is called a matroid if and only if the following conditions are satisfied:
* for any set \(\mathcal{S}\in\mathcal{I}\) and any set \(\mathcal{P}\subseteq\mathcal{S}\), it must hold that \(\mathcal{P}\in\mathcal{I}\);
* for any sets \(\mathcal{S},\mathcal{P}\in\mathcal{I}\) with \(|\mathcal{P}|<|\mathcal{S}|\), there must exist an element \(s\in\mathcal{S}\setminus\mathcal{P}\) such that \(\mathcal{P}\cup\{s\}\in\mathcal{I}\).
We will use two specific forms of matroids that are reviewed next.
**Uniform Matroid:** A _uniform matroid_ is a matroid \((\mathcal{X},\mathcal{I})\) such that for a positive integer \(\kappa\), \(\mathcal{I}=\{\mathcal{S}:\mathcal{S}\subseteq\mathcal{X},|\mathcal{S}|\leq\kappa\}\). Thus, the uniform matroid only constrains the cardinality of the feasible sets in \(\mathcal{I}\).
**Partition Matroid:** A _partition matroid_ is a matroid \((\mathcal{X},\mathcal{I})\) such that for a positive integer \(n\), disjoint sets \(\mathcal{X}_{1},\cdots,\mathcal{X}_{n}\) and positive integers \(\kappa_{1},\cdots,\kappa_{n}\), we have \(\mathcal{X}\equiv\mathcal{X}_{1}\cup\cdots\cup\mathcal{X}_{n}\) and \(\mathcal{I}=\{\mathcal{S}:\mathcal{S}\subseteq\mathcal{X},|\mathcal{S}\cap\mathcal{X}_{i}|\leq\kappa_{i}\text{ for all }i=1,\cdots,n\}\).
**Curvature [31]:** Consider a matroid \(\mathcal{I}\) for \(\mathcal{X}\) and a non-decreasing submodular set function \(f:2^{\mathcal{X}}\mapsto\mathbb{R}\) such that (without loss of generality) \(f(s)\neq 0\) for every element \(s\in\mathcal{X}\). The curvature measures how far \(f\) is from being modular (linear). Define the _curvature_ of \(f\) over the matroid \(\mathcal{I}\) as:
\\[k_{f}\\triangleq 1-\\min_{s\\in\\mathcal{S},\\mathcal{S}\\in\\mathcal{I}}\\frac{f( \\mathcal{S})-f(\\mathcal{S}\\setminus\\{s\\})}{f(s)}. \\tag{12}\\]
Note that the definition of curvature \(k_{f}\) (Equation 12) implies that \(0\leq k_{f}\leq 1\). Specifically, if \(k_{f}=0\), then \(f(\mathcal{S})=\sum_{s\in\mathcal{S}}f(s)\) for all feasible sets \(\mathcal{S}\in\mathcal{I}\).
In this case, \\(f\\) is a modular function. In contrast, if \\(k_{f}=1\\), then there exist a feasible \\(\\mathcal{S}\\in\\mathcal{I}\\) and an element \\(s\\in\\mathcal{X}\\) such that \\(f(\\mathcal{S})=f(\\mathcal{S}\\setminus\\{s\\})\\). In this case, the element \\(s\\) is redundant for the contribution of the value of \\(f\\) given the set \\(\\mathcal{S}\\setminus\\{s\\}\\).
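For small ground sets, the curvature in Eq. (12) can be brute-forced directly. The sketch below assumes a uniform matroid of rank \(\kappa\) and is purely illustrative; the function names are our own:

```python
from itertools import combinations

# Brute-force the curvature k_f of Eq. (12) over a uniform matroid of
# rank kappa: k_f = 1 - min over feasible S and s in S of
# (f(S) - f(S \ {s})) / f({s}). Assumes f({s}) != 0 for every element,
# as in the definition.

def curvature(ground, f, kappa):
    worst = 1.0
    for r in range(1, kappa + 1):
        for S in combinations(ground, r):
            for s in S:
                rest = set(S) - {s}
                worst = min(worst, (f(set(S)) - f(rest)) / f({s}))
    return 1.0 - worst
```

A modular (additive) function yields \(k_{f}=0\), while overlap in a coverage function pushes \(k_{f}\) toward 1, matching the discussion above.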
### Greedy Approximation Algorithm
In order to maximize a set function \\(f\\), the greedy algorithm selects each element \\(s\\) of \\(\\mathcal{S}\\) based on the maximum marginal gain at each round.
We consider maximizing a normalized monotone submodular set function \\(f\\). For any matroid, the greedy algorithm gives a \\(1/2\\) approximation [10]. In particular, the greedy algorithm can give a \\((1-1/e)\\)-approximation of the optimal solution under the uniform matroid [9]. If we know the curvature of the set function \\(f\\), we have a \\(1/(1+k_{f})\\) approximation for any matroid constraint [31, Theorem 2.3]. That is,
\\[\\frac{f(\\mathcal{S}^{G})}{f^{\\star}}\\geq\\frac{1}{1+k_{f}}.\\]
where \\(\\mathcal{S}^{\\mathcal{G}}\\in\\mathcal{I}\\) is the set selected by the greedy algorithm, \\(\\mathcal{I}\\) is the uniform matroid and \\(f^{\\star}\\) is the function value with optimal solution. Note that, if \\(k_{f}=0\\), which means \\(f\\) is modular, then the greedy algorithm reaches the optimal. If \\(k_{f}=1\\), then we have the \\(1/2\\)-approximation.
Proof of Lemma 1:
Proof.: \\(H(\\mathcal{S},\\tau)=\\tau-\\frac{1}{\\alpha}\\mathbb{E}[\\max(\\tau-f(\\mathcal{S},y),0)]\\). Since \\(f(\\mathcal{S},y)\\) is monotone increasing and submodular in \\(\\mathcal{S}\\), \\(\\max\\{\\tau-f(\\mathcal{S},y),0\\}\\) is monotone decreasing and supermodular in \\(\\mathcal{S}\\), and its expectation is also monotone decreasing and supermodular in \\(\\mathcal{S}\\). Then \\(H(\\mathcal{S},\\tau)\\) is monotone increasing and submodular in \\(\\mathcal{S}\\).
\\(H(\\emptyset,\\tau)=\\tau(1-\\frac{1}{\\alpha})\\) given \\(f(\\mathcal{S},y)\\) is normalized (\\(f(\\emptyset,y)=0\\)). Thus, \\(H(\\mathcal{S},\\tau)\\) is not necessarily normalized since \\(\\tau\\) is not necessarily zero. See a similar proof in [18, 25].
Proof of Lemma 2:
Proof.: \\(H(\\mathcal{S},\\tau)=\\tau-\\frac{1}{\\alpha}\\mathbb{E}[\\max(\\tau-f(\\mathcal{S},y),0)]\\). Since \\(\\max(\\tau-f(\\mathcal{S},y),0)\\) is convex in \\(\\tau\\), its expectation is also convex in \\(\\tau\\). Then \\(-\\frac{1}{\\alpha}\\mathbb{E}[\\max(\\tau-f(\\mathcal{S},y),0)]\\) is concave in \\(\\tau\\) and \\(H(\\mathcal{S},\\tau)\\) is concave in \\(\\tau\\).
Proof of Lemma 3:
Proof.: By using the result in [18, Lemma 1 and Proof of Theorem 1], we know that \\(H(\\mathcal{S},\\tau)\\) is concave and continuously differentiable with derivative given by
\\[\\frac{\\partial H(\\mathcal{S},\\tau)}{\\partial\\tau}=1-\\frac{1}{\\alpha}(1- \\Phi(f(\\mathcal{S},y)))\\]
where \\(\\Phi(f(\\mathcal{S},y))\\) is the cumulative distribution function of \\(f(\\mathcal{S},y)\\). Thus, \\(0\\leq\\Phi(f(\\mathcal{S},y)))\\leq 1\\), which proves the lemma.
Proof of Lemma 4:
Proof.: Denote \\(H_{i}^{\\star}=\\max\\ H(\\mathcal{S},\\tau)\\) with \\(\\tau\\in[i\\Delta,(i+1)\\Delta),\\mathcal{S}\\in\\mathcal{I}\\). From Lemmas 2 and 3, we know \\(H(\\mathcal{S},\\tau)\\) is concave in \\(\\tau\\) and \\(\\frac{\\partial H(\\mathcal{S},\\tau)}{\\partial\\tau}<1\\). The properties of concavity and bound on the gradient give
\\[H_{i}^{\\star}-H(\\mathcal{S}_{i}^{\\star},\\tau_{i})\\leq\\Delta.\\]
We illustrate this claim using Figure 13-(a). Since \(\mathcal{S}_{i}^{\star}\) is the optimal set at \(\tau_{i}\) for maximizing \(H(\mathcal{S},\tau)\), the value of \(H(\mathcal{S},\tau)\) with any other set \(\mathcal{S}\in\mathcal{I}\) at \(\tau_{i}\) is at most \(H(\mathcal{S}_{i}^{\star},\tau_{i})\); that is, \(H(\mathcal{S},\tau_{i})\leq H(\mathcal{S}_{i}^{\star},\tau_{i})\). Since \(H(\mathcal{S},\tau)\) is a concave function of \(\tau\) for any specific \(\mathcal{S}\), over \(\tau\in[\tau_{i},\tau_{i+1}]\) it can be a single concave function, e.g., \(H(\mathcal{S}_{m},\tau)\) or \(H(\mathcal{S}_{n},\tau)\), or a piecewise concave combination of several concave functions, e.g., of \(H(\mathcal{S}_{m},\tau)\) and \(H(\mathcal{S}_{n},\tau)\) (Figure 13-(a)). In either case, since \(H(\mathcal{S},\tau_{i})\leq H(\mathcal{S}_{i}^{\star},\tau_{i})\) and the gradient is bounded by \(\frac{\partial H(\mathcal{S},\tau)}{\partial\tau}\leq 1\), \(H(\mathcal{S},\tau)\) stays below the line starting at \(H(\mathcal{S}_{i}^{\star},\tau_{i})\) with \(slope=1\) during \(\tau\in[\tau_{i},\tau_{i+1}]\) (the red dotted line in Figure 13-(a)). Thus, \(H_{i}^{\star}-H(\mathcal{S}_{i}^{\star},\tau_{i})\leq\Delta,\ \forall i\in\{0,1,\cdots,\lceil\frac{\Gamma}{\Delta}\rceil\}\).
Then we have \\(H_{i}^{\\star}-\\max_{i}H(\\mathcal{S}_{i}^{\\star},\\tau_{i})\\leq\\Delta,\\ \\forall i\\in\\{0,1,\\cdots,\\lceil\\frac{\\Gamma}{ \\Delta}\\rceil\\}\\). Note that \\(H_{i}^{\\star}\\) is the maximum value of \\(H(\\mathcal{S},\\tau)\\) at each interval \\(\\tau\\in[i\\Delta,(i+1)\\Delta)\\). The maximum value of \\(H(\\mathcal{S},\\tau)\\), \\(H(\\mathcal{S}^{\\star},\\tau^{\\star})\\) is equal to one of \\(H_{i}^{\\star},i\\in\\{0,1,\\cdots,\\lceil\\frac{\\Gamma}{\\Delta}\\rceil\\}\\). Thus, we reach the claim in Lemma 4.
Proof of Lemma 5:Proof: We use the previous result [31, Theorem 2.3] for the proof of this claim. We know that for any given \(\tau\), \(H(\mathcal{S},\tau)\) is a non-normalized monotone submodular function in \(\mathcal{S}\) (Lemma 1). For maximizing normalized monotone submodular set functions, the greedy approach gives a \(1/(1+k_{f})\) approximation of the optimal performance with any matroid constraint [31, Theorem 2.3]. After normalizing \(H(\mathcal{S},\tau)\) as \(H(\mathcal{S},\tau)-H(\emptyset,\tau)\), we have
\\[\\frac{H(\\mathcal{S}_{i}^{G},\\tau_{i})-H(\\emptyset,\\tau_{i})}{H(\\mathcal{S}_{i} ^{\\star},\\tau_{i})-H(\\emptyset,\\tau_{i})}\\geq\\frac{1}{1+k_{f}}, \\tag{13}\\]
with any matroid constraint. Given \\(0\\leq k_{f}\\leq 1\\) and \\(H(\\emptyset,\\tau)=-\\tau(\\frac{1}{\\alpha}-1)\\), we transform Equation 13 into,
\\[H(\\mathcal{S}_{i}^{G},\\tau_{i}) \\geq\\frac{1}{1+k_{f}}H(\\mathcal{S}_{i}^{\\star},\\tau_{i})-\\frac{k _{f}}{1+k_{f}}\\tau_{i}(\\frac{1}{\\alpha}-1)\\] \\[\\geq\\frac{1}{1+k_{f}}H(\\mathcal{S}_{i}^{\\star},\\tau_{i})-\\frac{k _{f}}{1+k_{f}}\\Gamma(\\frac{1}{\\alpha}-1) \\tag{14}\\]
where Equation 14 holds since \\(\\Gamma\\) is the upper bound of \\(\\tau\\). Thus, we prove the Lemma 5.
\\(\\sqcap\\)\\(\\sqcup\\)
Proof of Theorem 4.1:
Proof: From Equation 10 in Lemma 5, we have \\(H(\\mathcal{S}_{i}^{\\star},\\tau_{i})\\) is bounded by
\\[H(\\mathcal{S}_{i}^{\\star},\\tau_{i})\\leq(1+k_{f})H(\\mathcal{S}_{i}^{G},\\tau_{i} )+k_{f}\\Gamma(\\frac{1}{\\alpha}-1). \\tag{15}\\]
Denote this upper bound as
\\[H^{b}(\\mathcal{S}_{i}^{G},\\tau_{i}):=(1+k_{f})H(\\mathcal{S}_{i}^{G},\\tau_{i}) +k_{f}\\Gamma(\\frac{1}{\\alpha}-1).\\]
We know \\(H(\\mathcal{S},\\tau)\\) is below the line starting at \\(H(\\mathcal{S}_{i}^{\\star},\\tau_{i})\\) with \\(slope=1\\) during \\(\\tau\\in[\\tau_{i},\\tau_{i+1}]\\) (the red dotted line in Figure 13-(a)/(b)) (Lemma 4). \\(H(\\mathcal{S},\\tau)\\) must be also below the line starting at \\(H^{b}(\\mathcal{S}_{i}^{G},\\tau_{i})\\) with \\(slope=1\\) during \\(\\tau\\in[\\tau_{i},\\tau_{i+1}]\\) (the green dotted line in Figure 13-(b)). Similar to the proof in Lemma 4, we have \\(H_{i}^{\\star}-H^{b}(\\mathcal{S}_{i}^{G},\\tau_{i})\\leq\\Delta\\) and
\\[\\max_{i\\in\\{0,1,\\cdots,\\lceil\\frac{\\Gamma}{\\Delta}\\rceil\\}}H^{b}(\\mathcal{S}_ {i}^{G},\\tau_{i})\\geq H(\\mathcal{S}^{\\star},\\tau^{\\star})-\\Delta. \\tag{16}\\]
SGA selects the pair \\((\\mathcal{S}^{G},\\tau^{G})\\) as the pair \\((\\mathcal{S}_{i}^{G},\\tau_{i})\\) with \\(\\max_{i}\\,H(\\mathcal{S}_{i}^{G},\\tau_{i})\\). Then by Inequalities 15 and 16, we have
\\[(1+k_{f})H(\\mathcal{S}^{G},\\tau^{G})+k_{f}\\Gamma(\\frac{1}{\\alpha}-1)\\geq H( \\mathcal{S}^{\\star},\\tau^{\\star})-\\Delta. \\tag{17}\\]
By rearranging the terms, we get the approximation ratio in Theorem 1.
Next, we prove the bound on the computational time of SGA in Theorem 1 by following the stages of the pseudo code of SGA. First, lines 2 to 10 run a "for" loop searching over \\(\\tau\\), which takes \\(\\lceil\\frac{\\Gamma}{\\Delta}\\rceil\\) iterations. Second, within the "for" loop, the greedy algorithm solves the subproblem (lines 4-8). In order to select a subset \\(\\mathcal{S}\\) of size \\(|\\mathcal{S}|\\) from a ground set \\(\\mathcal{X}\\) of size \\(|\\mathcal{X}|\\), the greedy algorithm runs \\(|\\mathcal{S}|\\) rounds (line 5) and, in round \\(i\\), calculates the marginal gain of each of the \\(|\\mathcal{X}|-i\\) remaining elements (line 6). Thus, the greedy algorithm takes \\(\\sum_{i=1}^{|\\mathcal{S}|}(|\\mathcal{X}|-i)\\) evaluations. Third, for each marginal-gain calculation the oracle \\(\\mathcal{O}\\) samples \\(n_{s}\\) times to compute \\(H(\\mathcal{S})\\). Overall, the "for" loop containing the greedy algorithm with the oracle sampling therefore takes \\(\\lceil\\frac{\\Gamma}{\\Delta}\\rceil(\\sum_{i=1}^{|\\mathcal{S}|}(|\\mathcal{X}|-i))n_{s}\\) evaluations. Last, finding the best pair from the storage set \\(\\mathcal{M}\\) (line 11 of Alg. 1) takes \\(O(\\lceil\\frac{\\Gamma}{\\Delta}\\rceil)\\) time. Therefore, the computational complexity of SGA is,
\\[\\lceil\\frac{\\Gamma}{\\Delta}\\rceil\\Big(\\sum_{i=1}^{|\\mathcal{S}|}(|\\mathcal{X}|-i)\\Big)n_{s}+O(\\lceil\\frac{\\Gamma}{\\Delta}\\rceil)=O(\\lceil\\frac{\\Gamma}{\\Delta}\\rceil|\\mathcal{X}|^{2}n_{s}),\\]
given \\(|\\mathcal{S}|\\leq|\\mathcal{X}|\\).
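As a sanity check (not part of the paper's pseudo code), the evaluation count above can be reproduced with a short sketch that simply counts one evaluation per marginal-gain computation:

```python
import math

def sga_evaluation_count(n_ground, k, gamma, delta, n_samples):
    """Count oracle evaluations of a sequential greedy over a tau grid.

    n_ground:  size of the ground set |X|
    k:         cardinality of the selected set |S|
    gamma:     upper bound on tau
    delta:     grid spacing
    n_samples: oracle samples per objective evaluation
    """
    grid_points = math.ceil(gamma / delta)
    # Greedy: k rounds; round i scans the |X| - i remaining elements.
    greedy_evals = sum(n_ground - i for i in range(1, k + 1))
    return grid_points * greedy_evals * n_samples

print(sga_evaluation_count(n_ground=10, k=3, gamma=1.0, delta=0.1, n_samples=5))  # → 1200
```

With \\(|\\mathcal{S}|\\leq|\\mathcal{X}|\\) the count is bounded by \\(O(\\lceil\\Gamma/\\Delta\\rceil|\\mathcal{X}|^{2}n_{s})\\), matching the stated complexity.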
arxiv-format/1912_09809v3.md | Twin stars and the stiffness of the nuclear equation of state: ruling out strong phase transitions below \\(1.7n_{0}\\) with the new NICER radius measurements
Jan-Erik Christian
[email protected] Institut fur Theoretische Physik, Goethe Universitat Frankfurt, Max von Laue Strasse 1, D-60438 Frankfurt, Germany
Jurgen Schaffner-Bielich
[email protected] Institut fur Theoretische Physik, Goethe Universitat Frankfurt, Max von Laue Strasse 1, D-60438 Frankfurt, Germany
November 4, 2021
## I Introduction
A well established hypothesis for the equation of state of compact stars is the possibility of hybrid stars [1; 2; 3; 4; 5; 6; 7; 8], which feature a hadronic mantle and a quark matter core. In contrast to pure hadronic EoSs, which generate a single stable branch in a mass-radius relation, hybrid EoSs can generate a second stable branch. This can lead to so-called twin stars, where two stars have the same mass but different radii [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19].
The possibility of pure quark stars is not ruled out either [1; 2; 20; 21; 22; 23; 24]. A widely used approach to describe the hadronic matter in neutron star is the relativistic mean field model [25; 26; 27; 28; 29; 30; 31; 32].
The chirp mass \\(\\mathcal{M}\\) and the weighted tidal deformability \\(\\tilde{\\Lambda}\\) can be measured from the inspiral of two neutron stars. This makes gravitational wave data useful in constraining the EoS for neutron stars [33; 34; 35; 36]. Specifically, hybrid star EoS, due to their high compactness, fit well with the low values of tidal deformability measured for GW170817 [36; 37; 38; 39; 40]. Another important constraint is the maximal observed mass of a neutron star. Currently the highest measured mass for a neutron star is about \\(2M_{\\odot}\\)[41; 42; 43] or slightly higher at \\(2.14^{+0.10}_{-0.09}M_{\\odot}\\)[44].
In the light of the recent first measurement of a neutron star merger (GW170817 [33]) we explore the influence of the stiffness and transition parameters of a hadronic EoS featuring a first-order phase transition to quark matter. To this end we employ the parameterizable relativistic mean field equation of state by Hornick et al. [32], which enables us to vary the effective nucleon mass. The effective nucleon mass is linked to the stiffness of the EoS [45], see also Yasin et al. [46]. The phase transition and quark matter EoS are modeled after the constant speed of sound parametrization presented by Alford et al. [47]. The parameters for the phase transition are chosen according to the four categories of twin stars outlined in [18]. We find that the presence of a phase transition can have significant influence on the compatibility of the underlying hadronic EoS with the GW170817 data, making EoSs previously considered too stiff viable. However, a soft EoS might not be capable of generating a second branch in the mass-radius relation. In this way, certain phase transition scenarios can be excluded for a known stiffness of the nuclear EoS. The recent measurements by NICER [48; 49; 50] of the pulsar J0030+0451 can be used to constrain the EoS further. Riley et al. state a mass of \\(1.34^{+0.15}_{-0.16}M_{\\odot}\\) with a radius of \\(12.71^{+1.14}_{-1.19}\\)km [48], while Miller et al. state \\(1.44^{+0.15}_{-0.14}M_{\\odot}\\) with a radius of \\(13.02^{+1.24}_{-1.06}\\)km [49]. This constraint rules out a strong phase transition at densities of \\(n\\lesssim 1.7\\,n_{0}\\). We show that the NICER data [48; 49; 50] provide an indication that an extremely soft nuclear equation of state and a strong phase transition are mutually exclusive.
## II Theoretical framework
### Equation of State
#### ii.1.1 Hadronic Equation of State
The relativistic parametrization introduced by Todd-Rutel et al. [51] (see also: [32; 52]) is a generalized relativistic mean field approach with the main advantage that the slope parameter \\(L\\), the symmetry energy \\(J\\) and the effective nucleon mass \\(m^{\\star}/m\\) can be easily adjusted. Taking into account \\(\\sigma\\), \\(\\omega\\) and \\(\\rho\\) mesons, the interaction Lagrangian can be written as:
\\[\\mathcal{L}_{\\text{int}} =\\sum_{N}\\overline{\\psi}_{N}\\left[g_{\\sigma}\\sigma-g_{\\omega}\\gamma^{\\mu}\\omega_{\\mu}-\\frac{g_{\\rho}}{2}\\gamma^{\\mu}\\vec{\\tau}\\vec{\\rho}_{\\mu}\\right]\\psi_{N} \\tag{1}\\] \\[-\\frac{1}{3}bm\\left(g_{\\sigma}\\sigma\\right)^{3}-\\frac{1}{4}c\\left(g_{\\sigma}\\sigma\\right)^{4}\\] \\[+\\Lambda_{\\omega}\\left(g_{\\rho}^{2}\\vec{\\rho}_{\\mu}\\vec{\\rho}^{\\mu}\\right)\\left(g_{\\omega}^{2}\\omega_{\\mu}\\omega^{\\mu}\\right)+\\frac{\\zeta}{4!}\\left(g_{\\omega}^{2}\\omega_{\\mu}\\omega^{\\mu}\\right)^{2}\\]
The last two terms describe a density dependence via the \\(\\omega-\\rho\\) coupling term \\(\\Lambda_{\\omega}\\) and the quartic self-coupling \\(\\zeta\\) of the \\(\\omega\\) mesons [51; 30; 53]. The \\(g_{\\sigma}\\) and \\(g_{\\omega}\\) couplings can be used to determine the density of the ground state \\(n_{0}\\), as well as the binding energy per particle \\(E/A(n_{0})\\).
To determine the values of the couplings \\(b\\), \\(c\\) and \\(\\Lambda_{\\omega}\\), certain nuclear matter properties need to be fixed. Apart from \\(n_{0}\\), \\(E/A(n_{0})\\) and the incompressibility \\(K(n_{0})\\), the parameters \\(J\\), \\(L\\) and \\(m^{*}/m\\) have to be fixed. Hornick et al. [32] followed the approach by Chen et al. [52] to do so. The value of \\(\\zeta\\) is set to zero [52] in the following to achieve the stiffest possible EoS, and \\(K\\) is fixed to \\(K=240\\,\\)MeV [32]. Hornick et al. additionally constrain the fixed parameters using the constraints from the analysis of \\(\\chi\\)EFT for densities up to \\(1.3\\,n_{0}\\)[54]. By comparing the different EoSs with the allowed band from \\(\\chi\\)EFT they find that only values of \\(40\\,\\mathrm{MeV}\\leq L\\leq 60\\,\\mathrm{MeV}\\) are possible when \\(30\\,\\mathrm{MeV}\\leq J\\leq 32\\,\\mathrm{MeV}\\) also holds true.
We fix the values \\(L=60\\,\\text{MeV}\\) and \\(J=32\\,\\text{MeV}\\) while varying the effective mass \\(m^{*}/m\\). These values of \\(L\\) and \\(J\\) allow for the greatest allowed range in effective mass values, see [32]. We note that the mass-radius relation does not depend significantly on the choices of \\(L\\) and \\(J\\)[32]. The stiffness of an EoS corresponds to the value of \\(m^{*}/m\\), as only \\(m^{*}/m\\) controls the high-density behavior [45]. Lower values of \\(m^{*}/m\\) generate a stiffer EoS, while higher values generate a softer EoS.
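For orientation, in relativistic mean-field models of this type the effective (Dirac) nucleon mass is reduced by the scalar mean field; schematically (a standard RMF relation, not a result specific to [32]),

\\[m^{*}=m-g_{\\sigma}\\bar{\\sigma},\\]

so fixing \\(m^{*}/m\\) at saturation fixes the strength of the scalar field \\(\\bar{\\sigma}\\), which is what drives the different high-density behavior of the parametrizations compared below.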
#### ii.1.2 Phase Transition
We assume that at high baryonic densities a first-order phase transition from hadronic to quark matter takes place. This behavior is modeled with a Maxwell construction. The hadronic matter is described by the parameterized EoS (see [32]), while the constant speed of sound approach [13; 47; 55] in the form used by Alford et al. [47] is employed for the quark matter. The entire EoS is then given as:
\\[\\epsilon(p)=\\begin{cases}\\epsilon_{HM}(p)&p<p_{trans}\\\\ \\epsilon_{HM}(p_{trans})+\\Delta\\epsilon+c_{QM}^{-2}(p-p_{trans})&p>p_{trans}\\end{cases} \\tag{2}\\]
where \\(p_{trans}\\) is the pressure at which the transition takes place and \\(\\epsilon\\) the corresponding energy density. The discontinuity in energy density at the transition is \\(\\Delta\\epsilon\\). For the speed of sound in the star's core, a value of \\(c_{QM}=1\\) is assumed, using natural units.
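For illustration, Equation 2 can be implemented directly; the hadronic part below is a hypothetical linear toy relation standing in for the tabulated RMF EoS, with all quantities in MeV/fm\\({}^{3}\\):

```python
def energy_density(p, eps_hm, p_trans, delta_eps, c_qm_sq=1.0):
    """Constant-speed-of-sound hybrid EoS (Maxwell construction).

    p:         pressure
    eps_hm:    callable, hadronic energy density eps_HM(p)
    p_trans:   transition pressure
    delta_eps: energy-density jump at the transition
    c_qm_sq:   squared speed of sound in the quark phase (=1 here)
    """
    if p < p_trans:
        return eps_hm(p)
    return eps_hm(p_trans) + delta_eps + (p - p_trans) / c_qm_sq

# Toy hadronic relation (illustrative only, NOT the RMF fit of the text):
eps_hm = lambda p: 3.0 * p + 100.0
print(energy_density(10.0, eps_hm, p_trans=43.0, delta_eps=350.0))  # → 130.0 (hadronic side)
print(energy_density(50.0, eps_hm, p_trans=43.0, delta_eps=350.0))  # → 586.0 (quark side)
```

The jump \\(\\Delta\\epsilon\\) appears as the discontinuity between the two branches at \\(p=p_{trans}\\).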
### Classification of Twin Stars
A first order phase transition gives rise to the phenomenon of "twin stars", which are neutron stars with identical mass but different radii [10; 11; 12; 13; 14; 15; 16; 18]. In order to investigate twin star equations of state it can be useful to classify the twin star solutions into four distinct categories, as described in [18]. In this subsection a short summary of the four categories is provided. We refer to the maximum of the hadronic branch as the first maximum and the maximum of the hybrid branch as the second maximum in a twin star mass-radius relation. In [18] we showed that the mass values of the first and second maximum can be related to the values of \\(p_{trans}\\) and \\(\\Delta\\epsilon\\), respectively. The shape of the second branch is governed by the value of \\(p_{trans}\\), while its position is strongly influenced by the value of \\(\\Delta\\epsilon\\). High values of \\(p_{trans}\\) lead to high masses in the first maximum and flat second branches. Low values of \\(\\Delta\\epsilon\\) lead to a second branch near the discontinuity (i.e. a high mass at the second maximum). With this in mind the twin star categories can be defined as follows:
**I:**: Both maxima exceed \\(2M_{\\odot}\\), which implies high values of \\(p_{trans}\\) and a nearly flat second branch.
**II:**: Only the first maximum reaches \\(2M_{\\odot}\\), which again requires a high value of \\(p_{trans}\\).
**III:**: The first maximum is in the range of \\(2M_{\\odot}\\geq M_{max_{1}}\\geq 1M_{\\odot}\\), while the second maximum exceeds \\(2M_{\\odot}\\). Accordingly, the transitional pressure is lower than in the previous categories and the second branch becomes steeper.
**IV:**: Like category III the second maximum exceeds \\(2M_{\\odot}\\), however the first maximum is below even \\(1M_{\\odot}\\). The second branch is at its steepest slope here.
### Tidal deformability
The observation of gravitational waves from compact star mergers, as demonstrated for GW170817 detected by the LIGO and Virgo observatories [33], can be used to constrain the EoSs of compact stars, because they contain information on the tidal deformability and chirp mass of the participating neutron stars during the inspiral phase.
The chirp mass can be measured to a very high precision and is closely related to the total mass \\(M_{total}\\) via:
\\[\\mathcal{M}=\\left(\\frac{q}{(1+q)^{2}}\\right)^{\\frac{3}{5}}M_{total} \\tag{3}\\]
where \\(q\\) is the mass-ratio of the participating stars. For GW170817 the chirp mass was measured as \\(\\mathcal{M}=1.186^{+0.001}_{-0.001}M_{\\odot}\\)[56].
The tidal deformability \\(\\lambda\\) measures the quadrupole deformation \\(Q_{ij}\\) of an object in response to the external tidal field \\(\\mathcal{E}_{ij}\\)[57; 58] in the following form:
\\[Q_{ij}=-\\lambda\\mathcal{E}_{ij} \\tag{4}\\]
where \\(\\lambda\\) is related to the more commonly used parameter \\(\\Lambda\\) in the following way:
\\[\\Lambda=\\frac{2k_{2}}{3C^{5}} \\tag{5}\\]
with \\(k_{2}=\\frac{3}{2}\\lambda R^{-5}\\) and \\(C=M/R\\) being the compactness of the star.
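A minimal sketch evaluating Equation 5 for illustrative numbers (the Love number \\(k_{2}=0.1\\) and the radius are hypothetical placeholders of typical magnitude, not results of this work):

```python
M_SUN_KM = 1.4766  # G * M_sun / c^2 in km (geometric units)

def dimensionless_tidal_deformability(k2, mass_km, radius_km):
    """Lambda = (2/3) k2 / C^5 with compactness C = M/R."""
    compactness = mass_km / radius_km
    return (2.0 / 3.0) * k2 / compactness ** 5

# Illustrative: a 1.4 M_sun star with R = 12 km and k2 = 0.1
lam = dimensionless_tidal_deformability(0.1, 1.4 * M_SUN_KM, 12.0)
print(round(lam))  # → 439
```

The strong \\(C^{-5}\\) dependence is why compact hybrid stars yield the low \\(\\Lambda\\) values discussed below.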
The most interesting aspect of \\(\\Lambda\\) for our purposes is that it depends on the EoS of the compact star being deformed, and that it can be easily calculated [57; 58; 59]. This can be used to compare the calculated values with the gravitational wave measurement. However, the inspiral of two compact stars with masses \\(M_{1}\\geq M_{2}\\) can only reveal a combined value of the tidal deformabilities, \\(\\tilde{\\Lambda}\\). For this reason \\(\\Lambda_{1}-\\Lambda_{2}\\) plots are common, where every value of \\(\\Lambda_{1}\\) is assigned a fitting value of \\(\\Lambda_{2}\\) based on the precisely measured chirp mass. Depending on the EoS this can lead to up to three thin lines in the plot: a neutron-neutron (NN), a neutron-hybrid (NH) and a hybrid-hybrid star line (HH) (see for more detail: refs. [38; 39]). Each dot in these plots indicates a possible pair of merging neutron stars. The gravitational wave data can then be used to constrain the area in the \\(\\Lambda_{1}-\\Lambda_{2}\\) plot from which the measured signal would have originated. Due to the high mass values of all hybrid stars in category I, the LIGO measurement excludes the participation of a category I hybrid star in the GW170817 event (see [38]). Category I EoSs might still be viable; however, with respect to the GW170817 data they are identical to the purely hadronic case and will thus not be discussed separately.
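The pairing underlying these plots follows from the chirp mass: given \\(M_{1}\\), the companion mass \\(M_{2}\\) is fixed by inverting the chirp-mass relation (a simple bisection sketch, not the authors' code):

```python
def companion_mass(m1, chirp, tol=1e-10):
    """Solve (m1*m2)^(3/5) / (m1+m2)^(1/5) = chirp for m2 <= m1 by bisection."""
    f = lambda m2: (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2 - chirp
    lo, hi = 1e-6, m1  # chirp mass grows monotonically with m2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For GW170817 (chirp = 1.186): a 1.5 M_sun primary implies the companion mass
print(round(companion_mass(1.5, 1.186), 3))  # → 1.24
```

Evaluating \\(\\Lambda\\) for each such \\((M_{1},M_{2})\\) pair along a given EoS traces out the NN, NH and HH lines.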
## III Tidal deformability from equations of state with varying stiffness
In the following we present the \\(\\Lambda_{1}-\\Lambda_{2}\\) plots for a selection of EoSs from categories II - IV, as well as the pure hadronic case described in [18], for varying \\(m^{*}/m\\), using the chirp mass and credibility limits from GW170817 as constraints. The effective mass starts at \\(m^{*}/m=0.55\\) and is increased in steps of \\(m^{*}/m=0.05\\) to \\(m^{*}/m=0.75\\). A slope parameter of \\(L=60\\,\\mathrm{MeV}\\) and a symmetry energy of \\(J=32\\,\\mathrm{MeV}\\) are fixed. We start with the pure hadronic case in figure 1. The left panel shows the mass-radius relations and the right panel the corresponding \\(\\Lambda_{1}-\\Lambda_{2}\\) plots. The 90% and 50% credibility levels by LIGO are added to the figure as a dashed and a dotted black line, respectively [56]. Like Hornick et al. [32] we find that effective masses of \\(m^{*}/m\\geq 0.65\\) are compatible with GW170817 data.
Ideally one would keep the parameters \\(p_{trans}\\) and \\(\\Delta\\epsilon\\) identical for all variations of \\(m^{*}/m\\) within a category, in order to investigate the effect of a varied stiffness in isolation. However, in order to find category II solutions high values of \\(p_{trans}\\) and \\(\\Delta\\epsilon\\) are necessary, and due to the high transitional pressure it is not possible to find a single value of \\(p_{trans}\\) that generates a phase transition for all investigated values of \\(m^{*}/m\\). For this reason the \\(p_{trans}\\) and \\(\\Delta\\epsilon\\) parameters are chosen to be as close together as possible while still generating a category II solution. For a hadronic EoS as soft as the \\(m^{*}/m=0.75\\) case it is not possible to find a category II solution at all. The mass-radius relations (left) and the \\(\\Lambda_{1}-\\Lambda_{2}\\) plots (right) for category II are shown in figure 2.
Only \\(m^{*}/m=0.65\\) and \\(m^{*}/m=0.70\\) generate NN pairs within the credibility limit, as is the case in the purely hadronic scenario. For all category II EoSs the NH pairs are close to the y-axis. This is caused by the high mass values of the hybrid stars in this category. Stiffer hadronic EoSs seem to generate their corresponding NH pairs at higher values of \\(\\Lambda_{2}\\). However, even for the softest EoS with \\(m^{*}/m=0.70\\) the NH pairs are still above the 90% credibility level. This means that the compatibility of a category II EoS with GW170817 depends entirely on the hadronic EoS, since only NN combinations are within the LIGO credibility level.
The values \\(p_{trans}=43\\,\\mathrm{MeV}/\\mathrm{fm}^{3}\\) and \\(\\Delta\\epsilon=350\\,\\mathrm{MeV}/\\mathrm{fm}^{3}\\) generate category III solutions for all values of \\(m^{*}/m\\) considered, as depicted in figure 3. The stiffest EoS is completely outside of the LIGO credibility level. However, even for the stiffest case the NH pairs are closer to the credibility limit than the pure NN case. The \\(m^{*}/m=0.60\\) EoS is the first case where the phase transition improves the compatibility of an EoS with the LIGO measurement, by moving some NH combinations into the 90% credibility area, where the pure NN case would be outside of it.
In our previous publication [38] we found a special case for a transition at \\(p_{trans}=43\\,\\mathrm{MeV}/\\mathrm{fm}^{3}\\) and \\(\\Delta\\epsilon=350\\,\\mathrm{MeV}/\\mathrm{fm}^{3}\\), where NN, NH and HH combinations were generated by a single EoS. The NH pairs are located in two areas, one above the \\(\\Lambda_{1}=\\Lambda_{2}\\) limit and one below. The latter case is generated by so-called rising twins, where the more massive twin star has a larger radius [11]. The hadronic EoS in that case was the DD2 EoS by Typel et al. [31], which has an effective nucleon mass of \\(m^{*}/m=0.6255\\). A similar special case can be found for the EoS covered in this work, for an effective mass of \\(m^{*}/m=0.65\\). However, a special case can be realized for any category III EoS if the transition parameters are chosen accordingly (see figure 4). The NN pairs of the \\(m^{*}/m=0.65\\) case are already at the border of the credible area and the NH pairs can move even further into it. The HH pairs of the \\(m^{*}/m=0.65\\) case reach below even the 50% credibility limit. The \\(m^{*}/m=0.70\\) case does not exhibit NN combinations, but the NH pairs are located almost completely within the credibility limit, while the HH pairs are below the 50% credibility limit. The \\(m^{*}/m=0.75\\) case is missing, because it is not possible to reach the \\(2M_{\\odot}\\) requirement with a category III phase transition.

Figure 1: On the left side the mass-radius relation for an EoS with \\(J=32\\,MeV\\) and \\(L=60\\,MeV\\), with varied values of \\(m^{*}/m\\), is displayed. On the right side the corresponding possible neutron star combinations are shown.
In contrast to the previous categories it is not difficult to find an EoS in category IV that produces combinations inside the 50% credibility limit. This is because the early phase transition makes the quark matter equation of state more dominant, and this EoS was chosen specifically to be the stiffest possible equation consistent with causality. Due to the identical quark matter EoS in all cases we chose to depict different phase transition parameters in Fig. 5, as similar values would generate mass-radius relations that are nearly on top of each other. Still, the resulting combinations in the \\(\\Lambda_{1}-\\Lambda_{2}\\) plot are close together (see the right side of Fig. 5). By definition it is only possible to find HH lines in a category IV case.
In table 1 the compatibility of the four categories with GW170817, in dependence on the effective nucleon mass \\(m^{*}/m\\), is broadly summarized. In the table category I is written down as I/0, where 0 means "no category". The "x" symbol marks cases where a phase transition fulfills the \\(2M_{\\odot}\\) constraint, but no combinations of neutron stars are located within the LIGO credibility limit. The "y" symbol marks cases where any combination is located within the credibility limit. The "o" is used when the most compact pairs are directly at the credibility limit. A phase transition of category I does not change the compatibility of any of the hadronic EoSs with the GW170817 data, which means that only the cases \\(0.65\\leq m^{*}/m\\leq 0.75\\) are within the credibility limit, with \\(m^{*}/m=0.65\\) at its very border.

Figure 2: A phase transition of the category II type is depicted; the parameters are written behind the corresponding values of \\(m^{*}/m\\) in the order (\\(p_{trans}/\\Delta\\epsilon\\)) in units of MeV/fm\\({}^{3}\\). The NN combinations are identical to the pure case, due to the late phase transitions. NH combinations close to the axis can be found for all cases. However, the NH combinations are not closer to the LIGO limit than the NN combinations.

Figure 3: Category III phase transitions with the parameters \\(p_{trans}=43\\,\\)MeV/fm\\({}^{3}\\) and \\(\\Delta\\epsilon=350\\,\\)MeV/fm\\({}^{3}\\) are depicted. There are fewer NN combinations than in the pure case (see Fig. 1), since the neutron star branch in the mass-radius relation contains fewer stars. However, the remaining NN combinations do not change their position. The NH and HH combinations move further into the LIGO credibility limit or closer to it. The case \\(m^{*}/m=0.65\\) is a special case, where a single EoS exhibits possible NN, NH and HH combinations.

Figure 4: The special case for every considered effective mass is depicted. The phase transition is located at the mass necessary to generate HH, NH and NN combinations for every value of \\(m^{*}/m\\), which in the case of GW170817 is roughly \\(1.4M_{\\odot}\\). The parameters are written in the legend in the order (\\(p_{trans}/\\Delta\\epsilon\\)) in units of MeV/fm\\({}^{3}\\).

Figure 5: Cases of category IV phase transitions are depicted; the parameters are written behind the corresponding values of \\(m^{*}/m\\) in the order (\\(p_{trans}/\\Delta\\epsilon\\)) in units of MeV/fm\\({}^{3}\\). Category IV is dominated by the EoS describing quark matter. As a result the second branch is extremely stiff and the effective mass has virtually no impact on the EoS. Since only hybrid stars can be combined with other hybrid stars, the possible combinations in the \\(\\Lambda_{1}\\)-\\(\\Lambda_{2}\\) plot on the right hand side are very close to each other, even though their mass-radius relations (on the left) appear to be very different.
The same is true for a category II phase transition. However, it is important to stress that only effective nucleon masses of \\(m^{*}/m\\leq 0.70\\) can be realized with a category I or II phase transition. The \\(m^{*}/m=0.75\\) case is too soft to generate a stable second branch at the high values of \\(p_{trans}\\) required for the first two categories.
A phase transition of category III can lead to NH and HH combinations within the LIGO credibility limit for the cases \\(m^{*}/m\\leq 0.70\\). This means that the case \\(m^{*}/m=0.70\\) is the only one that can generate NN and NH pairs that are completely within the credibility limit. For the \\(m^{*}/m=0.75\\) case a phase transition that generates a stable second branch can still be found. However, this configuration cannot be considered a category III case, as the second branch cannot reach \\(2M_{\\odot}\\).
It is only possible to find a \\(m^{*}/m=0.75\\) case that generates a second branch with a maximal mass exceeding \\(2M_{\\odot}\\) if the first branch has its maximum below \\(1M_{\\odot}\\). This means that all cases of \\(m^{*}/m\\) can generate a stable second branch in the form of a category IV phase transition. Category IV phase transitions generate only HH combinations; these combinations are very compact and as a result all examined cases of \\(m^{*}/m\\) are within the 50% credibility limit.
However, due to the early phase transition the influence of \\(m^{*}/m\\) on the mass-radius relation is negligible. As a result no meaningful statement about the influence of the effective nucleon mass on a category IV phase transition can be made.
## IV A NICER view on twin stars
The recently released mass and radius measurements of the pulsar J0030+0451 by the NICER program [48; 49; 50] can be used to constrain the EoSs discussed previously. NICER measures neutron star radii by observing hotspots on the pulsar's surface. Depending on the model used to place these hotspots, two different masses and radii are determined. Riley et al. find a mass of \\(1.34^{+0.15}_{-0.16}M_{\\odot}\\) with a radius of \\(12.71^{+1.14}_{-1.19}\\)km [48], while Miller et al. find \\(1.44^{+0.15}_{-0.14}M_{\\odot}\\) with a radius of \\(13.02^{+1.24}_{-1.06}\\)km [49]. The compactness is determined more precisely and in both cases given as \\(MG/Rc^{2}=0.16\\pm 0.01\\).
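Since the compactness is the better-constrained quantity, the implied radius for a given mass follows directly from \\(R=GM/(Cc^{2})\\); a short sketch using the central values quoted above:

```python
M_SUN_KM = 1.4766  # G * M_sun / c^2 in km

def radius_from_compactness(mass_msun, compactness):
    """R = G M / (C c^2) for compactness C = GM/(R c^2)."""
    return mass_msun * M_SUN_KM / compactness

# Central NICER values: M = 1.34 M_sun, C = 0.16 +/- 0.01
for c in (0.15, 0.16, 0.17):
    print(round(radius_from_compactness(1.34, c), 2))  # → 13.19, 12.37, 11.64 (km)
```

The resulting radii are consistent with the quoted \\(12.71^{+1.14}_{-1.19}\\)km interval.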
In Fig. 6 a sample of category III EoSs is depicted, with the constraints from NICER shaded gray and the \(2M_{\odot}\) constraint from J0740+6620 [44] shaded green. We find that for the pure hadronic cases all considered effective masses generate neutron stars that fit within the mass-radius range determined by Miller et al. [49]. The cases \(m^{*}/m\geq 0.60\) are within the range determined by Riley et al. [48] as well; only the \(m^{*}/m=0.55\) case is outside that range.
By definition, only category II - IV phase transitions can support hybrid stars that fulfill the NICER constraints; category I has to meet the constraints with its hadronic branch. However, category II phase transitions generate massive hybrid stars, which usually lie at higher masses than the constraint as well. Category III phase transitions take place in a mass range that is within the NICER likelihood. As a result, hybrid stars and pure hadronic stars that fit within the constraint can be found for all effective nucleon masses. The black straight lines in Fig. 6 indicate the maximum of the hadronic branch. The lowest maximal masses are generated by the lowest transitional pressures of the respective cases. The \(m^{*}/m=0.75\) case cannot reach the \(2M_{\odot}\) constraint, as mentioned previously; however, the pure hadronic \(m^{*}/m=0.75\) case fits well with the Riley et al. mass and radius data [48].
Due to the comparatively small uncertainty in radius, category IV phase transitions that generate neutron stars within the constraints from either Riley et al. or Miller et al. are impossible to find. The hadronic branch ends before the minimal mass is reached, and the hybrid star branch would be located at smaller radii than required. This behavior can be seen for the earliest phase transitions of the category III examples in Fig. 6 as well. Therefore we can state that a strong phase transition is only compatible with the NICER constraints if the maximal mass of the hadronic branch is greater than the minimal mass of the NICER measurement. This can be related to the transitional pressure and density: we find that strong phase transitions are not viable for densities below \(n\lesssim 1.7\,n_{0}\). We consider phase transitions "strong" if \(\Delta\epsilon\geq 350\)MeV/fm\({}^{3}\); this is the lowest value of the discontinuity in energy density that generates a visible difference of about \(0.1M_{\odot}\) between the hadronic maximum and the hybrid star minimum for category IV cases. We used the explicit radii from Riley et al. [48] and Miller et al. [49] instead of the corresponding likelihood ellipses. When considering the \(2\sigma\) likelihood ellipses [50] the constraints become weaker; however, a phase transition with parameters \(n\lesssim 1.4n_{0}\) and \(\Delta\epsilon\geq 350\)MeV/fm\({}^{3}\) is still outside the \(2\sigma\) likelihood constraints from the NICER measurement.

\begin{table}

\begin{tabular}{|c|c|c|c|c|c|} \hline Category & 0.55 & 0.60 & 0.65 & 0.70 & 0.75 \\ \hline I/0 & x & x & o & y & y \\ II & x & x & o & y & n.a. \\ III & y & y & y & y & n.a. \\ IV & y & y & y & y & y \\ \hline \end{tabular}

\end{table}
Table 1: Cases of \(m^{*}/m\) supported by the tidal deformability observation. An "o" denotes cases at the 90% credibility limit, "y" below, and "x" above.
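The viability criterion just stated can be transcribed into a small bookkeeping function. This is a minimal sketch, not code from the paper; the function name is hypothetical, and the thresholds (350 MeV/fm\({}^{3}\), \(1.7\,n_{0}\)) are taken verbatim from the text.

```python
def strong_transition_viable(n_trans_over_n0, delta_eps_mev_fm3):
    """Encode the criterion stated in the text: a transition is 'strong'
    if delta_epsilon >= 350 MeV/fm^3, and a strong transition is
    compatible with the NICER radii only if it occurs at densities of
    at least ~1.7 n0 (density given in units of saturation density)."""
    strong = delta_eps_mev_fm3 >= 350.0
    return (not strong) or (n_trans_over_n0 >= 1.7)

print(strong_transition_viable(1.5, 400.0))  # False: strong and too early
print(strong_transition_viable(2.0, 400.0))  # True: strong but late enough
print(strong_transition_viable(1.5, 200.0))  # True: not a strong transition
```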
## V Conclusion
By employing a parameterized relativistic mean field equation of state, we explored how the stiffness of a hadronic EoS influences the tidal deformability of an EoS featuring a phase transition from hadronic to quark matter. These phase transitions are chosen to represent the four categories of twin stars [18]. As stated in our previous work [38], an EoS with a phase transition can generate hybrid-hybrid, neutron-hybrid and neutron-neutron combinations, depending on the location of the phase transition in the mass-radius relation. We consider effective nucleon masses from \(m^{*}/m=0.75\) to \(m^{*}/m=0.55\), where a larger effective mass corresponds to a softer EoS. Like Hornick et al. [32], we find that only the pure hadronic cases for \(m^{*}/m\geq 0.65\) are compatible with the GW170817 data [56]. The \(m^{*}/m=0.75\) case is too soft to generate a stable second branch that fulfills the requirement for a category I, II or III phase transition. Future measurements of neutron stars with masses above \(2M_{\odot}\) might cause similar problems for the \(m^{*}/m=0.70\) case. At the time of this work the maximal masses of the \(m^{*}/m=0.70\) category II and III phase transitions are still within the margin of error of the most massive known neutron stars [41; 42; 43; 44].
Due to the higher compactness of hybrid stars, the cases \(m^{*}/m\leq 0.70\) can generate neutron star pairs from a category III phase transition deeper within the credibility limit than they could without a phase transition. This means that even the cases \(m^{*}/m=0.55\) and \(m^{*}/m=0.60\), which on their own are too stiff to allow for pairs of neutron stars with sufficiently small values of tidal deformability, can generate combinations within the credibility limit. The benefits of a phase transition with regard to an EoS's compatibility with the LIGO data have been shown before [36; 37; 38; 39].
Only category IV can be realized for all examined values of \(m^{*}/m\); this is because the resulting EoSs can be considered independent of \(m^{*}/m\) due to the early phase transition. However, the recent results from NICER [50] are incompatible with a category IV phase transition and furthermore exclude strong phase transitions at densities of \(n\lesssim 1.7\,n_{0}\), where a visible jump in mass of \(\Delta M\geq 0.1M_{\odot}\) at the point of transition occurs.
###### Acknowledgements.
The authors thank Andreas Zacchi for helpful discussions. JS acknowledges support from the Helmholtz International Center for FAIR (HIC for FAIR). JEC is a recipient of the Carlo and Karin Giersch Scholarship of the Giersch foundation.
## References
* Ivanenko and Kurdgelaidze [1965]D. D. Ivanenko and D. F. Kurdgelaidze, Astrophys. **1**, 251 (1965).
* Itoh [1970]N. Itoh, Prog.Theor.Phys. **44**, 291 (1970).
* Alford _et al._ [2005]M. Alford, M. Braby, M. Paris, and S. Reddy, Astrophys.J. **629**, 969 (2005), arXiv:nucl-th/0411016 [nucl-th].
* Coelho _et al._ [2010]J. Coelho, C. Lenzi, M. Malheiro, J. Marinho, R.M., and M. Fiolhais, Int.J.Mod.Phys. **D19**, 1521 (2010), arXiv:1001.1661 [nucl-th].
* Chen _et al._ [2011]H. Chen, M. Baldo, G. Burgio, and H.-J. Schulze, Phys.Rev. **D84**, 105023 (2011), arXiv:1107.2497 [nucl-th].
* Masuda _et al._ [2013]K. Masuda, T. Hatsuda, and T. Takatsuka, Astrophys. J. **764**, 12 (2013), arXiv:1205.3621 [nucl-th].
* Yasutake _et al._ [2014]N. Yasutake, R. Lastowiecki, S. Benic, D. Blaschke, T. Maruyama, _et al._, Phys.Rev. **C89**, 065803 (2014), arXiv:1403.7492 [astro-ph.HE].
* Zacchi _et al._ [2016]A. Zacchi, M. Hanauske, and J. Schaffner-Bielich, Phys. Rev. **D93**, 065011 (2016), arXiv:1510.00180 [nucl-th].
* Kampfer [1981]B. Kampfer, J.Phys. **A14**, L471 (1981).
Figure 6: Mass-radius relations for category III phase transitions are depicted for all considered effective nucleon masses. The constraints for the J0030+0451 measurement by NICER are taken from refs. [48; 49] and are shaded gray. The \\(2M_{\\odot}\\) constraint from J0740+6620 [44] is shaded green. All cases of \\(m^{*}/m\\) can generate neutron stars and hybrid stars within the NICER likelihood, if the transition parameters are chosen accordingly. The case \\(m^{*}/m=0.75\\) does not meet the \\(2M_{\\odot}\\) constraint if a phase transition takes place. The black straight lines indicate the maximal mass of the hadronic branch for the respective transitional pressure.
* Glendenning and Kettner (2000)N. K. Glendenning and C. Kettner, Astron. Astrophys. **353**, L9 (2000), astro-ph/9807155.
* Schertler _et al._ (2000)K. Schertler, C. Greiner, J. Schaffner-Bielich, and M. H. Thoma, Nucl. Phys. **A677**, 463 (2000), astro-ph/0001467.
* Schaffner-Bielich _et al._ (2002)J. Schaffner-Bielich, M. Hanauske, H. Stocker, and W. Greiner, Phys. Rev. Lett. **89**, 171101 (2002), astro-ph/0005490.
* Zdunik and Haensel (2013)J. Zdunik and P. Haensel, Astron.Astrophys. **551**, A61 (2013), arXiv:1211.1231 [astro-ph.SR].
* Alford _et al._ (2015)M. G. Alford, G. F. Burgio, S. Han, G. Taranto, and D. ZappalA, Phys. Rev. **D92**, 083002 (2015), arXiv:1501.07902 [nucl-th].
* Blaschke and Alvarez-Castillo (2016)D. Blaschke and D. E. Alvarez-Castillo, _Proceedings, 11th Conference on Quark Confinement and the Hadron Spectrum (Confinement XI): St. Petersburg, Russia, September 8-12, 2014_, AIP Conf. Proc. **1701**, 020013 (2016), arXiv:1503.03834 [astro-ph.HE].
* Zacchi _et al._ (2017)A. Zacchi, L. Tolos, and J. Schaffner-Bielich, Phys. Rev. **D95**, 103008 (2017), arXiv:1612.06167 [astro-ph.HE].
* Alford and Sedrakian (2017)M. G. Alford and A. Sedrakian, Phys. Rev. Lett. **119**, 161104 (2017), arXiv:1706.01592 [astro-ph.HE].
* Christian _et al._ (2018)J.-E. Christian, A. Zacchi, and J. Schaffner-Bielich, Eur. Phys. J. **A54**, 28 (2018), arXiv:1707.07524 [astro-ph.HE].
* Blaschke _et al._ (2020)D. Blaschke, D. E. Alvarez-Castillo, A. Ayriyan, H. Grigorian, N. K. Lagrani, and F. Weber (2020) pp. 207-256, arXiv:1906.02522 [astro-ph.HE].
* Bodmer (1971)A. R. Bodmer, Phys. Rev. D **4**, 1601 (1971).
* Haensel _et al._ (1986)P. Haensel, J. L. Zdunik, and R. Schaeffer, Astron. Astrophys. **160**, 121 (1986).
* Alcock _et al._ (1986)C. Alcock, E. Farhi, and A. Olinto, Astrophys. J. **310**, 261 (1986).
* Fraga _et al._ (2002)E. S. Fraga, R. D. Pisarski, and J. Schaffner-Bielich, Nucl. Phys. **A702**, 217 (2002), nucl-th/0110077.
* Zacchi _et al._ (2015)A. Zacchi, R. Stiele, and J. Schaffner-Bielich, Phys. Rev. **D92**, 045022 (2015), arXiv:1506.01868 [astro-ph.HE].
* Johnson and Teller (1955)M. H. Johnson and E. Teller, Phys. Rev. **98**, 783 (1955).
* Duerr (1956)H.-P. Duerr, Phys. Rev. **103**, 469 (1956).
* Walecka (1974)J. D. Walecka, Ann. Phys. (N.Y.) **83**, 491 (1974).
* Boguta and Bodmer (1977)J. Boguta and A. R. Bodmer, Nucl. Phys. **A292**, 413 (1977).
* Serot and Walecka (1986)B. D. Serot and J. D. Walecka, Adv. Nucl. Phys. **16**, 1 (1986).
* Mueller and Serot (1996)H. Mueller and B. D. Serot, Nucl. Phys. **A606**, 508 (1996), arXiv:nucl-th/9603037 [nucl-th].
* Typel _et al._ (2010)S. Typel, G. Ropke, T. Klahn, D. Blaschke, and H. H. Wolter, Phys. Rev. **C81**, 015803 (2010), arXiv:0908.2344 [nucl-th].
* Hornick _et al._ (2018)N. Hornick, L. Tolos, A. Zacchi, J.-E. Christian, and J. Schaffner-Bielich, Phys. Rev. **C98**, 065804 (2018), arXiv:1808.06808 [astro-ph.HE].
* Abbott _et al._ (2017)B. P. Abbott _et al._ (Virgo, LIGO Scientific), Phys. Rev. Lett. **119**, 161101 (2017), arXiv:1710.05832 [gr-qc].
* Annala _et al._ (2018)E. Annala, T. Gorda, A. Kurkela, and A. Vuorinen, Phys. Rev. Lett. **120**, 172703 (2018), arXiv:1711.02644 [astro-ph.HE].
* Bauswein _et al._ (2017)A. Bauswein, O. Just, H.-T. Janka, and N. Stergioulas, Astrophys. J. **850**, L34 (2017), arXiv:1710.06843 [astro-ph.HE].
* Paschalidis _et al._ (2018)V. Paschalidis, K. Yagi, D. Alvarez-Castillo, D. B. Blaschke, and A. Sedrakian, Phys. Rev. **D97**, 084038 (2018), arXiv:1712.00451 [astro-ph.HE].
* Alvarez-Castillo _et al._ (2019)D. E. Alvarez-Castillo, D. B. Blaschke, A. G. Grunfeld, and V. P. Pagura, Phys. Rev. **D99**, 063010 (2019), arXiv:1805.04105 [hep-ph].
* Christian _et al._ (2019)J.-E. Christian, A. Zacchi, and J. Schaffner-Bielich, Phys. Rev. **D99**, 023009 (2019), arXiv:1809.03333 [astro-ph.HE].
* Montana _et al._ (2019)G. Montana, L. Tolos, M. Hanauske, and L. Rezzolla, Phys. Rev. **D 99** (2019), arXiv:1811.10929 [astro-ph.HE].
* Sieniawska _et al._ (2019)M. Sieniawska, W. Turczanski, M. Bejger, and J. L. Zdunik, Astron. Astrophys. **622**, A174 (2019), arXiv:1807.11581 [astro-ph.HE].
* Demorest _et al._ (2010)P. Demorest, T. Pennucci, S. Ransom, M. Roberts, and J. Hessels, Nature **467**, 1081 (2010), arXiv:1010.5788 [astro-ph.HE].
* Antoniadis _et al._ (2013)J. Antoniadis, P. C. Freire, N. Wex, T. M. Tauris, R. S. Lynch, M. H. van Kerkwijk, M. Kramer, C. Bassa, V. S. Dhillon, T. Driebe, J. W. T. Hessels, V. M. Kaspi, V. I. Kondratiev, N. Langer, T. R. Marsh, M. A. McLaughlin, T. T. Pennucci, S. M. Ransom, I. H. Stairs, J. van Leeuwen, J. P. W. Verbiest, and D. G. Whelan, Science **340**, 6131 (2013), arXiv:1304.6875 [astro-ph.HE].
* Fonseca _et al._ (2016)E. Fonseca _et al._, Astrophys. J. **832**, 167 (2016), arXiv:1603.00545 [astro-ph.HE].
* Cromartie _et al._ (2019)H. T. Cromartie _et al._, (2019), 10.1038/s41550-019-0880-2, arXiv:1904.06759 [astro-ph.HE].
* Boguta and Stocker (1983)J. Boguta and H. Stocker, Physics Letters B **120**, 289 (1983).
* Yasin _et al._ (2018)H. Yasin, S. Schafer, A. Arcones, and A. Schwenk, (2018), arXiv:1812.02002 [nucl-th].
* Alford _et al._ (2014)M. G. Alford, S. Han, and M. Prakash, JPS Conf.Proc. **1**, 013041 (2014).
* Riley _et al._ (2019)T. E. Riley _et al._, Astrophys. J. Lett. **887**, L21 (2019), arXiv:1912.05702 [astro-ph.HE].
* Miller _et al._ (2019)M. C. Miller _et al._, Astrophys. J. Lett. **887**, L24 (2019), arXiv:1912.05705 [astro-ph.HE].
* Raaijmakers _et al._ (2019)G. Raaijmakers _et al._, Astrophys. J. Lett. **887**, L22 (2019), arXiv:1912.05703 [astro-ph.HE].
* Todd-Rutel and Piekarevicz (2005)B. G. Todd-Rutel and J. Piekarevicz, Phys. Rev. Lett. **95**, 122501 (2005), arXiv:nucl-th/0504034 [nucl-th].
* Chen and Piekarevicz (2014)W.-C. Chen and J. Piekarevicz, Phys. Rev. **C90**, 044305 (2014), arXiv:1408.4159 [nucl-th].
* Horowitz and Piekarevicz (2001)C. J. Horowitz and J. Piekarevicz, Phys. Rev. **C64**, 062802 (2001), nucl-th/0108036.
* Drischler _et al._ (2016)C. Dris | We explore the connection between the stiffness of an hadronic equation of state (EoS) with a sharp phase transition to quark matter to its tidal deformability. For this we employ a hadronic relativistic mean field model with a parameterized effective nucleon mass to vary the stiffness in conjunction with a constant speed of sound EoS for quark matter. We compute multiple scenarios with phase transitions according to the four possible cases of a hybrid star EoS with a stable second branch. We demonstrate at the example of GW170817 how the effective nucleon mass can be constrained by using gravitational wave data. We find, that certain values of the effective nucleon mass are incompatible with GW170817 and a phase transition simultaneously. By using the recent NICER measurements of J0030+0451 at the \\(1\\sigma\\) level we constrain our results further and find that strong phase transitions with a visible jump in the mass-radius relation are ruled out at densities below 1.7 times saturation density. | Write a summary of the passage below. | 204 |
arxiv-format/1304_1582v2.md | Interview with Warren Wiscombe on scientific programing and his contributions to atmospheric science tool making.
Piotr J. Flatau
This report can be cited as:
Flatau, P. J. 2013, Interview with Warren Wiscombe on scientific programming and his contributions to atmospheric science tool making, [http://arxiv.org/abs/1304.1582](http://arxiv.org/abs/1304.1582)
## Light scattering
[Piotr J. Flatau] You know Warren I still have a folder of your publications which I started to collect when I was a student at Colorado State University. You are one of a few researchers whose papers I've really read and I was studying your codes. I divided the talk into several sections: light scattering, radiative transfer and general issues. I do not think that we will be able to go over all of them. But let us begin. In January of 1973 you wrote a letter to Colonel John Perry. In this letter you say that it would be beneficial to provide Mie calculations in the form of tables which would be accessible to general community. Before you van de Hulst and others struggled with Mie calculations. In that time it was a difficult problem. I would like you to set the stage and describe the status of light scattering calculations in the 1970s. I would like people to understand how heroic it was at that time.
January 19, 1973
AIR MAIL
Colonel John Perry
DARPA/IPT
1400 Wilson Boulevard
Arlington, VA 22209
Dear Colonel Perry:
There are a large number of research groups in this
country (many of whom participate in the DARPA network)
who perform Mie scattering calculations. There is great
interest in problems of radiative transfer through aero-
sols and clouds, both in the technical community and in
the community at large, because of their potential climatic
impact (SCEP report, SMIC report, etc.). Therefore, the
quantity of Mie calculations performed in the future will
most likely escalate.
This would be of little concern if such calculations
were inexpensive, but, in fact, Mie computations are
notorious for the quantity of computer time which they
gobble up. There is, therefore, a tremendous wastage of
computing funds (almost entirely at government expense)
when, as is the case, identical computations are performed
at many different facilities.
This waste could, to a large extent, be alleviated
by providing central tables of Mie quantities available
throughout the DARPA network. We have made estimates of
the amount of data such tables would need to contain, and
it runs to many billions of words, far beyond the storage
capacity of current-generation computers. Probably the
only feasible place for such tables to reside would be the
laser store of ILLIAC IV. I believe, after numerous conver-
sations with people in the field, that such tables would be
heavily and productively used, for Mie computations are now
so standardized that there is no glory for a researcher in
writing his own code - it is merely a chore, and a costly one as far as the government is concerned.
I understand that ILLIAC IV is proceeding considerably behind schedule. However, I do not believe that preliminary studies on the creation of Mie tables are premature. A consensus from authorities in the field on the exact structure of such tables will be needed, and the computer code to generate the tables can be written in advance of the actual availability of ILLIAC IV. Naturally, since we have made numerous Mie computations ourselves and are familiar with the radiation community, we believe that we are eminently qualified to undertake the task.
If this idea interests you, we would be happy to discuss it further by telephone.
Cheers,
Warren J. Wiscombe
Environmental Sciences Dept.
WJW:prx
CC: Dr. Lawrence Roberts
Director, IPT
[Warren Wiscombe] Right, Perry was the head of ARPA climate dynamics program. It is amazing that you've got this. I am not sure I have it.
**You do.**
Of course the community was very small and everyone used Dave's codes in that time. He was an odd bird at the IBM Research Lab, in the days when IBM would support a lot of research at their lab. We were happy to use Dave's codes at the beginning. They were certainly better than anything available; they were the only codes available. He did a very careful job of looking at where you cut off the series that you are summing, to make sure that you have accurate results and so forth. Over the years that I used his codes I kept finding areas where, when you poked too hard, the code would break down. I kept a list of these things and I fixed them as time allowed. I developed a log of fixes of the Dave code. His coding was opaque at best, although typical of the times. I tried to develop a more flowing style that was elegant and well documented both inside and outside the code. Dave's code was well documented outside of the code, he published IBM reports, and I followed his example in that respect for both Mie codes and DISORT. But it reached the point when I felt that the Mie code he distributed was simply not adequate for the kind of problems that we were attacking. It wasn't that it was slow, computers were slow in those days, and you could spend hours doing Mie calculations when you integrated over the drop sizes in the cloud, for example. It was simply that sometimes he summed his series too far and he would add terms that would be wrong. The Mie series is a kind of weird series. It doesn't actually converge. It converges, and then diverges when you keep going, because the calculations you use to calculate some of the functions there, some of the Bessel functions, start to diverge. One has to cut it off very precisely. In fact one of the results people love the most of the improvements I made was that I said that the Mie code should stop at x + 4x^(1/3). That result propagated around the world. Everybody was using it.
In fact, it was from some work that Nussenzweig had done.
Report No. 320-3237, May 1968
SUBROUTINES FOR COMPUTING THE PARAMETERS OF THE ELECTROMAGNETIC RADIATION SCATTERED BY A SPHERE
J. V. Dave
IBM Scientific Center, Palo Alto, California
"A listing of these subroutines along with sample program outputs is provided in the appendices."
**Yes, I remember. After you there was a book by Bohren and Huffman and they used it.**
It became a universal criterion. It was one of many improvements I made. Some of them were smaller, less noticeable, but they were important. Like, Dave's code did not work as the refractive index approached one, and yet bacteria were in that regime. You did not want to have code which failed in that regime if you were a biologist. That is the story of the Mie code. He published his in 1969 and I published mine in 1979, both as a paper and as a report, and I tried to use the ideas and scientific software of the time and a little bit of software engineering, although that came around much more strongly later. I became aware of this vast software engineering literature that wasn't all stupid. Scientists have a tendency to think that it is stupid, that they are just twiddling knobs, and it is not really important, but actually I found a number of very important ideas in that community and I brought them back. But I think that in the time of the Mie code, say in 1979, I hadn't brought in a lot of these ideas. I kind of developed my own style. Things were well documented, there were a lot of comments, the code writing was not opaque, and one could pretty much follow it. I was on my way towards the style that I eventually perfected.
**Yes I remember. I also remember seeing a huge book in the library, it was newspaper size.**
Yeah. These were Coulson [12] tables for Rayleigh scattering including polarization. Those were enormous. The Deirmendjian book was smaller, but it had tables of phase functions. Everybody was using phase function Haze L and Cloud C.1; or they used the Henyey-Greenstein function which came from astrophysics. Nobody paid much attention to Mie calculations simply because they were too damn expensive. If you put in a realistic drop distribution with cloud drops going up to 30 microns in size, you would be looking at 1/3 to 1 hour of computer time to get Mie properties, for 200 wavelengths in shortwave. It was serious computation. Now, you snap your fingers, that is how fast it is done. Another thing about Mie computations was resonances, those spikes that occur. Those were also scary because if you happened to land on one it would really distort your integral over the sizes. Of course, you can hardly avoid landing on resonances because they are all over the place. People would do these integrations over the drop sizes and would then double the number of points for the quadrature and they would get different results, and they would quadruple the number of points and get another result, and so on and so on, and it would never converge. The reason you do not converge is that you land on resonances, maybe not right on the peak but on the shoulder. It was very maddening for me. I remember tearing up sheets of computer paper. We never got it to converge properly, not in any mathematician's sense.
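The Henyey-Greenstein function mentioned here is simple enough to write down and check numerically. The sketch below is illustrative only; g = 0.85 is just a typical strongly forward-peaked asymmetry value, not a number from the interview, and the normalization (integral over all solid angle equal to one) is one common convention.

```python
import math

def henyey_greenstein(mu, g):
    """Henyey-Greenstein phase function p(mu), normalized so that its
    integral over all solid angle equals one.  mu is the cosine of the
    scattering angle and g the asymmetry parameter."""
    return (1.0 - g * g) / (4.0 * math.pi * (1.0 + g * g - 2.0 * g * mu) ** 1.5)

g = 0.85          # illustrative, strongly forward-peaked value
n = 200001        # trapezoid points over mu in [-1, 1]
h = 2.0 / (n - 1)
total = 0.0
for i in range(n):
    mu = -1.0 + i * h
    w = 0.5 if i in (0, n - 1) else 1.0
    total += w * henyey_greenstein(mu, g)
total *= 2.0 * math.pi * h   # azimuthal factor 2*pi times the step size
print(f"integral over solid angle = {total:.4f}")  # close to 1
```

Unlike a Mie phase function, this analytic form has no resonances, which is part of why it was so convenient before Mie calculations became cheap.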
**Mie Scattering Calculations: Advances in Technique and Fast, Vector-Speed Computer Codes**, Warren J. Wiscombe
**This was all at NCAR?**
Yes, it was all done at NCAR. It was all a sideline activity for me. I was really beating on the doubling method and calculations such as delta-Eddington at that time.
**What was the NCAR environment as far as supporting coding?**
NCAR was like a dream. It was what you wish scientific institutions were like. It is not like that anymore. You had your freedom; you could do what you wanted. There were all these facilities available, including this CRAY computer which we were all mesmerized by because we could vectorize our code and speed it up by a factor of seven. We all learned how to write a code so it would vectorize properly.
**Did you have help at NCAR who would help you programming or it was you who were the leader in that time?**
Certainly the people who were supporting the core NCAR software were aware of these issues. But I think rather dimly. I was not their leader. But I think I was leading in terms of ideas, in the level of detail that you need to go into to document scientific software properly. It is not enough to just list variable names. I would add long paragraphs of text into documentation and say - look, here is where you can get into trouble, don't worry about these variables. I tried to give some insight, so people were not just swimming with the sharks; they actually had some guidance to do something about the code. That was what was missing in those days. The kind of documentation they were doing was cold and sterile; it didn't have any human touch to it. It took big modelers a long time to get into documenting their codes. They simply felt that they were changing the code all the time, so there was no point in documenting anything. I later realized that there was just a total disconnect with software engineering. In software engineering of course, whether it is a large code such as those which run a telephone system or a small one - these codes are changed too, but they still are documented and they have configuration control and they are very careful about changes. The more frequently you change the code the longer it will take to perfect it. In those days modelers did not believe in it or know about it. Everybody was madly typing at key punch machines, changing their computer cards all the time. Models were moving targets, so why document them? It wasn't as easy to document in those days either. You had to have your secretary type up the documentation. You could punch cards and the cards would serve as some kind of documentation. It was a different world. Eventually big modelers came around, and I credit them, even though their realization came late; by the early 90's they finally got the message that you can't just have a Wild West attitude.
You really need to be careful writing scientific software, you have to document it properly, you have to configure and control it, and you can't just randomly make changes to it when you find a bug. Even when you find a bug, don't just go and fix it because if you find one, you will find another one. The philosophy was, oh gosh I have found the bug and if I fix it, everything will be wonderful. But the software engineers were right often there were more bugs, more subtle ones, so they go over this endless cycle of core dumps, looking for the error and fixing another bug. It did not occur to them that there was a more rational way to develop scientific programs.
**You wrote the NCAR tech memo in 1979 about MIEV0 and MIEV1, revised it in 1996, and these programs are still widely used. There are 700-800 references to one of your papers on the Mie code alone. In that report you list several goals: maximum speed, generality, reliability, avoidance of numerical instability, portability, accuracy, and being as simple and straightforward as possible. This was in 1979. Which of these do you still find appealing, and why? To what extent was "elegance" of the program important in your work?**
Did I write it? It is beautiful. I think I know what you are driving at. It is certainly unique to me. I migrated to the field from applied mathematics and I had a mathematician's take on things. Mathematicians are big on elegance. The big accomplishment is to prove something in fewer steps. I felt that those same ideas can apply to computer programs and I did the best I could to make them simple and elegant. It was not something which was around at the time. If I said something like that - people would laugh. Computer programs were just a means to an end. Not something beautiful in themselves. I never took that attitude. I have always thought - make them as beautiful as you can. It was my background which made me trend towards elegance. Not often something you hear. Software engineers do talk about it, maybe not in such reverential tones as I do.
**Do you think that elegance helps programing? Makes it less cumbersome to debug the program?**
Yes definitely. Often I would make cosmetic changes to programs. At least in my view it would make the program more readable, more robust against people introducing errors into it. This is always the danger that I was aware of. People can take your lines of code and introduce errors and I was always aware of it. I was aware that maybe if you make it simple enough maybe they will not introduce errors when they start fiddling with it. To me this was something deeper.
**I have asked you also about maximum speed, generality, reliability, avoidance of numerical instability, portability, accuracy. I have a reason for it. We are often driven by hardware, in that time it was vector processing.**
Maximum speed is not on my list anymore. I guess that reliability and robustness are the most important. I was very much influenced by the guy in software engineering who said that one has to try to break the software. He was very much of the philosophy that we are too soft on software. We should be tough on software and make it break, because if we do not try it we will never know how robust it is. For me reliability is very important. Elegance would be a secondary priority. If the code is reliable, well documented, and of course not having numerical instabilities, but that is almost a given.
My interest was to take given equations and to do the very best job with them. I wanted to produce the very best result for those equations. When I worked with Nussenzveig, part of the work was to undermine Mie calculations because he was saying that for larger particles one can use other formulas. But even there I was a stickler for detail. Getting Airy functions just right, getting Bessel functions just right. I would write my own routines. I would not necessarily rely on numerical libraries. I wanted all special functions tested and reliable and failure tested. I pushed them as far as I could and they did not break. Or maybe they would break and then I would fix them.
**What was your involvement in research with Nussenzveig and software for spheres with very large size parameter? What was the motivation for this research?**
I was frustrated. Mie scattering calculations for large particles were just horrendously expensive and most people just defaulted to geometric optics, which would kick in at a size parameter of 1000, although you can argue 500. Really there was a no man's land of size parameters between 100 and 1000 where the calculations were just bloody expensive if you were doing integration over sizes, so I was always on the lookout for ways to do that better. Just at that time, I became aware of a student of Nussenzveig named Khare. So I went to Bob Dickinson and said, let's invite this guy Khare. He said sure, invite him. When he arrived at NCAR he said - oh, by the way, my thesis advisor Nussenzveig is in the country visiting from Brazil. So I went to Bob Dickinson and he said - invite him too. We paid for their travel and the rest is history. In the 1970s at NCAR you had an almost unlimited travel budget; you only had to justify it to your branch chief. This was immensely productive and kept the atmospheric community tightly coupled.
Nussenzveig and I hit it off and I became his super programmer. I brought everything I knew about scientific programs. He needed that because he did not program at all. He was an ivory tower guy. He derived these long formulas, but he did not have a clue how to calculate them. He and I were just perfect for each other. I knew how to calculate stuff and he knew how to derive stuff. I was not entirely uninvolved in it; I would argue with him. Can we improve this, can we improve that? We had some interactions on derivations, but he was definitely the leader on that. I was entirely responsible for writing the programs and I was very careful. We produced a string of papers. The last one we did was on resonances, which brings us full circle, because he actually developed formulas which predicted where resonances were. You could avoid them if you want to. I am not sure whether calculating where they are would be so expensive that it would not be worth it, but in principle you could avoid them.
Wiscombe and Nussenzveig, NASA, 1999
**One of the theories which was used was called CAM - Complex Angular Momentum. Could you comment on this approximation?**
CAM theory was an offspring of a theory from the 1950s called Regge pole theory [Regge]. Nussenzveig took a look at Regge pole theory, which was a quantum mechanics theory, and he just grabbed it and dragged it into classical physics and applied it to Mie scattering problems. He also brought in some work from Debye's thesis [Debye]. Debye said that there is an alternative to the Mie expansion; you can expand in a different way. Nussenzveig learned to calculate each term of the Debye expansion in his new Complex Angular Momentum theory, and it was novel; everybody agrees. This is what made him such a great physicist. It was really cool. He married his own work with the work of Debye, which had just sat there. Van de Hulst mentioned it in his book, of course. But it kind of sat there, and he really expanded this work and made it really relevant. Now we were actually able to calculate terms in the Debye expansion, which converges very quickly because it amounts to an expansion in terms of multiple internal reflections, and there were these pesky surface-wave terms which we had to add in. Those were where most of the trouble was; above-edge and below-edge rays - they kind of skitter along the edge of the sphere. It was a very physical way of looking at scattering. The Mie series itself doesn't give you much physical insight except in the Rayleigh limit. The Debye expansion gives you quite a bit of insight, although when you get to surface waves you still feel like you are skating on thin ice. But they are there, you can measure them. It is a beautiful phenomenon.
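Schematically, and hedging that notation and normalization conventions differ among authors, the Debye expansion he describes rewrites each partial-wave scattering function as a direct external reflection plus a sum over rays that are transmitted into the sphere and undergo p-1 internal reflections before escaping:

```latex
S_\ell \;=\; R^{22}_\ell \;+\; \sum_{p=1}^{\infty} T^{21}_\ell \left(R^{11}_\ell\right)^{p-1} T^{12}_\ell ,
```

where region 2 denotes the exterior and region 1 the interior of the sphere, R and T are spherical reflection and transmission coefficients, the p = 1 term is direct transmission, p = 2 contains the primary rainbow, and so on. Each term has a ray interpretation, which is the physical insight the full Mie series lacks.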
**Was the CAM theory practically applicable?**
Definitely. I could not have done the large droplets paper without CAM theory.
**Oh, really.**
Even then, in the early 80's, Mie calculations were so expensive. I remember one night I used eight hours of CRAY time, and I was called on the carpet by Chuck Leith, who was the head of NCAR science at the time, and he asked, "What were you doing burning eight hours of CRAY time on Mie scattering calculations?" and I said: it just takes that much time because the particles are large and the series is long, and there are a lot of different sizes you have to integrate over. I don't think he was convinced. Anyway, it was hard to do large drops. By which I mean drops bigger than 10 microns (the favorite size of radiation people) but smaller than drizzle droplets. Sort of a no man's land of drop sizes. Those were very hard to calculate.
**Did you use it when you were working on "The effects of very large drops on absorption" paper? This is the first entry I have in my folder of your papers. It is in front of me.**
Yes. My earliest papers with Nussenzveig came out in the early 1980's and after that we collaborated on several other ones thanks to NASA Postdoctoral Fellowships for Nussenzveig at two different times. We even did a paper on bubbles, which you were interested in. Now they are coming back - people are studying bubbles in the ocean for geoengineering.
Nussenzveig's handwritten derivations.
**Oh, they are?**
Yes, people are talking about modifying clouds by making huge bubble clouds in the ocean, which eventually produce cloud condensation nuclei when they come to the surface and pop.
**Interesting. I actually wrote a paper about bubble clouds in which I classified them into bubble cumulonimbus and bubble stratocumulus, and I calculated their radiative properties. Anyway, let's go back to you. You put together a paper on "Scattering by Chebyshev particles" with Alberto Mugnai, who at that time was also in Fort Collins.**
Yes, and he was at NCAR for a while as a postdoc. He seemed bright to me. I was curious about the phenomena of non-spherical scattering and I realized that Mie theory was limited to spheres or near-spheres, and I thought: let us fly in the dark and try to adapt this Waterman's method, which at that time was not called T-matrix. It was called EBCM - the extended boundary condition method. How do I even remember it? Alberto was really good with computer code, like me. He basically took some old EBCM code, maybe got one from Waterman or maybe not. He developed it like crazy; he made it really good. We used this code a lot; we just beat it to death in all the work we did after that. I was really interested in the effect of concavity: what happens when particles are concave, are there trapping modes? Is the radiation preferentially absorbed when there are these concavities? What is going on? Maxwell's equations ought to be able to provide an answer to that. It turned out to be devilishly difficult, because if you make particles too concave the EBCM method doesn't converge. You realize that it is probably not a convergent series. It is an asymptotic series: a series which is good when you sum it up to a certain point, but gets worse when you sum beyond that point. We never could do anything too concave, but we did manage to get mild concavity using this Chebyshev shape. The Chebyshev thing came from the fact that I was an applied mathematician and I loved special functions. I looked at my Abramowitz and Stegun [AbramowitzStegun], looked around for a function which would make sense to apply to non-spherical particles, and I said: let us do the Chebyshev thing. This is how it happened.
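The Chebyshev shape he refers to is conventionally written as a perturbed sphere, r(theta) = r0 [1 + eps T_n(cos theta)], with T_n a Chebyshev polynomial of the first kind. A minimal sketch (the function name and default parameters are mine, purely illustrative):

```python
import math

def chebyshev_radius(theta, r0=1.0, eps=0.1, n=4):
    """Radius of a Chebyshev particle of order n and deformation eps.

    r(theta) = r0 * (1 + eps * T_n(cos(theta))),
    where T_n is the Chebyshev polynomial of the first kind.
    """
    # By the defining identity, T_n(cos(theta)) = cos(n * theta).
    return r0 * (1.0 + eps * math.cos(n * theta))
```

The waviness of the surface is controlled by eps; pushing eps too far produces the strong concavities for which, as he notes, the EBCM expansion stops converging.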
**Very recently I have used your Chebyshev ideas. I worked with a student at Scripps Institution of Oceanography on modeling of internals of a realistic biological cell. I remembered your Chebyshev particles and I told him to use it. It was like a year ago. I gave him your paper.**
Oh, that's amazing.
**How long did your love affair with non-spherical particles last?**
It went on for over 10 years. The reason was partly personal. Alberto and I got along quite well and I really enjoyed going to Italy. At the time I did not have anybody funding this research. Alberto managed to provide the money. He paid all my travel and NASA said go. I would go over there for like a month and we would do a paper together. He would make me teach something when I was there. One time when I went over I taught scientific software, which relates to our conversation. I said to him that I was just bored with all that Mie scattering; I wanted to teach something different. That was the first time I ever did it, in Italy. It is funny how things develop; as a result of going there to him I developed my thoughts in writing, although I have never published a book. I am very sad about it. I think that I should have written a book about scientific software. I had an extensive set of notes which I distributed for years. It was called "Writing Scientific Software". A lot of people seem to still know about it; at least as of 10 years ago people would still ask me for copies of it, and it all came out of this course with Alberto.
**Alberto Mugnai. He was in Rome. I think I visited that place once. I knew Gianni Dalu, who was from there, and he is my friend. It was not the University of Rome; it was more like an academy of science, right?**

Yes, it was a research institute and I was teaching there. He just rounded up his colleagues and made them come to my lectures on scientific software. I told them that it was good for them, that they probably wrote crappy code.
**What is your view on the role of computers in light scattering applications? Without them we would not be able to calculate many interesting phenomena such as resonances. On the other hand, would we have had theoretical developments such as the anomalous diffraction theory of van de Hulst, or CAM? In atmospheric sciences we have a similar dilemma. For example, one wonders if we would have the semi-geostrophic approximation if we had had powerful computers at that time. What is your take on this?**
I tend to agree that computers undermine thinking about simple approximations; that is certainly a loss. On the other hand you may say we had many years. One can start the clock from the Maxwell equations, or say, start the clock from Mie theory. We had more than 50 years to the time of van de Hulst's book. Maybe they had pretty much exhausted what was possible. The best new approximation was Complex Angular Momentum. That came along in the 50's, and even that was rather complex. It was the first breakthrough that had come along in years. Perhaps there weren't any more simple approximations. Maybe computers did not destroy the progress of simple approximations. The same thing has happened in quantum mechanics. You can't calculate analytically anything beyond some simple systems, say helium. I don't think that anybody in that field thinks that if they had withheld computers they would now be calculating lithium atoms analytically. Computers were the next logical step. Another thing which I would like to add is something about which I disagreed with my thesis advisor, Gerald Whitham of Caltech. He believed in closed-form solutions. Say, solutions to partial differential equations. But those exact solutions would be triple sums over a bunch of Bessel functions and other things. I would say to him: look, if I can compute this directly using difference methods, how is that worse than trying to sum up this triple sum of Bessel functions? We never agreed on that. But I think my attitude was more modern. Either way you have to do a boatload of computing to get an answer, even if you have a so-called analytic solution.
It is sad that scientists these days don't get as good an applied mathematics education as they need. I got a pretty good education in applied math, you did too, and many people we worked with had a good applied math education. That seems to be dying now. I think that not having a good applied math education, not knowing your way around the modern version of Abramowitz and Stegun (the one authored by Frank Olver and others), is a loss. Depending too much on numerical solutions without knowing what is under the hood will lead to scientists who know how to pull the levers but who don't know how the machine works.
**I agree. I tend to tell my students that there is nothing magical even about sin(x). After all, it is calculated using a series approximation. The simplicity of sin(x) is artificial. Sometimes triple series are more complicated to calculate than an integral representation which can be done by quadrature. It is interesting to hear you say all these things. It is almost as if they came from my mouth.**
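To illustrate that remark, here is a toy Taylor-series evaluation of sin(x). This is only a sketch of the idea, not how production math libraries actually do it (they use careful argument reduction and minimax polynomials):

```python
import math

def sin_series(x, terms=12):
    """Approximate sin(x) by its Taylor series around 0."""
    # Reduce the argument to [-pi, pi] so the series converges quickly.
    x = math.remainder(x, 2 * math.pi)
    total, term = 0.0, x
    for k in range(terms):
        total += term
        # Next odd-power term: multiply by -x^2 / ((2k+2)(2k+3)).
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total
```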
My final comment is that a blending of the computational and analytical points of view is the most powerful. One example is EBCM; now everyone is calling it the T-matrix method. This is a nice blending of these two methods. It is not just a brute force solution. It is clever. That is where we should be heading. When I was developing my codes I made use of my applied mathematics knowledge, about series, convergence, asymptotic series, and all this stuff which I learned in grad school.
**Your light scattering publications have been referenced hundreds of times. What do you think is your lasting legacy in attacking light scattering computational problems? Are we just passing the torch and there will be better methods after us, or perhaps our legacy is not purely driven by hardware speed?**
It is a very deep question. As far as my legacy, I will let other people talk about that. But I can talk about my philosophy. I did not just graduate with a Ph.D. with this philosophy; it took years to develop. I was influenced by many people, notably by Freeman Dyson. I have the tool-making philosophy. I believe that I created some pretty good tools that other people used to good effect. That makes me happy. I would regard these tools as part of my legacy. What Freeman Dyson argued, and I don't know anybody who refuted this, is that the role of tools in science is larger than the role of new ideas. He says that at any given point in time there are a lot of new ideas kicking around. There is never a shortage of ideas. But there is always a shortage of good tools. He said that if you look at the history of science, it takes great leaps when new tools come along. Of course, we all know the story of the telescope and Galileo, but he goes on to tell other stories such as the electron microscope, or the original microscope that allowed Pasteur to discover bacteria. His point was that we vastly underrate the importance of tools in the development of science and we overrate the value of ideas. I am very much in the tool development business. I like to develop tools. In fact I look at ARM as a tool. I helped to develop ARM and, while I would never tell the ARM folks that, I looked at it as a tool. We provided a tool to the community. They used it pretty well in my view. It gets back to the software tool philosophy. There are even books on these topics. Remember "Software Tools" [SoftwareTools], written by Kernighan [Kernighan] of UNIX fame.
> Good programming is not learned from generalities, but by seeing how significant programs can be made clean, easy to read, easy to maintain and modify, human-engineered, efficient, and reliable, by the application of common sense and good programming practices. Careful study and imitation of good programs leads to better writing.
>
> Kernighan & Plauger
**I have read it.**
They wrote two books that influenced me. One was called Software Tools in Pascal, but the principles are very general, not just applicable to Pascal, which no one seems to remember.
**I do, I remember them.**
The other one they wrote was The Elements of Programming Style which is a very thin book. I read it, I pored over it, scribbled on every line, and it is like a Bible. I learned many of the things which I subsequently applied from that book. It is a very nice book.
**You think that in light scattering your role was that of a toolmaker?**
Yes. I was a facilitator. I facilitated a large amount of other research. That pleases me. I like that. I would be happy to be called the toolmaker, among other things.
**We have already talked for an hour. Are you OK or are you tired?**
I am OK. By the way I have stolen some of the words from the Software Tools book. I just see that on the title page it mentions some of the words we discussed before. I was a disciple of those guys.
From left to right: Istvan Laszlo, Si-Chee Tsay, Knut Stamnes, Warren Wiscombe. The DISORT gang.
## Radiative transfer
[Piotr J. Flatau] **Let us move on to your other achievement; this time let us talk about your radiative transfer contributions. Your 1976 delta Eddington paper was followed by a series of reports and publications providing us with two-stream radiative transfer solvers. I think these opened the way for the 1980 review by Meador and Weaver, and even current two-stream codes benefited from your publications.**
[Warren Wiscombe] This was my first big hit. I do think that we sparked a lot of activity in that area. It had been kind of a dead field. What I saw was climate coming along very strongly as a subject. This was in the days when dynamics dominated the field. Of course there were not that many of us. I was a strong believer that eventually they would have to recognize that radiation drives climate and that they would need a better radiation package. The radiation packages they had in their models were awful or nonexistent. They had radiation packages in which the Sun never rose; they just used diurnal averages. They were very dismissive of the radiation. I knew that they would need better radiation, and sure enough, not long after delta Eddington came along there was really quite a move on the part of climate modelers to get better radiation packages, in the shortwave but in the longwave too. When they were working with shortwave they really needed something like delta Eddington, and I made it easy for them to implement it by publishing a report on how to do it numerically, and a code as well. I do think that I struck a chord there. The frantic activity up until 1980-1990, and the Meador-Weaver paper, were just the realization by other people that it was needed. There was a lot of competition. You would not have been aware of that. Like the French. They thought they had a better approximation.
**The Lenoble group in Lille. Tell me more about the 1976 paper in terms of numerics.**
Joseph came to visit NCAR. He had been in Wisconsin working with Jim Weinman and he was again kind of a Nussenzveig kind of character. He was from the older generation. It is humorous for me to say it now. He did not do computers. He just did not take to it the way we did. I was perfect for him. I said: you have an idea, but an idea is worth nothing unless you test it. Let us test this thing the way no one ever tested it. Let us compare it with the exact calculation, which I was able to do because of ATRAD. Let us test delta Eddington, see if we can make it break. We did a huge amount of calculations; only a few appeared in the paper. We tried to break it and we determined its limits of applicability. That was the breakthrough. We did not just publish a few formulas and say, well, we think it might be a better approximation; we really tested it thoroughly. This is why we got so much attention. Because people could really put some trust in it. It was adopted in many GCMs over time. Now GCMs use a variety of schemes, such as Fu-Liou. There was a time when delta Eddington pretty much dominated GCMs. Tony Slingo adapted it for the British Met Office.
**In your report you used a pentadiagonal solver. Was that new, or at that time did people know how to apply tridiagonal and pentadiagonal solvers to two-stream?**
They were aware of how to do it. I was lucky that I started my work when numerical analysis was just exploding. LINPACK, NetLib, the whole idea of sharing codes was coming into being as contrasted to previous times when people did not share codes. I had an applied math background so I was comfortable working with matrix theory. It was like mother's milk to me. I was a perfect fit. I hammered on the math - formed it to shape and hopefully made it stable. But I still remember instabilities I couldn't control - there were some IF statements in the delta Eddington code. Since I didn't know what to do, I just said \"If this condition occurs, stop\".
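For context, the tridiagonal case he alludes to is usually solved with the Thomas algorithm, a specialized Gaussian elimination; this is a generic textbook sketch, not the solver from the delta-Eddington report (pentadiagonal systems are handled analogously with two extra bands):

```python
def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm for a tridiagonal system A x = d.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Returns the solution vector x.
    """
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward elimination.
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = (c[i] / denom) if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    # Back substitution.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For a diagonally dominant system, as two-stream discretizations typically produce, this runs in O(n) time and is stable without pivoting.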
**This is a more scientific question, but I am curious. Were you able to do molecular properties, or at that time were you mostly concentrating on the two-stream solver itself?**
I was interested in actual fluxes because I was always climate oriented. For me fluxes were the number one item. I was not really into remote sensing for quite a while. I was a flux person and I hated the name change from \"flux\" to \"irradiance\" because it implied that radiation was somehow a different kind of energy than sensible or latent heat.
As far as molecular properties, you may remember, the only thing around was LOWTRAN. We all used LOWTRAN. It was not very good compared to what we have today but it was nice in that everyone used it (and cursed it at one time or another). In fact I rewrote LOWTRAN from scratch. I got the code in 1972 from Bob McClatchey at AFCRL. The code was so terrible that I went to my supervisor at that time, Burt Freeman, and I said - I can't use this, it is just awful, so I need to rewrite it. He said, go ahead. He gave me a couple of months and I rewrote LOWTRAN from scratch and added a lot of documentation in the form of comments. Actually I distributed it for a while because people became aware that I had a version of LOWTRAN which was robust and it had documentation and it was not full of GO TO statements and weird COMMON blocks. I was a secret, under the table, LOWTRAN distributor for a while.
**I did not know about this one.**

I should dig it out at some time. Even my version would look pretty pathetic by modern standards. The original version was what we called "spaghetti code" - so tightly wrapped around itself that you couldn't find your way out unless you left a trail of popcorn.
**In 1976 you published the Delta-Eddington report. How well was it received?**

NCAR was quite good at distributing it. In those days an NCAR report meant something. It got good distribution. It hit a market that was ready for it.
**Do you feel that current two-stream codes benefited from your work?**

There was friendly, sometimes not so friendly, competition. You have to remember that Fu-Liou is a spectrally integrated code. I never attempted to do that. My philosophy was to provide a good tool for one wavelength. Let other people figure out how to integrate over wavelength. That is a different problem. I did the same with the Mie code. Even though I had a very nice code which integrated over a size distribution, I never provided it. I almost did at one point and I backed out, because people need to exercise their own creativity. Others developed codes that integrated over wavelength and mine never did. ATRAD did, but ATRAD never got any wide distribution.
**Was I the only one who was using it?**
No. Laszlo used it and Steve Warren used it.
**By the way I have now got an answer to my question about the whole package. You were never interested in the whole code for the two stream.**
Partly because I did not want to get sidetracked. There was a big problem for radiation people in those days, namely that dynamicists tried to pigeonhole you as a "parameterization person". That meant, aside from the diminution in your status, that you had to integrate over wavelength, since climate models did not care about just one wavelength. I never wanted to get pigeonholed that way. I did not want to become a servant to some big GCM model. People would wind up in that capacity and maybe it wasn't so bad for them, since it seemed to guarantee lifetime employment, but I would have died of boredom. So I avoided developing GCM parameterizations like the plague. I was content to let other people do that.
**That brings us to ATRAD itself. I asked you many years ago why you moved away from ATRAD, and I know the answer, but I would like people to understand it. At that time you worked on adding and doubling. There were papers by Grant and Hunt [HuntGrant], and I recall that Graeme Stephens, who was my advisor, used their approach as well in his Ph.D. thesis. Tell me more about ATRAD and why you moved from ATRAD to DISORT.**
ATRAD was funded by the ARPA Climate Dynamics Program. That program was ably led by Colonel John Perry. He was the one who very generously funded the development of ATRAD in the early days. It started in 1971. He supported me for 3 years until I went to NCAR. He was tough. He was asking me why are you doing this, why are you doing that. But he was satisfied with my answers and so he let me go on. I worked also on the AIDJEX sea-ice program and the stuff I did on multiple reflection between sea ice and cloud was all done with ATRAD.
John Perry was the real hero in terms of supporting ATRAD development. He gave me time to do it, to learn the field, to meet the big name players in atmospheric science and paleoclimate, to be able to make ATRAD comprehensive and bulletproof. ATRAD was quite a comprehensive model. Of course I loved the Grant and Hunt version of adding-doubling because it guaranteed positive radiances. Once you go over to DISORT, complaints still come in to this day: "why do I get negative radiances?" The Grant and Hunt method was really beautiful mathematically. I remember working through all the equations by hand, the whole thing from beginning to end, and I still have those handwritten derivations. When I implemented Grant and Hunt I did it from my own notes. I loved that method. I still think that it is a great method. But Stamnes slowly destroyed the arguments against Chandrasekhar's discrete ordinate method, one by one. There were many reasons why that was considered a bad method. I remember Liou in the mid-seventies, who was trying to use the discrete ordinate method, and he published a paper basically saying that it is crap. He couldn't do anything with it. It may sound good on paper but it just doesn't work. Stamnes relentlessly demolished the barriers and made the method useful, and that was very impressive work, with his various students. It just happened somehow that he invited me to come to Alaska in 1986 and I spent a month and a half there working with him on the code, and I convinced him that the code was something worth spending time on. He had a certain code, and Si-Chee had worked on it as a graduate student, and it looked awful. It was really a mess. I said to him that he shouldn't just throw it all away and say that the papers are the only important thing. This code should be cleaned up and made available because it represents a really high level of development. We rolled up our sleeves and developed that thing. Later Istvan Laszlo showed up from Hungary. He came to visit me.
He said: I would like to work for you, what can I do? I said: you are now in DISORT service. I trained him as well. The four of us worked to write the code. I did most of the documentation, cleaning it up, and developing the extensive test case suite. The other three really did not know how to clean up the code. They just let me do it. At that time it was rather unique to provide programs which actually tested DISORT, so that when you changed it you could determine if you had done anything bad. Those testing routines took me quite a while. I am proud of them. They were nice routines. I made the whole package available for public distribution. I convinced them to do it and the rest is history. It was a very worthwhile activity. It was not that adding and doubling was bad. In fact I have often thought that I would go back, clean it up and run it again. Ha, I never got around to that. On my desk I have a circular wooden coin called a Round Tuit; on it is written "someday I hope to get a Round Tuit".
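The test-suite idea he describes, rerunning the code after every change and comparing against trusted reference output, is what is now called golden-value regression testing. A minimal sketch; the stand-in solver and the reference numbers are invented for illustration and have nothing to do with the real DISORT:

```python
import math

# Stand-in for the routine under test; a real suite would call the
# radiative-transfer code here.  (Hypothetical, for illustration only.)
def solver(tau, omega):
    return math.exp(-tau) * (1.0 + omega)

# "Golden" reference values recorded from a trusted version of the code.
GOLDEN = {
    "thin":  ((0.1, 0.9), 1.719191),
    "thick": ((5.0, 0.5), 0.010107),
}

def run_regression(tol=1e-4):
    """Re-run every case and report any that drifted from the reference."""
    failures = []
    for name, (args, expected) in GOLDEN.items():
        got = solver(*args)
        if abs(got - expected) > tol * abs(expected):
            failures.append((name, expected, got))
    return failures
```

After any change to the code, an empty failure list means behavior is unchanged to within tolerance; a non-empty list pinpoints exactly which cases drifted.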
**That brings up another question. Do you think that having a dominating code is good for the field? On one hand it is positive to have DISORT around. On the other hand it blocks development of competition. ATRAD is an example. I was always upset with you that you let it die. Do you think that the dominance of one code may stifle the field, make it less progressive?**
I can't say that I did it for that reason, but I am not unhappy about it. I do believe that people should not take codes as black boxes. What they should take are pieces, the way you assemble a jigsaw puzzle. It is OK to take the Mie scattering piece and not have to write it yourself. It is OK to take the Airy function piece and not write it yourself, because, I can tell you, it will take you a month. It is nice that these kids these days can assemble pieces, but they should learn to put the jigsaw puzzle together themselves. If they don't ever do that, I don't think they will ever get the idea of what modeling is all about. I am not unhappy that ATRAD did not get public distribution. I think it is OK. I am content with that.
**Well I am not. In 1989 Frank Kerr wrote a letter hiring Si-Chee Tsay. I remember Si-Chee from Fort Collins, we even wrote a paper together. In 1988 there was the DISORT paper. How has this collaboration developed in the last 20 years?**
One thing we have learned that I would tell everyone is: the better you document it, the fewer questions you get. If you put the code out, you will save yourself a humongous amount of time if you document it. What we wound up getting was actually very few questions. People could just pick it up and use it, like a knife or fork. It was intuitive: you knew what to do from the documentation. We created a package that we are very proud of; we did not get very many questions even though we knew that thousands of people were picking it up, downloading it. Maybe some of them were actually using it. That was one of the earliest things that we learned. Nevertheless there was fallout in terms of a steady drumbeat of questions that would come up over the years. They were not all dumb questions like "why do I get negative radiances". Sometimes they were very subtle and deep questions. At that point I was involved with ARM and did not have time to deal with it. I got Laszlo to do it. Laszlo is the unsung hero of DISORT because starting in 1992, when ARM was just totally taking off and absorbing 110% of my time, Laszlo stepped in and he supported DISORT. He would answer these questions. He would research what was wrong with the code; he would do small fixes. He was just great. I can't say enough about him. He deserves more credit than he got. He came from Hungary, hung around NASA for a while; he could not even get in for a while, so I would go out and meet him outside the gates, and he worked with me for a while. He worked with Rachel Pinker at the University of Maryland and eventually he got a civil servant job at NOAA, where he is now. In fact on Friday I will go to have lunch with him.
**The four of you - were you meeting from time to time?**
Knut was in Alaska. I had that one time in Alaska and I went one more time. We worked for a couple of weeks, and that was it as far as face-to-face with Knut. Si-Chee is of course at Goddard, so I could see him more, and Istvan was local, so I could see him more often. We were early users of the internet. We would send versions of the code to each other.
**In January of 2012 there was the 50th anniversary of Applied Optics. We met this year at the AGU conference in San Francisco, and you mentioned that DISORT is now the most referenced paper published in Applied Optics. The paper on the code I developed, DDSCAT, is one of the most referenced papers in the Journal of the Optical Society of America. In fact I would like to beat you and have more references in the future. It is funny that two atmospheric scientists have these very successful papers in the premier optics journals in the world. Is this a source of satisfaction for you?**
Yes. That was one of the high points in my life when I heard that we are the most cited article in 50 years. I never in my wildest dreams would have imagined that. That is beyond cool.
**Is this the satisfaction of a toolmaker?**
Yes, I am very content about that.
## General issues
[Piotr J. Flatau] My first question is about the public domain. Both of us were successful in some sense. Both of us were interested in public domain releases; not many people were at that time. We got our prize, but it took 20-30 years and a lot of effort. We had several codes which were successful and some which were not. For example, we wrote together a thermodynamics package [THERMOS] that nobody ever used or referenced. What prompted you to write public domain codes?
[Warren Wiscombe] It was partly that Dave had made his code publicly available. That was very impressive, especially from a private company like IBM. That was kind of an example for all of us. I have always thought that that was a wonderful thing. The second thing was the development of the open source movement. That started back in the early 1970s with the invention of UNIX and C. I was swept up in that. In fact I became known as a distributor of open source software, and I was once invited to a meeting in California on open source software, and Richard Stallman [Stallman] was there. He spoke, and I spoke about my experiences of sharing software. The movement to share software was in the air, but it was not universally accepted. A lot of people thought that I was nuts to share my codes. They said: you should keep them so you can publish papers. I said: no, I think the best science happens when we share our stuff. I was very much in agreement with the open source movement that was going on. It was not clear that it was going to lead to a great career. A lot of people said that you will never have a great career by just publishing codes; you will never get any respect for that. Fortunately, I did not listen to them.
**I agree. I remember those times; even now I feel that there are similar divisions on occasion. In 1996 you revised the MIEV programs, and the NCAR note starts with \"I didn't really want to do this. I have never liked backtracking and revisiting the same terrain twice.\" Is this what prompted you to attack so many diverse problems? Is it curiosity or getting bored?**
Yes, I would get bored. I have never liked to be pinned down. One of my favorite stories was about Richter, the guy who invented the Richter scale. Somebody mentioned to him the Richter scale once and he went ballistic. The lesson was that you don't want to be pigeonholed. You don't want to be known as Mie scattering guy or Richter scale guy [Richter]. It was always my thing, I wanted to be a variety of things, have a variety of experiences and try a lot of subjects. I wandered around some. It was not typical. Most people settled into a groove and they stayed in that groove for thirty years. You have seen it and of course I have seen it. I was always related to radiation somewhat and clouds, if it was not radiation.
**Your ftp site ftp://climate1.gsfc.nasa.gov/wiscombe/ contains many of your codes. The site says \"this is a work in progress and is not, nor ever will be, complete.\" I recall that similarly Donald Knuth, who wrote TeX, would have releases 3.1, 3.14, 3.141, 3.1415 because he believed that his software would be perfect one day. Do you feel that some things are done?**
They are done as far as I am concerned. Other people are welcome to develop them further. I kind of moved on mentally. I had an epiphany in the late 80's and early 90's that the radiation field was just dying in computer codes, even though I was responsible for part of it. I used to say that we need to get these radiation people out from behind their computers and out into the field. I became notorious for saying this. I became this big advocate for experimental programs, ARM being the most visible among the ones I had my hand in. I kind of moved away from writing software even though I was teaching this software course in the mid 90's. These were retrospective courses, sort of: I lived this life, this is what I have learned. You can do better scientific software than you think you can, and here are some ways you might be able to do that.
**I would like to check one of my theories with you. Often, when I review papers (and I was an associate editor of JAS for 7 years), I wanted to see the code. I wanted authors to provide well documented code so I could check what they say. It was impossible of course. But what is your take on the repeatability of scientific results these days?**
We used to discuss the repeatability of science. I think we lost it. We can whine about it but basically a lot of work which is published now with computer codes is not repeatable. You cannot really ask people to share their code. They may have 2-3 more papers to be published with this code and they don't want somebody else to take it. It would be difficult to expect reviewers to run the code. We lost the repeatability battle already. We can't get it back. I think it is sad. The way we try to replace it is with intercomparisons. We have these model intercomparisons, even measurement intercomparisons when we bring different instruments together. We bring different models and run them with the same initial conditions and see if they give the same results. They don't usually, and we analyze the hell out of the differences. I think this is how we deal now with the repeatability issue. Everyone knows it is a loss. Computer codes changed the game. They made repeatability and the old way of thinking obsolete. It is just gone.
**It is something that I did not think about, something new that I have learned from you just now. We both are old timers and we mostly write in FORTRAN, I think. Both of us migrated through various FORTRAN releases: Fortran66, Fortran77, Fortran90. You told me recently that you went to a Python class, which I did also several months ago. Some people like to move to new languages, some don't. For example Bruce Draine, my collaborator on the DDSCAT code, is very conservative and doesn't like change. What is your feeling about computer languages?**
I feel that we should be open to new ideas. I have always studied other languages. I have found languages which I liked more, for example Mathematica, and languages which I liked less, for example C. C was too close to the bare metal of the computer, and too dangerous, and I could never see the need for C++ and its objects, at least not for hard-core numerical analysis and solution of physics and chemistry equations. Python looks interesting. The guy giving the Python tutorial said that it is an analysis language like Matlab and IDL, but free, and can't really replace FORTRAN. FORTRAN almost died of course. Until FORTRAN90 came along FORTRAN was really on a death spiral. It would not have lasted. FORTRAN90 breathed new life into the language. I think it will go on for many years now. It is very readable. It doesn't have weird symbols in it. When you use array syntax, the code kind of looks like the equations you write, especially when they are matrix equations.
#### Did you switch your codes to FORTRAN90?
I haven't been consistent on it. I played around but I was never consistent about it, shame on me. People volunteered to do it. I said - be my guest, but they never finished the job. I haven't seen anything which is so superior to FORTRAN. Nothing that would make me want to switch. But I do think that other languages have their uses. Mathematica is fun. You can play around, plot Julia sets and plot all kinds of cool stuff which you would not imagine doing in FORTRAN.
**In 1979 you listed the FORTRAN code in the back of the Technical Note. It was common to do it. In 1996, in your revised version of the NCAR technical report, the code was gone. We don't do this anymore. It is easy now to exchange code. What has changed in code writing over your scientific career that strikes you the most? The ease of software distribution, or the attitudes of the younger generation toward distributing public domain codes?**
I would remind people that we used to punch cards. I once shipped someone 2000 cards in a box, and I have shipped 4000 cards. That was the way we exchanged codes. God help you if the card reader on the other end would not read your cards. The invention of the internet and ftp sites was just a revolution in terms of not shipping boxes of cards across the country. When I first started we used to copy the code. We would get listings of code and there was no way to get an electronic version - we would just copy it.
**I remember. You were running ARM, which was very strong on data exchange and exchangeable codes.**
It is like night and day. Attitudes changed; now people want to share codes. Attitudes have flipped. When we were younger people did not want to share code. You and I were the exceptions. Now it is the rule. But it required changing attitudes. It is easy to forget that. Everybody knows the technology. At that time LINPACK was a phenomenal success.
**When I was younger I would spend a lot of time on optimization of the code. Suddenly CRAY disappeared and with it vector processing. What appeared next was parallel processing, and I stopped worrying about optimization because I wanted to concentrate on science, not on programming for hardware. Do you have a similar experience?**
Yes. I think that the parallel revolution changed everything. Scientists took quite readily to vectorization. We have never adapted to parallelization. There were some horror stories in the early days. Like they rewrote the Mintz-Arakawa model for a computer called Illiac 4 and it was a disaster. It never worked. A huge number of hours to rewrite it in some obscure language was wasted. Parallel machines were a watershed for scientists. Some crossed over; some did not. I did not. I dropped out partly because I was not able to cross to parallel machines. It required too much specialized coding. I believed in codes which are widely shareable, which do not rely on specialized coding.
Wiscombe keeps riding these big drops. Artwork by Graeme Stephens.
**You have a PowerPoint presentation about \"scientific revolutions\" and mention in it that \"sometimes the unresponsiveness of my colleagues to new ideas left me quite depressed\". I know that you were discussing physical parameterizations. Do you think that codes can be objects of art, go beyond being tools, and be revolutionary?**
Yes. They are; when they are elegant they are absolutely beautiful. Your last question is wandering into the realm of philosophy. Maybe I was depressed that my colleagues were not sensitive to my new ideas - in particular the large drop idea, which I now like to crow about because they are finding drizzle drops in all kinds of clouds. At the time when I said that there might be a lot of drizzle drops in every cloud, cloud physicists were outraged. I still have a letter from a cloud physicist who dressed me down for suggesting drizzle could exist as a steady state condition for many hours without destroying the cloud. Who was little me to say that there are drizzle drops in all clouds or in many clouds? Now ARM radars and Cloudsat are showing drizzle drops in vast numbers.
**The reason I pointed out this quote of yours is not to say that you are depressed. The reason was really related to programs. Do you feel that they can be revolutionary? For example, Mathematica was revolutionary in symbolic computing because all programs before just did not work.**

Yes, FFT is probably the best example. It is now in computer chips. It does stuff we don't even know about. It is beautiful code and look what it is doing.

**What about DISORT? Did it push the field beyond?**

Some codes show what perfection could be like in those times. Certainly it would not look as perfect now. Standards do improve. But it gave an example of perfection. Everyone needs to see these kinds of examples. I was reading to you from Software Tools in Pascal. I did not read you the last line: \"Careful study and imitation of good programs leads to better writing.\" This is our long term gift. We show people what at the time were perfect programs, and they learn to write better programs as a result.

**Warren, it is 5pm your time and we have talked for 2 hours now. I think we should finish. Thanks for all the answers. I agree with many of the things you said. There were some things you said I was surprised by.**

Thanks. Let me make my final points. The most important one is that you can actually write zero defect software. At that time, for a scientist, it was a revolutionary idea because we had always assumed that our software was buggy. There is no reason we can't write zero defect software. It is an idea from software engineering that we could learn a lot from. Another idea is that good software can live almost forever. In science our attitude always was that it is a throwaway. What I have seen in my 30-40 years watching the field is that it is really true. Really good pieces of software have almost infinite lifetimes. They just keep getting used.
**OK, we are done. But just to make you happy: I am writing a paper with Jerry Schmidt about altocumulus and we observe drizzle size droplets there. We are reading your 1984 paper now about large drops and plan to reference it.**
Wiscombe as fractal. Artwork by Graeme Stephens.
## Acknowledgements
I would like to thank Steven Warren, Istvan Laszlo, Aleksander Marshak, and Graeme Stephens for providing additional graphics material and some suggestions.
## References and notes
[AbramowitzStegun] http://en.wikipedia.org/wiki/Abramowitz_and_Stegun
[Coulson1960] Coulson, K. L., Dave, J. V., & Sekera, Z. 1960, Tables Related to Radiation Emerging from a Planetary Atmosphere with Rayleigh scattering (Berkeley, CA: Univ. California Press)
[Dave1968] Mie FORTRAN code for scattering computation by spheres is listed in J. V. Dave, _Subroutines for Computing the Parameters of the Electromagnetic Radiation Scattered by Spheres_, Report No. 320-3237, May 1968. It is still available from the IBM Palo Alto Scientific Center, 1530 Page Mill Road, Palo Alto, California 94304, USA.
[Debye] http://en.wikipedia.org/wiki/Peter_Debye
[Dei1962] D. Deirmendjian and R. J. Clasen, 1962, Light scattering on partially absorbing homogeneous spheres of finite size, RAND Corporation.
[DISORT] \"Numerically stable algorithm for discrete-ordinate-method radiative transfer in multiple scattering and emitting layered media\" Knut Stamnes, S-Chee Tsay, Warren Wiscombe, and Kolf Jayaweera, Applied Optics, Vol. 27, Issue 12, pp. 2502-2509 (1988).
[HuntGrant] Hunt, G. E. and Grant, I. P.: Discrete space theory of radiative transfer and its application to problems in planetary atmospheres, J. Atmos. Sci., 26, 963-972, 1969.
[Kernighan] http://en.wikipedia.org/wiki/Brian_Kernighan
[Regge] http://en.wikipedia.org/wiki/Regge_theory
[Richter] http://en.wikipedia.org/wiki/Charles_Francis_Richter
[SoftwareTools] Software Tools, 1976, Brian Kernighan and P. J. Plauger
[SoftwareToolsPascal] Software Tools in Pascal, 1981, Brian Kernighan and P. J. Plauger
[WarrenWiscombe] http://scholar.google.com/citations?user=q01DGp8AAAJ&hl=en
# Detecting Lagrangian coherent structures
from sparse and noisy trajectory data
Saviz Mowlavi
Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Mattia Serra
[email protected] School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
Enrico Maiorino
Channing Division of Network Medicine, Harvard Medical School, Boston, MA 02115, USA
L. Mahadevan
[email protected] Department of Physics, Harvard University, Cambridge, MA 02138, USA
## 1 Introduction
Coherent material structures are ubiquitous at all length scales, from oceanic and atmospheric processes (Haller, 2015; Serra et al., 2017, 2020a) to biological systems (Serra et al., 2020b). There exists a plethora of physical and biological systems defined by the dynamics of an ensemble of particles in space and time; examples include drifter trajectories in the ocean (Lumpkin and Pazos, 2007), cell motion in living systems (Hogan, 1999), bacteria or other constituent agents in active fluids (Marchetti et al., 2013; Morozov, 2017), and so on. In general, the coordinated motion of material parcels can be visualized by the organized patterns that tracer particles form over time (Merzkirch, 2012). However, simply observing instantaneous particle positions and velocities provides an incomplete and deceptive picture - for instance, a simple change to a rotating reference frame suffices to alter the observed coherent patterns. This motivated the development of the theory of Lagrangian Coherent Structures (LCSs) (Shadden, 2012; Haller, 2015), as well as their infinitesimally short time analogs called Objective Eulerian Coherent Structures (Serra and Haller, 2016; Nolan et al., 2020), which provide a frame-invariant framework for identifying the flow structures shaping observed patterns.
The computation of LCSs requires Lagrangian particle trajectories over a finite time interval, and they may be classified into two main groups - hyperbolic and elliptic LCSs, illustrated in Figure 1. Hyperbolic LCSs are surfaces along which the local separation rate between neighboring particles is maximized or minimized, leading to their binary classification as repelling or attracting LCSs, respectively. Elliptic LCSs are surfaces enclosing regions of coherent global dynamics, that is, regions inside of which particles move together over time. Although these regions often consist of vortical flow structures, we also include elongated structures such as jets, originally classified as parabolic LCSs (Haller, 2015), in our definition of elliptic LCSs. Together, hyperbolic and elliptic LCSs provide a complete picture of coordinated motion in fluid flows and other physical or biological systems.
Early attempts to compute LCSs from flow data (reviewed in Hadjighasem et al., 2017) required knowledge of a dense set of particle trajectories. In practice, this is only obtainable through numerical integration of a well-resolved velocity field, from e.g. computational simulations or particle image velocimetry (PIV) of experimental data, but not from particle tracking velocimetry (PTV) or single-cell tracking, which provide sparser trajectories. Therefore, extending computational tools for LCS identification to sparse and possibly noisy data would open the door to a wider array of potential applications, leading to a better understanding of the structure and properties of a broad class of particle-based physical systems.
Hyperbolic LCSs are based on the separation rate between initially-close material parcels, making their identification using sparse and noisy data difficult. Their computation usually relies on the finite-time Lyapunov exponent (FTLE), which measures the sensitivity of final particle positions with respect to their initial positions (Haller, 2001; Shadden et al., 2005). However, the accurate calculation of this quantity requires the trajectories of a dense and regular array of initial particle
Figure 1: Sketch of the different types of Lagrangian Coherent Structures (LCS) that are considered in this paper. While the flow field pictured here is time-invariant to ease visualization, these structures also apply to general time-dependent flows. Note that we include elongated structures such as jets, originally classified as parabolic LCSs (Haller, 2015), in our definition of elliptic LCSs.
positions, rendering its extension to sparse datasets nontrivial. In fact, the only existing methods for computing hyperbolic LCSs that accept irregular initial positions are the finite-time entropy (FTE) framework of Froyland and Padberg-Gehle (2012), the set-oriented redefinition of the FTLE proposed by Tallapragada and Ross (2013), and the trajectory-stretching exponent (TSE) defined in Haller et al. (2021), all of which bypass the calculation of the true FTLE. The first two methods partition the spatial domain into a set of boxes, evaluate a discrete transfer operator that quantifies the probability of particles transferring between any two boxes over the time interval of interest (Froyland and Padberg, 2009), and compute different diagnostic quantities to measure local stretching. Nevertheless, the robustness of these approaches to sparsity and noise in the data has not been established. The third method computes the TSE, a brilliant measure of local material stretching using only individual trajectories, and is therefore well-suited to sparse data despite being only quasi-objective.
Elliptic LCSs are global features that characterize the overall behavior of a connected set of material parcels. Taking advantage of the fact that they can be identified indirectly through the coherent sets that they enclose (Froyland et al., 2010; Hadjighasem et al., 2016), several techniques for identifying elliptic LCSs from sparse data have recently been proposed and are reviewed in Hadjighasem et al. (2017). Methods applicable to three-dimensional flows fall broadly into two categories. The first category comprises methods based on the notion of coherent set introduced in Froyland et al. (2010) and Froyland (2013), which are regions that minimize mixing with surrounding material elements in the presence of diffusion. These sets are calculated from the transfer operator of Froyland and Padberg (2009) using a variety of techniques (Ser-Giacomi et al., 2015; Froyland and Junge, 2015; Williams et al., 2015; Banisch and Koltai, 2017; Froyland et al., 2019). In the second category, individual trajectories are interpreted as points in an abstract space endowed with a certain notion of distance, and various clustering tools from computer science and statistics (Fortunato, 2010; Everitt et al., 2011) are used to group trajectories that are close together into separate clusters. Such techniques differ from one another not only by the clustering algorithm they employ, but also by their definition of distance between particle trajectories. They include the application of fuzzy clustering (Froyland and Padberg-Gehle, 2015), spectral graph partitioning (Hadjighasem et al., 2016; Padberg-Gehle and Schneide, 2017; Banisch and Koltai, 2017; Wichmann et al., 2020; Vieira et al., 2020), spectral graph drawing (Schlueter-Kuck and Dabiri, 2017), and density-based clustering (Schneide et al., 2018; Wichmann et al., 2021).
However, all of the aforementioned methods struggle to determine consistently the correct number of clusters, even in simple analytical flows such as the Bickley jet (Hadjighasem et al., 2017). The number of clusters is either required as a heuristic input to the algorithm that often fails as the data gets sparse and/or noisy, or is dependent on the specific choice of parameters for the method.
Here, we introduce two techniques for computing hyperbolic and elliptic LCSs using sparse and noisy trajectory datasets that solve the aforementioned issues. Our approach to compute hyperbolic LCSs follows the widely used FTLE-based definition and relies on a local least squares fit of the flow map gradient, a tensorial quantity from which the FTLE is calculated. By feeding pairwise stretching information from all particles in a small neighborhood around the location of interest, the method is able to alleviate the effects of both sparsity and noise while retaining enough spatial resolution to resolve the ridges in the resulting FTLE field, which locate hyperbolic LCSs. Next, our procedure to identify elliptic LCSs utilizes the same clustering algorithm as in Schneide et al. (2018) - density-based spatial clustering of applications with noise (DBSCAN) - due to its many advantages. First, it is able to tell apart trajectories belonging to coherent structures from those that do not, a feature that partition-based methods struggle to achieve (Froyland et al., 2019; Wichmann et al., 2021). Second, it identifies both compact structures such as vortices and elongated structures such as jets, because it assigns trajectories to clusters based on their local proximity to neighboring trajectories in the cluster rather than all of them, as in spectral clustering (Hadjighasem et al., 2016). Third, its implementation is straightforward due to the many scientific libraries that it is implemented in, and it does not require a subsequent clustering step like many of the aforementioned approaches do (Hadjighasem et al., 2016; Schlueter-Kuck and Dabiri, 2017b). Here we propose a consistent procedure for selecting the clustering parameters in the DBSCAN algorithm. Overall, we demonstrate the robustness of our methods to different systems using both analytical and experimental data, polluted by various levels of noise and sparsity. 
We also provide user-friendly, lightweight Python codes implementing these techniques on any dataset of particle trajectories.
The paper is organized as follows. In Section 2 we describe our methods for identifying hyperbolic and elliptic LCSs from sparse and noisy trajectory datasets. In Section 3 these methods are applied to benchmark analytical flows as well as experimental datasets, and we conclude with some general remarks in Section 4.
## 2 Methods
We consider a discrete set of \(N\) particles enclosed by a time-dependent body \(\Omega(t)\) and following the trajectories \(\{\mathbf{x}^{i}(t)\}_{i=1}^{N}\in\Omega(t)\subset\mathbb{R}^{d}\), where time \(t\in[t_{0},t_{f}]\) and \(d\) is the spatial dimension of the system. In the following, we will find it convenient to introduce the flow map \(\mathbf{F}_{t_{0}}^{t}(\mathbf{x}_{0}^{i})=\mathbf{x}^{i}(t)\), which takes the initial position \(\mathbf{x}_{0}^{i}\) of particle \(i\) at time \(t_{0}\) to its current position \(\mathbf{x}^{i}(t)\) at time \(t\).
### Hyperbolic coherent structures
Hyperbolic structures can be divided into two categories - repelling (attracting) LCSs are surfaces along which the separation (attraction) rate between neighboring particles on either side of the surface at initial (final) time is maximized. While a number of different theories have been proposed to identify hyperbolic LCSs in fluid flows (for reviews, see Haller, 2015; Allshouse and Peacock, 2015), we propose an adaptation to sparse datasets of the original technique based on the finite-time Lyapunov exponent (FTLE) (Haller, 2001; Shadden et al., 2005), widely used due to its simplicity.
#### 2.1.1 Dense trajectory or velocity datasets
We first review the definition of the FTLE field in the case where the flow map \\(\\mathbf{F}_{t_{0}}^{t}(\\mathbf{x}_{0})=\\mathbf{x}(t)\\) is known for every initial position \\(\\mathbf{x}_{0}\\in\\Omega(t_{0})\\). In practice, this condition is met when one has access to a continuous or discretized version of the entire velocity field of the system (over space and time), which can be used to numerically integrate the trajectory \\(\\mathbf{x}(t)\\) of any particle.
For two particles initially located at \\(\\mathbf{x}_{0}\\) and \\(\\mathbf{x}_{0}+d\\mathbf{x}_{0}\\), the separation between them at time \\(t\\) is
\\[\\mathbf{F}_{t_{0}}^{t}(\\mathbf{x}_{0}+d\\mathbf{x}_{0})-\\mathbf{F}_{t_{0}}^{t} (\\mathbf{x}_{0})=\
abla\\mathbf{F}_{t_{0}}^{t}(\\mathbf{x}_{0})d\\mathbf{x}_{0}+ O(|d\\mathbf{x}_{0}|^{2}), \\tag{1}\\]
where \\(\
abla\\mathbf{F}_{t_{0}}^{t}(\\mathbf{x}_{0})\\) is the gradient of the flow map, usually computed using finite-difference techniques (Haller, 2001; Shadden, 2012). In the limit \\(|d\\mathbf{x}_{0}|\\ll 1\\), the ratio \\(\\lambda\\) of initial and final distances between these two particles is
\\[\\lambda(\\mathbf{x}_{0};d\\mathbf{x}_{0})=\\frac{|\\mathbf{F}_{t_{0}}^{t}(\\mathbf{ x}_{0}+d\\mathbf{x}_{0})-\\mathbf{F}_{t_{0}}^{t}(\\mathbf{x}_{0})|}{|d\\mathbf{x}_{0}|} \\simeq\\frac{|\
abla\\mathbf{F}_{t_{0}}^{t}(\\mathbf{x}_{0})d\\mathbf{x}_{0}|}{|d \\mathbf{x}_{0}|}. \\tag{2}\\]
The largest possible stretch ratio \\(\\lambda\\) over all infinitesimal segments \\(d\\mathbf{x}_{0}\\) is equal to the largest singular value of the flow map gradient, which typically grows exponentially (Wiggins, 2003). The forward-time FTLE is defined as the ratio of the exponent of this growth to the time interval,
\\[\\Lambda_{t_{0}}^{t}(\\mathbf{x}_{0})=\\frac{1}{t-t_{0}}\\ln\\left[\\max_{d\\mathbf{x}_{ 0}}\\lambda(\\mathbf{x}_{0};d\\mathbf{x}_{0})\\right]=\\frac{1}{t-t_{0}}\\ln\\left[s_ {1}(\\mathbf{x}_{0})\\right], \\tag{3}\\]
where \\(s_{1}(\\mathbf{x}_{0})\\) denotes the largest singular value of \\(\
abla\\mathbf{F}_{t_{0}}^{t}(\\mathbf{x}_{0})\\). Since \\(\\Lambda_{t_{0}}^{t}(\\mathbf{x}_{0})\\) quantifies the rate of local material deformation, one can define the initial position of a repelling hyperbolic LCS over the time interval \\([t_{0},t]\\) as the ridges of the scalar field \\(\\Lambda_{t_{0}}^{t}(\\mathbf{x}_{0})\\) (which correspond to curves in the plane and surfaces in space). Conversely, attracting hyperbolic LCSs can be thought of as repelling LCSs in backward time; therefore, the final position of an attracting LCS over \\([t_{0},t]\\) can be defined as the ridges of the backward-time FTLE scalar field \\(\\Lambda_{t}^{t_{0}}(\\mathbf{x})\\), where \\(\\mathbf{x}=\\mathbf{F}_{t_{0}}^{t}(\\mathbf{x}_{0})\\).
#### 2.1.2 Sparse and noisy trajectory datasets
The flow map gradient \(\nabla\mathbf{F}_{t_{0}}^{t}\), which enters the usual definition (3) of the FTLE field, is inaccessible when one only has access to the trajectories of \(N\) particles \(\{\mathbf{x}^{i}(t)\}_{i=1}^{N}\). In this case, we introduce an approximation of the flow map gradient, \(\tilde{\nabla}\mathbf{F}_{t_{0}}^{t}\), which we calculate at every particle initial position \(\mathbf{x}_{0}^{i}=\mathbf{x}^{i}(t_{0})\) as follows. First, we draw a ball of radius \(\delta\) around the initial position \(\mathbf{x}_{0}^{i}\) of a given particle \(i\), and call \(\mathcal{N}\) the set of all particles \(j\), including \(i\) itself, such that \(|\mathbf{x}_{0}^{j}-\mathbf{x}_{0}^{i}|<\delta\), as pictured in Figure 2. The approximate flow map gradient at \(\mathbf{x}_{0}^{i}\) is then defined such that the relation
\\[\\mathbf{F}_{t_{0}}^{t}(\\mathbf{x}_{0}^{k})-\\mathbf{F}_{t_{0}}^{t}(\\mathbf{x}_ {0}^{j})\\simeq\\tilde{\
abla}\\mathbf{F}_{t_{0}}^{t}(\\mathbf{x}_{0}^{i})[ \\mathbf{x}_{0}^{k}-\\mathbf{x}_{0}^{j}] \\tag{4}\\]
holds for all pairs \(j,k\in\mathcal{N},j\neq k\). Assuming that there are \(M\) such pairs, the approximate flow map gradient can then be obtained by minimizing the square error between both sides of (4), resulting in the least squares problem
\[\tilde{\nabla}\mathbf{F}_{t_{0}}^{t}(\mathbf{x}_{0}^{i})=\arg\min_{\mathbf{A}}\ \frac{1}{M}\sum_{\begin{subarray}{c}j,k\in\mathcal{N}\\ j\neq k\end{subarray}}\left\|\mathbf{A}[\mathbf{x}_{0}^{k}-\mathbf{x}_{0}^{j}]-[\mathbf{F}_{t_{0}}^{t}(\mathbf{x}_{0}^{k})-\mathbf{F}_{t_{0}}^{t}(\mathbf{x}_{0}^{j})]\right\|_{2}^{2}+\beta\|\mathbf{A}-\mathbf{I}\|_{F}^{2}, \tag{5}\]
Figure 2: Identification of hyperbolic repelling LCSs. The flow map gradient is approximated as the least-square fit of the tensor that maps the initial positions of segments relating \\(\\mathbf{x}_{0}^{i}\\) and its neighbors in a neighborhood of radius \\(\\delta\\) to their final positions.
where \\(\\|\\cdot\\|_{2}\\) and \\(\\|\\cdot\\|_{F}\\) denote the Euclidian and Frobenius norms, respectively, and \\(\\beta\\) is a regularization parameter that biases \\(\\tilde{\
abla}\\mathbf{F}_{t_{0}}^{t}\\) towards the identity in the absence of data, which corresponds to the classical Tikhonov regularization in inverse problems (Kaipio and Somersalo, 2006). As shown in Appendix A, the solution of the above minimization problem is given by
\\[\\tilde{\
abla}\\mathbf{F}_{t_{0}}^{t}(\\mathbf{x}_{0}^{i})=(\\mathbf{Y}\\mathbf{X} ^{\\mathsf{T}}+\\beta M\\mathbf{I})(\\mathbf{X}\\mathbf{X}^{\\mathsf{T}}+\\beta M \\mathbf{I})^{-1}, \\tag{6}\\]
where the matrices \\(\\mathbf{X},\\mathbf{Y}\\in\\mathbb{R}^{d\\times M}\\) are defined as
\\[\\mathbf{X}=\\big{[}\\dots|\\,(\\mathbf{x}_{0}^{k}-\\mathbf{x}_{0}^{j} )\\,|\\,\\dots\\big{]}, \\tag{7}\\] \\[\\mathbf{Y}=\\big{[}\\dots|\\,(\\mathbf{F}_{t_{0}}^{t}(\\mathbf{x}_{0} ^{k})-\\mathbf{F}_{t_{0}}^{t}(\\mathbf{x}_{0}^{j}))\\,|\\,\\dots\\big{]}, \\tag{8}\\]
for all pairs \(j,k\in\mathcal{N},j\neq k\). While our approach is reminiscent of the work of Lekien and Ross (2010) to compute FTLEs on unstructured meshes, important differences include the specific set of particle pairs considered in the least squares problem (5), and the addition of a regularization term. We now replace the continuous singular value \(s_{1}(\mathbf{x}_{0})\) in (3) with \(\tilde{s}_{1}(\mathbf{x}_{0}^{i})\), the largest singular value of the approximate flow map gradient \(\tilde{\nabla}\mathbf{F}_{t_{0}}^{t}(\mathbf{x}_{0}^{i})\) obtained from (6), which yields the approximate forward-time FTLE at the particle position \(\mathbf{x}_{0}^{i}\):
\\[\\tilde{\\Lambda}_{t_{0}}^{t}(\\mathbf{x}_{0}^{i})=\\frac{1}{t-t_{0}}\\ln\\big{[} \\tilde{s}_{1}(\\mathbf{x}_{0}^{i})\\big{]}\\,. \\tag{9}\\]
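As a concrete illustration, the regularized least-squares fit (5)-(6) and the discrete FTLE (9) translate into a few lines of numpy. This is a minimal sketch under our own conventions (function name, brute-force neighbor search, \\((N,d)\\) position arrays), not the authors' implementation:

```python
import numpy as np

def discrete_ftle(x0, xt, t0, t, delta, beta=1e-10):
    """Approximate forward-time FTLE at each initial particle position.

    x0, xt : (N, d) arrays of initial and final particle positions.
    delta  : neighborhood radius used in the least-squares fit, Eq. (5).
    beta   : Tikhonov regularization weight, Eq. (5).
    Returns an (N,) array of FTLE values, Eq. (9); NaN where the
    neighborhood contains fewer than two particles.
    """
    N, d = x0.shape
    ftle = np.full(N, np.nan)
    for i in range(N):
        # Neighborhood: all particles (including i) within delta of x0^i.
        nbrs = np.where(np.linalg.norm(x0 - x0[i], axis=1) < delta)[0]
        if len(nbrs) < 2:
            continue
        # All pairs (j, k) of distinct neighbors, stacked as columns of X, Y.
        J, K = np.meshgrid(nbrs, nbrs, indexing="ij")
        mask = J != K
        X = (x0[K[mask]] - x0[J[mask]]).T   # (d, M), Eq. (7)
        Y = (xt[K[mask]] - xt[J[mask]]).T   # (d, M), Eq. (8)
        M = X.shape[1]
        # Regularized least-squares solution for the flow map gradient, Eq. (6).
        grad = (Y @ X.T + beta * M * np.eye(d)) @ \
               np.linalg.inv(X @ X.T + beta * M * np.eye(d))
        s1 = np.linalg.svd(grad, compute_uv=False)[0]  # largest singular value
        ftle[i] = np.log(s1) / (t - t0)
    return ftle
```

For a purely linear flow map \\(\\mathbf{x}_{t}=\\mathbf{A}\\mathbf{x}_{0}\\), the fit recovers \\(\\mathbf{A}\\) up to the (tiny) regularization bias, so the computed FTLE equals \\(\\ln s_{1}(\\mathbf{A})/(t-t_{0})\\) at every interior particle.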
Finally, the approximate initial positions of repelling LCSs over \\([t_{0},t_{f}]\\) can be defined as the ridges of the forward-time discrete FTLE \\(\\tilde{\\Lambda}_{t_{0}}^{t}(\\mathbf{x}_{0}^{i})\\). Conversely, the approximate final positions of attracting LCSs over \\([t_{0},t_{f}]\\) can be defined as the ridges of the backward-time discrete FTLE \\(\\tilde{\\Lambda}_{t}^{t_{0}}(\\mathbf{x}_{t}^{i})\\), where \\(\\mathbf{x}_{t}^{i}=\\mathbf{x}^{i}(t)\\) are the final particle positions, and the set \\(\\mathcal{N}\\) now comprises all particles \\(j\\), including \\(i\\) itself, such that \\(|\\mathbf{x}_{t}^{j}-\\mathbf{x}_{t}^{i}|<\\delta\\).
We note that (9) could in principle be calculated at an arbitrary spatial location \\(\\mathbf{x}\\), instead of an initial particle position \\(\\mathbf{x}_{0}^{i}\\). Indeed, replacing \\(\\mathcal{N}\\) with the set of all particles \\(j\\) such that \\(|\\mathbf{x}_{0}^{j}-\\mathbf{x}|<\\delta\\) would yield a continuous representation of the approximate FTLE field, \\(\\tilde{\\Lambda}_{t_{0}}^{t}(\\mathbf{x})\\). Here we simply compute the FTLE at initial particle positions \\(\\mathbf{x}_{0}^{i}\\) for ease of implementation.
Our approximate scheme requires us to choose the parameters \\(\\beta\\) and \\(\\delta\\). Although both parameters play a role in reducing the effect of noise, they do so in different ways - large \\(\\beta\\) values give less weight to the data-dependent term in (5), while large \\(\\delta\\) values result in a larger set \\(\\mathcal{N}\\) of particles. In practice, we have found that \\(\\delta\\) is much more effective than \\(\\beta\\) at countering the effect of noise in the data. Thus, denoting with \\(r\\) the average initial distance between neighboring particles, we choose a value \\(\\beta\\ll\\sqrt{r}\\) that only serves to regularize the least squares solution (6) in the degenerate case where \\(\\mathbf{X}\\mathbf{X}^{\\mathsf{T}}\\) is ill-conditioned. In the examples to follow, we select \\(\\beta=10^{-10}\\). For \\(\\delta\\) we start from a value on the order of \\(r\\), and slowly increase it until clean ridges emerge from the computed FTLE field. This will be illustrated with examples in Sections 3.1, 3.2, and 3.3.
### Elliptic coherent structures
Elliptic LCSs are surfaces enclosing regions of coherent motion in which particles remain close together. Here, we consider such coherent regions to comprise both vortex-type structures and elongated ones such as jets. Similarly to hyperbolic LCSs, there exist many different techniques for identifying elliptic LCSs in fluid flows, usually through the coherent regions that they enclose (for a review, see Hadjighasem et al., 2017).
As mentioned in the introduction, we focus on the class of methods that interpret individual trajectories as points in an abstract space endowed with a certain notion of distance, before using a clustering tool to identify coherent regions as clusters of trajectories that remain close over the considered time interval. We define the distance \\(d_{ij}\\) between two trajectories \\(\\mathbf{x}^{i}(t)\\) and \\(\\mathbf{x}^{j}(t)\\) as the time average
\\[d_{ij}=\\frac{1}{t-t_{0}}\\int_{t_{0}}^{t}\\|\\mathbf{x}^{i}(t^{\\prime})-\\mathbf{x} ^{j}(t^{\\prime})\\|dt^{\\prime}, \\tag{10}\\]
following Hadjighasem et al. (2016), who called it the dynamical distance. Treating trajectories like individual points, we then apply the density-based spatial clustering of applications with noise (DBSCAN) algorithm introduced by Ester et al. (1996), which was first utilized to detect coherent sets of trajectories in Schneide et al. (2018).
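A minimal numpy sketch of the dynamical distance (10), with the time average approximated by the sample mean over uniformly spaced snapshots; the function name and \\((N,T,d)\\) array layout are our own choices. For periodic domains, the pairwise separations would additionally need to be wrapped before taking norms:

```python
import numpy as np

def dynamical_distance(traj):
    """Pairwise dynamical distance, Eq. (10).

    traj : (N, T, d) array of N trajectories sampled at T uniform times.
    Returns an (N, N) symmetric matrix; the time integral is approximated
    by the mean of the sampled instantaneous separations.
    """
    N = traj.shape[0]
    D = np.zeros((N, N))
    for i in range(N):
        # |x^i(t') - x^j(t')| at every sample time, for all j at once.
        sep = np.linalg.norm(traj - traj[i], axis=2)   # (N, T)
        D[i] = sep.mean(axis=1)                        # time average
    return D
```

The resulting matrix can be fed directly to any clustering routine that accepts precomputed distances.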
The DBSCAN algorithm is implemented in many scientific libraries, making it quick and easy to run, and can be abstracted as shown in Algorithm 1 (Schubert et al., 2017). Given a set of points representing individual particle trajectories, separated by the distance defined in (10), the algorithm assigns points to clusters based on their local proximity with a given number of other points in the cluster rather than all of them. This produces groups defined by a minimum density throughout rather than proximity between all member points. The minimum density is set by the parameters minPts and eps: a minimum number minPts of points must belong to the same ball of radius eps in order to be part of the same cluster. As a result, not only is DBSCAN able to identify both compact and elongated structures such as vortices and jets, but it also has an inherent ability to designate points that do not belong to any cluster, lying alone in low-density regions, as noise. The resulting clusters identify groups of particles with coherent motion and enclosed by elliptic LCSs, with noise particles in between separating them, as illustrated in Figure 3. In practice, we use the scikit-learn implementation of the DBSCAN algorithm in Python (Pedregosa et al., 2011).
```
1 Define a neighborhood of distance eps around every point, and identify as core points those that have at least minPts neighbors
2 Create separate clusters for each group of neighboring core points; that is, core points within a distance eps of each other
3 Assign each non-core point to a cluster if it is in the neighborhood of a core point; otherwise, label it as noise
```
**Algorithm 1**DBSCAN (abstract form)
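The steps above map directly onto the scikit-learn implementation mentioned in the text, applied to a precomputed matrix of dynamical distances. The toy one-dimensional trajectories below (two well-separated bundles) are purely illustrative:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy demo: two bundles of nearly-parallel 1-D trajectories, far apart.
rng = np.random.default_rng(1)
base = np.concatenate([np.zeros(10), np.full(10, 100.0)])   # bundle offsets
traj = base[:, None] + rng.normal(0.0, 0.1, (20, 50))       # (N, T) paths

# Pairwise dynamical distances: time average of |x^i(t') - x^j(t')|.
D = np.abs(traj[:, None, :] - traj[None, :, :]).mean(axis=2)

# DBSCAN on the precomputed distance matrix; the label -1 marks noise.
labels = DBSCAN(eps=5.0, min_samples=3, metric="precomputed").fit_predict(D)
```

Passing `metric="precomputed"` is what allows an arbitrary trajectory-space distance such as (10) to be used in place of the default Euclidean metric on point coordinates.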
A critical aspect of any clustering method consists in the choice of the parameters, which in this case are minPts and eps. Although our approach based on the DBSCAN algorithm is similar to that of Schneide et al. (2018), we provide a consistent methodology for the choice of the parameter eps, given a value for minPts. We will describe this methodology in Section 3.1, along with its application to the Bickley jet, and verify that it produces consistent results in other systems in Sections 3.2 and 3.4.
## 3 Results
We now apply our algorithms for the detection of hyperbolic and elliptic LCSs to four different systems. The first two, the Bickley jet and ABC flow, have analytic velocity fields and are frequently used as benchmark problems for the detection of coherent structures (Hadjighasem et al., 2016, 2017). We use the given velocity fields to calculate trajectory data from randomly-seeded particles with artificially-introduced noise, and we compare the LCSs identified by our methods with the known ground truth. The third and fourth systems, moving cells in a developing chicken embryo and ocean tracers in the Gulf of Mexico, correspond to experimental data from developmental biology and oceanography, respectively, and highlight the capability of our methods to identify structures from sparse and noisy trajectory data.
### Bickley jet
The Bickley jet is an analytical model of a two-dimensional meandering zonal jet separating counter-rotating vortices in the Earth's atmosphere (del Castillo-Negrete and Morrison, 1993; Rypina et al., 2007). The time-dependent flow field is described by the streamfunction \\(\\psi(x,y,t)=\\psi_{0}(y)+\\psi_{1}(x,y,t)\\), where
\\[\\psi_{0}(y) =c_{3}y-UL\\tanh(y/L), \\tag{11a}\\] \\[\\psi_{1}(x,y,t) =UL\\,\\mathrm{sech}^{2}(y/L)\\sum_{n=1}^{3}\\epsilon_{n}\\cos(k_{n}(x- \\omega_{n}t)). \\tag{11b}\\]
In order to facilitate comparison, we use the same parameter values as in Schlueter-Kuck and Dabiri (2017a); that is, \\(U=62.66\\,\\mathrm{m}\\,\\mathrm{s}^{-1}\\), \\(L=1.77\\cdot 10^{6}\\,\\mathrm{m}\\), \\(k_{n}=2n/r_{0}\\) where \\(r_{0}=6.371\\cdot 10^{6}\\,\\mathrm{m}\\) is the radius of the Earth, \\(\\epsilon=[0.0075,0.15,0.3]\\), \\(c=[0.1446,0.205,0.461]U\\), and \\(\\omega=c-c_{3}\\). The flow is computed in the \\(x\\)-periodic domain \\(\\Omega=[0,\\pi r_{0}]\\times[-3,3]10^{6}\\,\\mathrm{m}\\). In the following, all distance units are in \\(10^{6}\\,\\mathrm{m}\\) and time units are in days. The streamlines produced at \\(t=0\\) by these parameter values are depicted in Figure 4\\((a)\\), and show the existence of two pairs of counter-rotating vortices separated by a meandering horizontal jet. The streamfunction and parameter values considered here produce the same flow field as that studied in Hadjighasem et al. (2017), but viewed in a reference frame translating at the constant speed of the vortices.
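For reference, the velocity field \\((u,v)=(-\\partial\\psi/\\partial y,\\,\\partial\\psi/\\partial x)\\) implied by (11) and the parameter values above can be coded as follows. This is a sketch in SI units with our own function name, for scalar inputs:

```python
import numpy as np

# Parameter values from the text (SI units).
U, L, r0 = 62.66, 1.77e6, 6.371e6
eps_n = np.array([0.0075, 0.15, 0.3])
k = 2 * np.arange(1, 4) / r0
c = np.array([0.1446, 0.205, 0.461]) * U
omega = c - c[2]

def velocity(x, y, t):
    """Bickley-jet velocity (u, v) = (-dpsi/dy, dpsi/dx) from Eq. (11),
    for scalar x, y, t."""
    sech2 = 1.0 / np.cosh(y / L) ** 2
    phase = k * (x - omega * t)          # shape (3,), one entry per mode
    # u = -d(psi0 + psi1)/dy
    u = (-c[2] + U * sech2
         + 2 * U * sech2 * np.tanh(y / L) * np.sum(eps_n * np.cos(phase)))
    # v = d(psi1)/dx
    v = -U * L * sech2 * np.sum(eps_n * k * np.sin(phase))
    return u, v
```

At the jet centerline \\(y=0\\) and \\(x=t=0\\), the perturbation terms vanish and the flow is purely zonal with speed \\(U-c_{3}\\), which gives a quick sanity check on the derivatives.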
We first compute repelling LCSs in the Bickley jet over the time window \\(t\\in[0,40]\\), visualized by the ridges of the forward FTLE field at initial time. As a benchmark, Figure 4\\((b)\\) displays
Figure 3: Identification of elliptic LCSs. The DBSCAN algorithm is applied to a set of points representing individual particle trajectories and separated by the distance defined in (10). It then identifies elliptic LCSs as clusters of trajectories that are close to each other.
Figure 4: Hyperbolic repelling LCSs in the Bickley jet. \\((a)\\) Streamlines at \\(t=0\\), \\((b)\\) exact continuous FTLE field over the time window \\(t\\in[0,40]\\), and discrete FTLE field computed over the same time window using 6000 particles with \\((c)\\)\\(\\sigma=0\\), \\(\\delta=0.3\\), \\((d)\\)\\(\\sigma=0\\), \\(\\delta=0.5\\), \\((e)\\)\\(\\sigma=0.2\\), \\(\\delta=0.3\\), \\((f)\\)\\(\\sigma=0.2\\), \\(\\delta=0.5\\). Here, \\(\\sigma\\) refers to the standard deviation of observation noise, and \\(\\delta\\) is the neighborhood radius used in the least squares fit of the flow map gradient. The colorbar is shared between plots \\((b\\)–\\(f)\\), and values above 0.16 are shown in the same color.
the reference FTLE field \\(\\Lambda_{0}^{40}\\), computed through a finite-difference approximation of the flow map gradient (Shadden, 2012) using a set of particles initialized on a \\(300\\times 90\\) rectangular grid. Particle trajectories were integrated from the velocity field using a 4th order Runge-Kutta method with tolerance \\(10^{-6}\\), as will be the case for all trajectory calculations in the remainder of the paper. We note that for the purpose of evaluating the flow map gradient, we let trajectories leave the initial domain \\(\\Omega\\) despite the periodicity in the \\(x\\)-direction. Indeed, the flow map gradient relates the initial and final values of the separation vector between infinitesimally close trajectories. Since this separation vector lives in the so-called 'tangent space' of \\(\\Omega\\), which is devoid of any periodicity, the flow map gradient is better approximated by evaluating the difference between unbounded trajectories.
When working with particle trajectories from experimental data that is both sparse and associated with random initial locations, we need to modify the finite-difference methodology utilized to deduce the results in Figure 4\\((b)\\). To simulate such a scenario using the Bickley jet, we compute trajectories for 6000 particles whose initial positions are randomly assigned through a spatially-uniform distribution function, and we add Gaussian noise of standard deviation \\(\\sigma\\) to each spatio-temporal measurement to simulate observation noise in real data. Figures 4\\((c\\)-\\(f)\\) show the discrete FTLE field \\(\\tilde{\\Lambda}_{0}^{40}\\) for different values of \\(\\sigma\\) and of the neighborhood radius \\(\\delta\\). In a manner similar to the continuous FTLE field in \\((b)\\), these discrete FTLE fields also display the two ridges separating the meandering jet from the vortices. In the absence of noise, a smaller value of \\(\\delta\\) enables a finer delineation of the ridges. However, in the presence of noise, increasing \\(\\delta\\) leads to an effective averaging that reduces the pollution caused by the noise and yields cleaner ridges, due to the additional data used in the least squares fit of the flow map gradient at every point. Therefore, \\(\\delta\\) controls a trade-off between the ridges' width and their pollution from noise. Because its optimal value depends on the inherent spatial resolution and noisiness of the data, \\(\\delta\\) should in practice be slowly increased until clean ridges emerge from the computed FTLE field, but not so large that it eventually smears the ridges.
We turn to the computation of elliptic LCSs over \\(t\\in[0,40]\\). The FTLE in Figure 4\\((b)\\) suggests the existence of such structures enclosing seven regions of the flow - each of the six vortices as well as the meandering jet, which has been confirmed by distinct methods (Hadjighasem et al., 2017; Vieira et al., 2020; Wichmann et al., 2021). We consider 1080 particles initialized on a rectangular grid and pollute their trajectories with Gaussian noise of standard deviation \\(\\sigma\\). We then calculate \\(d_{ij}\\) accounting for the periodicity of the domain in the \\(x\\)-direction, and then apply DBSCAN. Because the periodicity of the domain sets an upper bound on \\(d_{ij}\\), this makes it more difficult for the clustering algorithm to discriminate between particles belonging to different groups of coherent motion1. Some of the previous attempts at calculating elliptic LCSs using clustering algorithms did not account for such periodicity in the calculation of pairwise distances (Hadjighasem et al., 2016; Schlueter-Kuck and Dabiri, 2017). Finally, we note that the clustering results are largely insensitive to small perturbations in the initial positions of the particles; thus a rectangular grid was chosen simply in order to aid the visualization of the computed structures.
Footnote 1: For instance, the value of \\(d_{ij}\\) between a particle trajectory in the meandering jet and another in one of the vortices will be much smaller than if we did not account for domain periodicity due to the much faster horizontal velocity of the jet. This smaller value would then be closer to the value of \\(d_{ij}\\) between two particles trajectories in the same vortex.
Figure 5: Elliptic LCSs in the Bickley jet. \\((a,b)\\) Number \\(N_{i}\\) of particles in each of the ten largest groups identified by the DBSCAN algorithm as a function of the parameter eps for minPts \\(=10\\). The data consists of 1080 particle trajectories advected over \\(t\\in[0,40]\\) days and artificially polluted by observation noise of standard deviation \\((a)\\)\\(\\sigma=0\\) and \\((b)\\)\\(\\sigma=2\\). \\((c\\)–\\(h)\\) Coherent groups identified by DBSCAN for \\((c,e,g)\\)\\(\\sigma=0\\), \\((d,f,h)\\)\\(\\sigma=2\\), and different values of eps shown as the dotted lines in \\((a,b)\\). Movie1 shows the Lagrangian evolution of the particles in \\((e)\\).
Beginning with no observation noise, Figure 5\\((a)\\) shows the number \\(N_{i}\\) of particles in each of the ten largest groups identified by the DBSCAN algorithm as a function of the parameter eps for minPts \\(=10\\). The algorithm classifies all particles as noise (i.e., not part of any coherent group) for eps \\(=0\\), and lumps all particles into one unique group for large eps. This behavior is expected given that DBSCAN creates groups out of core particles with a minimum of minPts neighboring particles in a neighborhood of radius eps. As eps is varied between these two extremes, we observe two categories of groups. Groups 1 to 7 contain a rather stable number of particles over a wide range of eps, while groups 8 to 10 repeatedly appear and disappear under minute changes of eps. The first category contains physically meaningful groups, while the second category contains spurious groups that we henceforth discard in the following plots. Figures 5\\((c,e,g)\\) display the coherent groups identified by DBSCAN for the three different values of eps shown as the dotted lines in \\((a)\\). A small eps merely identifies the vortex cores while a large eps detects the meandering jet but not the vortices. Thus, eps sets the coherence length scale of the identified structures, as Wichmann et al. (2021) noticed with the OPTICS algorithm, itself a generalization of DBSCAN. We argue, however, that _an appropriate choice for eps is one that does not induce changes in the number of physically meaningful groups as its value is slightly perturbed_, which is a widespread philosophy in clustering applications (von Luxburg, 2010). Figure 5\\((a)\\) indicates that any eps in the range 3.5 to 14 generates the same number of clusters, which comprise both the meandering jet and the vortices as shown in \\((e)\\) for eps \\(=5\\). Thus, any value of eps in the range 3.5 to 14 and not too close to its boundaries is a valid choice. In particular, the clustering result obtained for eps \\(=5\\) identifies all known elliptic LCSs in the Bickley jet, and therefore validates our methodology for the choice of eps, which can be summarized as the following steps:
1. Discard spurious groups that repeatedly appear and disappear under very small changes of eps (in Figure 5(a), groups 8 and above),
2. Identify the range of eps in which the number of remaining physically meaningful groups does not change (in Figure 5(a), \\(3.5\\leq\\texttt{eps}\\leq 14\\)),
3. Select a value of eps in that range, preferably not too close to its boundaries.
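The scan underlying these three steps is straightforward to automate: run DBSCAN over a grid of eps values and record the sizes of the largest clusters, reproducing the kind of diagnostic shown in Figure 5\\((a)\\). The helper below is a sketch; its name and signature are our own:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_sizes_vs_eps(D, eps_values, min_pts=10, n_largest=10):
    """Sizes of the n_largest DBSCAN clusters for each value of eps.

    D          : (N, N) precomputed distance matrix, e.g. from Eq. (10).
    eps_values : iterable of eps values to scan.
    Returns an array of shape (len(eps_values), n_largest).
    """
    sizes = np.zeros((len(eps_values), n_largest), dtype=int)
    for row, eps in enumerate(eps_values):
        labels = DBSCAN(eps=eps, min_samples=min_pts,
                        metric="precomputed").fit_predict(D)
        # Cluster sizes in decreasing order, excluding the noise label -1.
        counts = sorted((np.sum(labels == c) for c in set(labels) - {-1}),
                        reverse=True)[:n_largest]
        sizes[row, :len(counts)] = counts
    return sizes
```

One would then plot these sizes against eps, discard columns that flicker under small eps perturbations, and pick eps inside the widest plateau of the remaining groups.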
We note that the few seemingly outlier particles away from the green and orange clusters in panel \\((e)\\) correctly belong to the assigned groups, as demonstrated in Appendix B. Movie1 shows the Lagrangian evolution of the particles in \\((e)\\).
We then consider the effect of observation noise. Figures 5\\((b,d,f,h)\\) are the counterparts of \\((a,c,e,g)\\), with Gaussian noise of standard deviation \\(\\sigma=2\\) applied to each spatio-temporal data point in the computed particle trajectories. To facilitate visualization, the particles in \\((d,f,h)\\) are displayed at their true initial location without the applied noise. Despite the intensity of the noise, the coherent groups identified by the clustering algorithm are remarkably similar to their noise-free counterparts. The main difference is that similar clustering results correspond to higher values of eps in the presence of noise, but our methodology itself is robust to noise.
Finally, we present in Appendix C a similar analysis as carried out in this section, using fewer particle trajectories. The results demonstrate the robustness of our algorithms for the detection of hyperbolic and elliptic coherent structures to various amounts of noise and sparsity in the data.
### ABC flow
For our second analytical example, we consider the ABC (Arnold-Beltrami-Childress) flow, a family of exact three-dimensional solutions of Euler's equation given by the steady velocity field
\\[\\dot{x} =A\\sin z+C\\cos y, \\tag{12a}\\] \\[\\dot{y} =B\\sin x+A\\cos z, \\tag{12b}\\] \\[\\dot{z} =C\\sin y+B\\cos x. \\tag{12c}\\]
We employ the same parameter values \\(A=\\sqrt{3}\\), \\(B=\\sqrt{2}\\), and \\(C=1\\) considered in previous studies (Dombre et al., 1986; Haller, 2001) and set the domain to be the three-dimensional torus \\(\\Omega=[0,2\\pi]^{3}\\) with periodic boundary conditions (Dombre et al., 1986; Froyland and Padberg, 2009). Figure 6\\((a)\\) shows three orthogonal Poincare sections of 100 trajectories initialized on each face of the cube and advected over \\(t\\in[0,2000]\\). These Poincare sections confirm the existence of six regions of coherent motion that have been discovered previously (Dombre et al., 1986; Budisic and Mezic, 2012; Hadjighasem et al., 2016).
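Trajectories in the steady ABC field (12) can be generated with a standard adaptive integrator. The sketch below uses scipy's `solve_ivp` and leaves the trajectory unbounded rather than wrapping it onto the torus, consistent with how flow-map differences are evaluated in the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

# ABC parameters used in the text.
A, B, C = np.sqrt(3), np.sqrt(2), 1.0

def abc_rhs(t, p):
    """Right-hand side of the steady ABC velocity field, Eq. (12)."""
    x, y, z = p
    return [A * np.sin(z) + C * np.cos(y),
            B * np.sin(x) + A * np.cos(z),
            C * np.sin(y) + B * np.cos(x)]

# Advect one particle over t in [0, 20]; dense_output gives a continuous
# interpolant of the trajectory for later sampling.
sol = solve_ivp(abc_rhs, (0.0, 20.0), [1.0, 1.0, 1.0],
                rtol=1e-6, atol=1e-9, dense_output=True)
```

Repeating this for an ensemble of random initial conditions yields the discrete trajectory data on which the FTLE and clustering analyses of this section operate.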
We first compute the repelling LCSs in the ABC flow over the time window \\(t\\in[0,20]\\), visualized by the ridges of the forward FTLE field. Figure 6\\((d)\\) displays the reference FTLE field \\(\\Lambda_{0}^{20}\\) at initial time, computed by finite-differencing the flow map obtained through advection of a dense rectangular initial lattice of \\(200\\times 200\\times 200\\) particles. Similar to our treatment of the Bickley jet, we let the trajectories leave the initial domain \\(\\Omega\\) when evaluating the flow map gradient. As expected, the FTLE ridges bound the regions of coherent motion revealed by the Poincare sections in Figure 6\\((a)\\). We then simulate sparse experimental data by computing the trajectories of 200 000 particles randomly initialized in \\(\\Omega\\) following a spatially-uniform distribution, and we apply a Gaussian noise of standard deviation \\(\\sigma\\) to each resulting spatio-temporal measurement. Figures 6\\((b,c,e,f)\\) show the discrete FTLE field \\(\\tilde{\\Lambda}_{0}^{20}\\) for different values of \\(\\sigma\\) and of the neighborhood radius \\(\\delta\\). As we have observed with the Bickley jet, a smaller value of \\(\\delta\\) enables a finer delineation of the ridges in the absence of noise. However, in the presence of noise, increasing \\(\\delta\\) reduces the pollution caused by the noise and leads to cleaner ridges. These results confirm the important role played by the parameter \\(\\delta\\) in the case of three-dimensional data as well.
We then turn to the computation of elliptic LCSs over \\(t\\in[0,20]\\), which should reveal the six coherent vortices that are seen in Figure 6\\((a)\\). We compute trajectories for a set of 15 625 particles initialized on a rectangular lattice, to which we add Gaussian noise of standard deviation \\(\\sigma\\). We then apply the clustering approach described in Section 2.2 to uncover the elliptic LCSs. Beginning with the case \\(\\sigma=0\\), Figure 7\\((a)\\) shows the number \\(N_{i}\\) of particles in each of the ten largest groups identified by the DBSCAN algorithm as a function of the parameter eps for minPts \\(=25\\). Following the strategy that we outlined in Section 3.1 for the choice of the parameter eps, we first discard groups 7 to 10 (and above) as they repeatedly appear and disappear under minute changes of eps. The number of remaining physically meaningful groups and their size remains stable for eps in the range 1 to 1.3. We thus pick eps \\(=1.1\\) and plot the corresponding clusters in Figure 7\\((c)\\). These identify each of the six coherent vortices that appear in Figure 6\\((a)\\), providing a validation of our methodology in the case of three-dimensional chaotic flows. Movie2 shows the Lagrangian evolution of the particles in Figure 7\\((c)\\), highlighting how trajectories belonging to the coherent groups behave very differently from those in the incoherent group, despite starting initially very close.
Thanks to the ability of the DBSCAN algorithm to scale up to a large number of particles, our approach does not require sparsification of the weighted graph as opposed to the spectral clustering methodology of Hadjighasem et al. (2016). For the number of particles we consider, the DBSCAN algorithm runs in a mere 5 seconds once the edge weights (10) are computed. Further, contrary to Hadjighasem et al. (2016), we account for the periodicity of the domain when calculating the pairwise distances in (10), which makes it more difficult for the clustering algorithm to discriminate between particles belonging to different groups of coherent motion. Finally, the corresponding results with Gaussian noise of standard deviation \\(\\sigma=0.5\\) are shown in Figures 7\\((b,d)\\). The particles in \\((d)\\) are displayed at their true initial noise-free location, to facilitate visualization. Once again, the remarkable similarity between the clusters obtained with and without noise exemplifies the robustness of both the algorithm and the parameter selection methodology.
Figure 6: Hyperbolic repelling LCSs in the ABC flow. (\\(a\\)) Poincare maps for integration time \\(t\\in[0,2000]\\), (\\(d\\)) exact continuous FTLE field over the time window \\(t\\in[0,20]\\), and discrete FTLE field computed over the same time window using 200 000 particles with (\\(b\\)) \\(\\sigma=0\\), \\(\\delta=0.2\\), (\\(c\\)) \\(\\sigma=0\\), \\(\\delta=0.3\\), (\\(e\\)) \\(\\sigma=0.2\\), \\(\\delta=0.2\\), (\\(f\\)) \\(\\sigma=0.2\\), \\(\\delta=0.3\\). Here, \\(\\sigma\\) refers to the standard deviation of observation noise, and \\(\\delta\\) is the neighborhood radius used in the least squares fit of the flow map gradient. The colorbar is shared between plots (\\(b\\)–\\(f\\)), and values below 0 and above 0.28 are shown in the same color.
Figure 7: Elliptic LCSs in the ABC flow. \\((a,b)\\) Number \\(N_{i}\\) of particles in each of the ten largest groups identified by the DBSCAN algorithm as a function of the parameter eps, for minPts\\(=25\\). The data consists of 15 625 particle trajectories advected over \\(t\\in[0,20]\\) and artificially polluted by observation noise of standard deviation \\((a)\\)\\(\\sigma=0\\) and \\((b)\\)\\(\\sigma=0.5\\). \\((c,d)\\) Coherent groups identified by DBSCAN for \\((c)\\)\\(\\sigma=0\\) and \\((d)\\)\\(\\sigma=0.5\\), and the value of eps shown as the dotted line in \\((a,\\!b)\\). Movie2 shows the Lagrangian evolution of the particles in \\((c)\\).
### Chicken embryo
For our third example, we consider the global deformation and flow of tissue in a developing chicken embryo. The experimental data, acquired by Rozbicki et al. (2015), consists of a velocity field tracking the coordinated motion of thousands of cells during a key embryonic phase known as gastrulation. This phase is characterized by the formation of the primitive streak, a structure that marks the onset of formation of multiple tissue layers and the establishment of the overall body plan during the early stages of development (Stern, 2004). The velocity field was obtained by imaging a live chicken embryo at regular intervals of time using light-sheet microscopy (LSM), and feeding the resulting high-resolution images of moving cells to a particle-image velocimetry (PIV) algorithm. This process was carried out over a period of 12 hours, spanning the whole formation of the primitive streak.
We begin by visualizing the tissue deformation during formation of the primitive streak. To this effect, we advect the square tissue domains shown at initial time in Figure 8\\((a)\\) under the PIV-derived velocity field, resulting in the deformed domains shown in Figure 8\\((b)\\) at final time. The domains are colored according to their percent volume change over the course of the 12 hours, with blue indicating contraction and red indicating expansion. The thin elongated vertical structure in \\((b)\\) is the primitive streak, which is formed by contraction of the deep blue domains towards it while moving away from the horizontal line of isolated red domains in \\((a)\\)(Rozbicki et al., 2015). Such deformation underlies the existence of a horizontal repelling LCS at initial time, and a vertical attracting LCS at final time (Serra et al., 2020). An additional circular repelling LCS separates the extra-embryonic region (outer red region) from the embryonic one \\((a,b)\\).
The existence of these LCSs is confirmed in Figures 8(\\(c\\),\\(e\\)), in which we employ the standard finite-difference approximation of the flow map gradient to compute \\((c)\\) the forward FTLE at initial time, and \\((e)\\) the backward FTLE at final time. One circular and one horizontal forward FTLE ridge mark hyperbolic repelling LCSs at the initial time. The circular ridge separates embryonic and extra-embryonic areas. At the same time, particles on either side of the horizontal ridge are drawn towards the anterior and the posterior of an attracting LCS, revealed by the backward FTLE ridge.
A continuous velocity field might not always be available, so we now consider the case where one only has discrete trajectory data for a small subset of cells. We simulate such a scenario by randomly initializing 6000 cells and advecting them under the PIV-derived velocity field. Using this single set of trajectory data, we utilize our discrete approach described in Section 2.1.2 with \\(\\epsilon=100\\,\\mu\\)m to compute the discrete forward FTLE shown in Figure 8\\((d)\\) at initial time, and the discrete backward FTLE shown in Figure 8\\((f)\\) at final time. The discrete results agree extremely well with their continuous counterparts. Most importantly, the ridges delineating the repelling and attracting LCSs are still clearly visible, despite the sparsity of the discrete dataset. Such agreement validates the applicability of our methodology for the computation of hyperbolic LCSs to sparse and noisy real-life experimental datasets.
### Gulf of Mexico
In our last example, we analyze the trajectories of a set of 152 acoustically-tracked isobaric floats in the Gulf of Mexico (Hamilton et al., 2016). The floats were ballasted for depth 1500 m and deployed over a two-year period starting in 2011, with each float recording its position over a duration varying between a week and 1.5 year. Based on a probabilistic Markov chain analysis (Froyland et al., 2014) of these trajectories, Miron et al. (2019) found that the deep circulation dynamics consist of two weakly-interacting provinces of near-equal area, in each of which drifters had a residence time of 3.5 to 4.5 years.
Figure 8: Hyperbolic LCSs in the coordinated flow of cells belonging to a developing chick embryo, captured through light-sheet microscopy (Rozbicki et al., 2015). \\((a,b)\\) Deformation of an initially-rectangular grid advected by the flow of cells over 12 hours. Each quadrilateral is colored according to its percent volume change between initial and final times. \\((c)\\) Exact continuous forward-time FTLE field at initial time and \\((d)\\) its discrete counterpart, calculated from 6000 random uniformly distributed particles at initial time and advected with the flow of cells. \\((e)\\) Exact continuous backward-time FTLE field at final time and \\((f)\\) its discrete counterpart, calculated from the same randomly distributed particles as in \\((d)\\). The parameter value \\(\\epsilon=100\\,\\mu\\)m is used in the calculation of the discrete FTLEs in \\((d,\\!f)\\). The white scale bar corresponds to \\(500\\,\\mu\\)m in all plots, and the colorbar is shared between \\((a,\\!b)\\), \\((c,\\!d)\\), and \\((e,\\!f)\\), respectively. FTLE values are in min\\({}^{-1}\\).
From the 152 float trajectories, we first select 35 of them that overlap during the same 610 days. Using the pairwise distance definition (10), we then apply the DBSCAN algorithm with \\(\\mathtt{minPts}=3\\). Figure 9\\((a)\\) shows the number \\(N_{i}\\) of particles in each of the ten largest resulting groups. Following our previous methodology, we discard groups 3 to 10 (and beyond) as they repeatedly appear and disappear, or change their size by multiple factors, under minute changes of eps. The size of the remaining two physically meaningful groups remains constant for eps in the range \\(2^{\\circ}\\) to \\(2.5^{\\circ}\\). We thus pick eps = \\(2.2^{\\circ}\\) and plot the corresponding clusters in Figure 9\\((b)\\) by coloring the trajectories (lines), initial (squares) and final (circles) positions of the floats according to their cluster membership. We recover the two equally-sized regions of coherent motion that were identified in Miron et al. (2019). Notably, this was achieved with much lower human effort and computational cost, since the latter study requires the construction of a spatial grid over which the matrix of transition probabilities is calculated. By contrast, our approach is meshless, straightforward to implement and takes less than a second of computational time in this particular case.
## 4 Conclusions
We have introduced objective computational techniques for detecting hyperbolic and elliptic Lagrangian Coherent Structures (LCSs) in complex systems based solely on knowledge of sparse and noisy particle trajectories. These techniques were validated using two benchmark problems defined by analytical velocity fields: the two-dimensional Bickley jet and the chaotic three-dimensional ABC flow. Our methodology produced accurate results at a low computational cost in both cases and exhibited good robustness to measurement noise. We then applied our techniques to two experimental datasets: the global flow of cells in a developing chicken embryo and the trajectories
Figure 9: Elliptic LCSs in the deep Gulf of Mexico, using trajectories produced over 610 days by 35 acoustically-tracked isobaric floats at depth 1500 m. \\((a)\\) Number \\(N_{i}\\) of floats in each of the ten largest groups identified by the DBSCAN algorithm as a function of eps, for \\(\\mathtt{minPts}=3\\). \\((b)\\) The two coherent groups identified by DBSCAN for \\(\\mathtt{eps}=2.2^{\\circ}\\) are shown in blue and orange, in terms of the initial (squares) and final (circles) positions of the floats as well as their trajectories (lines). The black dotted line indicates the 1500 m isobath.
of ballasted floats in the Gulf of Mexico, identifying the key LCSs organizing these complex flows. The complexity, computational requirements and sensitivity of most existing techniques for LCSs detection have prevented their broad use in engineering, physical and biological problems. To this end, we have provided a simple, lightweight and user-friendly Python code that implements our methods on any dataset of particle trajectories.
The present work opens the door to several avenues of future research. In our approach to detect hyperbolic structures, the least-squares problem (5) gives equal weights to all particle pairs contained within a ball of a given radius around the point of interest. Weighting each pair according to its distance from the point of interest may improve the accuracy of the fit. Likewise, the clustering results of DBSCAN are heavily dependent on the definition (10) of the distance between two trajectories, which currently measures the time average of the instantaneous pairwise distance. Taking inspiration from Schlueter-Kuck and Dabiri (2017a), one can use metrics such as the temporal standard deviation of the pairwise distance or alternative synchronization measures. Finally, we envisage extending our methods to account for incomplete and/or newly added trajectories. Examples include ocean drifters that break down or temporarily lose GPS tracking, cells dividing into two, and dying cells.
## Acknowledgments
We acknowledge Manli Chuai and Cornelis J. Weijer for providing the chicken embryo dataset. M.S. acknowledges partial support from the Schmidt Science Fellowship and the Postdoc Mobility Fellowship from the Swiss National Foundation. L.M. thanks the NSF-Simons Center for Mathematical and Statistical Analysis of Biology Award 1764269, NIH 1R01HD097068, the Simons Foundation, and the Henri Seydoux Fund for partial support.
## Code accessibility
Python codes implementing the methods described are shared in an online repository at [https://github.com/smowlavi/CoherentStructures](https://github.com/smowlavi/CoherentStructures) and can readily be applied to any experimental or computational dataset of particle trajectories.
## Appendix A Solution of the least squares problem
Here, we describe how to solve the least-squares problem defined by (5), which includes Tikhonov regularization (Kaipio and Somersalo, 2006). With the matrices \\(\\mathbf{X},\\mathbf{Y}\\in\\mathbb{R}^{2\\times m}\\) introduced in (8), one can rewrite (5) in matrix form as follows:
\\[\\tilde{\\nabla}\\mathbf{F}_{t_{0}}^{t}(\\mathbf{x}_{0}^{i})=\\arg\\min_{\\mathbf{A}}J(\\mathbf{A})\\equiv\\arg\\min_{\\mathbf{A}}\\ \\frac{1}{M}\\left\\lVert\\mathbf{A}\\mathbf{X}-\\mathbf{Y}\\right\\rVert_{F}^{2}+\\lambda\\left\\lVert\\mathbf{A}\\right\\rVert_{F}^{2}-2\\beta\\,\\mathrm{tr}(\\mathbf{A}). \\tag{15}\\]
Applying standard matrix calculus identities, the gradient of the objective function \\(J(\\mathbf{A})\\) with respect to matrix \\(\\mathbf{A}\\) is then given by
\\[\\frac{\\partial J(\\mathbf{A})}{\\partial\\mathbf{A}}=\\frac{2}{M}\\mathbf{A}\\mathbf{X} \\mathbf{X}^{\\mathsf{T}}-\\frac{2}{M}\\mathbf{Y}\\mathbf{X}^{\\mathsf{T}} +2\\lambda\\mathbf{A}-2\\beta\\mathbf{I}. \\tag{16}\\]
Setting the above derivative to zero and solving for \\(\\mathbf{A}\\), one finally obtains the solution given in (6).
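As a concrete check of this derivation, the closed-form minimizer and the FTLE it yields can be sketched as follows. This is an illustrative snippet with hypothetical function names; the FTLE is obtained from the largest eigenvalue of the Cauchy-Green strain tensor, as is standard.

```python
import numpy as np

def flow_map_gradient(X, Y, lam=1e-3, beta=1e-3):
    """Closed-form minimizer of
    J(A) = ||A X - Y||_F^2 / M + lam ||A||_F^2 - 2 beta tr(A),
    obtained by setting the gradient (16) to zero.

    X, Y: (d, M) arrays of initial and final relative positions of the
    M neighbors of the point of interest."""
    d, M = X.shape
    lhs = Y @ X.T / M + beta * np.eye(d)
    rhs = X @ X.T / M + lam * np.eye(d)
    return lhs @ np.linalg.inv(rhs)

def ftle(grad_F, T):
    """Finite-time Lyapunov exponent over a horizon of length T, from the
    largest eigenvalue of the Cauchy-Green strain tensor."""
    C = grad_F.T @ grad_F
    return np.log(np.sqrt(np.linalg.eigvalsh(C)[-1])) / abs(T)
```

For vanishing regularization and noise-free data, the estimate reduces to the exact flow map gradient, which is a useful sanity check when tuning lam and beta.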
## Appendix B Apparent outliers in the Bickley jet
We analyse the trajectories of two particles labeled as belonging to the green cluster in Figure 5(e) for the Bickley jet with \\(\\mathtt{minPts}=10\\), \\(\\mathtt{eps}=5\\) and \\(\\sigma=0\\), despite being located away from the rest of the cluster at initial time. The red and black lines in Figure 10(a) show the trajectories of these two particles with their initial positions indicated by circles, together with the trajectory, in blue, of a third particle that is initially located near the middle of the green cluster. We also display the entire clusters obtained in Figure 5(e) at \\(t=0\\) day. The trajectories reveal that the two apparent outlier particles in fact rapidly converge toward the green cluster, justifying their membership in the latter. Figure 10(b) depicts the final positions at \\(t=40\\) days of these two outliers as the red and black circles, together with the final position of the particle that started near the middle of the green cluster as the blue circle. The entire clusters are also displayed at \\(t=40\\) days, showing that all three particles are located in the interior of the green cluster. Thus, the DBSCAN algorithm is correct in assigning these two apparent outliers to the green cluster. A similar analysis shows that the two apparent outliers of the orange cluster rapidly converge towards the latter, justifying their labeling as well.
## Appendix C Bickley jet results with varying number of tracer particles
In this appendix, we demonstrate the ability of our methods to handle various levels of sparsity by revisiting the Bickley jet example investigated in Section 3.1, but using fewer particles this time. Starting with repelling LCSs, we compute trajectories for 2000 particles with random initial
Figure 10: Fate of two seemingly outlier particles in the green Bickley jet cluster obtained in Figure 5(e) using \\(\\mathtt{minPts}=10\\), \\(\\mathtt{eps}=5\\) and \\(\\sigma=0\\). (\\(a\\)) Initial positions (red and black circles) and trajectories (red and black lines) of these two apparent outliers superimposed with the position at \\(t=0\\) day of the entire clusters obtained in Figure 5(e). We also show the trajectory (blue line) of a particle that is initially located near the middle of the green cluster. (\\(b\\)) Final positions (red and black circles) of the same two apparent outliers superimposed with the clusters displayed at \\(t=40\\) days, together with the final position (blue circle) of the particle that started near the middle of the green cluster.
positions and we apply a Gaussian noise of standard deviation \\(\\sigma\\) to each measurement, as we did before. Figures 11(\\(a\\)-\\(d\\)) show the discrete FTLE field computed using the least squares technique described in Section 2.1.2 for different values of \\(\\sigma\\) and of the neighborhood radius \\(\\delta\\): (\\(a\\)) \\(\\sigma=0\\), \\(\\delta=0.4\\), (\\(b\\)) \\(\\sigma=0\\), \\(\\delta=0.6\\), (\\(c\\)) \\(\\sigma=0.2\\), \\(\\delta=0.4\\), (\\(d\\)) \\(\\sigma=0.2\\), \\(\\delta=0.6\\). Although the ridges of the FTLE field are coarser than in the presence of more data, we can still delineate the two that separate the meandering jet from the vortices. As we have observed before, increasing the parameter \\(\\delta\\) leads to cleaner ridges, especially in the presence of measurement noise. Next, we apply the clustering approach described in Section 2.2 to compute elliptic LCSs using a set of 480 particle trajectories initialized on a grid and polluted by Gaussian noise of standard deviation \\(\\sigma\\). Similar to Section 3.1, we account for the periodicity of the domain in the \\(x\\)-direction when calculating the pairwise Euclidean distance entering (10). Figures 12(\\(a\\),\\(b\\)) show the number \\(N_{i}\\) of particles in each of the ten largest groups identified by the DBSCAN algorithm as a function of the parameter eps with minPts = 7 and for (\\(a\\)) \\(\\sigma=0\\) and (\\(b\\)) \\(\\sigma=2\\). The lower value that we use here for minPts compared with Section 3.1 reflects the increased sparsity of the particle trajectories. Nevertheless, the results in Figures 12(\\(a\\),\\(b\\)) are remarkably similar to their counterparts in Figures 5(\\(a\\),\\(b\\)). Following the methodology outlined in Section 3.1, we discard spurious groups and select eps in a range where the number and size of the remaining, physically-meaningful groups remain stable. 
The corresponding clusters are plotted in Figures 12(\\(c\\),\\(d\\)), and they identify both the meandering jet and the vortices that were obtained in Section 3.1. Altogether, the results presented in this appendix illustrate the robustness of our detection algorithms to the simultaneous presence of noise and sparsity in the data.
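The periodic handling of the x-coordinate mentioned above can be sketched with a minimum-image convention. This is an illustrative snippet under our own naming: the time-averaged pairwise distance entering (10), evaluated on a domain that is periodic in x.

```python
import numpy as np

def periodic_mean_distance(traj_a, traj_b, period_x):
    """Time average of the instantaneous pairwise distance between two
    trajectories of shape (n_times, 2), with the x-difference wrapped onto
    a domain of width period_x (minimum-image convention)."""
    diff = traj_a - traj_b
    dx = diff[:, 0] - period_x * np.round(diff[:, 0] / period_x)
    return np.sqrt(dx**2 + diff[:, 1]**2).mean()
```

Without the wrapping step, two trajectories straddling the periodic boundary would be assigned a spuriously large distance and could be split into separate clusters.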
Figure 11: Hyperbolic repelling LCSs in the Bickley jet with fewer particles. Discrete FTLE field computed over the time window \\(t\\in[0,40]\\) using 2000 particles with (\\(a\\)) \\(\\sigma=0\\), \\(\\delta=0.4\\), (\\(b\\)) \\(\\sigma=0\\), \\(\\delta=0.6\\), (\\(c\\)) \\(\\sigma=0.2\\), \\(\\delta=0.4\\), (\\(d\\)) \\(\\sigma=0.2\\), \\(\\delta=0.6\\). Here, \\(\\sigma\\) refers to the standard deviation of observation noise, and \\(\\delta\\) is the neighborhood radius used in the least squares fit of the flow map gradient. The colorbar is the same as in Figure 4.
Figure 12: Elliptic LCSs in the Bickley jet with fewer particles. \\((a,b)\\) Number \\(N_{i}\\) of particles in each of the ten largest groups identified by the DBSCAN algorithm as a function of the parameter eps for minPts = 7. The data consists of 480 particle trajectories advected over \\(t\\in[0,40]\\) days and artificially polluted by observation noise of standard deviation \\((a)\\)\\(\\sigma=0\\) and \\((b)\\)\\(\\sigma=2\\). (\\(c\\),\\(d\\)) Coherent groups identified by DBSCAN for \\((c)\\)\\(\\sigma=0\\) and \\((d)\\)\\(\\sigma=2\\), and the value of eps shown as the dotted line in (\\(a\\),\\(b\\)).
## References
* Allshouse and Peacock [2015] M. R. Allshouse and T. Peacock. Lagrangian based methods for coherent structure detection. _Chaos: An Interdisciplinary Journal of Nonlinear Science_, 25(9):097617, 2015.
* Banisch and Koltai [2017] R. Banisch and P. Koltai. Understanding the geometry of transport: Diffusion maps for lagrangian trajectory data unravel coherent sets. _Chaos: An Interdisciplinary Journal of Nonlinear Science_, 27(3):035804, 2017.
* Budisic and Mezic [2012] M. Budisic and I. Mezic. Geometry of the ergodic quotient reveals coherent structures in flows. _Physica D: Nonlinear Phenomena_, 241(15):1255-1269, 2012.
* del Castillo-Negrete and Morrison [1993] D. del Castillo-Negrete and P. Morrison. Chaotic transport by rossby waves in shear flow. _Physics of Fluids A: Fluid Dynamics_, 5(4):948-965, 1993.
* Dombre et al. [1986] T. Dombre, U. Frisch, J. M. Greene, M. Henon, A. Mehr, and A. M. Soward. Chaotic streamlines in the abc flows. _Journal of Fluid Mechanics_, 167:353-391, 1986.
* Ester et al. [1996] M. Ester, H.-P. Kriegel, J. Sander, X. Xu, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. In _Kdd_, volume 96, pages 226-231, 1996.
* Everitt et al. [2011] B. S. Everitt, S. Landau, M. Leese, and D. Stahl. _Cluster analysis_. John Wiley, 2011.
* Fortunato [2010] S. Fortunato. Community detection in graphs. _Physics reports_, 486(3-5):75-174, 2010.
* Froyland [2013] G. Froyland. An analytic framework for identifying finite-time coherent sets in time-dependent dynamical systems. _Physica D: Nonlinear Phenomena_, 250:1-19, 2013.
* Froyland and Junge [2015] G. Froyland and O. Junge. On fast computation of finite-time coherent sets using radial basis functions. _Chaos: An Interdisciplinary Journal of Nonlinear Science_, 25(8):087409, 2015.
* Froyland and Padberg [2009] G. Froyland and K. Padberg. Almost-invariant sets and invariant manifolds--connecting probabilistic and geometric descriptions of coherent structures in flows. _Physica D: Nonlinear Phenomena_, 238(16):1507-1523, 2009.
* Froyland and Padberg-Gehle [2012] G. Froyland and K. Padberg-Gehle. Finite-time entropy: A probabilistic approach for measuring nonlinear stretching. _Physica D: Nonlinear Phenomena_, 241(19):1612-1628, 2012.
* Froyland and Padberg-Gehle [2015] G. Froyland and K. Padberg-Gehle. A rough-and-ready cluster-based approach for extracting finite-time coherent sets from sparse and incomplete trajectory data. _Chaos: An Interdisciplinary Journal of Nonlinear Science_, 25(8):087406, 2015.
* Froyland et al. [2010] G. Froyland, N. Santitissadeekorn, and A. Monahan. Transport in time-dependent dynamical systems: Finite-time coherent sets. _Chaos: An Interdisciplinary Journal of Nonlinear Science_, 20(4):043116, 2010.
* Froyland et al. [2014] G. Froyland, R. M. Stuart, and E. van Sebille. How well-connected is the surface of the global ocean? _Chaos: An Interdisciplinary Journal of Nonlinear Science_, 24(3):033126, 2014.
* Froyland et al. [2019] G. Froyland, C. P. Rock, and K. Sakellariou. Sparse eigenbasis approximation: Multiple feature extraction across spatiotemporal scales with application to coherent set identification. _Communications in Nonlinear Science and Numerical Simulation_, 77:81-107, 2019.
* Hadjighasem et al. [2016] A. Hadjighasem, D. Karrasch, H. Teramoto, and G. Haller. Spectral-clustering approach to lagrangian vortex detection. _Physical Review E_, 93(6):063107, 2016.
* Hadjighasem et al. [2017] A. Hadjighasem, M. Farazmand, D. Blazevski, G. Froyland, and G. Haller. A critical comparison of lagrangian methods for coherent structure detection. _Chaos: An Interdisciplinary Journal of Nonlinear Science_, 27(5):053104, 2017.
* Haller [2001] G. Haller. Distinguished material surfaces and coherent structures in three-dimensional fluid flows. _Physica D: Nonlinear Phenomena_, 149(4):248-277, 2001.
* Haller [2015] G. Haller. Lagrangian coherent structures. _Annual Review of Fluid Mechanics_, 47:137-162, 2015.
* Haller et al. [2021] G. Haller, N. Aksamit, and A. P. Encinas-Bartos. Quasi-objective coherent structure diagnostics from single trajectories. _Chaos: An Interdisciplinary Journal of Nonlinear Science_, 31(4):043131, 2021.
* Hamilton et al. [2016] P. Hamilton, A. Bower, H. Furey, R. Leben, and P. Perez-Brunius. Deep circulation in the gulf of mexico: A lagrangian study. Technical report, Bureau of Ocean Energy Management, 2016. OCS Study BOEM 2016-081, 289 pp.
* Hogan [1999] B. L. Hogan. Morphogenesis. _Cell_, 96(2):225-233, 1999.
* Kaipio and Somersalo [2006] J. Kaipio and E. Somersalo. _Statistical and computational inverse problems_, volume 160. Springer Science & Business Media, 2006.
* Lekien and Ross [2010] F. Lekien and S. D. Ross. The computation of finite-time lyapunov exponents on unstructured meshes and for non-euclidean manifolds. _Chaos: An Interdisciplinary Journal of Nonlinear Science_, 20(1):017505, 2010.
* Lumpkin and Pazos [2007] R. Lumpkin and M. Pazos. Measuring surface currents with surface velocity program drifters: the instrument, its data, and some recent results. _Lagrangian analysis and prediction of coastal and ocean dynamics_, 39:67, 2007.
* Marchetti et al. [2013] M. C. Marchetti, J.-F. Joanny, S. Ramaswamy, T. B. Liverpool, J. Prost, M. Rao, and R. A. Simha. Hydrodynamics of soft active matter. _Reviews of Modern Physics_, 85(3):1143, 2013.
* Merzkirch [2012] W. Merzkirch. _Flow visualization_. Elsevier, 2012.
* Miron et al. [2019] P. Miron, F. J. Beron-Vera, M. J. Olascoaga, G. Froyland, P. Perez-Brunius, and J. Sheinbaum. Lagrangian geography of the deep gulf of mexico. _Journal of Physical Oceanography_, 49(1):269-290, 2019.
* Morozov [2017] A. Morozov. From chaos to order in active fluids. _Science_, 355(6331):1262-1263, 2017.
* Nolan et al. [2020] P. J. Nolan, M. Serra, and S. D. Ross. Finite-time lyapunov exponents in the instantaneous limit and material transport. _Nonlinear Dynamics_, 100(4):3825-3852, 2020.
* Padberg-Gehle and Schneide [2017] K. Padberg-Gehle and C. Schneide. Network-based study of lagrangian transport and mixing. _Nonlinear Processes in Geophysics_, 24(4):661-671, 2017.
* Pedregosa et al. [2011] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, et al. Scikit-learn: Machine learning in python. _the Journal of machine Learning research_, 12:2825-2830, 2011.
* Rozbicki et al. [2015] E. Rozbicki, M. Chuai, A. I. Karjalainen, F. Song, H. M. Sang, R. Martin, H.-J. Knolker, M. P. MacDonald, and C. J. Weijer. Myosin-ii-mediated cell shape changes and cell intercalation contribute to primitive streak formation. _Nature cell biology_, 17(4):397-408, 2015.
* Rypina et al. (2007) I. Rypina, M. G. Brown, F. J. Beron-Vera, H. Kocak, M. J. Olascoaga, and I. Udovydchenkov. On the lagrangian dynamics of atmospheric zonal jets and the permeability of the stratospheric polar vortex. _Journal of the Atmospheric Sciences_, 64(10):3595-3610, 2007.
* Schlueter-Kuck and Dabiri (2017a) K. L. Schlueter-Kuck and J. O. Dabiri. Coherent structure colouring: identification of coherent structures from sparse data using graph theory. _Journal of Fluid Mechanics_, 811:468-486, 2017a.
* Schlueter-Kuck and Dabiri (2017b) K. L. Schlueter-Kuck and J. O. Dabiri. Identification of individual coherent sets associated with flow trajectories using coherent structure coloring. _Chaos: An Interdisciplinary Journal of Nonlinear Science_, 27(9):091101, 2017b.
* Schneide et al. (2018) C. Schneide, A. Pandey, K. Padberg-Gehle, and J. Schumacher. Probing turbulent superstructures in rayleigh-benard convection by lagrangian trajectory clusters. _Physical Review Fluids_, 3(11):113501, 2018.
* Schubert et al. (2017) E. Schubert, J. Sander, M. Ester, H. P. Kriegel, and X. Xu. Dbscan revisited, revisited: why and how you should (still) use dbscan. _ACM Transactions on Database Systems (TODS)_, 42(3):1-21, 2017.
* Ser-Giacomi et al. (2015) E. Ser-Giacomi, V. Rossi, C. Lopez, and E. Hernandez-Garcia. Flow networks: A characterization of geophysical fluid transport. _Chaos: An Interdisciplinary Journal of Nonlinear Science_, 25(3):036404, 2015.
* Serra and Haller (2016) M. Serra and G. Haller. Objective eulerian coherent structures. _Chaos: An Interdisciplinary Journal of Nonlinear Science_, 26(5):053110, 2016.
* Serra et al. (2017) M. Serra, P. Sathe, F. Beron-Vera, and G. Haller. Uncovering the edge of the polar vortex. _Journal of the Atmospheric Sciences_, 74(11):3871-3885, 2017.
* Serra et al. (2020a) M. Serra, P. Sathe, I. Rypina, A. Kirincich, S. D. Ross, P. Lermusiaux, A. Allen, T. Peacock, and G. Haller. Search and rescue at sea aided by hidden flow structures. _Nature communications_, 11(1):1-7, 2020a.
* Serra et al. (2020b) M. Serra, S. Streichan, M. Chuai, C. J. Weijer, and L. Mahadevan. Dynamic morphoskeletons in development. _Proceedings of the National Academy of Sciences_, 117(21):11444-11449, 2020b.
* Shadden (2012) S. C. Shadden. Lagrangian coherent structures. _Transport and Mixing in Laminar Flows: From Microfluidics to Oceanic Currents_, pages 59-89, 2012.
* Shadden et al. (2005) S. C. Shadden, F. Lekien, and J. E. Marsden. Definition and properties of lagrangian coherent structures from finite-time lyapunov exponents in two-dimensional aperiodic flows. _Physica D: Nonlinear Phenomena_, 212(3-4):271-304, 2005.
* Stern (2004) C. D. Stern. _Gastrulation: from cells to embryo_. CSHL Press, 2004.
* Tallapragada and Ross (2013) P. Tallapragada and S. D. Ross. A set oriented definition of finite-time lyapunov exponents and coherent sets. _Communications in Nonlinear Science and Numerical Simulation_, 18(5):1106-1126, 2013.
* Toth et al. (2016)G. S. Vieira, I. I. Rypina, and M. R. Allshouse. Uncertainty quantification of trajectory clustering applied to ocean ensemble forecasts. _Fluids_, 5(4):184, 2020.
* von Luxburg [2010] U. von Luxburg. Clustering stability: An overview. _Foundations and Trends in Machine Learning_, 2(3):235-274, 2010.
* Wichmann et al. [2020] D. Wichmann, C. Kehl, H. A. Dijkstra, and E. van Sebille. Detecting flow features in scarce trajectory data using networks derived from symbolic itineraries: an application to surface drifters in the north atlantic. _Nonlinear Processes in Geophysics_, 27(4):501-518, 2020.
* Wichmann et al. [2021] D. Wichmann, C. Kehl, H. A. Dijkstra, and E. van Sebille. Ordering of trajectories reveals hierarchical finite-time coherent sets in lagrangian particle data: detecting agulhas rings in the south atlantic ocean. _Nonlinear Processes in Geophysics_, 28(1):43-59, 2021.
* Wiggins [2003] S. Wiggins. _Introduction to applied nonlinear dynamical systems and chaos_, volume 2. Springer Science & Business Media, 2003.
* Williams et al. [2015] M. O. Williams, I. I. Rypina, and C. W. Rowley. Identifying finite-time coherent sets from limited quantities of lagrangian data. _Chaos: An Interdisciplinary Journal of Nonlinear Science_, 25(8):087408, 2015. | Many complex flows such as those arising from ocean plastics in geophysics or moving cells in biology are characterized by sparse and noisy trajectory datasets. We introduce techniques for identifying Lagrangian Coherent Structures (LCSs) of hyperbolic and elliptic nature in such datasets. Hyperbolic LCSs, which represent surfaces with maximal attraction or repulsion over a finite amount of time, are computed through a regularized least-squares approximation of the flow map gradient. Elliptic LCSs, which identify regions of coherent motion such as vortices and jets, are extracted using DBSCAN - a popular data clustering algorithm - combined with a systematic approach to choose parameters. We deploy these methods on various benchmark analytical flows and real-life experimental datasets ranging from oceanography to biology and show that they yield accurate results, despite sparse and noisy data. We also provide a lightweight computational implementation of these techniques as a user-friendly and straightforward Python code. | Give a concise overview of the text below. | 189 |
# Cloud Detection from RGB Color Remote Sensing Images with Deep Pyramid Networks
## 1 Introduction
The presence of clouds due to climate factors limits the clear acquisition of content information from the Earth surface for almost all optical sensors. Ultimately, this reduces visibility and adversely affects the processing of data for many remote sensing applications such as classification, segmentation and change detection. Hence, the detection/elimination of cloudy coverage constitutes an important pre-processing step for remote sensing.
In particular, RGB color bands are more sensitive to these atmospheric scattering conditions than high-wavelength sensors (i.e. infrared/multi-spectral) [1]. Thus, the problem becomes even harder, and the spatial content of the image needs to be leveraged rather than solely the spectral properties of clouds, as in multi-spectral/infrared sensors. For this reason, addressing the problem from the perspective of object segmentation and classification can yield more intuitive results. Moreover, more generalized solutions can be presented, instead of sensor-specific rules/thresholds [2, 3].
In this paper, we tackle the cloud detection problem by presenting a framework based on a deep pyramid network architecture (DPN) [4, 5]. Compared to the existing rule-based methods [6, 7, 8, 9], the proposed method exploits the texture information exhibited by cloudy/non-cloudy pixels with high-level features. This improves classification decisions without the need for any specific spectral information, since a pre-trained encoder network is capable of extracting rich and distinct high-level representations for visual objects in the images. Moreover, due to the architecture, the network is concurrently optimized for both the segmentation and classification phases. Lastly, since the ground truth cloud masks are quite noisy (i.e. achieving perfect pixel-level annotations is quite difficult [10]), the use of a pre-trained model for the abstract representation of an input provides robustness to the overall segmentation and classification phases.
Rest of the paper is organized as follows: related works are reviewed in Section 2. Section 3 is reserved for the detail of the proposed method and the problem statement. Experimental results, dataset and baseline methods are explained in Section 4 and the paper is concluded in Section 5.
## 2 Related Work
In this section, we review the literature for RGB color satellite images as well as other optical sensors such as multi-spectral/infrared to demonstrate the complexity of the problem for visible domain.
The methods used for multi/infrared bands are frequently based on radiometric properties of clouds/surface such as reflectance and temperature. [6, 7] exploit the variations of reflectance in thermal bands to distinguish clouds from the surface. Harb et al. [8] propose a processing chain based on the thermal pattern of clouds with morphological filtrations. Similarly, Braaten et al. [9] extend the assumption to multi-spectral data. However, these methods highly depend on sensor models (i.e. since they are rule-based methods), and the derived solution cannot be generalized to different sensors by using similar assumptions for band information.
Differently, multi-temporal methods aim to detect clouds based on background changes over time, for which data are acquired at different time instances. Zhu et al. [11] combine the thermal cloud patterns with time-series data to detect more accurate cloud masks. Moreover, the method in [12] uses temporal data to estimate clouds with a non-linear regression algorithm. However, the main limitation of these methods is that the time series of data are assumed to possess smooth variations on ground surfaces but abrupt changes for clouds. Furthermore, recording such dense data practically increases the operational cost.
In order to generalize the solution, classification-based approaches learn a set of parameters from training samples to distinguish clouds from the surface. Hu et al. [2] extract several low-level features, such as color and texture features, to estimate pixel-level masks. Recently, [3] classifies locally sampled patches (i.e. obtained by a Super-Pixel (SP) algorithm) with a Convolutional Neural Network (CNN) as cloud or non-cloud.
## 3 Cloud Detection With Pyramid Networks
As mentioned, since there is no explicit spectral/physical pattern for clouds in RGB color satellite images, we treat the problem as an object segmentation and classification problem in order to make a realistic problem formulation.
In particular, the texture details around cloudy regions indicate distinct visual patterns for detection/segmentation phases. Our aim is to extract high-level abstract representations from data and iteratively merge them to make pixel-level classification decisions. Moreover, the proposed method is able to compute segmentation and classification phases concurrently to optimize the network in an end-to-end learning manner, thus there is no need to employ these layers separately as in [3].
### Formulation
Suppose we are given a RGB color satellite image \\(\\mathbf{x}\\in\\mathbb{R}^{W\\times H\\times 3}\\), and the method aims to generate an image mask \\(\\mathbf{y}\\in\\mathbb{R}^{W\\times H\\times 2}\\) that implicitly corresponds to two-channel classification decisions for the ground surface and cloud/haze coverage.
Therefore, the main objective is to learn a set of parameters \\(\\theta_{c}\\) and \\(\\theta_{g}\\) for encoder and generator functions \\(C(.)\\) and \\(G(.)\\) such that the input-target error should be minimized for a set of training pairs \\(\\{\\mathbf{x}_{i},\\mathbf{y}_{i}\\}\\) based on a loss function:
\\[\\mathcal{L}=-\\frac{1}{N}\\sum_{i}\\mathbf{y_{i}}\\log(\\mathbf{p_{i}}) \\tag{1}\\]
where \\(\\mathbf{p_{i}}=G(C(\\mathbf{x}_{i},\\theta_{c}),\\theta_{g})\\) is the mask prediction of the network for the input \\(\\mathbf{x}_{i}\\). The softmax cross-entropy loss in Eq. 1 drives the learned input-target transformation toward the optimum. Moreover, \\(N\\) corresponds to the mini-batch size. In the inference stage, the decisions on cloudy/non-cloudy coverage are computed from the outputs of these learned functions.
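For concreteness, the loss in Eq. 1 can be sketched in NumPy as follows. This is a minimal illustration rather than the actual training code, and here the average is taken over all pixels of the mini-batch.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last (channel) axis."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, targets):
    """Softmax cross-entropy between two-channel pixel-wise predictions and
    one-hot targets of shape (N, H, W, 2), averaged over the mini-batch."""
    p = softmax(logits)
    return float(-np.mean(np.sum(targets * np.log(p + 1e-12), axis=-1)))
```

With uniform logits the loss equals log 2 for the two-class masks, which is a quick way to verify an implementation before training.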
### Architecture
Our deep network architecture consists of two main filter blocks [4, 5]. First, encoder block \\(C(.)\\) extracts robust abstract representations from a RGB color image. Then, generator block \\(G(.)\\) computes pixel-level segmentation and classification masks according to the responses of the encoder block. The overall architecture is illustrated in Figure 1.
**Encoder**: The encoder block takes an image as input and iteratively computes abstract representations by down-sampling the responses. Practically, the goal of this block is to unveil distinct patterns in the data that assist the generator in obtaining an optimal image mask. Moreover, the information flow to the generator is maximized with skip-connections [13].
Throughout the paper, we experimented with two different encoder models:
* First, a model with 5 convolutional layers and random parameter initialization is used. At each layer, we employ a batch normalization layer and an activation function, i.e. ReLU, after a convolution layer. Later, we down-sample the responses with stride 2. However, we found that random initialization fails to reach an optimal solution, due to the fact that the ground truth inevitably contains noisy labels caused by omission and/or registration noise during labeling [10]. This ultimately corrupts the learned parameters (i.e. \\(\\theta_{c}\\)), which then tend to generate false alarms in the inference stage.
* We use the convolutional responses of a pre-trained model, i.e. 'conv1_2', 'conv2_2', 'conv3_2', 'conv4_2' and 'conv5_2' of VGG-19 [14], and no fine-tuning is allowed for the encoder layers. This mitigates the problem, and more confident responses are obtained for an input. Note that even though the parameters of the model were trained for a different object recognition problem, studies have already shown that they are still capable of attaining the best accuracies on several remote sensing applications [15, 16].
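A frozen VGG-19 encoder of this kind can be sketched with the Keras application model. This is an illustrative snippet under the assumption of a tf.keras environment (the paper's code uses the TensorFlow framework); the layer names follow the Keras convention blockK_conv2 for 'convK_2'.

```python
import tensorflow as tf

# 'conv1_2' ... 'conv5_2' of VGG-19 in the Keras naming convention
VGG_LAYERS = ["block1_conv2", "block2_conv2", "block3_conv2",
              "block4_conv2", "block5_conv2"]

def build_encoder(input_shape=(512, 512, 3), weights="imagenet"):
    """Frozen VGG-19 encoder returning the five skip-connection responses."""
    vgg = tf.keras.applications.VGG19(include_top=False, weights=weights,
                                      input_shape=input_shape)
    vgg.trainable = False  # no fine-tuning of the encoder layers
    outputs = [vgg.get_layer(name).output for name in VGG_LAYERS]
    return tf.keras.Model(vgg.input, outputs, name="encoder")
```

Each successive response is halved in spatial resolution by the pooling layers, which is what enables the pyramid-style fusion in the generator.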
**Generator**: At each layer, the generator block fuses the abstract representations extracted by the encoder block by adding and up-sampling (with factor \\(2\\times\\)) recursively, as illustrated in Fig. 2. Similarly, we use batch normalizations and ReLU functions at the layers to speed up the optimization. Another advantage of these functions is that they improve the sparsity of the responses, as explained in [17] for remote sensing.

Figure 1: Deep Pyramid Network with Encoder and Generator filter blocks.
At the last layer, we utilize a softmax activation to produce the classification decisions, i.e. cloud or ground surface; it thus drives the mask decisions toward either 0 or 1 by the end of the learning stage.
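The add-and-upsample fusion described above can be sketched as follows. This is a hedged illustration: the 1x1 projections and the 64-channel width are our own assumptions (the encoder responses must share a channel count before they can be added), not values stated in the paper.

```python
import tensorflow as tf

def build_generator(features, num_classes=2):
    """Fuse encoder responses from coarse to fine: project each response to a
    common channel width, then repeatedly upsample by 2x, add the next-finer
    response, and apply batch normalization + ReLU."""
    proj = [tf.keras.layers.Conv2D(64, 1, padding="same")(f) for f in features]
    x = proj[-1]  # start from the coarsest level
    for skip in reversed(proj[:-1]):
        x = tf.keras.layers.UpSampling2D(2)(x)
        x = tf.keras.layers.Add()([x, skip])
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.ReLU()(x)
    # two-channel softmax decisions: ground surface vs. cloud
    return tf.keras.layers.Conv2D(num_classes, 1, activation="softmax")(x)
```

Feeding it the five encoder responses yields a mask at the resolution of the finest response, with per-pixel class probabilities summing to one.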
### Implementation Details
As a pre-processing step, we first normalize each pixel in an image with the constant value computed in [14], whether or not a pre-trained encoder model is used. This centers the data at zero mean and makes it reproducible for the pre-trained model.
For parameter optimization, the Adam optimizer [18] with momentum \\(\\beta_{1}=0.9\\) and \\(\\beta_{2}=0.999\\) is used, and the learning rate is set to 0.0001. Moreover, the value of \\(N\\) is set to 10 for \\(512\\times 512\\) RGB color images, and the maximum number of mini-batch iterations is set to 20K. Note that no data augmentation is utilized throughout the training stage. Lastly, all code is implemented in Python using the Tensorflow framework. The models are trained/evaluated on an NVIDIA Tesla K40 GPU card.
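These settings can be summarized in a short snippet. This is illustrative only; the per-channel mean values are the standard ones associated with the VGG training data of [14], and the names are ours.

```python
import numpy as np
import tensorflow as tf

# Per-channel RGB mean of the VGG training data [14], used for zero-centering
VGG_MEAN_RGB = np.array([123.68, 116.779, 103.939], dtype=np.float32)

def preprocess(batch):
    """Center every pixel, whether or not the pre-trained encoder is used."""
    return batch.astype(np.float32) - VGG_MEAN_RGB

# Adam with the momenta and learning rate stated above
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4, beta_1=0.9, beta_2=0.999)
```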
## 4 Experiments
In this section, we mention the details of the dataset we used in the experiments. Later, we report/discuss the experimental results conducted on this dataset.
### Dataset
The dataset consists of 20 images acquired from low-orbit RASAT and Gokturk-21 satellites, and their RGB resolutions are 15.0 m and 5.0 m respectively [19]. In particular, we opt to use the outputs of two different sensors in the dataset to demonstrate the generalization capacity of the proposed method. Moreover, Level-1 processed data is utilized in the experiments to reduce the defects caused by platform motion and optical distortion. The ground truth masks are manually labeled by human experts. Lastly, all methods are trained on 15 images and the rest is reserved for the testing stage.
Footnote 1: [https://gezgin.gov.tr/](https://gezgin.gov.tr/)
### Experimental Results
To evaluate the success of the proposed method, we compare the method with two baselines, deep pyramid network (DPN) and the combination of CNN with Super-Pixel segmentation as in [3]. Moreover, performance is measured by three score metrics, namely _Accuracy_ (correctness of the prediction), _Precision_ (reliability of the prediction) and _Latency_ (inference time).
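For pixel-level binary masks, the first two metrics reduce to simple counts over the predicted and ground-truth labels; a minimal sketch (the formulas are the standard definitions, assumed to match the paper's usage):

```python
import numpy as np

def accuracy(pred, truth):
    """Fraction of pixels whose predicted label matches the ground truth."""
    return float(np.mean(pred == truth))

def precision(pred, truth):
    """Fraction of predicted cloud pixels (label 1) that are truly cloud."""
    predicted_pos = np.sum(pred == 1)
    if predicted_pos == 0:
        return 0.0
    tp = np.sum((pred == 1) & (truth == 1))
    return float(tp / predicted_pos)
```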
We report the performance scores in Table 1. From the results, the proposed method (i.e. DPN+VGG-19) achieves the best accuracy and precision scores. In particular, our method significantly improves the precision score. This stems from the fact that using a pre-trained parameter model at the encoder block provides robustness to noisy-labeled data in the learning phase, which ultimately reduces the false-alarm rate in the inference stage. Another reason is that the proposed method is able to carry out the segmentation and classification phases concurrently, so the parameters are optimized to estimate the best segmentation masks rather than treating these steps separately as in [3]. Lastly, this also provides some advantages in computation time (i.e. on CPU for 3583\\(\\times\\)3584 resolution) as reported in Table 1 (note that [3] needs to generate a decision with a CNN for each local patch).
Furthermore, we illustrate the classification masks of the proposed method and SP+CNN [3] for the test images in Fig. 3\\({}^{2}\\). Perceptually, our method obtains impressive results, particularly for hard cases such as snowy mountains. Moreover, the method is also able to detect haze coverage (i.e. the last column in Fig. 3), even though there is a limited number of training samples for such haze types in the dataset. The reason is that the network exploits the texture around clouds rather than color information, since their patterns are more discriminative for clouds compared to snow/saturated cases.
Footnote 2: Note that you can find the results of all methods as well as ground truth masks in the project webpage with better visual quality.
## 5 Conclusion
In this paper, we propose a deep pyramid network to tackle cloud detection from RGB color images. The method is able to generate pixel-level decisions by exploiting spatial texture information in the visual data. Moreover, we show that the integration of a pre-trained CNN model at the encoder layer improves the accuracy of the classification masks, since more confident hidden representations are extracted from noisy labeled data. From the experimental results, the proposed method quantitatively outperforms the baselines and obtains perceptually superior results on the dataset.
\\begin{table}
\\begin{tabular}{l c c c} \\hline \\hline Image Id & Accuracy & Precision & Latency \\\\ \\hline SP+CNN [3] & 0.9820 & 0.6676 & \\(\\approx\\)30 min. \\\\ DPN & 0.9815 & 0.7502 & \\(\\approx\\)1 min. \\\\ \\hline DPN+VGG-19 (ours) & **0.9874** & **0.8776** & \\(\\approx\\)1 min. \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Accuracy and Precision scores for three methods. In particular, DPN+VGG-19 significantly improves the precision score compared to other methods. Latency of the methods is also reported for the inference stage.
Figure 2: Detail visualization of Generator filter block at a single layer.
## 6 Acknowledgments
The authors are grateful to NVIDIA Corporation for the donation of Tesla K40 GPU card used for this research.
## References
* [1] Q. Cheng _et al._, \"Cloud removal for remotely sensed images by similar pixel replacement guided with a spatio-temporal mrf model.\" _ISPRS JPRS_, 2014.
* [2] X. Hu _et al._, \"Automatic recognition of cloud images by using visual saliency features.\" _IEEE GRSL_, 2015.
* [3] F. Xie _et al._, \"Multilevel cloud detection in remote sensing images based on deep learning.\" _IEEE JSTAR_, 2017.
* [4] O. Ronneberger _et al._, \"U-net: Convolutional networks for biomedical image segmentation.\" _MICCAI_, 2015.
* [5] T.-Y. Lin _et al._, \"Feature pyramid networks for object detection.\" _arXiv preprint_, 2016.
* [6] Z. Zhu and C. E. Woodcock, \"Object-based cloud and cloud shadow detection in landsat imagery.\" _Remote Sensing of Environment_, 2012.
* [7] R. R. Irish _et al._, \"Characterization of the landsat-7 etm+ automated cloud-cover assessment (acca) algorithm.\" _Photogrammetric engineering and remote sensing_, 2006.
* [8] M. Harb _et al._, \"Automatic delineation of clouds and their shadows in landsat and cbers (hrcc) data.\" _IEEE JSTAR_, 2016.
* [9] J. D. Braaten _et al._, \"Automated cloud and cloud shadow identification in landsat mss imagery for temperate ecosystems.\" _Remote Sensing of Environment_, 2015.
* [10] V. Mnih, \"Machine learning for aerial image labeling,\" _University of Toronto_, 2013.
* [11] Z. Zhu and C. E. Woodcock, \"Automated cloud, cloud shadow, and snow detection in multitemporal landsat data: An algorithm designed specifically for monitoring land cover change.\" _Remote Sensing of Environment_, 2014.
* [12] L. Gomez-Chova _et al._, \"Cloud masking and removal in remote sensing image time series.\" _Journal of Applied Remote Sensing_, 2017.
* [13] K. He _et al._, \"Deep residual learning for image recognition.\" _CVPR_, 2016.
* [14] K. Simonyan and A. Zisserman, \"Very deep convolutional networks for large-scale image recognition.\" _arXiv preprint_, 2014.
* [15] E. Maggiori _et al._, \"High-resolution semantic labeling with convolutional neural networks,\" _IEEE TGRS_, 2017.
* [16] D. Marmanis _et al._, \"Deep learning earth observation classification using imagenet pretrained networks.\" _IEEE GRSL_, 2016.
* [17] S. Ozkan _et al._, \"Endnet: Sparse autoencoder network for endmember extraction and hyperspectral unmixing.\" _arXiv preprint_, 2017.
* [18] D. Kingma and J. Ba, \"Adam: A method for stochastic optimization.\" _arXiv preprint_, 2014.
* [19] M. Teke, \"Satellite image processing workflow for rasat and gokturk-2.\" _JAST_, 2016.
Figure 3: Cloud masks for DPN+VGG-19 (second row) and SP+CNN [3] (third row) with RGB color images (first row).

Abstract: Cloud detection from remotely observed data is a critical pre-processing step for various remote sensing applications. In particular, this problem becomes even harder for RGB color images, since there is no distinct spectral pattern for clouds that is directly separable from the Earth surface. In this paper, we adapt a deep pyramid network (DPN) to tackle this problem. For this purpose, the network is enhanced with a pre-trained parameter model at the encoder layer. Moreover, the method is able to obtain accurate pixel-level segmentation and classification results from a set of noisy labeled RGB color images. In order to demonstrate the superiority of the method, we collect and label data with the corresponding cloud/non-cloud masks acquired from the low-orbit Gokturk-2 and RASAT satellites. The experimental results validate that the proposed method outperforms several baselines even for hard cases (e.g. snowy mountains) that are perceptually difficult to distinguish by human eyes.

Savas Ozkan, Mehmet Efendioglu, Caner Demirpolat
TUBITAK Space Technologies Research Institute, Remote Sensing Group
Ankara, Turkey
{savas.ozkan, mehmet.efendioglu, caner.demirpolat}@tubitak.gov.tr

Keywords: Cloud Detection, Deep Pyramid Networks
# Simulated Annealing Approach to the Temperature-Emissivity Separation Problem in Thermal Remote Sensing Part One: Mathematical Background
John A. Morgan
The Aerospace Corporation
P. O. Box 92957
Los Angeles, CA 90009
[email protected]
## I Introduction
The temperature-emissivity separation (TES) problem bedevils any attempt to extract spectral information from remote sensing in the thermal infrared. A variety of methods has been proposed for handling it [1], [2]. In most of them, simultaneous LST and band emissivity retrieval depends upon specifying an emissivity value in one or more reference bands. The MODIS Land Surface Temperature (LST) algorithm [3] seeks a pair of reference channels in a part of the thermal spectrum in which the emissivity of natural surfaces displays very limited variation, and may therefore be regarded as known with good confidence. Multiband emissivities inferred on this basis are called "relative" emissivities [4]. Other algorithms of this nature include the reference channel method [5], emissivity normalization [6], the temperature-independent spectral index method [7], [8] and spectral ratios [9]. The study by Li et al. [4] shows that all of these relative emissivity retrieval algorithms are closely related, and argues that they may be expected to show comparable performance. The analysis of Multispectral Thermal Imager (MTI) data [10] depends on collection of radiance from a surface with looks at nadir and 60 degrees off-nadir, assuming a known angular dependence of emissivity, in order to balance equations and unknowns. The generalized split-window LST algorithm [11] likewise uses dual looks in a regression-law based approach. The "grey body emissivity" approach [12] exploits the slow variation of emissivity with wavelength for certain natural targets, while the physics-based MODIS LST algorithm [13] exploits observations taken at day and at night, on the assumption that band emissivities do not change over periods of a few weeks. A study with the Airborne Hyperspectral Scanner [14] compares multiple TES approaches.
We shall investigate a simulated annealing approach to the TES problem. The approach is an extension of earlier work on Bayesian TES [15], [16]. Simulated annealing cannot give a unique solution to this problem, but we shall prove that the average of a large number of simulated annealing TES solutions converges almost surely to a unique TES estimate.
This paper will concentrate on the mathematical basis of the algorithm and a proof of its convergence. A study of its performance will form the subject of a subsequent paper.
## II Background
Simulated annealing has traditionally been regarded as a preferred method of global solution for combinatorial optimization problems such as Traveling Salesman. In this paper, we adapt the Metropolis algorithm [17], [18], [19] to an optimization problem that lacks a unique global optimal solution: Temperature-emissivity separation. The under-determined temperature-emissivity separation (TES) problem, notoriously [1]-[14], has a continuous infinity of solutions that yield the identical optimum value for any cost or payoff function one cares to choose.
A key part of any simulated annealing algorithm is the choice of an annealing schedule that causes the posterior probabilities to transition from nearly uniform to very tight in such a way as to evade the risk of the MAP search from converging to a local, rather than a global, optimum. The look and feel of the justification for this approach is ergodic.
In what follows we shall mostly concern ourselves with the existence of a solution to the simulated annealing TES problem, and shall simply assume that a suitable annealing schedule has been supplied. Factors that enter into the choice of annealing schedule are described in [18], [19]. Selection of the annealing schedule and sample TES retrievals will be the subject of a subsequent paper.
## III Simulated Annealing and the temperature-emissivity separation problem
### _Metropolis Algorithm Search for Maximum A-Posteriori Solution_
Suppose that we have in our possession prior knowledge that a target patch that forms part of the lower boundary of the atmosphere is composed of an intimate mixture of \\(m+1\\) spectral endmembers \\(\\{\\epsilon_{i}(k)\\}\\) at temperature \\(T\\). For later convenience, we shall require that spectral emissivities be bounded continuous functions of wavenumber with bounded first derivative with respect to wavenumber. The label \\(k\\) may, depending upon context, refer to wavenumber, or to a finite number of wavenumber-averaged spectral bands. Except in Section III-B2, we shall assume the band interpretation.
The spectral mixture amounts to a mapping into a geometric m-simplex whose vertices have spectral endmembers at a stipulated temperature \\(T\\) for coefficients. Suppose we have \\(m+1\\) distinct points \\(\\mathbf{y}_{0},\\mathbf{y}_{1},\\cdots\\mathbf{y}_{m}\\) in \\(\\mathbf{R}^{\\mathbf{m}}\\) chosen so that the vectors \\(\\mathbf{y}_{1}-\\mathbf{y}_{0},\\mathbf{y}_{2}-\\mathbf{y}_{0}\\cdots\\mathbf{y}_ {m}-\\mathbf{y}_{0}\\) are linearly independent. Then the set
\\[K_{m}\\equiv\\sum_{i=0}^{m}\\lambda_{i}\\mathbf{y}_{i} \\tag{1}\\]
with
\\[\\lambda_{i}\\geq 0,\\forall\\,i \\tag{2}\\]
and
\\[\\sum_{i=0}^{m}\\lambda_{i}=1 \\tag{3}\\]
is an m-simplex. [20] A spectral mixture with stipulated weights \\(\\lambda_{i}\\) corresponds to the vector\\({}^{1}\\)
Footnote 1: Should the target patch contain an isothermal checkerboard mixture of end members, the weight \\(\\lambda_{i}\\) is to be interpreted as the fraction of the total surface area of the patch occupied by the \\(i\\)-th subregion \\(\\lambda_{i}=\\frac{\\lambda_{i}}{\\sum_{j=1,n}\\lambda_{j}}\\). The product \\(\\lambda_{i}\\epsilon_{i}(k)\\) is thus a normalized emissivity-area product for that subregion. We defer the case of endmembers with differing temperatures to a later date. We believe it, however, to be a straightforward extension of the reasoning in this paper.
\\[\\mathbf{x}=\\sum_{i=0}^{m}\\lambda_{i}\\mathbf{y}_{i}\\in\\mathbf{R}^{m}. \\tag{4}\\]
The _interior_ of \\(K_{m}\\) is the subset of \\(K_{m}\\) for which \\(\\lambda_{i}>0\\); \\(K_{m}\\) is the closure of its interior. The polyhedron of \\(K_{m}\\), denoted \\(|K_{m}|\\), is the set comprised of the points \\(\\mathbf{x}\\in K_{m}\\) considered as a subset of \\(\\mathbf{R}^{m}\\), and is a convex compact subset of \\(\\mathbf{R}^{\\mathbf{m}}\\).
In the case \\(m+1=3\\), a familiar example of a 2-simplex is the ternary diagram used to classify phreatic igneous rocks. The double three-component diagram used in the QAPF classification [21] scheme is a union of two 2-simplices, and is an example of a simplicial complex.
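A trial point satisfying the constraints of Eqs. (2) and (3) can be drawn, for example, by sampling the barycentric weights from a flat Dirichlet distribution, which is uniform over the simplex. The Dirichlet choice is ours, for illustration; the paper does not prescribe a sampling scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mixture(vertices):
    """Draw a random point of the m-simplex spanned by `vertices`.

    The weights come from a flat Dirichlet distribution, so lambda_i >= 0
    and sum(lambda_i) = 1 hold by construction, as in Eqs. (2)-(3).
    """
    vertices = np.asarray(vertices, dtype=float)
    lam = rng.dirichlet(np.ones(len(vertices)))
    return lam, lam @ vertices
```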
For present purposes, the \\(i^{th}\\) pure endmember for the \\(n^{th}\\) trial is assigned to the \\(i^{th}\\) vertex of \\(K_{m}\\)
\\[\\mathbf{y}_{i}\\Leftrightarrow\\epsilon_{i}(k),0\\leq i\\leq m \\tag{5}\\]
with the spectral mixture corresponding to a point in the polyhedron of \\(K_{m}\\).
It is necessary to account for surface temperature in a somewhat different way. Let the minimum and maximum physically admissible surface temperatures be \\(T_{min}\\) and \\(T_{max}\\), respectively. Then the temperature of our target patch is given by
\\[T=(1-\\lambda_{m+1})T_{min}+\\lambda_{m+1}T_{max} \\tag{6}\\]
with
\\[0\\leq\\lambda_{m+1}\\leq 1. \\tag{7}\\]
Corresponding to \\(\\mathbf{x}\\) introduced already, we have from Eq. (6)
\\[x_{m+1}\\in I^{1}, \\tag{8}\\]
the unit interval, with
\\[x_{m+1}\\Leftrightarrow T_{n} \\tag{9}\\]
The quantity that appears in the forward model for the \\(n^{th}\\) trial is
\\[\\langle\\epsilon(k)B_{k}(T_{n})\\rangle=\\sum_{i=1}^{m}\\lambda_{i}\\epsilon_{i}(k )B_{k}(T_{n}). \\tag{10}\\]
\\(B_{k}(T_{n})\\) is the (band-integrated, as needed) Planck function at temperature \\(T_{n}\\). The parametrization of the choice \\(\\{T_{n},\\epsilon(k)\\}\\) in terms of the vector \\(x\\) is a mapping into the topological product
\\[H^{m+1}\\equiv I^{1}\\otimes|K_{m}| \\tag{11}\\]
of \\(I^{1}\\) and \\(|K_{m}|\\). The set \\(H^{m+1}\\) is not a simplex, nor is it necessarily a simplicial complex. It is, however, a convex polytope, and is the convex hull of its vertices \\(\\mathbf{x}_{i},0\\leq i\\leq m+1\\).\\({}^{2}\\)
Footnote 2: Although we will not need it in what follows, \\(H^{m+1}\\) can be decomposed into either a simplicial complex or a union of simplices.
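The quantity in Eq. (10) can be sketched directly. The Planck function is written here per unit wavenumber in SI units (wavenumber in m\\({}^{-1}\\)); the CODATA constants are standard, but the unit conventions are our assumption rather than the paper's.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_wavenumber(sigma, temp):
    """Planck radiance per unit wavenumber (sigma in 1/m, temp in K)."""
    return 2.0 * H * C**2 * sigma**3 / np.expm1(H * C * sigma / (KB * temp))

def mixture_radiance(weights, endmembers, sigma, temp):
    """Eq. (10): emissivity-weighted Planck radiance for a trial mixture.

    weights: barycentric weights lambda_i; endmembers: rows of band
    emissivities eps_i(k); sigma: band wavenumbers; temp: trial T_n.
    """
    eps_mix = np.tensordot(weights, endmembers, axes=1)
    return eps_mix * planck_wavenumber(sigma, temp)
```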
We score trial mixtures by the quantity we most wish to maximize: the posterior probability for the observed spectral radiance to originate from a surface patch with temperature \\(T\\) and spectral emissivity \\(\\epsilon(k)\\). A standard argument [15], [16] gives the posterior probability in terms of a MAXENT estimator
\\[P(I\\mid T,\\epsilon,\\sigma)=exp\\left[-\\frac{(I-I_{FM})^{2}}{2\\sigma^{2}(T_{a})}\\right]\\frac{dI}{\\sigma(T_{a})} \\tag{12}\\]

in terms of a forward model
\\[I_{FM}=f\\left(\\sum_{i=1}^{m}\\lambda_{i}\\epsilon_{i}(k)B_{k}(T_{n})\\right)\\Leftrightarrow f (x) \\tag{13}\\]
that is some function of the \\(n^{th}\\) trial, in each spectral bin \\(k\\). We note that while the equation of transfer is linear, the dependence of its solution \\(I_{FM}\\) upon \\(\\epsilon_{i}(k)B_{k}(T_{n})\\) need not be. The assumed noise variance \\(\\sigma^{2}\\) is shown as having a formal dependence upon a parameter, the \"annealing temperature\" \\(T_{a}\\), which governs the annealing schedule for the search for a Maximum A-Posteriori solution. The joint posterior probability in \\(J\\) spectral bands is proportional to
\\[P(\\{I_{k}\\}\\mid T,\\epsilon,\\sigma)=\\prod_{k=1}^{J}\\,exp\\left[-\\frac{(I_{k}-I_{ FM}(k))^{2}}{2\\sigma^{2}(T_{a})}\\right]\\frac{dI}{\\sigma(T_{a})} \\tag{14}\\]
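In practice it is numerically safer to work with the logarithm of Eq. (14); a minimal sketch, dropping additive constants and assuming a common noise standard deviation across bands:

```python
import numpy as np

def log_posterior(observed, forward, sigma):
    """Log of Eq. (14): joint Gaussian likelihood over J spectral bands,
    up to additive constants; sigma is the assumed noise std deviation."""
    resid = (np.asarray(observed) - np.asarray(forward)) / sigma
    return -0.5 * np.sum(resid**2) - len(resid) * np.log(sigma)
```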
If radiance \\(I_{k}\\) in each of J bands originating from a patch on the Earth's surface has been detected at the top of the atmosphere (TOA), the posterior probability that the surface patch is at a temperature T given prior knowledge \\(K\\) is given by Bayes' theorem as
\\[P(T,\\epsilon_{i}(k)\\mid\\{I_{k}\\},K)=P(T,\\epsilon(k)\\mid K)\\frac{P(\\{I_{k}\\} \\mid T,\\epsilon_{i}(k),K)}{P(\\{I_{k}\\}\\mid K)}. \\tag{15}\\]
The noise variance is assumed known and the functional dependence of probabilities upon \\(\\sigma_{i}\\) is omitted. The prior probability \\(P(\\{I_{k}\\}\\mid K)\\) for the radiances \\(\\{I_{k}\\}\\) has no dependence upon \\(T\\) and for our purposes may be absorbed into an overall normalization. [22] Equation (15) is evaluated with the aid of the prior probability for the surface to be at temperature \\(T\\) and have spectral emissivity \\(\\epsilon(k)\\), given available knowledge \\(K\\)[15],
\\[P(T,\\epsilon(k)\\mid K)\\,dT\\,\\propto\\prod_{k}d\\epsilon(k)\\frac{dT}{T}. \\tag{16}\\]
\\(P(T,\\epsilon_{i}(k)\\mid\\{I_{k}\\},K)\\) is the conditional probability for the hypothesis that the surface temperature is \\(T\\), and the spectral emissivity \\(\\epsilon(k)\\), given observed radiances \\(\\{I_{k}\\}\\) and prior knowledge \\(K\\).
Each trial is thus scored according to the joint posterior probability for observed spectral radiances \\(\\{I_{k}\\}\\) to result from surface temperature \\(T\\) and spectral emissivity \\(\\epsilon(k)\\),
\\[p_{n}=P(T_{n},\\epsilon(k)\\mid\\{I_{k}\\},K)\\equiv p_{n}(x) \\tag{17}\\]
where \\(x\\) stands for \\(\\{\\mathbf{x},x_{m+1}\\}\\). Thus, in going from the \\((n-1)^{th}\\) to the \\(n^{th}\\) trial, the \\(n^{th}\\) candidate mixture is selected by Monte Carlo draw and \\(p_{n}\\) for the new trial is compared to \\(p_{n-1}\\) for the last one. The probability that it is accepted is [17], [19]
\\[P=\\left\\{\\begin{array}{ll}1&\\mbox{if }p_{n}/p_{n-1}\\geq 1\\\\ P(T_{a})&\\mbox{otherwise}\\end{array}\\right. \\tag{18}\\]
where the probability \\(P(T_{a})\\) of taking a downward step in \\(p_{n}\\) is determined by the annealing schedule. The dependence of \\(P(T_{a})\\) on the annealing schedule is symbolized by the annealing \"temperature\" \\(T_{a}\\) which is taken to decrease systematically during the MAP search. The actual form \\(P(T_{a})\\) takes in practical calculations is determined empirically.
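The acceptance rule of Eq. (18) can be sketched as follows, working with log posteriors. The exponential form of the downhill acceptance probability below is one conventional choice; the text deliberately leaves the actual form of \\(P(T_{a})\\) to be determined empirically.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_accept(log_p_new, log_p_old, t_anneal):
    """Eq. (18): accept uphill moves always; accept downhill moves with
    a probability that shrinks as the annealing temperature t_anneal
    decreases (exponential form assumed here for illustration)."""
    if log_p_new >= log_p_old:
        return True
    return bool(rng.random() < np.exp((log_p_new - log_p_old) / t_anneal))
```

At the start of the schedule (large `t_anneal`) nearly all moves are accepted; near the end, downhill moves are accepted almost never, matching the behavior assumed in the convergence argument below.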
### _Convergence_
#### Spectral mixtures comprising a finite number of endmembers
We now examine the question of convergence. Corresponding to the sequence of m-simplices \\(K_{m}\\) as the number of trials \\(n\\) increases without bound is a sequence of trials \\(\\{T_{n},\\epsilon_{i}(k)\\}\\) with associated loci \\(\\{x\\}\\in H^{m+1}\\).
As a closed bounded subset of \\(\\mathbf{R}^{m+1}\\), \\(H^{m+1}\\) is a compactum. Therefore, as \\(n\\rightarrow\\infty\\), the sequence of trials \\(x\\) contains a convergent subsequence, whatever the value of \\(m\\). Correspondingly, the sequence of posterior probabilities likewise has a convergent subsequence that, by construction, tends to the maximum value of the posterior probability, i.e., to a MAP solution for \\(T\\) and \\(\\epsilon(k)\\).
Consider the map \\(x^{\\prime}=\\Phi(x)\\) given by
\\[\\Phi(x)=\\left\\{\\begin{array}{ll}x^{\\prime}&\\mbox{if }p_{n}(x^{\\prime})-p_{n}(x) \\geq 0\\\\ x&\\mbox{otherwise}.\\end{array}\\right. \\tag{19}\\]
The mapping Eq. (19) gives the action of the Metropolis algorithm according to Eq. (18) at sufficiently late times in the annealing schedule that a transition to a state of decreased posterior probability occurs rarely; in the limit, almost never. We have noted that at a sufficiently late point in the annealing schedule, trials that decrease the posterior probability Eq. (17) will become infrequent. We may elide any such trials without affecting the convergence of the subsequence, which then takes the form
\\[x_{n+1}=\\Phi(x_{n}) \\tag{20}\\]
For all \\(n\\) greater than some \\(M\\), convergence of the subsequence implies the Cauchy condition
\\[d(x_{n},x_{n+1})=d(x_{n},\\Phi(x_{n}))<\\epsilon, \\tag{21}\\]
(with the Euclidean norm supplying a suitable metric for finite \\(m\\)) so that
\\[x\\rightarrow\\Phi(x). \\tag{22}\\]
The mapping Eq. (19) generates a sequence of trials \\(x\\) for which \\(p_{n}\\) is nondecreasing. By Zorn's Lemma, the set comprised of all admissible trials \\(x\\) has at least one element with a maximal value of \\(p_{n}\\). We note that maximizing \\(p_{n}\\) also maximizes the information-theoretic entropy by Eq. (12). According to the usual statement of the Second Law, the state of maximum entropy is one of thermodynamic equilibrium. We may therefore, in a nod to Refs. [17] and [25] both, call the limit Eq. (22) an equilibrium point.
We note that, in the limit, Eq. (22) amounts to a fixed point of Eq. (19). Ordinary fixed-point theorems are inapplicable to Eq. (19) because it is neither continuous nor semicontinuous: It can map an open set \\(\\subset H^{m+1}\\) to a singleton \\(x^{\\prime}\\). We can, however, adapt a celebrated construction introduced by Nash [25] to prove the existence of a fixed point of an equivalent self-mapping.
In fact, we shall prove a somewhat stronger result. Consider
\\[\\phi_{\\alpha}=max(0,p_{n}(x_{\\alpha})-p_{n}(x)). \\tag{23}\\]
for stipulated \\(x\\). The function \\(\\phi\\) is continuous in the mixture \\(x_{\\alpha}\\). Define the mapping \\(N:x\\to x^{\\prime}\\) by
\\[x^{\\prime}=\\frac{x+\\sum_{\\alpha}\\phi_{\\alpha}x_{\\alpha}}{1+\\sum_{\\alpha}\\phi_ {\\alpha}}, \\tag{24}\\]
where the index \\(\\alpha\\) is taken to run over members of any finite set of admissible trials \\(x_{\\alpha}\\) in the execution of the Metropolis algorithm. (One may think of the collection of all sequences \\(x_{\\alpha}\\) in ensemble-theoretic terms.) Suppose that \\(x^{\\prime}\\) is a fixed point under Eq. (24). In Eq. (24) some values of \\(\\alpha\\) correspond to choices for \\(\\{T_{i},\\epsilon_{i}(k)\\}\\) for which the posterior probability does not increase:
\\[p_{n}(x_{\\alpha})-p_{n}(x)\\leq 0. \\tag{25}\\]
For these values of \\(\\alpha\\),
\\[\\phi_{\\alpha}=0. \\tag{26}\\]
If the choice \\(x\\) is fixed under the mapping \\(N\\) in Eq. (24), then the contribution to \\(x^{\\prime}\\) from any \\(x_{\\beta}\\) must not decrease; therefore, \\(\\phi_{\\beta}=0,\\forall\\beta\\), lest the denominator in Eq. (24) exceed unity. Put another way, no other choice of \\(\\{T_{i},\\epsilon_{i}(k)\\}\\) can increase the posterior probability. But that is the definition of an equilibrium point.
If, on the other hand, an equilibrium point \\(x\\) maximizes the posterior probability Eq. (17), every \\(\\phi_{\\alpha}\\) vanishes, so that \\(x\\) is a fixed point.
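The Nash-style mapping of Eqs. (23)-(24) is concrete enough to sketch directly; the quadratic posterior used in the usage test below is an arbitrary stand-in for \\(p_{n}\\).

```python
import numpy as np

def nash_map(x, candidates, posterior):
    """Eqs. (23)-(24): self-mapping over a finite set of candidate trials.

    phi_alpha = max(0, p(x_alpha) - p(x)) weights each candidate by the
    positive part of its posterior gain over x; x is a fixed point exactly
    when no candidate increases the posterior (all phi_alpha vanish).
    """
    p_x = posterior(x)
    phi = np.array([max(0.0, posterior(c) - p_x) for c in candidates])
    cands = np.asarray(candidates, dtype=float)
    return (x + phi @ cands) / (1.0 + phi.sum())
```

With a concave stand-in posterior, a non-maximal point is pulled toward better candidates, while the maximizer is left unchanged.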
Equation (24) is continuous and maps points \\(x\\) into a convex compactum \\(\\subset{\\bf R}^{m+1}\\). A fixed point
\\[x=N(x) \\tag{27}\\]
exists according to the Brouwer fixed-point theorem that, by construction, maximizes the _a-posteriori_ probability of \\(x\\).
The mapping Eq. (19) generates a sequence of trials \\(x\\) for which \\(p_{n}\\) is nondecreasing and gives the maximal value of \\(p_{n}\\) in the limit, while Eq. (27) demonstrates the existence of a trial \\(x^{*}\\) for which \\(p_{n}\\) cannot be made greater. In view of the ensemble-theoretic freedom to choose \\(x_{\\alpha}\\), we may identify the limit in Eq. (22) with the fixed point in Eq. (27). Therefore, a convergent subsequence of annealing trials exists that tends to an equilibrium point. Moreover, Eq. (27) demonstrates that the annealing search can, in principle, find \\(x^{*}\\) in a finite number of trials. We conclude that, granted a suitable annealing schedule, there exists at least one convergent sequence of trials that tends to MAP surface temperature and spectral emissivity estimates consistent with observed spectral radiances \\(I_{k}\\).
#### Arbitrary spectral emissivities
The search algorithm just described assumes that the emissivity \\(\\epsilon(k)\\) is describable by a mixture of a finite set of spectral end members. While the spectral mixture characterization of \\(\\epsilon(k)\\) is of interest in its own right, it may be considered a stronger hypothesis than is strictly desirable. In particular, it seems intuitively reasonable that the simulated annealing approach to TES should work just as well-and admit a simpler algorithmic realization-by using trials with randomly chosen spectral emissivities, rather than by seeking a spectral mixture from a predetermined set of endmembers.
In fact, it is possible to reduce the case of search using arbitrary \\(\\epsilon(k)\\) for trials to an extension of the analysis in the preceding section by allowing the number of endmembers \\(m\\) to grow without limit for each trial \\(n\\). Instead of self-mappings into a single polytope with fixed \\(m\\), we consider a sequence of \\(H^{m_{n}+1}\\) as \\(m_{n}\\to\\infty\\), for each \\(n\\) in the annealing schedule.
The connection between an arbitrary \\(\\epsilon(k)\\) and a spectral mixture whose number of endmembers is allowed to grow without limit is easily seen. If we choose \\(x_{i},0\\leq i\\leq m_{j}\\) from a set of randomly chosen endmembers \\(\\epsilon_{i}(k)\\), it is clear that any random \\(\\epsilon(k)\\) can be constructed as a spectral mixture of other random endmembers. By induction: A single endmember \\(\\epsilon_{1}(k)\\) trivially reproduces an arbitrary \\(\\epsilon(k)\\) if it is chosen so \\(\\epsilon_{1}(k)=\\epsilon(k)\\). Suppose that any \\(\\epsilon(k)\\) equals a spectral mixture of \\(j\\) suitably chosen random \\(\\epsilon_{i}(k)\\). Then, applying the two-endmember case, it is possible to replicate any other \\(\\epsilon(k)\\) by a mixture of a spectral emissivity built from \\(j\\) endmembers and a \\((j+1)^{st}\\) random endmember.
We proceed by constructing the polytope for an arbitrary number of spectral endmembers. The polyhedron of the m-simplex \\(K_{m}\\) with unit diameter may be circumscribed by an m-sphere of radius [23]
\\[r\\leq\\sqrt{\\frac{m}{2(m+1)}}. \\tag{28}\\]
\\(|K_{m}|\\) is thus a subset of the topological product of \\(m\\) replicas of the unit interval \\([0,1]\\)
\\[|K_{m}|\\subset I^{m}\\equiv[0,1]\\otimes[0,1]\\cdots[0,1] \\tag{29}\\]
Every m-simplex is thus contained within the topological product of a countable infinity of replicas of the unit interval \\([0,1]\\)[24]
\\[|K_{m}|\\subset I^{\\infty}\\equiv[0,1]\\otimes[0,1]\\cdots, \\tag{30}\\]
as is every convex polytope
\\[H^{m+1}\\subset I^{\\infty}. \\tag{31}\\]
All the polyhedra \\(|K_{m}|\\) and polytopes \\(H^{m+1}\\) are compact, and by Tychonoff's theorem, the set \\(I^{\\infty}\\) which circumscribes every \\(|K_{m}|\\) and \\(H^{m+1}\\) is likewise sequentially compact.
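The circumradius bound of Eq. (28) that underlies this embedding is easy to check numerically; a minimal sketch:

```python
import numpy as np

def circumradius_bound(m):
    """Eq. (28): circumradius bound for a unit-diameter m-simplex."""
    return np.sqrt(m / (2.0 * (m + 1.0)))
```

For \\(m=1\\) (a unit segment) the bound is exactly 1/2, and as \\(m\\to\\infty\\) it increases toward \\(1/\\sqrt{2}\\), so all the polyhedra fit inside a bounded product of intervals.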
\\(I^{\\infty}\\) is homeomorphic to the _Hilbert cube_. The Hilbert cube
\\[\\mathcal{H}\\equiv[0,1]\\otimes[0,\\frac{1}{2}]\\cdots[0,\\frac{1}{n}]\\cdots \\tag{32}\\]

is a subset of a Hilbert space with the \\(l_{2}\\) norm.\\({}^{3}\\) \\(I^{\\infty}\\) is therefore a _complete_ space: The sequence of trials, by sequential compactness of the Hilbert cube and of sets homeomorphic to it, possesses a convergent subsequence whose limit is, by completeness of \\(I^{\\infty}\\), an element of that space. The limit of the convergent subsequence of m-polyhedra is likewise contained within \\(I^{\\infty}\\).
Footnote 3: For any finite dimensional subspace of \\(\\mathcal{H}\\) however, we may still take the Euclidean norm when choosing a metric.
The treatment of convergence in Section III-B1 requires modification when the number of spectral endmembers is allowed to grow without limit. At each \\(n\\), the self-mapping \\(N\\) and function \\(\\phi(x)\\) are applied to \\(H^{m_{n}+1}\\) as before to give existence of an equilibrium fixed point. The simplex dimension \\(m_{n}\\) is allowed to grow without bound, however. The resulting sequence of equilibrium points \\(\\in H^{m_{n}+1}\\) possesses a convergent subsequence \\(\\subset H^{m_{n}+1}\\) for each value of \\(n\\).
With that caveat, as \\(n\\to\\infty\\) the sequence \\(\\{x_{n}\\}\\) converges to a MAP estimate of \\(T\\) and an arbitrary \\(\\epsilon(k)\\) by the same reasoning used in the previous section. In consequence, the limiting equilibrium point of the convergent sequence \\(\\Phi(x_{n})=x_{n}\\) as \\(m_{n}\\to\\infty\\) will tend to an estimate of the MAP value for \\(\\{T,\\epsilon_{i}(k)\\}\\) for any admissible \\(\\epsilon(k)\\). We conclude that a suitable simulated annealing search will converge to an arbitrary spectral emissivity that gives a MAP estimate of \\(\\{T,\\epsilon_{i}(k)\\}\\).
### _Uniqueness_
Whatever the dimensionality of the spectral endmember parameterization of emissivity, sequential compactness guarantees existence of a convergent subsequence of trials. In practice, we must expect that there will be more than one such sequence. The nonuniqueness of solutions to the TES problem suggests that there will be a continuous infinity of possible trials \\(\\{T_{n},\\epsilon_{i}(k)\\}\\) that yield any stipulated value for the posterior probability. In any realizable search strategy, however, we need only contend with a countable set of convergent subsequences. Amongst these there will be one for which the posterior probability is greatest.\\({}^{4}\\) This will be the closest approach to the Maximum A-Posteriori solution achieved by simulated annealing. In the nature of things, more than one convergent subsequence may be expected to exist that yields this same maximal estimate, with the same asymptotic annealing temperature \\(T_{a}^{\\infty}\\). We ignore all subsequences except these maximal ones.
Footnote 4: One may appeal to Zorn’s lemma again at this point, if desired.
In References [15] and [16] expectation values for \\(T\\) and \\(\\{\\epsilon(k)\\}\\) over the posterior probability Eq. (15) were shown to give good estimates for physical surface temperatures and emissivities. We claim that the mean of a large number of subsequences that converge to the limiting MAP value will tend to the expectation values for \\(T\\) and \\(\\{\\epsilon(k)\\}\\) with respect to Eq. (15).
The MAXENT estimator is constructed from the posterior probability of noise power in a spectral bin. For the sake of simplicity we assume identical noise power in each bin.\\({}^{5}\\) A fully annealed MAP estimate may be thought of as an individual Bernoulli trial drawn from the likelihood function for \\(\\{T_{n},\\epsilon_{i}(k)\\}\\). By construction, all such trials are independent and identically distributed with bounded expectation values.\\({}^{6}\\)
Footnote 5: This assumption is inessential and may be relaxed.
Let
\\[\\overline{T}=\\frac{1}{N}\\sum_{i=1}^{N}T_{i} \\tag{33}\\]
and
\\[\\overline{\\epsilon(k)}=\\frac{1}{N}\\sum_{i=1}^{N}\\epsilon_{i}(k) \\tag{34}\\]
be the means of MAP surface temperature and spectral emissivity taken over \(N\) convergent subsequences. Suppose the covariance matrix \(\mathbf{\Sigma}\) of the trials is nonsingular. We invoke the multivariate Central Limit Theorem to conclude that the mean values converge weakly to the multivariate Gaussian distribution:
\\[\\sqrt{N}\\left(\\begin{array}{c}\\overline{T}-\\langle T\\rangle\\\\ \\overline{\\epsilon(1)}-\\langle\\epsilon(1)\\rangle\\\\ \\vdots\\\\ \\overline{\\epsilon(m)}-\\langle\\epsilon(m)\\rangle\\end{array}\\right)\\sim\\mathbf{ N}_{m}(0,\\mathbf{\\Sigma}). \\tag{35}\\]
Reliance on the mixing hypothesis in the form given by Eq. (10), however, brings with it the concern that the relevant covariance matrix might be singular. In that event, the strong law of large numbers [30], [31] ensures
\\[\\overline{T}\\stackrel{{ a.g.}}{{\\rightarrow}}\\langle T\\rangle \\tag{36}\\]
and
\\[\\overline{\\epsilon(k)}\\stackrel{{ a.s.}}{{\\rightarrow}}\\langle \\epsilon(k)\\rangle, \\tag{37}\\]
but without giving estimated variances of the mean values, such as come with Eq. (35).
To the extent that the estimator used in the simulated annealing search has zero-mean error, we conclude that the estimates yield accurate values for the physical values of \(T\) and \(\{\epsilon(k)\}\). As the spectral weights \(\mathbf{x}\), lying as they do between zero and unity, possess bounded moments, this conclusion applies to the limiting mean values of \(\{T_{n},\{x_{m}^{n}\}\}\) as well.
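The averaging claim can be illustrated numerically. The sketch below (pure Python, with made-up truth values and noise levels, not anything drawn from the references) treats each fully annealed MAP trial as an unbiased i.i.d. draw about the true surface temperature and a single emissivity value standing in for \(\{\epsilon(k)\}\); the spread of the mean shrinks roughly as \(1/\sqrt{N}\), as Eq. (35) predicts.

```python
import random
import statistics

def map_trial(t_true, eps_true, sigma_t, sigma_e, rng):
    """One fully annealed MAP trial, modeled as an unbiased noisy draw."""
    t = rng.gauss(t_true, sigma_t)
    # Emissivity stays in [0, 1] by construction.
    e = min(1.0, max(0.0, rng.gauss(eps_true, sigma_e)))
    return t, e

def mean_of_trials(n, t_true=300.0, eps_true=0.95, rng=None):
    rng = rng or random.Random(0)
    trials = [map_trial(t_true, eps_true, 2.0, 0.02, rng) for _ in range(n)]
    t_bar = statistics.fmean(t for t, _ in trials)
    e_bar = statistics.fmean(e for _, e in trials)
    return t_bar, e_bar

# The spread of the mean shrinks roughly like 1/sqrt(N), per Eq. (35).
small = [mean_of_trials(10, rng=random.Random(s))[0] for s in range(200)]
large = [mean_of_trials(1000, rng=random.Random(s))[0] for s in range(200)]
print(statistics.stdev(small), statistics.stdev(large))
```

The clipping of emissivity at the ends of \([0,1]\) introduces a slight bias in this toy model, which is why only the temperature average is taken as the clean illustration of the Central Limit Theorem behavior.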
### _Arbitrarily fine spectral resolution_
It is worth considering briefly the limiting case of infinite spectral resolution for \\(\\epsilon(k)\\). In this section only, \\(k\\) refers to wavenumber. We shall admit as endmembers any continuous function \\(\\epsilon_{i}(k)\\) on a compact interval \\([k_{1},k_{2}]\\in\\mathbf{R}\\), with
\\[0\\leq\\epsilon_{i}(k)\\leq 1, \\tag{38}\\]
and bounded first derivative
\\[\\left\\|\\frac{d\\epsilon_{i}(k)}{dk}\\right\\|_{sup}<W \\tag{39}\\]on that interval.
Consider
\\[f_{m_{n}}(k)=\\epsilon(k)-\\sum_{j=0}^{m_{n}}\\lambda_{j}\\epsilon_{j}(k), \\tag{40}\\]
where, as before, as \\(m_{n}\\) increases without bound, an admixture of arbitrary \\(\\epsilon_{j}(k)\\) is included in the spectral mixture. A standard argument shows that Eqs. (38) and (39) imply the spectral emissivities and Eq. (40) are all members of an equicontinuous set. By the Arzela-Ascoli lemma, as \\(m_{n}\\rightarrow\\infty\\) there is a subsequence of trial emissivity spectra
\\[\\epsilon^{t}(k)=\\sum_{j=0}^{m_{n}}\\lambda_{j}\\epsilon_{j}(k) \\tag{41}\\]
for which an \\(M\\) exists such that for \\(m_{n}>M\\) and for any positive \\(\\delta\\)
\\[|f_{m_{n}}(k)|<\\delta \\tag{42}\\]
uniformly on \\([k_{1},k_{2}]\\). Thus, any physically admissible spectral emissivity \\(\\epsilon(k)\\) may be approximated arbitrarily well by a suitable spectral mixture of an unlimited number of end members.
We may regard the spectral mixture \\(x\\) as an upper function on \\([0,1]\\) and take for a norm the Lebesgue measure in the limit \\(m_{n}\\rightarrow\\infty\\), with associated metric
\\[d(x,y)=\\int d\\mu(x-y). \\tag{43}\\]
A mixture \\(x\\) then becomes a vector in a Banach space. As \\(n\\rightarrow\\infty\\) the sequence of limiting fixed points under the self-mapping \\(\\Phi\\) has a convergent subsequence \\(x_{n}\\) which, again, satisfies the Cauchy condition
\\[d(x_{n},x_{n+1})<\\epsilon \\tag{44}\\]
The argument from the Cauchy property of the convergent subsequence of \\(x_{n}\\) to the conclusion that the subsequence converges to a MAP equilibrium point likewise follows much as before. As \\(n\\rightarrow\\infty\\)
\\[0<p(x_{n+1})-p(x_{n})<\\epsilon, \\tag{45}\\]
so that the convergent subsequence of annealing trials, again, tends to an equilibrium point.
## IV Discussion
In Section III-B we proved convergence of simulated annealing searches for candidate MAP TES solutions. In Section III-C we argued that the average of a large number of these candidate MAP solutions converges almost surely to a unique estimate of surface temperature and spectral emissivity that, given a forward model leading to an unbiased estimator for \(T\) and \(\{\epsilon(k)\}\), closely approximates the true values of these quantities.
The motivation for seeking the Nash equilibrium analogy came from the realization that spectral mixing theory [27], [28], [29] amounts to the use of mappings into a simplex, and that iterative choice of weights in the TES problem according to an annealing schedule amounts to a self-mapping into a convex polytope. The analogy with spectral mixing theory, however, is incomplete: spectral unmixing, in either its reflective or thermal variant, generally appends an extra end member called "virtual dark" or "virtual cold", used to accommodate the effect of contamination from noise and sensor artifacts. One may see the value of an extra end member by recalling that the highest-order components in a principal components decomposition of a multivariate dataset tend to be dominated by noise and artifacts that do not correlate with the physical content of lower-order components. The role of a virtual garbage end member will be discussed in Part Two.
A special case of great interest is the situation in which one seeks evidence that spectral radiance sensed at the top of the atmosphere (TOA) contains evidence for the presence of a specific spectral component. This problem may be addressed by use of spectral mixtures comprised of the desired spectral component together with generic continuous end members whose number is allowed to grow without bound. The analysis of Section III-B2, however, cannot simply be modified by appending an endmember corresponding to the desired emissivity spectrum, as any completely random choice of \\(\\epsilon(k)\\) may well be correlated with the stipulated spectrum and so introduce a spurious admixture of that endmember into the analysis. On the other hand, in a physical mixture of spectral endmembers such correlations might in fact occur naturally. Without going into details, we offer some thoughts on this problem.
One way to proceed is to randomly select \\(\\epsilon_{i}(k)\\) for the background in such a way that the otherwise arbitrary background emissivity spectra all lie in the null space of the stipulated endmember. The choice of background \\(\\epsilon(k)\\) is thus made in much the same way as an empirical orthogonal basis set is selected.
Depending on the intended application, another way to handle this problem might be to find some way of marginalizing over the set of arbitrary background \(\epsilon(k)\) in the calculation of the posterior probability Eq. (17). If performed by the same sort of stochastic sampling typical of simulated annealing, this calculation would resemble a numerical approximation to the Feynman-Kac formula [32]; in essence, a Monte Carlo path integral calculation.
## V Concluding Remarks
The practical utility of the mathematical development in this paper may be questioned. We address briefly two possible concerns.
While convergence of the algorithm has been proved to our satisfaction, we have no equally satisfactory estimates of the rate of convergence, with the consequence that the choice of annealing schedule remains a matter of trial-and-error. In response to this concern, we note that the massively parallel computation made possible by cheap, readily available GPU arrays means that heavy processing requirements need not preclude the use of a resource-hungry algorithm if that algorithm can provide performance not attainable by other approaches. Part Two of this study will address these issues.
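To make the schedule question concrete, here is a bare-bones Metropolis sampler with a geometric cooling schedule applied to a toy one-dimensional negative log-posterior; the constants (initial temperature, cooling factor, step size) are arbitrary placeholders for exactly the trial-and-error choices discussed above, not values from this paper.

```python
import math
import random

def anneal(neg_log_post, x0, step=0.5, t0=1.0, alpha=0.995, n=3000, seed=1):
    """Metropolis sampling with a geometric cooling schedule."""
    rng = random.Random(seed)
    x, e, t = x0, neg_log_post(x0), t0
    best_x, best_e = x, e
    for _ in range(n):
        cand = x + rng.uniform(-step, step)
        cand_e = neg_log_post(cand)
        de = cand_e - e
        # Metropolis acceptance at the current annealing temperature.
        if de < 0 or rng.random() < math.exp(-de / t):
            x, e = cand, cand_e
        if e < best_e:
            best_x, best_e = x, e
        t *= alpha  # geometric schedule: T_{n+1} = alpha * T_n
    return best_x, best_e

# Toy negative log-posterior, minimized at T = 290.
nlp = lambda x: 0.5 * ((x - 290.0) / 3.0) ** 2
print(anneal(nlp, x0=250.0))
```

Running many such searches with different seeds yields the ensemble of independent annealing trials whose average is discussed in Section III-C.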
Another legitimate concern is that the spectral emissivity of natural ground covers in the wild will seldom be known to the level of accuracy found in Ref. [26]. While true in general, this concern has not dissuaded other researchers from relying upon spectral unmixing.
The simulated annealing approach to TES by spectral unmixing does, however, offer something that other TES algorithms do not: By construction, it gives (in the limit) the unique best estimate in a Maximum _A-Posteriori_ sense, for the remote determination of surface temperature and spectral emissivity of a patch of ground that is known to be comprised of a spectral mixture of a stipulated set of spectral end members.
## References
* [1] Dash, P., F.-M. Gottsche, F.-S. Olesen, and H. Fischer, \"Land surface temperature and emissivity estimation from passive sensor data: theory and practice-current trends,\" _Int. J. Remote Sensing_, vol. 23, pp. 2563-2594, 2002
* [2] Li, Z.-L., H. Wu, N. Wang, S. Qiu, J. A. Sobrino, Z. Wan, B.-H. Tang, and G. Yan, "Land surface emissivity retrieval from satellite data," _Int. J. Remote Sensing_, vol. 34, pp. 3084-3127, 2013
* [3] Wan, Z.-M., _MODIS Land-Surface Temperature Algorithm Theoretical Basis Document_, Institute for Computational Earth System Science, University of California, Santa Barbara, 1999
* [4] Li Z.-L., F. Becker, M. P. Stoll, and Z. Wan, \"Evaluation of Six Methods for Extracting Relative Emissivity Spectra from Thermal Infrared Images,\" _Rem. Sens. Env._, vol. 69, pp. 197-214, 1999
* [5] Kahle, A. B., and R. E. Alley, \"Separation of Temperature and Emittance in Remotely Sensed Radiance Measurements,\" _Rem. Sens. Env._, vol. 42, pp. 107-111, 1992
* [6] Kealy, P. S., and S. J. Hook, \"Separating Temperature and Emissivity in Thermal Infrared Multispectral Scanner Data: Implications for Recovering Land Surface Temperatures,\" _IEEE Trans. Geosci. Remote Sensing_, vol. 31, pp. 1155-1164, 1993
* [7] Petitcolin, F., and E. F. Vermote, "Land Surface Reflectance, Emissivity and Temperature from MODIS Middle and Thermal Infrared Data," _Rem. Sens. Env._, vol. 83, pp. 112-134, 2002
* [8] Li, Z.-L., and F. Becker, "Feasibility of Land Surface Temperature and Emissivity Determination from AVHRR Data," _Rem. Sens. Env._, vol. 43, pp. 67-85, 1993
* [9] Watson, K., \"Spectral Ratio Method for Measuring Emissivity,\" _Rem. Sens. Env._, vol. 42, pp. 113-116, 1992
* [10] Borel, C. C., and J. Szymanski, "Physics-based Water and Land Temperature Retrieval," in _Handbook of Science Algorithms for the Multispectral Thermal Imager_, B. W. Smith, Ed., Los Alamos National Laboratory and Savannah River Technology Center, 1998
* [11] Wan, Z.-M., and J. Dozier, \"A generalized split-window algorithm for retrieving land-surface temperature from space,\" _IEEE Trans. Geosci. Remote Sensing_, vol. 34, pp. 892-905, 1996
* [12] Barducci, A., and I. Pippi, \"Temperature and emissivity retrieval from remotely sensed images using the 'Grey body emissivity' method,\" _IEEE Trans. Geosci. Remote Sensing_, vol. 34, pp. 681-695, 1996
* [13] Wan, Z.-M., and Z.-L. Li, \"A physics-based algorithm for land-surface emissivity and temperature from EOS/MODIS data,\" _IEEE Trans. Geosci. Remote Sensing_, vol. 35, pp. 980-996, 1997
* [14] Sobrino, J. A., J. C. Jimenez-Munoz, P. J. Zarco-Tejada, G. Sepulcre-Canto, and E. de Miguel, "Land surface temperature derived from airborne hyperspectral scanner thermal infrared data," _Rem. Sens. Env._, vol. 102, pp. 99-115, 2006
* [15] Morgan, J. A, \"Bayesian Estimation for Land Surface Temperature Retrieval: The Nuisance of Emissivities\", _IEEE Transactions on Geosciences and Remote Sensing_, _43_, pp. 1279-1288, 2005
* [16] Morgan, J. A., \"Comparison of Bayesian land surface temperature algorithm performance with Terra MODIS observations\", International Journal of Remote Sensing **32**, 2012, p. 8139; see also arXiv:0909.4461v3, 2011
* [17] Metropolis, N., A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, "Equation of State Calculations by Fast Computing Machines," J. Chem. Phys., vol. 21, pp. 1087-1091, 1953
* [18] Kirkpatrick, S., C. D. Gelatt, and M. P. Vecchi, "Optimization by Simulated Annealing," Science, vol. 220, pp. 671-680, 1983
* [19] Press, W., Flannery, B., Teukolsky, S., and Vetterling, W., _Numerical Recipes in C_, Cambridge University Press, Cantab., 1988, pp. 326-334
* [20] Rotman, J. J., _An Introduction to Algebraic Topology_, Springer-Verlag, New York, 1988, pp. 31-38
* [21] Le Bas, M. J., and A. L. Streckeisen, \"The IUGS systematics of igneous rocks\", Journal of the Geological Society, London, **148**, pp. 825-833, 1991
* [22] Bretthorst, L., \"Bayesian Spectrum Analysis and Parameter Estimation,\" in Berger, J., S. Fienberg, J. Gani, K. Krickenberg, and B. Singer, Eds, _Lecture Notes in Statistics_, Springer-Verlag, New York, 1988, vol. 48
* [23] Blumenthal, L. M., and G. E. Wahlin, "On the Spherical Surface of Smallest Radius Enclosing a Bounded Subset of n-Dimensional Euclidean Space," Bulletin of the American Mathematical Society, vol. 47, pp. 771-777, 1941
* [24] Milnor, J., \"The Geometric Realization of a Semi-Simplicial Complex,\" Annals of Mathematics **65**, pp. 357-362, 1957
* [25] Nash, J. F., \"Non-Cooperative Games,\" Annals of Mathematics **54**, pp. 286-295, 1951
* [26] Salisbury, J. W., L. S. Walter, N. Vergo, and D. M. D'Aria, _Infrared (2.1-25 μm) Spectra of Minerals_, Johns Hopkins University Press, Baltimore, 1992
* [27] Johnson, P. E., M. O. Smith, and J. B. Adams, \"Quantitative Analysis of Planetary Reference Spectra with Principal Components Analysis,\" J. Geophys. Res. **90**, pp. C805-C810, 1985
* [28] Adams, J. B., M. O. Smith, and P. E. Johnson, \"Spectral Mixture Modeling: A New Analysis of Rock and Soil Types at the Viking Lander 1 Site,\" J. Geophys. Res. **91**, pp. B8098-B8112, 1986
* [29] Adams, J. B., M. O. Smith, and P. E. Johnson, "Correction to 'Spectral Mixture Modeling: A New Analysis of Rock and Soil Types at the Viking Lander 1 Site'," J. Geophys. Res. **91**, p. B10,513, 1986
* [30] van der Vaart, A. W., _Asymptotic Statistics_, Cambridge University Press, Cantab., 1998
* [31] Artstein, Z., and R. A. Vitale, "A Strong Law of Large Numbers for Random Compact Sets," The Annals of Probability, vol. 3, pp. 879-882, 1975
* [32] Kac, M., _Probability and Related Topics in the Physical Sciences_, Interscience Publishers, London, Chapter IV, 1959
John Morgan is a Senior Engineering Specialist in the Sensing and Exploitation Department at The Aerospace Corporation, where his duties include studies of spacecraft sensor system performance and remote sensing phenomenology. He has a BS from Caltech, and MS and Ph.D. degrees from Rice University, in physics.

Abstract: The method of simulated annealing is adapted to the temperature-emissivity separation (TES) problem. A patch of surface at the bottom of the atmosphere is assumed to be a greybody emitter with spectral emissivity \(\epsilon(k)\) described by a mixture of spectral endmembers. We prove that a simulated annealing search conducted according to a suitable schedule converges to a solution maximizing the _A-Posteriori_ probability that spectral radiance detected at the top of the atmosphere originates from a patch with stipulated \(T\) and \(\epsilon(k)\). Any such solution will be nonunique. The average of a large number of simulated annealing solutions, however, converges almost surely to a unique Maximum A-Posteriori solution for \(T\) and \(\epsilon(k)\). The limitation to a stipulated set of endmember emissivities may be relaxed by allowing the number of endmembers to grow without bound, and to be generic continuous functions of wavenumber with bounded first derivatives with respect to wavenumber.

Index Terms: Remote Sensing; Temperature-Emissivity Separation; Surface Temperature Estimation.
Transformer Models for 120-Hour Streamflow Prediction
Bekir Z. Demiray1 and Ibrahim Demir1,2
Keywords: Rainfall-runoff modeling; deep learning; flood forecasting; transformers; streamflow forecasting

## 1 Introduction
Recent years have seen a significant escalation in both the frequency and severity of natural disasters across the globe, underscoring the need for more reliable forecasting tools that extend beyond immediate alerts. According to an analysis by the World Meteorological Organization (2021), weather, climate, or water-related disasters have, on a daily average over the last fifty years, resulted in financial losses of $202 million and led to 115 fatalities. Additionally, a 2022 report by Munich Re highlights that natural catastrophes, which include a range of disasters from hurricanes to floods, have cumulatively caused over $280 billion in damages worldwide. Specifically, in the United States, these disasters have led to an estimated $145 billion in damages, not to mention the immense loss of lives and significant property destruction. Research continues to point towards climate change as a driving force behind the increasing intensity and frequency of these extreme natural events, predicting a rise in both the scale of destruction and the number of affected individuals (WMO, 2021; UNESCO, 2023; Banholzer et al., 2014; IPCC, 2022).
Flooding ranks as the most frequent natural disaster, causing extensive financial damages and claiming numerous lives (WHO, 2021). Notably, in 2020, floods constituted over 60% of all natural disasters reported, contributing to 41% of the total deaths from such events (NDRCC, 2021). Research indicates that climate change is exacerbating the frequency and severity of flooding in various regions (Davenport et al., 2021; NOAA, 2022; Tabari, 2020; IPCC, 2022). This trend can be attributed to a variety of factors, including rising sea levels (Strauss et al., 2016), increased extreme rainfall events (Diffenbaugh et al., 2017), and more intense rainfall during hurricanes (Trenberth et al., 2018). Therefore, accurate streamflow forecasting is critical in effectively minimizing the impact of floods, particularly in terms of property damage and loss of life (Alabbad and Demir, 2022).
Additionally, streamflow forecasting serves as a cornerstone for informed decision-making in various areas of hydrology and water resource management. Applications ranging from watershed conservation (Demir and Beck, 2009) to precision agriculture (Yildirim et al., 2022) and the development of effective flood mitigation strategies (Li and Demir, 2022; Ahmed et al., 2021; Yaseen et al., 2018) heavily rely on this data. However, the complex and dynamic nature of hydrological systems, often characterized by nonlinear patterns and variability, makes achieving accurate streamflow predictions a persistent challenge (Honorato et al., 2018; Yaseen et al., 2017; Sit et al., 2024).
Numerous physical and data-centric methodologies focused on streamflow forecasting have emerged over time, each distinct in its approach, ranging from the use of varied data types to focusing on specific regions or levels of generalization (Salas et al., 2000; Yaseen et al., 2015). Approaches driven by physical principles (Beven and Kirkby, 1979; Ren-Jun, 1992; Arnold, 1994; Lee and Georgakakos, 1996; Devia et al., 2015) are capable of modeling complex physical interactions, including atmospheric dynamics and the evolution of global weather patterns (Yaseen et al., 2019; Sharma and Machiwal, 2021). While these models are invaluable, their implementation presents constraints, such as the need for detailed hydrological and geomorphological data, which leads to increased operational expenses. Additionally, their predictive accuracy tends to diminish for long-term forecasting.
Compounding these limitations, traditional physics-based models, due to their high computational demands and extensive parameter requirements, necessitate significant computational resources, resulting in notable costs (Mosavi et al., 2018; Sharma and Machiwal, 2021; Liu et al., 2022; Castangia et al., 2023). In response, recent studies have shifted focus to alternative methods, particularly highlighting the efficacy of machine learning and deep learning models as capable and often more accurate alternatives to physical models (Yaseen et al., 2015). Deep learning models, in particular, have demonstrated considerable improvements in both the accuracy and dependability of streamflow predictions, thus offering a potential transformative impact on the field of hydrological modeling (Demiray et al., 2023; Sit et al., 2023).
While traditional machine learning techniques, like Support Vector Machines (SVMs) and Linear Regression (LR), have found applications in streamflow forecasting and environmental analysis (Bayar et al., 2009; Li and Demir, 2024; Granata et al., 2016; Yan et al., 2018; Sharma and Machiwal, 2021), breakthroughs in artificial intelligence (AI) and enhanced GPU capabilities have catapulted the development of deep learning, ushering in new possibilities for this field (Sit et al., 2022). Among the diverse array of neural network architectures considered for this purpose (Sit et al., 2021; Chen et al., 2023), Recurrent Neural Networks (RNNs), particularly Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), stand out as the most extensively studied and applied models in this field.
Kratzert et al. (2018) applied LSTM models to daily runoff prediction, integrating meteorological data and finding that LSTM models can surpass the efficacy of established physical models in specific areas. Similarly, Xiang et al. (2021) found that LSTM-seq2seq models outperform linear models like linear regression, lasso regression, and ridge regression in terms of predictive accuracy and other evaluation metrics. Furthermore, Guo et al. (2021) conducted a comparative study of LSTMs, GRUs, and SVMs across 25 different locations in China, concluding that LSTMs and GRUs show similar performance levels, with GRUs offering the advantage of quicker training times. To gain a broader perspective on the depth of research in deep learning applications for streamflow prediction, extensive studies such as those by Yaseen et al. (2015) and Ibrahim et al. (2022) provide valuable insights.
Originally pioneered for language translation by researchers at Google (Vaswani et al., 2017), the Transformer architecture has been explored for diverse tasks, including time series analysis (Zhou et al., 2021; Wu et al., 2021; Zhou et al., 2022; Lin et al., 2022). Transformer applications for streamflow forecasting, however, remain a relatively new and actively developing research area. For instance, in the Mekong River Basin, a study demonstrated that LSTM models perform better than Transformers, particularly in dry seasons (Nguyen et al., 2023). Another study utilized a transfer learning approach on Transformers for effective flood prediction in data-sparse basins of the Yellow River, showcasing its potential in areas with limited data (Xu et al., 2023).
Furthermore, Liu et al. (2022) developed a Transformer-based model for monthly streamflow prediction on the Yangtze River, demonstrating its ability to incorporate both historical water levels and the influence of ENSO patterns. Similarly, Castangia et al. (2023) applied a Transformer model for predicting daily water levels within a river network, with a focus on capturing upstream hydrological signals. They successfully evaluated this technique using data from the 2014 Southeast Europe flood event. Additionally, research comparing various deep-learning models in streamflow prediction, including Transformers, has shown that different preprocessing and data extension methods can significantly affect predictive outcomes (Demiray et al., 2024), emphasizing the importance of model selection and data handling in hydrological forecasting.
In this study, we expand the scope of streamflow forecasting by utilizing a Transformer model to predict water levels for the next 120 hours across 125 different locations within Iowa, US. Our methodology involves using data from the preceding 72 hours, encompassing precipitation, evapotranspiration, and discharge values, as well as the location-based features such as slope and soil types to forecast future streamflow. A unique aspect of this study is the development of a generalized model, trained on data from all 125 stations simultaneously, rather than creating individual models for each location. The performance of this unified Transformer-based model is then compared with three other deep learning models as well as the traditional persistence method. The experimental results demonstrate that our Transformer-based approach notably surpasses all other methods in forecasting accuracy, showcasing its effectiveness in handling a broad range of hydrological data across multiple locations.
This paper is organized as follows: The forthcoming section will introduce the dataset employed in this research along with a description of the study area. Section 3 is dedicated to detailing the methodologies implemented in this study. Subsequently, Section 4 will unveil the experimental results, accompanied by an in-depth analysis and discussion of these findings. Finally, Section 5 will encapsulate the principal conclusions drawn from this research and explore potential avenues for future work.
## 2 Case Study and Dataset
WaterBench-Iowa, developed by Demir et al. (2022), is the foundational dataset for this study, specifically designed as a benchmark dataset to streamline hydrological forecasting research. This dataset, in alignment with the FAIR (findability, accessibility, interoperability, and reuse) principles, offers a valuable resource for data-driven and machine learning applications in streamflow forecasting. It encompasses an expansive collection of hydrological and meteorological data from 125 distinct locations across Iowa, gathered from reputable sources including NASA, NOAA, USGS, and the Iowa Flood Center. The location of 125 sensors in Iowa is depicted in Figure 1. The dataset covers a comprehensive period from October 2011 to September 2018, providing a rich temporal dataset for our analysis.
The WaterBench-Iowa dataset integrates various key hydrological data, including streamflow measurements, precipitation records, watershed characteristics, slope, soil types, and evapotranspiration. This broad spectrum of data allows for a deepened understanding of hydrological processes across diverse environmental conditions. The dataset's structure is conducive to a variety of machine learning and deep learning methodologies, with a high resolution in both temporal and spatial dimensions, along with extensive metadata and relational information.
In this study, we employed the most recent water year in the dataset as the test set, with the remaining data used for training. Additionally, 15% of the training set was allocated as a validation set, ensuring robust model evaluation and fine-tuning. Preprocessing methods in this study replicate those established in the WaterBench-Iowa publication (Demir et al., 2022). This adherence ensures the validity of comparisons between our results and the dataset's existing benchmark models. A statistical summary of the dataset is provided in Tables 1 and 2. Our research approach involved the use of a generalized model, trained using data from all 125 stations. This approach marks a departure from traditional practices that often rely on location-specific models. The generalized model's ability to simultaneously process data from multiple locations not only demonstrates its versatility but also underscores its potential in offering comprehensive insights into streamflow forecasting.
Figure 1: Location of used sensors in Iowa, US (adapted from Demir et al., 2022)
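The split described above (final water year held out for testing, 15% of the remaining data for validation) might be sketched as follows; the hourly index and the chronological placement of the validation cut are our assumptions, as the benchmark's exact slicing code is not reproduced here.

```python
from datetime import datetime, timedelta

def water_year(ts):
    """US water years run from 1 October through 30 September."""
    return ts.year + 1 if ts.month >= 10 else ts.year

def split_by_water_year(timestamps, test_wy=2018, val_frac=0.15):
    test = [t for t in timestamps if water_year(t) == test_wy]
    rest = [t for t in timestamps if water_year(t) < test_wy]
    cut = len(rest) - int(len(rest) * val_frac)
    # The most recent 15% of the training period is held out for validation.
    return rest[:cut], rest[cut:], test

hours = [datetime(2011, 10, 1) + timedelta(hours=i) for i in range(7 * 365 * 24)]
train, val, test = split_by_water_year(hours)
print(len(train), len(val), len(test))
```

A chronological (rather than random) validation cut is used here so that no future observations leak into training, which is the usual precaution for time-series splits.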
## 3 Methods
In this research, we focused on assessing the performance of a Transformer-based model for predicting streamflow and conducted a comparative analysis with four other models: Persistence, GRU, LSTM, and Seq2Seq, as referenced in the WaterBench dataset. This section will elaborate on the methodologies applied in this study, detailing both the Transformer model and the comparative models.
### Persistence Method
The Persistence approach (Eq. 1), often referred to as the nearest frame method, operates on the premise that future conditions will mirror the present. This method posits that the upcoming streamflow will closely resemble the most recent observations, essentially predicting that 'tomorrow will be the same as today'. Widely recognized as a baseline in hydrological research, including streamflow forecasting, the Persistence model holds a significant place, particularly in short-range forecasting scenarios. Numerous studies in hydrology (Krajewski et al., 2021) have
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & Area (km\({}^{2}\)) & Concentration time (h) & Slope & Loam & Silt & Sandy clay loam & Silty clay loam \\ \hline Min & 6 & 2 & 0.38\% & 0\% & 0\% & 0\% & 0\% \\ \hline Max & 36453 & 315 & 4.32\% & 98\% & 100\% & 84\% & 93\% \\ \hline Mean & 5405 & 77 & 1.97\% & 33\% & 31\% & 18\% & 18\% \\ \hline Median & 1918 & 53 & 1.80\% & 33\% & 21\% & 4\% & 7\% \\ \hline SD & 8320 & 68 & 0.80\% & 28\% & 30\% & 24\% & 23\% \\ \hline \end{tabular}
\end{table}
Table 1: Statistical summary of watershed characteristics for 125 USGS gauges in the state of Iowa
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & Annual total precipitation (mm) & Max. hourly precipitation (mm) & Annual mean streamflow (m\({}^{3}\)/s) & Missing rate of precipitation & Missing rate of streamflow (raw data) \\ \hline Min & 794 & 9.1 & 3 & 0.02\% & 0.69\% \\ \hline Max & 1056 & 60.0 & 12963 & 0.04\% & 33.14\% \\ \hline Mean & 952 & 24.8 & 1926 & 0.02\% & 15.16\% \\ \hline Median & 961 & 22.2 & 608 & 0.02\% & 16.14\% \\ \hline SD & 57 & 10.3 & 2864 & 0.01\% & 6.4\% \\ \hline \end{tabular}
\end{table}
Table 2: Summary statistics of precipitation and streamflow across 125 catchments from water year 2012–2018, including missing data analysis

noted that surpassing the accuracy of the Persistence model can be a challenge. This challenge is especially pronounced when predictions are made for periods shorter than 12 hours.
\\[\\hat{Y}_{i+n}=Y_{i}\\]
Eq. 1
\\[\\hat{Y}_{i+n}=Predicted streamflow\\ values\\ between\\ time\\ t\\ and\\ t+n\\]
\\[Y_{i}=Observed\\ streamflow\\ values\\ at\\ time\\ t\\]
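Eq. 1 amounts to a one-line forecaster. The sketch below applies it over a multi-hour horizon and scores it with mean absolute error on a toy discharge series; both the series and the error metric are illustrative only and are not taken from the benchmark.

```python
def persistence_forecast(history, horizon=120):
    """Eq. 1: every future step repeats the last observed value."""
    return [history[-1]] * horizon

def mae(pred, obs):
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

# Toy discharge series: a slow linear rise, with the last 120 h to be predicted.
series = [100.0 + 0.5 * i for i in range(200)]
history, future = series[:80], series[80:200]
pred = persistence_forecast(history, horizon=len(future))
print(mae(pred, future))
```

On a steadily rising series the persistence error grows with lead time, which is why the method is hardest to beat at short horizons and weakest at the 120-hour range studied here.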
### Long Short-Term Memory
In time-series forecasting, while Recurrent Neural Networks (RNNs) have been pivotal in recognizing temporal patterns, they are often hindered by the vanishing gradient issue, limiting their capability in handling long-term dependencies. This challenge can significantly affect the precision of time-series forecasts, particularly in scenarios requiring memory of distant past events. To overcome these limitations, Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) were developed. LSTMs are adept at maintaining short-term memory over extended periods and efficiently processing long-term data dependencies. This makes them particularly suitable for time-series forecasting and hydrological tasks like flood and rainfall prediction, as evidenced by various studies (Kratzert et al., 2018; Feng et al., 2020; Frame et al., 2022; Sit et al., 2022).
An LSTM node receives an input tensor \\(x_{t}\\) and a hidden state tensor \\(h_{t\\text{-}1}\\) from the preceding node. Within the cell, multiple gates regulate information flow and memory retention. The input gate (\\(i_{t}\\)) controls the extent of new information to be stored, using weight matrices \\(W(i)\\) and \\(U(i)\\). It does this through a sigmoid function based on linear transformations of \\(x_{t}\\) and \\(h_{t\\text{-}1}\\). Similarly, the forget gate (\\(f_{t}\\)) determines information to discard from the cell state, employing a sigmoid function with weight matrices W(\\(f\\)) and U(\\(f\\)). The output gate (\\(O_{t}\\)) dictates how much of the cell state to expose as the hidden state, utilizing a sigmoid function with weight matrices W(\\(O\\)) and U(\\(O\\)) also influenced by \\(x_{t}\\) and \\(h_{t\\text{-}1}\\). The candidate cell state (\\(\\tilde{c}\\)), signifying potential additions to the cell state, is calculated using the hyperbolic tangent function (\\(tanh\\)) and linear transformations of \\(x_{t}\\) and \\(h_{t\\text{-}1}\\), with weight matrices W(\\(c\\)) and U(\\(c\\)).
The cell state \\(\\mathbf{c}_{t}\\) is then updated, incorporating inputs from the forget gate, input gate, and candidate cell state, as illustrated in Equation 2:
\\[c_{t}=f_{t}\\odot c_{t-1}+\\ i_{t}\\odot\\tilde{c}_{t}\\]
Eq. 2
Subsequently, the hidden state \\(h_{t}\\) is generated by applying the output gate to the hyperbolic tangent of the updated cell state, as shown in Equation 3:
\\[h_{t}=O_{t}\\odot\\tanh(c_{t})\\]
Eq. 3

This updated hidden state \\(h_{t}\\) and cell state \\(c_{t}\\) are then relayed to the next LSTM node in the sequence and to subsequent layers within the neural network. In hydrological forecasting, LSTM networks have demonstrated superior performance compared to basic RNNs and other time-series forecasting models, garnering popularity in the field. By effectively resolving the vanishing gradient problem and capturing intricate long-term dependencies, LSTMs have established themselves as invaluable tools for precise and dependable predictions in various hydrological scenarios.
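A minimal scalar sketch of one LSTM step (Eqs. 2 and 3) makes the gate interplay concrete; the weight names in the parameter dictionary are illustrative, not drawn from any particular library:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM step with scalar states; p maps illustrative gate-weight names to floats."""
    i_t = sigmoid(p["W_i"] * x_t + p["U_i"] * h_prev + p["b_i"])        # input gate
    f_t = sigmoid(p["W_f"] * x_t + p["U_f"] * h_prev + p["b_f"])        # forget gate
    o_t = sigmoid(p["W_o"] * x_t + p["U_o"] * h_prev + p["b_o"])        # output gate
    c_tilde = math.tanh(p["W_c"] * x_t + p["U_c"] * h_prev + p["b_c"])  # candidate state
    c_t = f_t * c_prev + i_t * c_tilde   # Eq. 2: blend retained state and candidate
    h_t = o_t * math.tanh(c_t)           # Eq. 3: expose gated cell state
    return h_t, c_t
```

In a real network the same equations run element-wise over vectors and matrices; the scalar form is only meant to mirror Eqs. 2 and 3 line by line.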
### Gated Recurrent Units
Gated Recurrent Units (GRUs), introduced by Cho et al. (2014), represent a compelling alternative to Long Short-Term Memory (LSTM) networks for time-series forecasting. With their simplified internal structure, GRUs address efficiency concerns caused by LSTM complexity, making them advantageous for large-scale datasets common in hydrological forecasting.
GRUs achieve this through a dual-gate mechanism. The update gate (z\\({}_{\\text{t}}\\)) plays a crucial role in determining the extent to which new input data (x\\({}_{\\text{t}}\\)) and the previous hidden state (h\\({}_{\\text{t-1}}\\)) should contribute to the formation of the current hidden state (h\\({}_{\\text{t}}\\)). This gate enables the GRU to weigh the importance of recent versus historical information continually. In parallel, the reset gate (r\\({}_{\\text{t}}\\)) offers a powerful mechanism for discarding irrelevant past data, allowing the model to adapt to new trends and anomalies effectively. This selective forgetting is essential for maintaining model accuracy, especially in dynamic and complex hydrological environments where past trends may not always be indicative of future patterns.
The candidate hidden state (\\(\\tilde{h}_{t}\\)), generated as a function of the reset gate, captures new information at the current timestep, considering only the most pertinent historical context. The final hidden state \\(h_{t}\\), passed to the next step in the network, is a blend of this candidate state and the previous hidden state \\(h_{t-1}\\), modulated by the update gate. This process enables GRUs to dynamically adjust their memory, balancing between retaining valuable historical information and adapting to new data.
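The gating described above can be sketched for a scalar state as follows. The weight names are illustrative, and note that the sign convention on the update gate varies between the original paper and common implementations; the blend below follows the widely used library convention:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gru_step(x_t, h_prev, p):
    """One GRU step with scalar state; p maps illustrative weight names to floats."""
    z_t = sigmoid(p["W_z"] * x_t + p["U_z"] * h_prev)                # update gate
    r_t = sigmoid(p["W_r"] * x_t + p["U_r"] * h_prev)                # reset gate
    h_tilde = math.tanh(p["W_h"] * x_t + p["U_h"] * (r_t * h_prev))  # candidate state
    # Update gate balances the previous state against the candidate.
    h_t = (1.0 - z_t) * h_prev + z_t * h_tilde
    return h_t
```

Compared to the LSTM step, there is no separate cell state and one fewer gate, which is where the GRU's efficiency advantage comes from.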
These design features make GRUs well-suited for complex hydrological forecasting tasks where rapid analysis and handling of long-term dependencies are crucial. By balancing computational demands with the need for accurate and dynamic modeling, GRUs have become a popular choice in hydrological forecasting. Their suitability for applications in flood prediction and streamflow analysis, where they excel in modeling historical trends and adapting to new data patterns, further solidifies their value within this field.
### Sequence-to-Sequence Model
In addition to LSTM and GRU networks, this study incorporates a variant of the Seq2Seq model, referenced from Xiang and Demir (2022), as a crucial baseline method. The Seq2Seq model, distinguished by its encoder-decoder architecture, is designed to handle complex time-series forecasting tasks. It employs multiple TimeDistributed layers, culminating in a final dense layer to process and output predictions.
The Seq2Seq model operates with two distinct components: an encoder and a decoder. The encoder's role is to process the input time series data, such as historical rainfall, streamflow, and evapotranspiration from the past 72 hours. It converts this input into a context vector, which encapsulates essential temporal patterns and features, thus effectively summarizing the input data. GRUs are employed in this model for both the encoder and decoder, chosen for their efficacy in sequential data modeling and handling of long-range dependencies.
The decoder, on the other hand, is tasked with predicting future streamflow. It uses the context vector provided by the encoder, combined with additional future data inputs, to generate streamflow predictions for the next 120 hours. This process involves iteratively processing the context vector and the forecasted data, enabling the model to extend its predictions with each subsequent time step. To capture complex temporal dynamics, the Seq2Seq model employs multiple TimeDistributed layers, applying a consistent operation across each timestep of the output sequence.
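The encoder-decoder flow can be caricatured with a scalar GRU-style recurrence. This is only a structural sketch under simplifying assumptions (scalar states, a single hypothetical readout weight), not the actual Keras implementation of Xiang and Demir (2022):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gru_step(x, h, w):
    """Scalar GRU-style recurrence; weight names are illustrative."""
    z = sigmoid(w["W_z"] * x + w["U_z"] * h)
    r = sigmoid(w["W_r"] * x + w["U_r"] * h)
    h_tilde = math.tanh(w["W_h"] * x + w["U_h"] * (r * h))
    return (1.0 - z) * h + z * h_tilde

def seq2seq_forecast(past, future, w, readout=1.0):
    """Encoder folds the 72-step history into a context state;
    decoder rolls that context forward over the 120 forecast steps."""
    h = 0.0
    for x in past:        # encoder
        h = gru_step(x, h, w)
    preds = []
    for x in future:      # decoder, fed with forecast inputs
        h = gru_step(x, h, w)
        preds.append(readout * h)
    return preds
```

The point of the sketch is the asymmetry: the encoder produces no outputs, only a summary state, while the decoder emits one prediction per forecast timestep.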
The model's implementation concludes with a final dense layer, formatting the output sequence for the 120-hour streamflow predictions. For a detailed understanding of the Seq2Seq model's architecture and its application in hydrological forecasting, please refer to the study by Xiang and Demir (2022).
### Transformer Model
The Transformer model, a groundbreaking development in neural network architecture, was introduced by Vaswani et al. (2017). Initially designed for machine translation, its innovative design has found widespread application in various domains, including time series forecasting. The key to the Transformer's effectiveness is its self-attention mechanism, which allows for a more efficient and comprehensive analysis of long input sequences compared to traditional recurrent layers.
Self-attention (Equation 4), the core feature of the Transformer, reinvents how input sequences are processed. Each element in the sequence is first transformed into three distinct vectors: queries (Q), keys (K), and values (V), each of dimension d\\({}_{\\text{model}}\\). The self-attention scores are derived by performing dot-product operations between the query and key matrices. These scores are then scaled and passed through a softmax function to establish the relative importance of each element in relation to others. This process results in a new, weighted representation of the input sequence, allowing the Transformer to dynamically adjust each element's representation, considering the influence of all other elements in the sequence. This unique ability enables the model to capture long-range dependencies, which is crucial for accurate time series forecasting.
\\[Attention(Q,K,V)=softmax\\left(\\frac{QK^{T}}{\\sqrt{d_{model}}}\\right)V\\]
Eq. 4

To enhance its pattern recognition capabilities, the Transformer employs multi-head attention. In this approach, the query, key, and value vectors are split into multiple segments. Each segment is processed by a separate attention head, allowing the model to concurrently focus on different aspects of the input sequence. The outputs from these attention heads are then concatenated and linearly transformed. Multi-head attention thus enriches the model's potential for capturing complex relationships within the sequence.
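Eq. 4 can be written out directly for small matrices; a plain-Python sketch using nested lists, with no framework assumed:

```python
import math

def softmax(row):
    m = max(row)                       # subtract max for numerical stability
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention (Eq. 4) on nested lists."""
    d = len(Q[0])
    scores = [[sum(q * k for q, k in zip(q_row, k_row)) / math.sqrt(d)
               for k_row in K] for q_row in Q]       # Q K^T / sqrt(d)
    weights = [softmax(row) for row in scores]       # row-wise softmax
    return [[sum(w * V[j][c] for j, w in enumerate(w_row))
             for c in range(len(V[0]))] for w_row in weights]
```

Each output row is a convex combination of the rows of V, weighted by how strongly the corresponding query matches each key.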
A notable challenge in implementing self-attention is its lack of intrinsic positional awareness. Unlike recurrent neural networks, the Transformer does not process elements in sequence, which can lead to ambiguity regarding the order of elements. To overcome this, the Transformer incorporates static positional encoding. This encoding adds unique positional information to the input embeddings, ensuring that the model can distinguish between the positions of different elements within the sequence. Positional encoding is typically achieved using a specific formula involving sine and cosine functions of the positional index.
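For reference, the conventional sine-cosine encoding (the variant our model later replaces with a random encoding) can be sketched as:

```python
import math

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding: even feature indices use sine, odd use cosine."""
    pe = []
    for pos in range(seq_len):
        row = []
        for i in range(d_model):
            # frequency is shared within each (sin, cos) feature pair, index i // 2
            angle = pos / (10000.0 ** ((2 * (i // 2)) / d_model))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        pe.append(row)
    return pe
```

The resulting matrix is added to the input embeddings so that two otherwise identical elements at different positions receive distinct representations.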
In our application of the Transformer model for hydrological forecasting, we have tailored the original framework to suit our specific needs. We introduced a linear embedding layer to adjust the size of the input features, preparing them for efficient processing by the Transformer's architecture. Additionally, we opted for a random variable for positional encoding, diverging from the traditional sine-cosine approach. This modification provides the model with necessary positional context while maintaining computational efficiency.
Our model also deviates in its structure, focusing on a single encoder layer equipped with eight attention heads. This configuration allows the model to simultaneously attend to different facets of the input sequence, enhancing its ability to process complex hydrological data. The model's size is set to 64, which strikes a balance between the model's complexity and computational demands. Furthermore, the encoder's internal feedforward network, with a dimension of 256, is designed to provide adequate capacity for processing and transforming features within the model. A GELU activation function is employed between two linear functions in the feedforward component, facilitating efficient non-linear transformations. The model is depicted in Figure 3.

Figure 2: Visual Representation of Attention Mechanisms within the Transformer Architecture
In this study, the persistence, GRU, LSTM, and Transformer models were developed using PyTorch, while the Seq2Seq model was constructed with Keras. For comprehensive details on the implementation and architecture of the GRU, LSTM, and Seq2Seq models, please refer to the work by Demir et al. (2022). For the Transformer model, Mean Absolute Error (MAE) was employed as the loss function throughout the training phase, with Adam serving as the optimizer. We set the batch size to 512 and established a learning rate of 0.0001. To enhance model performance, the learning rate was halved if no improvement was observed over 10 epochs. Additionally, the training process was halted if no progress was noted for 20 consecutive epochs.
## 4 Results and Discussions
In this section, we present the findings of our investigation into 120-hour streamflow forecasting, with a particular focus on the performance of the Transformer model. This model is evaluated against four other models: three deep learning models, namely LSTM, GRU, and Seq2Seq, and the traditional Persistence method. Streamflow prediction is vital in fields like water resource management, environmental monitoring, and decision-making. Deep learning models have shown exceptional prowess in time-series forecasting, positioning them as suitable candidates for streamflow prediction. The Transformer model, with its innovative self-attention mechanism, is a relatively recent entrant in this domain and merits thorough exploration for its potential in capturing temporal dependencies in streamflow data.
Our comparative analysis utilizes four key metrics: Nash-Sutcliffe Efficiency (NSE), Kling-Gupta Efficiency (KGE), Pearson's r, and Normalized Root Mean Square Error (NRMSE). These metrics offer a comprehensive assessment of each model's predictive accuracy and the overall effectiveness of the Transformer model. We will provide a detailed analysis of these evaluation metrics and their significance in the context of streamflow forecasting. The subsequent sections will present a meticulous examination of the results from each model, shedding light on their individual strengths and limitations. Through this detailed evaluation, our goal is to discern the effectiveness of the Transformer model in 120-hour streamflow prediction and explore its implications for future research and practical applications.
Figure 3: Visual Representation of the Transformer Architecture
### Performance Metrics
In evaluating the efficacy of streamflow prediction models, it is essential to utilize robust performance metrics that accurately reflect the accuracy and reliability of the forecasts. For this purpose, this study employs four key metrics that are widely recognized in the field of hydrological modeling and streamflow forecasting: Nash-Sutcliffe Efficiency (NSE), Kling-Gupta Efficiency (KGE), Pearson's r and Normalized Root Mean Square Error (NRMSE). These metrics are chosen for their proven interpretability and comprehensive ability to assess various facets of model performance, as supported by previous research (Kratzert et al., 2018; Xiang and Demir, 2021; Liu et al., 2022).
First and foremost, Nash-Sutcliffe Efficiency (Equation 5) is a critical metric for gauging the predictive accuracy of hydrological models. As established in many studies (Krause et al., 2005; Arnold et al., 2012), NSE offers a quantifiable measure of the model's prediction capabilities in comparison to the observed streamflow data. Specifically, it evaluates the model's predictions relative to the average of the observed data. NSE values can range from negative infinity to a perfect score of 1, where 1 signifies an exact match between model predictions and observations. Scores above 0 indicate that the model predictions are superior to simply using the mean of the observed data. Conversely, negative NSE values suggest that using the mean of the observed data is more accurate than the model's predictions, denoting poor model performance. In the context of hydrological modeling, NSE values above 0.5 are generally considered acceptable, indicating a reasonable level of predictive accuracy (Arnold et al., 2012).
\\[NSE=1-\\frac{\\sum_{i}\\bigl{(}Y_{i}-\\hat{Y}_{i}\\bigr{)}^{2}}{\\sum_{i}\\bigl{(}Y_{i}-\\bar{Y}\\bigr{)}^{2}}\\]
Eq. 5

\\(Y_{i}\\) = observed streamflow value at time \\(i\\)
\\(\\hat{Y}_{i}\\) = predicted streamflow value at time \\(i\\)
\\(\\bar{Y}\\) = mean of all observed values
Another key metric used in this study is the Kling-Gupta Efficiency (KGE), which has become increasingly prominent in hydrological modeling. KGE offers a comprehensive evaluation of the model's performance by combining aspects of correlation, bias, and variability in a single metric. The formula for KGE is expressed as follows (Equation 6):
\\[KGE=1-\\sqrt{(r-1)^{2}+(\\alpha-1)^{2}+(\\beta-1)^{2}}\\]
Eq. 6
In the equation, \\(r\\) is the correlation coefficient between observed and predicted streamflow, \\(\\alpha\\) is the ratio of the standard deviation of predicted streamflow to the standard deviation of observed streamflow, and \\(\\beta\\) is the ratio of the mean of predicted streamflow to the mean of observed streamflow. A KGE value of 1 indicates perfect model performance, reflecting an ideal balance between correlation, bias, and variability.
KGE provides a more holistic view of model performance compared to other metrics. It not only considers how well the predicted values match the observed data (correlation) but also evaluates the model's ability to accurately estimate the average and variability of the streamflow (bias and variability components). This makes KGE a valuable metric for assessing the overall suitability of a model in hydrological forecasting. Positive values of KGE are generally indicative of acceptable model performance, with higher values signifying better alignment between the model predictions and observed data.
Additionally, Pearson's correlation coefficient (Equation 7), commonly referred to as Pearson's r, serves as a crucial metric in our analysis. This statistical tool is employed to evaluate the linear relationship between the predicted streamflow values by the model and the actual observed streamflow data. Pearson's r effectively quantifies both the strength and the direction of this linear correlation. With its value ranging from -1 to 1, where 1 represents a perfect positive linear correlation, a higher positive value of Pearson's r indicates a model's enhanced reliability and accuracy. This is reflective of the model's capacity to closely follow the trends present in the observed data, thereby enabling precise predictions. By incorporating Pearson's r into our assessment, we can gauge the models' proficiency in capturing the observed streamflow variability throughout the 120-hour forecasting period.
\\[r=\\frac{\\sum_{i}\\bigl{(}Y_{i}-\\bar{Y}\\bigr{)}\\bigl{(}\\hat{Y}_{i}-\\bar{\\hat{Y}}\\bigr{)}}{\\sqrt{\\sum_{i}\\bigl{(}Y_{i}-\\bar{Y}\\bigr{)}^{2}}\\sqrt{\\sum_{i}\\bigl{(}\\hat{Y}_{i}-\\bar{\\hat{Y}}\\bigr{)}^{2}}}\\]
Eq. 7

\\(Y_{i}\\) = observed streamflow value at time \\(i\\)
\\(\\hat{Y}_{i}\\) = predicted streamflow value at time \\(i\\)
\\(\\bar{Y}\\) = mean of all observed values
\\(\\bar{\\hat{Y}}\\) = mean of all predicted values
The final metric used in this study to evaluate model performance is the Normalized Root Mean Square Error (NRMSE) (Equation 8). NRMSE quantifies the average magnitude of the errors between the predicted and observed streamflow values, normalized against the mean of the observed data. This metric offers a relative gauge of the model's accuracy, making it particularly useful for comparing performances across varying datasets. Given the diversity of locations in this study, NRMSE proves to be an apt choice for evaluation. NRMSE is non-negative, and lower values signify superior model performance, denoting smaller errors relative to the mean of the observed streamflow; values near or above 1 indicate errors comparable in magnitude to the mean itself and, consequently, poor predictive accuracy.
\\[NRMSE=\\frac{\\sqrt{\\frac{1}{n}\\sum_{i=1}^{n}\\bigl{(}Y_{i}-\\hat{Y}_{i}\\bigr{)}^{2}}}{\\bar{Y}}\\]
Eq. 8

\\(Y_{i}\\) = observed streamflow value at time \\(i\\)
\\(\\hat{Y}_{i}\\) = predicted streamflow value at time \\(i\\)
\\(\\bar{Y}\\) = mean of all observations
\\(n\\) = number of samples

These performance metrics play a crucial role in evaluating the effectiveness of our streamflow prediction models. They provide a quantitative basis for measuring how well these models can capture and replicate the complex patterns and dynamics inherent in streamflow data.
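The four metrics (Eqs. 5 through 8) can be computed for a single observed/predicted series as follows; this is a plain-Python sketch of the standard definitions:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def nse(obs, pred):
    """Nash-Sutcliffe Efficiency (Eq. 5)."""
    m = mean(obs)
    return 1.0 - sum((o - p) ** 2 for o, p in zip(obs, pred)) \
               / sum((o - m) ** 2 for o in obs)

def pearson_r(obs, pred):
    """Pearson's correlation coefficient (Eq. 7)."""
    mo, mp = mean(obs), mean(pred)
    num = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    den = math.sqrt(sum((o - mo) ** 2 for o in obs)) \
        * math.sqrt(sum((p - mp) ** 2 for p in pred))
    return num / den

def kge(obs, pred):
    """Kling-Gupta Efficiency (Eq. 6): correlation, variability ratio, bias ratio."""
    def std(xs):
        m = mean(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    r = pearson_r(obs, pred)
    alpha = std(pred) / std(obs)
    beta = mean(pred) / mean(obs)
    return 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def nrmse(obs, pred):
    """Normalized Root Mean Square Error (Eq. 8)."""
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))
    return rmse / mean(obs)
```

A perfect prediction yields NSE = KGE = r = 1 and NRMSE = 0, which is a convenient sanity check when wiring these into an evaluation pipeline.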
### Experiment Results
In this part of the study, we detail the results from our extensive experiments focusing on predicting streamflow over a period of 120 hours. The central aim was to gauge the efficacy of the Transformer-based model in streamflow prediction, benchmarking its performance against three deep learning models (LSTM, GRU, and Seq2Seq) and the traditional Persistence approach. To measure the predictive accuracy of these models, we relied on four pivotal metrics widely recognized in hydrological modeling and streamflow forecasting: Nash-Sutcliffe Efficiency (NSE), Kling-Gupta Efficiency (KGE), Pearson's r, and Normalized Root Mean Square Error (NRMSE). These metrics are instrumental in evaluating the models' proficiency in accurately reflecting the intricacies of streamflow patterns.
In conducting our experiments, a key factor was the adaptation of input data dimensions to suit the specific requirements of the GRU, LSTM, and Transformer models. The input data comprised a blend of location-specific values, historical data, and forecast values. The historical data included 72 hours of precipitation, evapotranspiration, and discharge values, while the forecast data comprised predictions for 120 hours of precipitation and evapotranspiration. Consequently, the input data for these models was structured in two segments: one with the shape [batch size, 72, 10] for past values, and the other [batch size, 120, 9] for forecast values.
To effectively integrate these two sets of inputs for the models, it was necessary to introduce an additional dimension for the forecast values. For the LSTM and GRU models, we employed a zero-padding approach, extending the forecast values with zeros to match the dimensional requirements, in line with the benchmark paper's methodology. For the Transformer model, however, we adopted a persistence approach. This involved extending the historical values into the forecast period by repeating the last available data point, a technique shown effective in other work (Demiray et al., 2024). With this additional dimension in place, the past and forecast values were merged to form a unified input with dimensions [batch size, 192, 10] for use in the Transformer, GRU, and LSTM models.

\\begin{table}
\\begin{tabular}{|c|c|c|c|c|} \\hline & **NSE** & **KGE** & **R** & **NRMSE** \\\\ \\hline
**Persistence** & 0.864 & 0.922 & 0.931 & 0.824 \\\\ \\hline
**Seq2Seq** & 0.921 & 0.867 & 0.962 & 0.612 \\\\ \\hline
**GRU** & 0.884 & 0.932 & 0.945 & 0.735 \\\\ \\hline
**LSTM** & 0.864 & 0.903 & 0.939 & 0.793 \\\\ \\hline
**Transformer** & 0.926 & 0.945 & 0.964 & 0.583 \\\\ \\hline \\end{tabular}
\\end{table}
Table 3: Comparative performance summary of different models for 120-hour streamflow prediction on unified 125 stations (NSE, KGE, R higher better; NRMSE lower better)
In our comprehensive analysis, captured in Table 3 and Figures 4 and 5, we aggregated the 120-hour streamflow prediction results from all 125 stations, treating them as a unified dataset. This approach allowed us to observe the overall performance of each model as if it were predicting for a single, extensive location. For each hour, we calculated the NSE, KGE, and Pearson's r scores on the unified dataset and then determined their median values over the 120 hours. The table summarizes these median values for NSE, KGE, and Pearson's r, as well as the NRMSE for each model.
According to Table 3, the Transformer model demonstrates proficiency in streamflow prediction, as evidenced by its high median scores in both NSE and KGE. This underlines the model's capability to accurately simulate and predict streamflow, maintaining a delicate balance between sensitivity to actual streamflow variations and error minimization. Additionally, its strong performance in Pearson's r and low NRMSE further corroborate its robustness and precision in predictions. In comparison, the Seq2Seq model, while showing a strong correlation with the observed data (as indicated by a high Pearson's r score), falls slightly short in NSE and KGE scores when compared to the Transformer. This suggests that while Seq2Seq is effective in capturing the linear relationship in streamflow data, it may not be as adept as the Transformer in balancing the various components of model accuracy, such as correlation, bias, and variability. The LSTM model exhibits NSE scores comparable to the Persistence model, which serves as a baseline. This indicates that while LSTM is effective, it might not offer a significant advantage over simpler models in certain situations. Finally, the Persistence model's performance, particularly its high KGE score, is impressive considering its simplicity. This underscores the model's utility in certain forecasting scenarios where complex models might not provide significant additional benefits.
Figures 4 and 5 graphically represent the temporal changes in NSE and KGE scores, respectively, offering a visual representation of the models' ability to maintain accuracy over time. The Transformer model consistently maintains higher NSE and KGE scores across the 120-hour window, reinforcing its ability to sustain predictive performance over longer horizons. In contrast, the Persistence model, while starting strong, shows a more pronounced decline, particularly in NSE, as the forecast horizon extends. The Seq2Seq model, despite its robust Pearson's r value, exhibits variability in performance, particularly in KGE, suggesting limitations in balancing bias and variance when forecasting streamflow. These figures are instrumental in understanding the temporal stability of each model, with the Transformer model's lines depicting a slower decline in both NSE and KGE values, suggesting its robustness in handling the temporal complexity of streamflow prediction.
In order to assess the model performance across individual locations, we conducted a detailed location-wise analysis for the 120-hour streamflow prediction. For each of the 125 locations, we computed hourly NSE and KGE values, resulting in a distribution of these metrics for every hour of prediction. To synthesize this voluminous data, we took the median of the 125 NSE and KGE values at each hour, furnishing us with a representative performance indicator for that specific hour across all locations. This median-based aggregation method enabled us to mitigate the impact of anomalies or extreme values at individual stations.

Figure 5: Hourly KGE scores for 120-hour streamflow prediction on unified 125 stations
The results demonstrate a distinct superiority of the Transformer model in terms of both Nash-Sutcliffe Efficiency (NSE) and Kling-Gupta Efficiency (KGE), showcasing its robust predictive performance relative to the other models under consideration. Initially, the Persistence model registers the highest NSE and KGE values of 0.9987 and 0.9989, respectively, indicating a strong start. However, this model's performance sharply deteriorates, highlighting its limitations in extended forecasts. In stark contrast, the Transformer model's performance declines much more gradually, maintaining median NSE and KGE scores of 0.6902 and 0.7541 at the final prediction hour, which substantially exceeds the final scores of other models. This indicates not only the Transformer's sustained accuracy over time but also its generalizability across the 125 individual sensor locations.
When considering the aggregated results from the unified dataset, the Transformer model's proficiency is further affirmed, with the consistency of its performance across diverse conditions, as reflected in Table 3 and Figures 4 and 5. This consistent performance further validates the model's capability to generalize effectively across the spectrum of data variability and location-specific characteristics. The Seq2Seq, GRU, and LSTM models exhibit moderate performance in comparison, with the GRU model particularly notable for its KGE values, signifying its ability to balance accuracy components like correlation, bias, and variability. The aggregation of insights from both the individual sensor location predictions and the unified dataset results strengthens the case for the Transformer model as the preferred tool for streamflow prediction, adeptly navigating the complexities of hydrological patterns and offering a significant advancement over both traditional and contemporary modeling approaches.
In addition to conducting a location-wise analysis, we delved deeper into the performance metrics of each model. For each of the 125 sensor locations, we calculated NSE, KGE, and Pearson's r values hourly over a 120-hour forecast period. This provided a detailed performance profile for each model at every location, hour by hour. We then calculated the median of these 120 hourly values for each sensor, offering a stable and representative measure of performance by mitigating the impact of anomalies or extreme values. For the NRMSE, we assessed the
performance over the entire 120-hour period as a whole for each sensor. The final aggregation step involved calculating a single median value for each metric across all 125 sensors, offering a comprehensive view of each model's overall performance across varied geographical and hydrological conditions. Table 5 presents these aggregated median values. For Tables 6 and 7, instead of aggregating median values of 125 sensors, we utilized the performance values of individual sensors to calculate the results shown in these tables.

\\begin{table}
\\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \\hline & \\multicolumn{4}{c|}{**NSE**} & \\multicolumn{4}{c|}{**KGE**} \\\\ \\hline & **Min** & **Max** & **Median** & **Mean** & **Min** & **Max** & **Median** & **Mean** \\\\ \\hline
**Persistence** & -0.136 & 0.998 & 0.215 & 0.321 & 0.429 & 0.998 & 0.609 & 0.659 \\\\ \\hline
**Seq2Seq** & 0.613 & 0.917 & 0.656 & 0.686 & 0.664 & 0.726 & 0.684 & 0.690 \\\\ \\hline
**GRU** & 0.584 & 0.913 & 0.649 & 0.684 & 0.713 & 0.903 & 0.737 & 0.765 \\\\ \\hline
**LSTM** & 0.578 & 0.888 & 0.670 & 0.686 & 0.702 & 0.879 & 0.758 & 0.770 \\\\ \\hline
**Transformer** & 0.690 & 0.988 & 0.729 & 0.769 & 0.741 & 0.927 & 0.785 & 0.800 \\\\ \\hline \\end{tabular}
\\end{table}
Table 4: Statistical summary of hourly median NSE and KGE scores for 120-hour streamflow prediction across 125 stations (higher better)
Based on Table 5, the Transformer model showcases superior performance with the highest median values in all metrics, indicating its robustness and reliability in varied hydrological scenarios. Notably, its lower NRMSE value signifies a lower prediction error compared to other models. The Seq2Seq and GRU models also display commendable performance but are slightly outperformed by the Transformer. The Persistence model, simpler in its approach, shows lower median values in NSE and Pearson's r, reflecting its limitations in capturing complex streamflow dynamics.
Table 6 highlights the comparative effectiveness of each model at specific locations. The Transformer model consistently outperforms others in the majority of stations across all metrics, reinforcing its suitability for diverse forecasting scenarios. It leads significantly in NSE and KGE, demonstrating its effectiveness in capturing both accuracy and dynamics of streamflow predictions. Other models, including Seq2Seq, GRU, and LSTM, show a balanced distribution of best performances across different stations, suggesting their potential applicability in specific contexts.

\\begin{table}
\\begin{tabular}{|c|c|c|c|c|} \\hline & **NSE** & **KGE** & **R** & **NRMSE** \\\\ \\hline
**Persistence** & 0.214741 & 0.609873 & 0.634093 & 1.065168 \\\\ \\hline
**Seq2Seq** & 0.667505 & 0.725735 & 0.871589 & 0.687117 \\\\ \\hline
**GRU** & 0.660703 & 0.748974 & 0.857164 & 0.706444 \\\\ \\hline
**LSTM** & 0.655519 & 0.768603 & 0.873331 & 0.709046 \\\\ \\hline
**Transformer** & 0.746169 & 0.791779 & 0.884445 & 0.634105 \\\\ \\hline \\end{tabular}
\\end{table}
Table 5: Aggregate median performance of models across 125 sensor locations (NSE, KGE, R higher better; NRMSE lower better)

\\begin{table}
\\begin{tabular}{|c|c|c|c|c|} \\hline & **NSE** & **KGE** & **R** & **NRMSE** \\\\ \\hline
**Persistence** & 10 & 21 & 7 & 9 \\\\ \\hline
**Seq2Seq** & 21 & 9 & 29 & 17 \\\\ \\hline
**GRU** & 19 & 20 & 22 & 19 \\\\ \\hline
**LSTM** & 20 & 22 & 27 & 22 \\\\ \\hline
**Transformer** & 55 & 53 & 40 & 58 \\\\ \\hline \\end{tabular}
\\end{table}
Table 6: Number of stations where each model performed best across metrics (NSE, KGE, R higher better; NRMSE lower better)
Table 7 focuses on the count of sensor stations where each model achieved an NSE score greater than 0.5. The Transformer model leads with 99 stations exceeding this threshold, validating its consistent and reliable predictive quality across a broad range of locations. The LSTM model follows closely, indicating its effective performance in many scenarios. The GRU and Seq2Seq models also demonstrate a significant number of stations with NSE scores above 0.5, while the Persistence model, with its simpler approach, understandably lags in this aspect.
As we draw the results section of this study to a close, it becomes clear that the Transformer model distinctly outperforms its counterparts--LSTM, GRU, Seq2Seq, and the traditional Persistence approach--in predicting streamflow over a 120-hour period. The comprehensive analyses, encapsulated in Tables 3, 4, 5, 6, and 7, and Figures 4, 5, 6, and 7, have consistently highlighted the Transformer model's superior performance. Notably, its ability to maintain higher median scores in Nash-Sutcliffe Efficiency (NSE), Kling-Gupta Efficiency (KGE), and Pearson's r, coupled with the lowest Normalized Root Mean Square Error (NRMSE), underscores its robustness and precision in diverse hydrological scenarios.
The model's effectiveness is further affirmed by its dominant performance across a majority of the 125 sensor locations, signifying its adaptability and reliability in various forecasting contexts. These findings not only validate the Transformer model's capability in balancing accuracy, bias, and variability in streamflow predictions but also mark it as a significant advancement over both traditional and contemporary modeling approaches.
## 5 Conclusion
In this study, we have explored the effectiveness of a generalized Transformer model in the context of 120-hour streamflow prediction, utilizing a comprehensive dataset encompassing 125 sensor locations across Iowa, USA. This approach represents a departure from traditional
\\begin{table}
\\begin{tabular}{|c|c|} \\hline & \\# NSE \\textgreater{} 0.5 \\\\ \\hline
**Persistence** & 45 \\\\ \\hline
**Seq2Seq** & 78 \\ \hline
**GRU** & 81 \\\\ \\hline
**LSTM** & 85 \\\\ \\hline
**Transformer** & 99 \\\\ \\hline \\end{tabular}
\\end{table}
Table 7: Count of sensor stations with NSE scores above 0.5 by model

methods, as we developed a generalized model trained on data from all these locations simultaneously, rather than creating separate models for each site. By integrating 72 hours of historical data on precipitation, evapotranspiration, and discharge, along with location-specific attributes, the model has been tailored to capture the complex dynamics of hydrological systems effectively.
Our analysis, grounded in four key metrics - Nash-Sutcliffe Efficiency (NSE), Kling-Gupta Efficiency (KGE), Pearson's r, and Normalized Root Mean Square Error (NRMSE) - has revealed the Transformer model's superior capability in streamflow prediction compared to LSTM, GRU, Seq2Seq, and the Persistence approach. The model demonstrated higher median scores in NSE and KGE, alongside the lowest NRMSE values, underscoring its precision and reliability in diverse hydrological scenarios. The Transformer model's predominant performance across a majority of the sensor locations further accentuates its adaptability and applicability in various forecasting contexts. This study not only underscores the model's efficacy in streamflow prediction but also contributes significantly to the field of hydrological modeling, showcasing the potential of advanced deep learning techniques in enhancing prediction accuracy and efficiency.
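For concreteness, the four metrics can be computed as below. Note that the normalization convention for NRMSE varies across studies; normalizing by the observed range is an assumption here:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 - SSE / variance of observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta Efficiency from correlation, variability and bias ratios."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()   # variability ratio
    beta = sim.mean() / obs.mean()  # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def nrmse(obs, sim):
    """RMSE normalized by the observed range (one common convention)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    return rmse / (obs.max() - obs.min())

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(nse(obs, obs))    # 1.0 for a perfect forecast
print(kge(obs, obs))    # 1.0 for a perfect forecast
print(nrmse(obs, obs))  # 0.0 for a perfect forecast
```

A forecast equal to the observed mean scores NSE = 0, which is why NSE > 0.5 is a meaningful skill threshold.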
Looking ahead, there are numerous opportunities for extending this research. Future studies could explore the application of the Transformer model in different hydrological settings, integrate a broader range of environmental variables, and assess its scalability for larger geographical areas. Further development and refinement of the model's architecture and training process could also enhance its predictive capabilities. Such advancements will be crucial for improving water resource management strategies and strengthening our preparedness for hydrological extremes in the face of evolving climate conditions.
arxiv-format/2009_13037v1.md | Mixture of Spectral Generative Adversarial Networks for Imbalanced Hyperspectral Image Classification
Tanmoy Dam
School of Engineering and Information Technology, University of New South Wales Canberra, Australia.
Sreenatha G. Anavatti
School of Engineering and Information Technology, University of New South Wales Canberra, Australia.
Hussein A. Abbass
School of Engineering and Information Technology, University of New South Wales Canberra, Australia.
## II Introduction
With the advances made in imaging spectrometers over recent decades, the hyperspectral image classification (HIC) problem has attracted significant attention from the research community [1]. Due to its high-resolution contiguous bands, a spectrometer helps to capture robust pixel-wise information in images [2]. Traditional machine learning algorithms are less effective in extracting the most enriching features from large hyperspectral images [3]. Three broad learning techniques [3] are applied to HIC: supervised, unsupervised and semi-supervised learning. The k-nearest neighbors (KNN) and support vector machine (SVM) algorithms are known to give robust performance in HIC [3]. Recently, Convolutional Neural Networks (CNN) have become the most widely used methods in the supervised deep learning domain [4]. The performance of CNN depends on well-distributed, large-scale labelled data. However, creating a large amount of labelled information for HIC is a costly and time-consuming process.
In class-imbalanced problems, the performance of CNN significantly deteriorates due to a tendency to bias the classifier's parameters towards the majority classes. In contrast, unsupervised learning algorithms do not require class information; rather, they have been used to discover the number of classes present in a dataset [2]. Semi-supervised algorithms use the few available labelled samples together with unlabelled data to improve HIC performance on class-imbalanced datasets compared to unsupervised methods [5]. Semi-supervised learning algorithms are broadly described in the literature as generative or discriminative methods [6, 7, 8, 9]. A semi-supervised joint dictionary learning method with soft-max loss (\(S^{2}JDL\)-Sof) can be found in [6].
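As context for the classical baselines mentioned above, a minimal k-NN classifier over spectral vectors can be sketched with NumPy alone. The three-band "spectra" below are toy values, not samples from a HIC dataset:

```python
import numpy as np

def knn_predict(train_x, train_y, test_x, k=1):
    """Minimal k-NN over spectral vectors: Euclidean distance, majority vote."""
    train_x = np.asarray(train_x, float)
    test_x = np.asarray(test_x, float)
    train_y = np.asarray(train_y)
    preds = []
    for x in test_x:
        d = np.linalg.norm(train_x - x, axis=1)   # distance to every training pixel
        nearest = train_y[np.argsort(d)[:k]]      # labels of the k nearest pixels
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])     # majority vote among neighbors
    return np.array(preds)

# Toy 3-band "spectra" for two classes
train_x = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.2], [0.8, 0.9, 0.8], [0.9, 0.8, 0.9]]
train_y = [0, 0, 1, 1]
print(knn_predict(train_x, train_y, [[0.15, 0.15, 0.15], [0.85, 0.85, 0.85]]))
# → [0 1]
```

On imbalanced data, such distance-based votes favor the majority class, which motivates the generative augmentation pursued in this letter.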
Recently, GANs have become a popular approach, in which a generative and a discriminative neural network play a min-max game [10]. Due to the adversarial nature of GAN training, the discriminative network (\(D\)) is able to extract finer features from the data and thereby improve classification performance [11]. Unsupervised GANs have a better ability to capture latent features than traditional supervised classification methods [11, 12]. When the class distribution is imbalanced, the discriminator becomes biased towards the majority classes, which in turn biases the classification accuracy towards those classes [13], even when the generative network draws its noise from a categorical Gaussian distribution.
Conditional GAN-based hyperspectral spatial-spectral image classification [14] learns all sub-tasks in parallel. In contrast, the semi-supervised hyperspectral GAN (HSGAN) [6] is a two-player GAN with a sequential discriminator and generator. The Multiclass Spatial-Spectral Generative Adversarial Network (MSGAN) [3] is a two-player GAN with joint spatial-spectral learning, where conditional class probabilities are used to generate samples from a specific real distribution. However, to reduce the bias towards the majority classes, the MSGAN discriminator relies on equal weighting across the generated classes, so the distribution of generated samples differs from the real class distribution. Moreover, a discriminator with a parallel feature-extraction structure gives better classification performance than sequential discriminators on remote sensing datasets [15]. The main contributions of this letter are as follows.
* We propose a novel mixture-of-generators spectral 1-D GAN (MGSGAN) structure for HIC. The MGSGAN generators are able to generate realistic class-conditional samples.
* To improve the classification performance, a new parallel feature extraction structure has been introduced, replacing the classic sequential structure in classifier networks.
* The proposed MGSGAN approach has been validated through two datasets: Indian Pines and Pavia University. A statistical significance study is also incorporated to validate the proposed method.
## III MGSGAN Structure
In this section, we present the proposed MGSGAN and show how its generators produce realistic samples even in class-imbalanced situations. When the real data distribution is imbalanced, a class-conditional generative model cannot generate the minority classes because of the bias towards the majority classes in the discriminator network [13]. To overcome this problem, the proposed MGSGAN employs a set of generators, each generating samples within the domain of a specific real class distribution. The proposed MGSGAN consists of three neural networks: a mixture of class-conditional generators (\(G_{j}(\theta_{g})\in\{g_{1},g_{2},\ldots,g_{N}\}\), where \(N\) is the number of classes present in the dataset), a discriminator (\(D\)) and a classifier (\(C\)). The generator structure is represented as the sum of each class generator and its corresponding real class domain. Thus the class-conditional output of the generators is defined as follows,
\\[G_{j}(\\frac{z}{c_{j}})=\\sum_{j=1}^{N}g_{j}\\tau_{j} \\tag{1}\\]
where, \\(\\tau_{j}=\\tau_{1}\\cap\\tau_{2}\\cap \\cap\\tau_{N}\\in R^{d}\\) is a class specific domain information of real data distribution. Hence, the mixtures of generator outputs are the generated samples within the domain of each real class data distribution \\((\\tau_{j})\\). Due to domain constraints, MGSGAN can generate minority samples as well as majority samples. The \\(G_{j}(\\theta_{g}))\\) is differentiable CNN, parameterized by \\(\\theta_{g}\\), and takes the Gaussian normal distribution (\\(z\\sim\\mathcal{N}(-1,1)\\)) and conditional categorical class information (\\(c_{j},j=1, ,N\\)) to generate the specific conditional class realistic distribution data (\\(p_{r}\\in x_{j}\\)). The discriminator network (\\(D\\)), parameterized by \\(\\theta_{d}\\), acts to discriminate between real data (\\(p_{r}\\)) and generated realistic distribution data (\\(G_{j}(z/c_{j})\\)) as a real and fake classification, respectively. However, selecting the generator is a more crucial task, where we have used conditional categorical class information associated with conditional real data within the class\\((\\tau_{j})\\). Therefore, the generated samples should belong to a specific class of the real distribution domain. Moreover, the classifier network (\\(C\\)), parameterized by \\(\\theta_{c}\\), is working as a normal classifier that can take class conditionals real data and generated realistic data. Hence, from the classifier perspective, the \\(G\\) network is working as data augmentation network to generate realistic data for the majority and minority classes to improve the classification performance. However, \\(G\\) and \\(D\\) are working as normal two players GAN game. To learn a realistic data distribution, the differentiable \\(G\\) network parameters \\((\\theta_{g})\\) learn by fooling the discriminator. In our proposed approach, all three network parameters are learnt jointly through min-max-max game principle.
The three-player MGSGAN entropy loss objective function is defined as follows,
\\[\\min_{G}\\max_{D}\\max_{C}\\ \\ Q(D,G,C)=L_{D}+L_{G}+L_{C} \\tag{2}\\]
where,
\[L_{D}=P_{j}^{r}\,\mathbb{E}_{x\sim p_{r}}[logD(x)]+P_{j}^{g}\,\mathbb{E}_{G(\frac{z}{c_{j}})\sim p_{g}}[log(1-D(G(\frac{z}{c_{j}})))] \tag{3}\]

\[L_{G}=P_{j}^{g}\,\mathbb{E}_{G(\frac{z}{c_{j}})\sim p_{g}}[log(1-D(G(\frac{z}{c_{j}})))] \tag{4}\]

\[L_{C}=P_{j}^{c}\,\mathbb{E}_{x\sim p_{r}}[logC(x)]+P_{j}^{c}\,\mathbb{E}_{G(\frac{z}{c_{j}})\sim p_{g}}[logC(G(\frac{z}{c_{j}}))] \tag{5}\]
Where \\(P_{j}^{r}\\), \\(P_{j}^{g}\\) and \\(P_{j}^{c}\\) are the \\(j^{th}\\) class conditional probabilities of real, generated and classifier data, respectively. In equation 2, the \\(G\\) network is only working with the \\(D\\) network to reach stability, whereas the classifier network takes real data and augmented generated data to predict the correct class. Therefore, the optimal \\(D\\) network is achieved by considering the \\(G\\) network similar to the class conditionals GAN [10]. However, the classifier network performance is maximised based upon only the \\(G\\) network. Thus, \\(G\\) is working as class constraints data augmentation generator. Therefore, it becomes two players min-max game as in [10]. The optimal \\(D^{*}\\) and \\(G^{*}\\) can be derived by the following propositions.
**Proposition 1**: _For any \(C\) and a fixed \(G\) network, the optimal discriminator \((D^{*})\) is given by_

\[D^{*}=\frac{P_{j}^{r}p_{r}}{P_{j}^{r}p_{r}+P_{j}^{g}p_{g}} \tag{6}\]
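Proposition 1 follows from the standard pointwise maximization argument, sketched here for completeness:

```latex
% For a fixed G, write the discriminator objective as an integral over x:
L_D = \int_x \left[ P_j^r\, p_r(x)\,\log D(x)
                  + P_j^g\, p_g(x)\,\log\bigl(1 - D(x)\bigr) \right] dx
% For constants a,b > 0, the map y \mapsto a\log y + b\log(1-y) on (0,1)
% attains its maximum at y^* = a/(a+b); applying this pointwise with
% a = P_j^r p_r(x) and b = P_j^g p_g(x) yields Eq. (6):
D^*(x) = \frac{P_j^r\, p_r(x)}{P_j^r\, p_r(x) + P_j^g\, p_g(x)}
```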
**Proposition 2**: _Given the optimal discriminator \(D^{*}\), the optimization of \(Q(D^{*},G,C)\) over \(G\) is the minimization of the following JS divergence._

\[Q(D^{*},G,C)=-2log2+2JS(P_{j}^{r}p_{r}||P_{j}^{g}p_{g}) \tag{7}\]
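Proposition 2 mirrors the classical GAN argument; substituting \(D^{*}\) from Eq. (6) into the adversarial part of \(Q\) gives Eq. (7):

```latex
% With m = \tfrac{1}{2}\bigl(P_j^r p_r + P_j^g p_g\bigr), substitution of D^* yields
Q(D^*,G,C)\big|_{\mathrm{adv}}
  = \mathbb{E}_{p_r}\!\left[\log\frac{P_j^r p_r}{2m}\right]
  + \mathbb{E}_{p_g}\!\left[\log\frac{P_j^g p_g}{2m}\right]
  = -2\log 2
  + \mathrm{KL}\!\left(P_j^r p_r \,\middle\|\, m\right)
  + \mathrm{KL}\!\left(P_j^g p_g \,\middle\|\, m\right)
% which is exactly the Jensen-Shannon form of Eq. (7):
  = -2\log 2 + 2\,\mathrm{JS}\!\left(P_j^r p_r \,\middle\|\, P_j^g p_g\right)
```

The minimum \(-2\log 2\) is attained when the weighted real and generated distributions coincide.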
The \\(C\\) network gives maximum classification performance when \\(G\\) and \\(D\\) are reached at optimal points i.e. \\(p_{r}=p_{g}\\). Hence, the class conditional \\(G\\) network always generates a realistic sample within the domain of class conditionals real distribution \\((p_{r})\\).
The \\(G\\), \\(D\\) and \\(C\\) structures are depicted in Figure 1 where three networks are using the same convolutional kernels. The MGSGAN algorithm is described in Algorithm 1.
## IV Results & Experiments
In this section, we compare the performance of the proposed MGSGAN with other conditional GAN methods and two popular machine learning algorithms. The performance is validated on two popular imbalanced hyperspectral datasets under two different training-to-testing ratio (TTTR) settings.
### _Indian Pines Dataset_
The data was captured over a vegetation site in northwestern Indiana by the Airborne Visible/Infrared Imaging Spectrometer sensor in \(1992\). \(220\) spectral bands ranging from \(400\) to \(2500\) nm were used to collect \(145\times 145\)-pixel data. Due to atmospheric turbulence, \(20\) channels were corrupted, leaving us with \(200\) channels and \(16\) different classes to evaluate MGSGAN [3, 6]. The implementation was done using PyTorch (GeForce RTX 2060, 6 GB GPU RAM) and Python environments.
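The TTTR splits used below must preserve the dataset's class imbalance; a per-class (stratified) split does this. A minimal sketch with toy labels, not the Indian Pines ground truth:

```python
import numpy as np

def stratified_split(y, train_frac, seed=0):
    """Per-class train/test index split preserving imbalanced class ratios."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    train_idx, test_idx = [], []
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        rng.shuffle(idx)
        n_train = max(1, int(round(train_frac * idx.size)))  # keep >= 1 sample
        train_idx.extend(idx[:n_train])
        test_idx.extend(idx[n_train:])
    return np.array(train_idx), np.array(test_idx)

# Toy imbalanced labels: 90 pixels of class 0, 10 of class 1, 10% TTTR
y = np.array([0] * 90 + [1] * 10)
tr, te = stratified_split(y, train_frac=0.10)
print(tr.size, te.size)  # → 10 90
```

Without stratification, a small random split could leave a minority class with no training pixels at all.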
Table I reports the performance of the proposed MGSGAN in comparison with conventional and state-of-the-art methods. Two training-to-testing ratio (TTTR) settings (\(5\%\) and \(10\%\)) of the real data distribution are used to validate the proposed MGSGAN against two other class-conditional GAN models: the two-player Auxiliary Classifier Hyperspectral Spectral GAN (ACHSGAN) and the three-player Auxiliary Classifier Single GAN (ACSGAN).
As in ACHSGAN [16], the \(G\) and \(D\) networks perform data generation and data classification simultaneously to improve classification performance. The sequential \(D\) network thus gives \(N+1\) outputs, where \(N\) is the true number of classes and the last output serves the adversarial GAN objective. The \(G\) and \(D\) networks are similar to the HSGAN structure [6], where the \(D\) network follows the traditional sequential convolutional structure [11]. In ACSGAN, we use a single class-conditional generator without domain constraints instead of the multiple generators of the MGSGAN structure, while the remaining two networks (\(C\) and \(D\)) are the same. The \(C\) network, working alone as a normal classifier on lower-dimensional features, is denoted as CNN in Table I.
The classification performance is compared with state-of-the-art methods such as HSGAN and \(S^{2}\)JDL-Sof [6]. Three popular performance indices (PIs) are used to compare the methods: overall accuracy (OA), kappa coefficient (kappa) and average accuracy (AA) [6]. \(5\%\) of the samples are used to train all six methods and the remaining \(95\%\) are used to test the classification performance. The classification performance of MGSGAN improves by \(4.94\%\) in OA, \(3.9\%\) in kappa and \(4.97\%\) in AA compared to ACSGAN. A significant improvement of \(8.5\%\) in OA, \(9.2\%\) in kappa and \(8.56\%\) in AA is obtained by MGSGAN compared to the state-of-the-art HSGAN method [6].
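The three indices can all be computed from a confusion matrix; the convention that rows hold the true classes is assumed here, and the matrix below is a toy example:

```python
import numpy as np

def oa_kappa_aa(conf):
    """Overall accuracy, Cohen's kappa and average (per-class) accuracy
    from a confusion matrix with true classes along the rows."""
    conf = np.asarray(conf, float)
    n = conf.sum()
    po = np.trace(conf) / n                                     # overall accuracy
    pe = np.sum(conf.sum(axis=0) * conf.sum(axis=1)) / n ** 2   # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))              # mean per-class recall
    return po, kappa, aa

conf = np.array([[45, 5],
                 [10, 40]])
oa, kappa, aa = oa_kappa_aa(conf)
print(round(oa, 3), round(kappa, 3), round(aa, 3))  # → 0.85 0.7 0.85
```

AA averages per-class recalls with equal weight, which is why it is more sensitive than OA to minority-class failures on imbalanced data such as Indian Pines.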
It is observed from Table I that MGSGAN obtains the best performance among the six methods. For the second TTTR case, \(10\%\) and \(90\%\) of the samples are used for training and testing, respectively. The improvements in the performance indices are \(1.5\%\) in OA, \(1.65\%\) in kappa and \(3.63\%\) in AA compared to ACSGAN. A notable improvement of \(5.23\%\) in OA, \(3.19\%\) in kappa and \(7.5\%\) in AA is achieved by MGSGAN compared to the HSGAN method, although in HSGAN [6] all the samples were used to train the generative model without label information; once training was completed, \(10\%\) labelled samples were used to further train the last discriminator layers. In HSGAN, the OA is \(78.6\%\) when \(10\%\) of the samples are used to train the model [6]. The class-conditional two-player ACHSGAN achieves a better result of \(79.63\%\) OA using the same \(10\%\) training samples as HSGAN. It is also seen that the significant performance improvement of the adversarial MGSGAN is \(4.9\%\) OA, \(5.81\%\) kappa and \(9.12\%\)
AA as compared to CNN.

Fig. 1: The 1-D MGSGAN architecture
Table II reports the per-class accuracy for the classes present in the Indian Pines dataset, where \(10\%\) of the samples are used to train all the methods. Table II contains each class's average accuracy and its corresponding standard deviation for the six methods over 10 runs. The first \(16\) rows record the per-class accuracies, and the last three rows give the OA, kappa and AA over all classes in Table II.
Fig. 2 depicts one of the minority classes for the three GAN methods, where the second TTTR setting is used for better visual illustration. Hyperspectral data contains a series of spectral bands for each class; we therefore plot the average value of each class over the spectral bands present under the second TTTR training condition.
Fig. 2 shows the average values of the generated samples for 'Stone-Steel-Towers' and the corresponding ground truth. It is clearly observed that the proposed MGSGAN is able to generate 'Stone-Steel-Towers', whereas the majority class 'Soybean-mintill' intrudes into the domain of 'Stone-Steel-Towers' for the ACHSGAN and ACSGAN methods. Due to the better generation of minority classes and the parallel structure of the \(C\) network, the classification performance improves significantly, as observed in Table I.
### _Pavia Dataset_
The Pavia dataset was captured at Pavia University by the Reflective Optics System Imaging Spectrometer sensor. The dataset contains nine classes with \(610\times 340\) pixels. 115 bands ranging from \(430\) nm to \(860\) nm were captured, from which \(12\) noisy bands were removed during the data preprocessing stage. To compare against our proposed method, the two TTTR settings (\(1\%\) and \(3\%\)) are considered, similar to [3]. The classification performance together with other state-of-the-art methods is listed in Table III. For both settings, the proposed method achieves better performance in terms of all three index parameters. The MSGAN-spectral method [3] is also considered along with the other state-of-the-art methods. The classification performance of MGSGAN improves by \(2.5\%\) in OA, \(3.5\%\) in kappa and \(2.03\%\) in AA compared to the second-best results of ACSGAN. Similarly, for the second TTTR case, the significant improvements of MGSGAN compared to ACSGAN are \(4.44\%\) in OA, \(3.7\%\) in kappa and \(3.51\%\) in AA, respectively. It is
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline \hline Samples & Index & SVM & KNN & CNN & ACHSGAN & HSGAN & \(S^{2}\)JDL-Sof & ACSGAN & MGSGAN \\ \hline \multirow{3}{*}{\(5\%\)} & OA(\(\%\)) & \(86.45\pm 1.05\) & \(68.32\pm 0.56\) & \(73.13\pm 1.56\) & \(71.26\pm 1.05\) & \(74.92\pm 0.41\) & - & \(77.469\pm 0.26\) & **81.29 \(\pm\) 0.47** \\ & Kappa(\(\%\)) & \(64.33\pm 1.14\) & \(63.81\pm 0.657\) & \(69.31\pm 1.76\) & \(67.48\pm 1.45\) & \(72.00\pm 0.01\) & - & \(75.69\pm 1.45\) & **78.64 \(\pm\) 0.57** \\ & AA(\(\%\)) & \(62.27\pm 2.48\) & \(55.61\pm 0.98\) & \(65.21\pm 2.5\) & \(64.26\pm 9.77\) & \(70.97\pm 0.55\) & - & \(73.40\pm 2.89\) & **77.05 \(\pm\) 2.78** \\ \hline \multirow{3}{*}{\(10\%\)} & OA(\(\%\)) & \(73.55\pm 0.49\) & \(73.361\pm 0.59\) & \(82.12\pm 0.36\) & \(79.63\pm 0.69\) & \(83.53\pm 0.87\) & \(82.25\pm 1.08\) & \(84.94\pm 0.469\) & **86.16 \(\pm\) 0.63** \\ & Kappa(\(\%\)) & \(70.14\pm 0.53\) & \(69.43\pm 0.66\) & \(79.58\pm 1.22\) & \(76.03\pm 0.77\) & \(80.01\pm 0.01\) & \(79.01\pm 0.01\) & \(82.83\pm 0.53\) & **84.20 \(\pm\) 0.72** \\ & AA(\(\%\)) & \(74.30\pm 0.55\) & \(67.48\pm 1.26\) & \(78.10\pm 1.72\) & \(75.3\pm 2.1\) & \(79.27\pm 0.60\) & \(63.51\pm 0.66\) & \(82.24\pm 1.23\) & **85.33 \(\pm\) 1.089** \\ \hline \hline \end{tabular}
\end{table} TABLE I: Classification Performance on Indian Pines Dataset
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Class & SVM & KNN & CNN & ACHSGAN & ACSGAN & MGSGAN \\ \hline Alfalfa & \(58.04\pm 9.98\) & \(32.68\pm 16.47\) & \(71.95\pm 10.97\) & \(61.78\pm 13.98\) & \(82.11\pm 4.14\) & **83.00 \(\pm\) 5.52** \\ Corn-notill & \(63.04\pm 3.34\) & \(60.41\pm 1.71\) & \(59.79\pm 14.61\) & \(64.51\pm 5.16\) & \(80.84\pm 2.12\) & **81.62 \(\pm\) 2.83** \\ Corn-min & \(61.42\pm 3.75\) & \(55.48\pm 2.15\) & \(73.123\pm 0.20\) & \(60.45\pm 6.20\) & \(75.67\pm 2.58\) & **76.06 \(\pm\) 5.07** \\ Corn & \(62.28\pm 7.97\) & \(31.12\pm 5.47\) & \(66.90\pm 8.6\) & \(54.92\pm 3.77\) & \(67.21\pm 12.17\) & **68.92 \(\pm\) 12.13** \\ Grass-pasture & \(92.16\pm 1.47\) & \(84.390\pm 1.63\) & \(88.62\pm 0.80\) & \(82.22\pm 4.88\) & \(90.03\pm 1.98\) & **90.62 \(\pm\) 0.8** \\ Grass-trees & \(93.89\pm 1.58\) & \(93.56\pm 1.21\) & \(95.13\pm 1.00\) & \(95.64\pm 1.54\) & \(96.04\pm 0.96\) & **96.92 \(\pm\) 0.37** \\ Grass-pasture-mowed & \(78.4\pm 13.41\) & \(79.6\pm 8.28\) & \(75.00\pm 5.7\) & \(89.33\pm 3.77\) & \(76.00\pm 8.64\) & **90.2 \(\pm\) 4.75** \\ Hay-windrowed & \(94.86\pm 1.00\) & \(92.65\pm 0.98\) & \(95.58\pm 1.13\) & \(95.364\pm 1.105\) & \(97.28\pm 0.955\) & **98.69 \(\pm\) 0.75** \\ Oats & \(36.11\pm 17.61\) & \(15.1\pm 11.92\) & \(58.33\pm 8.3\) & \(57.40\pm 20.45\) & \(53.703\pm 14.58\) & **86.66 \(\pm\) 12.95** \\ Soybean-notill & \(62.44\pm 2.37\) & \(75.59\pm 3.68\) & \(63.72\pm 11.2\) & \(71.54\pm 3.079\) & \(81.045\pm 0.60\) & **85.24 \(\pm\) 5.6** \\ Soybean-mintill & \(62.85\pm 2.41\) & \(79.08\pm 1.28\) & \(82.91\pm 1.4\) & \(77.15\pm 2.83\) & \(83.29\pm 0.97\) & **84.89 \(\pm\) 1.7** \\ Soybean-clean & \(64.990\pm 5.74\) & \(45.02\pm 3.25\) & \(79.08\pm 1.7\) & \(70.168\pm 3.62\) & \(86.053\pm 4.28\) & **87.24 \(\pm\) 4.27** \\ Wheat & \(94.06\pm 2.74\) & \(93.62\pm 2.59\) & \(94.48\pm 1.8\) & \(93.69\pm 2.93\) & \(95.09\pm 0.26\) & **98.27 \(\pm\) 0.92** \\ Woods & \(92.05\pm 2.63\) & \(91.39\pm 2.79\) & \(93.54\pm 1.3\) & \(95.14\pm 0.90\) & \(96.48\pm 0.68\) & **96.90 \(\pm\) 1.44** \\ Buildings-Grass-Trees & \(50.28\pm 1.03\) & \(31.66\) & \multicolumn{4}{l}{(remaining entries truncated in the source)} \\ \hline \hline \end{tabular}
\end{table} TABLE II: Class-wise Classification Accuracy on the Indian Pines Dataset
observed from Table III that the three-player MGSGAN gives better performance than the two-player ACHSGAN.
We have also studied the statistical significance of our proposed method against the other methods through McNemar's test (\(M_{t}\)); the results are shown in Table IV for the Indian Pines and Pavia University datasets. A larger value of \(M_{t}\) indicates statistical significance at a higher confidence level. Following [2], we conclude that the performance difference is statistically significant for \(M_{t}>1.96\) (\(5\%\) significance level \(\alpha\)).
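A minimal sketch of the McNemar statistic in the z-score form commonly adopted in remote sensing, where \(|M_{t}|>1.96\) marks significance at the 5% level; the function and variable names below are hypothetical:

```python
import math

def mcnemar_z(y_true, pred_a, pred_b):
    """z-score form of McNemar's test between two classifiers on one test set.

    f12 (f21) counts samples that A classifies correctly and B wrongly
    (and vice versa); |z| > 1.96 indicates a 5%-level significant difference.
    """
    f12 = sum(a == t != b for t, a, b in zip(y_true, pred_a, pred_b))
    f21 = sum(b == t != a for t, a, b in zip(y_true, pred_a, pred_b))
    return (f12 - f21) / math.sqrt(f12 + f21)

y = [0] * 10
a = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]   # classifier A: 8/10 correct
b = [0, 0, 1, 1, 1, 1, 1, 1, 1, 1]   # classifier B: 2/10 correct
print(round(mcnemar_z(y, a, b), 2))  # 2.45
```

Only the samples on which the two classifiers disagree enter the statistic, which makes the test well suited to paired comparisons on a common test set.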
## V Conclusion
In this letter, a novel mixture-of-spectral-generators GAN has been proposed for generating minority and majority classes so as to improve classifier performance on class-imbalanced datasets. To validate the proposed method, two train-to-test ratio imbalance conditions for the Indian Pines and Pavia University datasets have been considered. The proposed MGSGAN improves classification performance compared to ACHSGAN, ACSGAN and a spectral CNN. Our future work will consider mixtures of spatial-spectral GANs to improve the classification performance further.
## References
* [1] C.-I. Chang, _Hyperspectral data exploitation: theory and applications_. John Wiley & Sons, 2007.
* [2] M. Zhang, M. Gong, Y. Mao, J. Li, and Y. Wu, "Unsupervised feature extraction in hyperspectral images based on Wasserstein generative adversarial network," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 57, no. 5, pp. 2669-2688, 2018.
* [3] J. Feng, H. Yu, L. Wang, X. Cao, X. Zhang, and L. Jiao, \"Classification of hyperspectral images based on multiclass spatial-spectral generative adversarial networks,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 57, no. 8, pp. 5329-5343, 2019.
* [4] Y. Chen, H. Jiang, C. Li, X. Jia, and P. Ghamisi, \"Deep feature extraction and classification of hyperspectral images based on convolutional neural networks,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 54, no. 10, pp. 6232-6251, 2016.
* [5] M. Zhang, J. Ma, and M. Gong, \"Unsupervised hyperspectral band selection by fuzzy clustering with particle swarm optimization,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 14, no. 5, pp. 773-777, 2017.
* [6] Y. Zhan, D. Hu, Y. Wang, and X. Yu, \"Semisupervised hyperspectral image classification based on generative adversarial networks,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 15, no. 2, pp. 212-216, 2017.
* [7] J. Li, J. M. Bioucas-Dias, and A. Plaza, \"Semi-supervised hyperspectral image classification based on a markov random field and sparse multinomial logistic regression,\" in _2009 IEEE International Geoscience and Remote Sensing Symposium_, vol. 3. IEEE, 2009, pp. III-817.
* [8] M. Marconcini, G. Camps-Valls, and L. Bruzzone, \"A composite semisupervised svm for classification of hyperspectral images,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 6, no. 2, pp. 234-238, 2009.
* [9] X. Cao, C. Wei, J. Han, and L. Jiao, "Hyperspectral band selection using improved classification map," _IEEE Geoscience and Remote Sensing Letters_, vol. 14, no. 11, pp. 2147-2151, 2017.
* [10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, \"Generative adversarial nets,\" in _Advances in neural information processing systems_, 2014, pp. 2672-2680.
* [11] A. Radford, L. Metz, and S. Chintala, \"Unsupervised representation learning with deep convolutional generative adversarial networks,\" _arXiv preprint arXiv:1511.06434_, 2015.
* [12] J. T. Springenberg, \"Unsupervised and semi-supervised learning with categorical generative adversarial networks,\" _arXiv preprint arXiv:1511.06390_, 2015.
* [13] G. Douzas and F. Bacao, \"Effective data generation for imbalanced learning using conditional generative adversarial networks,\" _Expert Systems with applications_, vol. 91, pp. 464-471, 2018.
* [14] L. Zhu, Y. Chen, P. Ghamisi, and J. A. Benediktsson, \"Generative adversarial networks for hyperspectral image classification,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 56, no. 9, pp. 5046-5063, 2018.
* [15] D. Lin, K. Fu, Y. Wang, G. Xu, and X. Sun, \"Marta gans: Unsupervised representation learning for remote sensing image classification,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 14, no. 11, pp. 2092-2096, 2017.
* [16] A. Odena, C. Olah, and J. Shlens, "Conditional image synthesis with auxiliary classifier gans," in _Proceedings of the 34th International Conference on Machine Learning-Volume 70_. JMLR.org, 2017, pp. 2642-2651.
\\begin{table}
\begin{tabular}{l c c c c c} \hline MGSGAN & vs SVM & vs KNN & vs CNN & vs ACHSGAN & vs ACSGAN \\ \hline Indian Pines & 19.07 & 19.32 & 6.89 & 10.71 & 2.19 \\ \hline Pavia University & 33.42 & 32.17 & 17.10 & 26.73 & 9.44 \\ \hline \end{tabular}
\\end{table} TABLE IV: McNemar’s Tests
Fig. 2: Three generated model outputs for one minority class.
**Keywords:** Mixture Generators Spectral Generative Adversarial Networks, Class Imbalance.
# A global hybrid coupled model based on Atmosphere-SST feedbacks
Andrea A. Cimatoribus
[email protected] Royal Netherlands Meteorological Institute, De Bilt, The Netherlands
Sybren S. Drijfhout
Royal Netherlands Meteorological Institute, De Bilt, The Netherlands
Henk A. Dijkstra
Institute for Marine and Atmospheric research Utrecht, Utrecht University, Utrecht, The Netherlands
## I Introduction
Since the pioneering work by [1] on a conceptual model of the thermohaline circulation, the problem of the stability of the Atlantic Meridional Overturning Circulation (AMOC) has become one of the main issues in climate research. A collapse of the AMOC is often used to explain abrupt changes in past climate records. In recent years, a possible AMOC collapse in response to increased freshwater forcing in the northern North Atlantic, expected as a consequence of global warming, has been identified as a low probability but high risk future climate event [2; 3; 4].
An abrupt collapse of the AMOC, in response to a quasi-equilibrium increase in freshwater forcing in the North Atlantic, has been reported in different ocean and climate models of intermediate complexity (EMICs) [5]. This implies a non-linear response of the ocean to the freshwater forcing, with a sudden collapse of the overturning above a threshold value of the freshwater forcing. The EMIC results are challenged by the model experiments of [6] and by IPCC-AR4 general circulation model (GCM) results, as analysed in [7]. In the latter, it is found that the AMOC strength decreases approximately linearly in response to a \\(CO_{2}\\) increase according to the SRES-A1B scenario and there is no collapse. It must be noted that the simulations to detect possible multiple equilibria regimes of the AMOC in these GCMs have not been done. The near-linear response to the gradual freshwater flux perturbation as found in [7] does not rule out the possibility of a sudden collapse with a stronger freshwater flux.
However, from the GCM results it has been suggested that the existence of a multiple equilibria regime is an artifact of ocean-only models, and in particular of poor (or absent) representation of ocean-atmosphere interactions. In an ocean-only model, the salt advection feedback is the central feedback affecting the stability of the AMOC. When an atmosphere is coupled to the ocean model, other feedbacks, due to the ocean-atmosphere interaction, become relevant. The effect of these feedbacks may eventually overcome the effect of the salt-advection feedback, and remove the multiple equilibria found in ocean-only models and EMICS.
In some models, the response of the atmosphere to AMOC changes may indeed act to stabilise the present day AMOC [8; 9]. In particular, the southward shift of the intertropical convergence zone would enhance the surface salinity of the Atlantic north of the equator,increasing the northward salinity transport by the northern hemispheric gyres [8; 10]. The decrease in the atmospheric temperature of the Northern Hemisphere (NH), as a consequence of the AMOC collapse, may also play a role [9]. Lower atmospheric temperatures would determine stronger heat extraction from the ocean and, consequently, higher densities of surface waters. This effect may be more than compensated by the insulating effect of a NH ice cover extending more to the south [8]. The potential impact of changes in the wind-stress, in particular zonal wind-stress, has recently been investigated in [11], but the magnitude of the changes induced by the wind-stress feedback remains unclear.
The question that must be answered is: "Do the atmospheric feedbacks remove the multiple equilibria regime of the AMOC, as found in ocean-only models and EMICs?" The first step towards answering this question is, in our view, to find a simple but quantitative description of these atmospheric feedbacks, extending that of box-model representations [12]. Only when a quantitative description of the feedbacks is available is it possible to assess the impact of the ocean-atmosphere interaction on the stability properties of the AMOC. Studies to isolate the effect of the different feedbacks using a GCM are computationally expensive. Furthermore, the complexity of a full GCM can hinder the understanding of the relevant processes in the system. For these reasons, simpler atmospheric models are needed to provide dynamic boundary conditions to full ocean GCMs. Their design can benefit from the fact that the atmosphere, on oceanic time scales, can effectively be treated as a "fast" component that adjusts to the ocean anomalies. Such coupled models are often referred to as "hybrid coupled models" (HCMs).
Since the main known atmosphere-ocean coupled mode of variability is the El Nino Southern Oscillation (ENSO), HCMs have been developed mainly to study this phenomenon, focusing on the interaction between wind and sea surface temperature (\\(SST\\)) in the tropical oceans. In this framework, the main atmosphere-ocean interaction to include in the model is the change in the zonal winds over the equatorial Pacific in response to \\(SST\\) anomalies [13]. [14] used a statistical model of the wind-stress based on an empirical orthogonal function decomposition of real data, coupled to a regional GCM of the equatorial Pacific. They found good forecasting skill for ENSO variability prediction, and HCMs have been extensively used for ENSO forecasting since then [15]. Singular value decomposition of observational data has been used in [16], to implement an anomaly model of wind-stress for the equatorial Pacific. The HCM including this model has been used to investigate the role of ENSO-like feedbacksin seasonal variability. In [17], linear regressions on Nino-3 and Nino-4 indexes are used in combination with a red noise term to study the importance of local wind feedbacks in the Tropical Pacific. Singular value decomposition in combination with a stochastic term has been used also in [18]. In these studies, the wind-stress-\\(SST\\) interaction is generally the main point of interest, but other feedbacks are active as well in the ocean-atmosphere system. Changes in wind speed affect evaporation and, as a consequence, surface temperature [19]. Also the freshwater flux is correlated to \\(SST\\), through the triggering of convective events in the atmosphere [20; 21].
Our aim here is to develop a global HCM that includes all the main atmosphere-ocean feedbacks relevant for the stability of the AMOC, in an approach that focuses on the quasi-steady state behaviour rather than on variability. As we want to follow an approach as general as possible, we regress all the surface fluxes pointwise on \\(SST\\). Since the \\(SST\\) variability has a typical extent ranging from regional to basin scale, the atmosphere-ocean interaction is roughly captured by this local approach. In the HCM, two linear perturbation terms dependent on \\(SST\\) are added to the climatology of the forcing fields of the ocean model. A term depending on the local \\(SST\\) anomaly represents the atmosphere-ocean feedbacks that are acting in a statistical steady state. The large-scale changes in the surface fluxes due to the collapse of the AMOC can not be described by these local regressions alone, but are included through a second linear term that depends on the anomalous strength of the overturning circulation itself, measured through the NH annual average \\(SST\\) anomaly. Taken together, the local- and large-scale terms give a simple representation of the atmospheric feedbacks which play a role in the stability of the AMOC.
As a demonstration of concept, our regressions are based on the output of an EMIC (described in section II). The linear atmospheric feedback representations are presented in section III with results in section IV. The performance of the HCM is compared to the one of the original EMIC in section V. With both local and large-scale regression terms, the HCM captures the changes in atmospheric fluxes in response to AMOC changes. The advantages of the HCM over the EMIC are that (i) a more than ten fold decrease in computation time is achieved and (ii) it gives the possibility to selectively investigate the effect of different physical processes on the stability of the AMOC separately.
## II The EMIC SPEEDO
The HCM is constructed from data of the EMIC SPEEDO [22], an intermediate complexity coupled atmosphere/land/ocean/sea-ice general circulation model. The choice for an EMIC is motivated by the fact that multi-thousand year runs are needed to construct the HCM, which is at the moment not feasible with a GCM.
The atmospheric component of SPEEDO is a modified version of Speedy [23; 24; 25; 26; 27], an atmospheric GCM with a horizontal spectral resolution of T30 on a Gaussian latitude-longitude grid (approximately \(3^{\circ}\) resolution) and 8 vertical density levels. Simple parameterisations are included for large-scale condensation, convection, radiation, clouds and vertical diffusion. A simple land model is included, with three soil layers and up to two snow layers. The hydrological cycle is represented by collecting precipitation in the main river basins, with outflow into the ocean at specific positions. Freezing and melting of soil moisture are included.
The ocean model component of SPEEDO is the CLIO model [28]. It has approximately a \(3^{\circ}\times 3^{\circ}\) horizontal resolution, with 20 vertical layers whose thickness ranges from 10 \(m\) near the surface to 750 \(m\) at the bottom. The horizontal grid of the ocean model is curvilinear, and deviates from a latitude-longitude grid in the North Atlantic and Arctic basins to avoid the singularity at the North Pole. A convective adjustment scheme, which increases vertical diffusivity when the water column is unstably stratified, is used in the model. The LIM sea-ice model is included in CLIO [20]. A coupler provides the boundary conditions to the components, and performs the interpolations between the different ocean and atmosphere model grids in a conservative way.
Studies conducted both with an EMIC [29] and with a fully implicit ocean model [30] showed the fundamental role of the salinity budget at the southern boundary of the Atlantic Ocean in determining the response of the AMOC to freshwater anomalies [31]. The net freshwater transport by the overturning circulation at \(35^{\circ}\)S, denoted \(M_{ov}\), is likely a control parameter that signals the coexistence of two stable equilibria of the AMOC. If \(M_{ov}\) is positive, the AMOC imports freshwater into the Atlantic basin and only the present-day "ON" state of the overturning is stable. If \(M_{ov}\) is negative, freshwater is exported out of the basin by the AMOC, and a second stable "OFF" state of the AMOC exists, with reversed or no overturning in the Atlantic Ocean.
In the equilibrium solution of SPEEDO, the Atlantic basin integrated net evaporation is overestimated both with respect to most other models and to the few available observations [31]. Furthermore, the zonal gradient of salinity in the south Atlantic is reversed too, with a maximum on the eastern side. The high evaporation over the basin, combined with the low freshwater import by the gyre due to the reversed zonal salinity profile, force the overturning circulation to import freshwater (\\(M_{ov}=0.29\\;\\mathrm{Sv}\\)) in order to close the budget. For these reasons, a small freshwater flux correction is needed in the model for the purpose of our study, since we are interested in the feedbacks connected with a permanent collapse of the AMOC. Following the example of [29], a freshwater increase is applied over the eastern Atlantic, from the southern boundary to the latitude of the Gibraltar strait, summing up to 0.2 Sv. A dipole correction is applied over the southern gyre to reverse the zonal salinity profile, with a rate of 0.25 Sv[42]. All the corrections are performed as a virtual salt flux, keeping the global budget closed with an increased evaporation in the tropical Pacific and Indian oceans. As a consequence of these corrections, the net freshwater transport of the AMOC at the southern boundary of the Atlantic basin becomes negative (\\(M_{ov}=-0.069\\;\\mathrm{Sv}\\)). As proposed in [29] and [30], this situation may allow the coexistence of multiple equilibria of AMOC under the same boundary conditions. 
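As an illustration, the diagnostic \(M_{ov}\) can be estimated from a zonal section of meridional velocity and salinity. The sketch below follows the standard recipe (baroclinic zonal-integral transport times the departure of the zonal-mean salinity from a reference \(S_{0}\)); array layouts and the function name are assumptions, not the model's own diagnostic code:

```python
import numpy as np

def m_ov(v, S, dx, dz, S0=35.0):
    """Overturning freshwater transport across a zonal section (m^3/s).

    v, S : (nz, nx) meridional velocity (m/s) and salinity on the section;
    dx, dz : cell widths (m).  The net (barotropic) throughflow is removed
    before multiplying by the zonal-mean salinity; negative M_ov means the
    overturning exports freshwater out of the basin.
    """
    area = np.outer(dz, dx)                                   # cell areas (nz, nx)
    vint = (v * area).sum(axis=1)                             # transport per level
    vbc = vint - vint.sum() * area.sum(axis=1) / area.sum()   # baroclinic part
    Szm = (S * area).sum(axis=1) / area.sum(axis=1)           # zonal-mean salinity
    return -(vbc * (Szm - S0)).sum() / S0

# Salty water flowing north at the surface, fresher water returning at depth:
v = np.array([[1.0], [-1.0]])
S = np.array([[36.0], [34.0]])
print(round(m_ov(v, S, dx=np.array([1.0]), dz=np.array([1.0, 1.0])), 4))  # -0.0571
```

In this toy configuration the overturning carries salty water northward and fresh water southward, so it exports freshwater and \(M_{ov}<0\), the regime in which a second "OFF" state may exist.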
Although the data needed to define the HCM come from only 300 years of simulations, several tens of thousands of years have been integrated with the EMIC while testing the different freshwater corrections applied to reach the regime where the AMOC can permanently collapse (i.e., changing the freshwater correction and running to equilibrium, testing flux diagnostics, and testing whether the collapse of the AMOC is permanent); this motivates the use of a fast EMIC.
The surface boundary conditions for the ocean are computed from the atmospheric model as follows. Since the atmospheric boundary layer is represented by only one model layer, near-surface values of temperature (\(T_{sa}\)), wind (\(\vec{U}_{sa}\), the arrow indicating a vector quantity) and specific humidity (\(Q_{sa}\)) are extrapolated from the values of the model's lowest full layers. Furthermore, an effective wind velocity is defined to include the effect of unresolved wind variability as \(|V_{0}|=\left(\vec{U}_{sa}\cdot\vec{U}_{sa}+V_{gust}^{2}\right)^{\frac{1}{2}}\), where \(V_{gust}\) is a model parameter. The ocean model provides through the coupler the values of \(SST\), from which the saturation specific humidity at the surface (\(Q_{sa}^{sat}\)) is also computed through the Clausius-Clapeyron equation. With these quantities, the surface boundary conditions for the ocean are computed. The sensible (\(\Phi_{SQ}\)) and latent heat (\(\Phi_{LQ}\)) fluxes into the ocean are obtained from the bulk formulas:
\\[\\begin{split}\\Phi_{SQ}&=\\rho_{sa}c_{p}C_{H}\\left|V_{0} \\right|\\left(T_{sa}-SST\\right),\\\\ \\Phi_{LQ}&=\\rho_{sa}L_{H}C_{H}\\left|V_{0}\\right|min \\left[\\left(Q_{sa}-Q_{sa}^{sat}\\right),0\\right],\\end{split} \\tag{1}\\]
where \\(\\rho_{sa}\\) is the surface air density, \\(c_{p}\\) and \\(L_{H}\\) are the specific heat of air and the latent heat of evaporation, respectively, and \\(C_{H}\\) is a heat exchange coefficient, a model parameter depending on the stability properties of the boundary layer. The parameterisation of the radiative fluxes are more complex. For the short-wave (\\(\\Phi_{SW}\\)) and long-wave components (\\(\\Phi_{LW}\\)), two and four frequency bands are used, respectively. Transmittance is computed for each band separately, taking into account air density, water content and cloud cover. The total non-solar heat flux (\\(\\Phi_{Q}\\)) is just the sum of the different components:
\\[\\Phi_{Q}=\\Phi_{SQ}+\\Phi_{LQ}+\\Phi_{LW}. \\tag{2}\\]
Separate parameterisations are used for precipitation due to convection (\(\Phi_{Pcv}\)) and to large-scale condensation (\(\Phi_{Pls}\)). River runoff (\(\Phi_{R}\)) is provided by the land model. The net evaporation (\(\Phi_{E}\)) can then be computed as:
\\[\\Phi_{E}=\\Phi_{LQ}/L_{H}-\\Phi_{Pls}-\\Phi_{Pcv}-\\Phi_{R}. \\tag{3}\\]
The wind-stress vector is computed as:
\\[\\vec{\\Phi_{U}}=\\rho_{sa}C_{D}\\left|V_{0}\\right|\\vec{U}_{sa}, \\tag{4}\\]
where \\(C_{D}\\) is a drag coefficient.
## III Linear regressions
Our aim is to capture the changes in the atmospheric forcing connected with the changes in the ocean state, that is the atmospheric response to a collapse of the AMOC. As motivated in the introduction, we assume that these atmospheric feedbacks can be expressed as functions of \\(SST\\) alone. First, the feedbacks that keep the system in a statistical equilibrium state are always present, and are expressed in our case as a function of local \\(SST\\). They are extracted from a 200 years long statistical steady state run (CLIM) of SPEEDO. The departure from the steady state arises during an externally forced AMOC collapse, in association with the large-scale \\(SST\\) footprint of a AMOC decline. The feedbacks involved in the collapse are different from the ones acting at the steady state. To study the large-scale feedbacks, a 4000 year experiment was performed, starting from CLIM, with an additional 0.4 Sv freshwater flux centred around southern Greenland during the first 1000 years; this run is referred to as PULSE. In the first hundred years of the experiment, the AMOC collapses and a shallow reverse overturning cell is established in the Atlantic basin. Since in this paper the focus is only on the impact of a complete and steady collapse of the AMOC, we only show the results using the large freshwater anomaly mentioned, that guarantees that the AMOC is brought to a steady reversed state.
The maximum of the meridional overturning streamfunction during the first two hundred years of both PULSE and CLIM runs are shown in figure 1 (bottom panel). After the first 1000 years of the experiment, the additional freshwater pulse is released and the model tends to an equilibrium state with no sign of recovery of deep water formation in the northern north Atlantic after 3000 years (top panel of figure 1). Taken together, the feedbacks extracted from CLIM and PULSE runs provide the representation of the changes of the atmospheric fluxes during a collapse of the AMOC.
To provide the simplest description of the changes taking place at the ocean-atmosphere interface, the first order approximation is the addition of a linear perturbation term to the climatology of surface atmosphere-ocean fluxes. In particular, we consider a linear regression on \\(SST\\). This approach is clearly limited, but it is an approximation that gives a consistent representation of the large-scale feedbacks. The results can be successfully used as boundary conditions for the ocean-only model, as will be shown below.
To force the ocean model, we need five surface fluxes: non-solar heat flux (that includes long-wave radiation, latent and sensible heat fluxes), short-wave radiative heating, net evaporation, zonal and meridional wind-stresses. The incoming short-wave radiation is not regressed, and only its average seasonal cycle is retained, since its response to SST is completely mediated through a cloud cover response that is not well represented in the Speedy model [22].
Two linear models are used for regressing data from CLIM and PULSE. The CLIM data is fitted with:
\\[\\phi(i,j)-\\overline{\\phi(i,j)}=p_{1}(i,j)\\cdot\\left(SST(i,j)-\\overline{SST(i, j)}\\right), \\tag{5}\\]
where \\(\\phi\\in\\left\\{\\Phi_{Q},\\Phi_{E},\\vec{\\Phi_{U}}\\right\\}\\) is a particular surface flux field to be regressed, \\(p_{1}\\) is the model parameter field to be fitted, \\(i\\) (\\(j\\)) is the grid index in the east-west (north-south) direction and the overbar indicates a time average. Monthly data is used in the fit of CLIM data to represent the seasonal cycle. Note that this formulation is a _local_ regression, by which we mean a regression between quantities that belong to the same grid cell of the model.
The natural variability signal caught by regressions from equation (5) is removed from PULSE data. Only the first 100 years of PULSE are used, since we are interested in the response that can approximately be considered linear. The residual signal \\(\\phi_{r}(i,j)\\) can then be regressed with a second linear model:
\\[\\phi_{r}(i,j)=p_{2}(i,j)\\cdot\\left(\\left\\langle SST\\right\\rangle_{NH}-\\overline {\\left\\langle SST\\right\\rangle_{NH}}\\right), \\tag{6}\\]
where the symbol \\(\\left\\langle\\ \\ \\right\\rangle_{NH}\\) denotes the average over the NH. In this case the regressor is, for all grid cells, the yearly average SST in the NH, a good indicator of the state of the AMOC [9], as figure 1 suggests (bottom panel, dashed line). Yearly mean data is used for the fit of PULSE. It must be stressed that the last term of equation (6) is the average NH \\(SST\\) for the CLIM run, since we are interested in the deviation from the equilibrium state. Consequently, the intercept is set to zero, since the terms involving \\(p_{2}\\) need not to have an effect when the climate is in a neighbourhood of CLIM.
All the regressions are computed with the _lm_ (linear model) function provided in the R statistical software, version 2.8.0 [32]. The regressions are computed with a least-squares technique, and we require a statistical significance higher than the 95th percentile, discarding all fits with a _p-value_ (provided by _lm_ itself) higher than 0.05. This is equivalent to discarding a fit if the probability of obtaining the same result from random data is higher than 5%. When this occurs the fit is considered unsuccessful; only the climatological value of CLIM (\(\overline{\phi(i,j)}\) in equation (5)) is kept, and both \(p_{1}(i,j)\) and \(p_{2}(i,j)\) are set to zero. The output of the fitting procedure shows very weak sensitivity to the chosen significance level.
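The original fits were done with R's _lm_; an equivalent pointwise procedure can be sketched in Python with NumPy alone, replacing the p-value test by the asymptotically equivalent t-statistic threshold. The function name and array layout are assumptions:

```python
import numpy as np

def fit_pointwise(flux, sst, t_crit=1.96):
    """Least-squares slope p1 of Eq. (5) at every grid cell.

    flux, sst : arrays of shape (nt, ny, nx).  Slopes whose t-statistic
    falls below t_crit (about the 95% level for long records) are set to
    zero, mimicking the p-value filter applied to the original fits.
    """
    f = flux - flux.mean(axis=0)                   # flux anomalies
    s = sst - sst.mean(axis=0)                     # SST anomalies
    nt = f.shape[0]
    sxx = (s * s).sum(axis=0)
    slope = (s * f).sum(axis=0) / sxx
    resid = f - slope * s
    se = np.sqrt((resid ** 2).sum(axis=0) / (nt - 2) / sxx)   # slope std. error
    return np.where(np.abs(slope) >= t_crit * se, slope, 0.0)

# A cell where flux = 2*SST + noise should return a slope close to 2:
rng = np.random.default_rng(0)
sst = rng.standard_normal((200, 1, 1))
flux = 2.0 * sst + 0.1 * rng.standard_normal((200, 1, 1))
print(fit_pointwise(flux, sst)[0, 0])  # close to 2.0
```

Cells where flux and SST are unrelated fail the t-test and fall back to the pure climatology, exactly as in the procedure described above.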
The same regression procedure was applied also to the output of the uncorrected original SPEEDO model. The results obtained from the two models, with or without freshwater flux corrections, are consistent on both qualitative and quantitative grounds. A partial exception is the southern ocean and the Labrador sea, where the strength of the feedbacks is different. An analysis of these differences is beyond the scope of the present study, but may be associated with changes in sea-ice cover in the two models.
We now give the formulation of the boundary conditions for the ocean-only model to be forced by our \"climatology with feedbacks\". The surface heat flux into the ocean is computed as a combination of the regressions and a restoring term to the climatology:
\\[\\begin{split}\\Phi_{Q}(i,j)=&\\overline{\\Phi_{Q}(i,j)}+p _{1}^{\\Phi_{Q}}(i,j)\\cdot\\left(SST(i,j)-\\overline{SST(i,j)}\\right)\\\\ +& p_{2}^{\\Phi_{Q}}(i,j)\\cdot\\left(\\left\\langle SST \\right\\rangle_{NH}-\\overline{\\left\\langle SST\\right\\rangle_{NH}}\\right)\\\\ +&\\overline{\\Phi_{SW}(i,j)}\\\\ +&\\frac{\\rho_{sa}c_{p}\\left|\\overline{V_{0}(i,j)} \\right|}{\\tau}\\cdot\\left(\\overline{SST}(i,j)-SST(i,j)\\right),\\end{split} \\tag{7}\\]
where \\(p_{1}^{\\Phi_{Q}}\\) and \\(p_{2}^{\\Phi_{Q}}\\) are the local and large-scale regression parameters for the heat flux, \\(\\rho_{sa}\\) and \\(\\overline{V_{0}(i,j)}\\) are fixed climatological values and the relaxation time \\(\\tau\\) is chosen to be 55 days for the ocean, consistently with the bulk formula of the coupled model of equation (1).
The net evaporation flux is computed in three steps. First, the deviations from the climatological values, \\(\\delta\\Phi_{E}\\), are computed at each grid cell:
\\[\\begin{split}\\delta\\Phi_{E}(i,j)=& p_{1}^{\\Phi_{E}} (i,j)\\cdot\\left(SST(i,j)-\\overline{SST(i,j)}\\right)\\\\ +& p_{2}^{\\Phi_{E}}(i,j)\\cdot\\left(\\left\\langle SST \\right\\rangle_{NH}-\\overline{\\left\\langle SST\\right\\rangle_{NH}}\\right),\\end{split} \\tag{8}\\]
where \\(p_{1}^{\\Phi_{E}}\\) and \\(p_{2}^{\\Phi_{E}}\\) are the regression parameters for the net evaporation flux. Then, the global integral of the deviations, \\(\\Delta\\Phi_{E}\\), is computed on the model grid and the budget imbalance is set to zero. The total freshwater flux reads then:
\\[\\Phi_{E}(i,j)=\\overline{\\Phi_{E}(i,j)}+\\delta\\Phi_{E}(i,j)-\\Delta\\Phi_{E}/\\Sigma, \\tag{9}\\]
where \\(\\Sigma\\) is the ocean surface area.
For the wind-stress vector, only the output of the regressions is used:
\\[\\begin{split}\\vec{\\Phi_{U}}(i,j)&=\\overline{\\vec{ \\Phi_{U}}(i,j)}+\\vec{p_{1}^{\\vec{\\Phi_{U}}}}(i,j)\\cdot\\left(SST(i,j)-\\overline {SST(i,j)}\\right)\\\\ &+\\vec{p_{2}^{\\vec{\\Phi_{U}}}}(i,j)\\cdot\\left(\\left\\langle SST \\right\\rangle_{NH}-\\overline{\\left\\langle SST\\right\\rangle_{NH}}\\right),\\end{split} \\tag{10}\\]
where \\(\\vec{p_{1}^{\\vec{\\Phi_{U}}}}(i,j)\\) and \\(\\vec{p_{2}^{\\vec{\\Phi_{U}}}}(i,j)\\) are the vectors of the regression parameters for local and large-scale regressions respectively, for the two components of the wind-stress. Over sea-ice, a fixed climatology of air-ice fluxes is used. When sea-ice is present, weighting is applied by the model to the surface fluxes multiplying by the fractional ocean area \\((1-\\varepsilon(i,j))\\), where \\(\\varepsilon(i,j)\\) is the fractional sea-ice cover of the cell.
The technique described returns the rate of change of each field with \(SST\) or \(\left\langle SST\right\rangle_{NH}\) only in those areas where a linear regression is statistically significant. Furthermore, setting the regression parameters to zero still leaves a constant climatology that can be used as boundary condition for the ocean model. We thus have complete control over which feedbacks act at the ocean-atmosphere interface, and we can selectively investigate their individual or collective effects.
## IV Results
### Local regressions
The fitting procedure is generally successful; the results of the regressions on CLIM data are reported in figures 2 and 3.
In figure 2, the average value of the regressed fields is reported (\(\overline{\phi(i,j)}\) in equation (5)). The total heat flux shown in figure 2 includes short-wave radiation, and the net evaporation includes the river runoff. The values of the regression parameter \(p_{1}\) are shown in figure 3 for all the regressed fields. In both figures 2 and 3 the values are weighted by the fractional free-ocean surface of the cell to compensate for the effects of the average sea-ice cover. The effects of changes in sea-ice cover are not included in the regressions, as sea-ice is taken into account by the CLIO model. As discussed below, the changes in sea-ice can strongly modify the feedbacks (compare figures 3 and 6).
For all the regressed fields, the contribution to the fluxes of the local regression terms can be important compared to the average value, in particular at the western boundaries and outside the equatorial and polar regions. This is clear when we consider the \(SST\) variability on a daily basis; the square root of the variance is well above \(1^{\circ}\)C everywhere in the subtropical and subpolar ocean, with peak values of about \(7^{\circ}\)C close to the NH western boundaries (not shown).
The linear regressions only capture part of the natural variability of CLIM fluxes, but the error is generally lower than \\(10\\%\\) of the original field over a major part of the ocean (not shown).
Apart from the standard damping on \\(SST\\) that also operates in ocean-only models driven by a prescribed atmosphere, the atmospheric control over the atmosphere-ocean heat flux counteracts this damping in many regions, in particular in the tropics and at high latitudes (positive values in figure 3 a). This means that the linear feedback for the heat flux is not damping the \\(SST\\) anomalies. Relevant exceptions are the equatorial ocean, the central north Atlantic, the northern portion of the Southern Ocean and other smaller areas. It should be noted that in the polar areas, the sea-ice cover determines the effective feedback in the heat flux, and often changes the sign of the feedback. The exact mechanism of this feedback is discussed in more detail in section IV.2.
To investigate the origin of the pattern of the local heat feedback outside the polar regions, the same regression procedure was applied to each component of the heat flux separately, namely sensible and latent heat fluxes and long-wave radiation (not shown). The change in the latent heat release is the most important component of the heat flux change. The feedback of sensible heat flux is slightly weaker in magnitude, and is positive with the only relevant exceptions of the North Atlantic and the equatorial ocean. The long-wave radiation feedback follows the same pattern, and is the weakest term. As first noted in [33], the sign of the heat flux feedback from equation (1) depends to first order only on the relative change of \\(T_{sa}\\) and \\(SST\\), if the wind is assumed constant. A positive feedback is possible only if the change in \\(T_{sa}\\) is larger than the one in \\(SST\\). This is almost always true in our model in the areas where the heat feedback is positive, as we find when \\(T_{sa}\\) is regressed on \\(SST\\) (not shown).
A plausible explanation of this positive heat feedback, at least at low and mid latitudes, is given by the convection-evaporation feedback mechanism proposed by [21]. There is a strong resemblance between the patterns of increased convective precipitation and those of weaker latent heat loss at higher \(SST\). This suggests that, in the tropical and subtropical areas where a positive heat flux feedback is observed, a positive \(SST\) anomaly is associated with anomalous convergence of wet air that both contributes to the reduction of evaporation [43] and enhances precipitation if convection is triggered. Regression of surface pressure on \(SST\) also supports this hypothesis, since higher \(SST\)s correlate with lower surface pressure in the tropical and subtropical areas. Regarding net evaporation (figure 3 b), a weak increase is observed at higher \(SST\) over most of the ocean. By contrast, in most of the tropical areas the increase in convective events leading to stronger precipitation dominates the freshwater feedback (essentially the blue areas of figure 3 b), as discussed above.
In the case of wind-stress, a decreased magnitude is observed in connection with higher \(SST\) (compare figure 3 c and d with the mean fields of figure 2). The term \(|V_{0}|\) of equation (1) is regressed on the local \(SST\), confirming that over most of the ocean at low and mid latitudes lower-than-average winds are observed in association with higher-than-average \(SST\)s (not shown), implying lower heat transfer through the interface. The correlation decreases moving poleward, and the mechanism involved is essentially the wind-evaporation feedback [19], which connects higher evaporation (lower \(SST\)) with stronger winds. The fact that we do not observe stronger winds where an increase of convective precipitation is found is not surprising, since the parameterisation of convection does not affect the horizontal wind field [23]. A positive correlation between wind speed and \(SST\) is observed only in the western part of the subtropical gyre of the Southern Hemisphere (SH) of the Atlantic ocean, south of Greenland and in the Labrador sea, in the northeastern part of the subpolar gyre of the Pacific ocean, and in some other smaller regions. Even though the negative wind feedback is thought to be dominant, some evidence for a positive feedback has been found for the Kuroshio extension area, in the northeastern Pacific [34; 35]. The best-known wind-\(SST\) feedback mechanism in which the wind response to \(SST\) anomalies is central is the Bjerknes feedback in the equatorial Pacific, in connection with ENSO [13]. The fundamental coupled variability of the equatorial ocean-atmosphere system is that of a decrease of the western Pacific trade winds in response to a positive anomaly of \(SST\) in the eastern equatorial Pacific.
Even though the model has too low resolution to exhibit a realistic ENSO [22], a weakening of the trade winds in the western and central equatorial ocean is captured by the linear regressions (figure 3 c) and is consistent with the anomaly patterns connected with ENSO [36]. The stronger convective precipitation detected in the western Pacific at higher \(SST\)s may be a sign of anomalous convergence of the low-level atmospheric circulation, again in agreement with [36]. The origin of the dipole structure of the meridional wind feedback between NH and SH (figure 3 d) is basically a reflection of the weaker dominant winds at higher \(SST\).
### Large-scale regressions
Moving to the results of the _large-scale_ regressions, two points must be kept in mind when interpreting them: the fit is performed only on the residuals of the _local_ regressions, not on the full PULSE data, and the regressor, the NH average \(SST\), is a decreasing quantity.
The collapse of the AMOC causes a decrease in the NH average \(SST\) of about \(1.2^{\circ}\)C. A weaker change of opposite sign is observed over the Southern Ocean (approximately \(0.4^{\circ}\)C). This NH-SH temperature dipole is a robust feature of different models, and is connected with lower northward heat transport in the Atlantic ocean, as already found in [9]. The changes in the heat flux are mainly captured by the large-scale regression parameter alone. This can be inferred by comparing the large-scale heat flux parameter and the diagnosed changes in the flux from the coupled model, and is connected with the larger magnitude of the large-scale parameter. The main response of the heat flux after the overturning collapse, not considering changes in the sea-ice cover (figure 4 a), would be that of an increased heat extraction from the ocean in the NH (\(9.9\ W/(m^{2}\ ^{\circ}\)C) on average). When the effect of a changing sea-ice cover is included in the computation of the heat feedback (figure 6 b), its sign changes in the high latitudes of the NH (\(-9.6\ W/(m^{2}\ ^{\circ}\)C) on average in the NH), which means that the heat released to the atmosphere decreases. This result is in contrast with what the regression parameter \(p_{2}\) suggests, but consistent with the sign of the effective regression parameter. The difference is explained below. The net heat flux, weighted by the ice-free area \((1-\varepsilon)\), can be written as:
\\[\\phi_{Q}=(1-\\varepsilon)(\\overline{\\phi_{Q}}+\\partial\\phi_{Q}/\\partial SST+ \\partial\\phi_{Q}/\\partial\\left\\langle SST\\right\\rangle_{NH}). \\tag{11}\\]
\\(p_{2}^{\\phi_{Q}}\\) is simply \\(\\partial\\phi_{Q}/\\partial\\left\\langle SST\\right\\rangle_{NH}\\) while the effective parameter is:
\\[\\begin{split} p_{2,eff}^{\\phi_{Q}}&=\\partial(\\phi _{Q}\\cdot(1-\\varepsilon))/\\partial\\left\\langle SST\\right\\rangle_{NH}\\\\ &=(1-\\varepsilon)\\partial\\phi_{Q}/\\partial\\left\\langle SST \\right\\rangle_{NH}-\\phi_{Q}\\partial\\varepsilon/\\partial\\left\\langle SST \\right\\rangle_{NH}\\\\ &=(1-\\varepsilon)p_{2}^{\\phi_{Q}}-\\phi_{Q}\\partial\\varepsilon/ \\partial\\left\\langle SST\\right\\rangle_{NH}.\\end{split} \\tag{12}\\]
The second term on the right hand side of equation 12 describes the changes in sea-ice cover in response to \(SST\) changes. This term is larger than the first term over most of the northern North Atlantic. Sea-ice cover changes determine the sign change in the large-scale heat feedback term. A similar reasoning holds for the local feedback. In general, the NH-SH heat flux dipole seen in figure 4 a is driven by the decrease of NH near-surface temperature, which follows a pattern similar to that of \(SST\) (figure 5), but with stronger sensitivity to AMOC changes everywhere except for the southern mid latitudes. This amplification of the \(SST\) signal, in particular in the atmosphere of the high latitudes of the NH, is a consequence of the appearance of sea-ice during winter. Without sea-ice changes, these differential variations in \(SST\) and atmospheric temperature would tend to produce an increased upward heat flux in the NH (figure 4 a). This increased heat loss is more than counteracted by the decrease in open ocean area; the increased ice cover effectively drives the cooling of atmospheric temperatures above the North Atlantic. This can be seen from the changes in the heat flux diagnosed from the coupled model including the insulating effect of sea-ice (figure 6 c), and is confirmed by the _large-scale_ regression parameter computed including the effect of sea-ice (figure 6 b). This "effective" regression parameter is the result of the same fitting procedure, applied in this case to the surface heat flux weighted by the actual sea-ice cover and not to the complete heat flux. The results for the local (large-scale) regression are those shown in figure 6 a (b). As a consequence, this regression parameter gives a better representation of the feedbacks that the ocean effectively senses (including the effect of sea-ice). Note that the HCM only uses \(p_{1}\) and \(p_{2}\), and not the effective response coefficients.
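A small numerical illustration of equation (12); the numbers below are invented (only the bare parameter echoes the order of magnitude quoted above) and show how the sea-ice term can flip the sign of the effective feedback.

```python
def effective_p2(p2, phi_q, eps, deps_dsst):
    """Effective large-scale heat-flux feedback, equation (12): the bare
    parameter weighted by the open-ocean fraction, minus the sea-ice
    response term phi_Q * d(eps)/d<SST>_NH."""
    return (1.0 - eps) * p2 - phi_q * deps_dsst

# illustrative (not model-diagnosed) numbers for a subpolar cell
p2 = 9.9           # W/(m^2 degC): bare large-scale regression parameter
phi_q = -100.0     # W/m^2: mean heat loss (flux into ocean is negative)
eps = 0.3          # fractional sea-ice cover
deps_dsst = -0.2   # per degC: ice cover shrinks as <SST>_NH rises
p2_eff = effective_p2(p2, phi_q, eps, deps_dsst)
# the ice term dominates and flips the sign of the feedback
```

With these values \(p_{2,eff}=0.7\cdot 9.9-(-100)\cdot(-0.2)\approx-13\ W/(m^{2}\ ^{\circ}\mathrm{C})\): the insulating growth of sea-ice overwhelms the bare atmospheric response, as found over the northern North Atlantic.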
The changes in sea-ice cover result from explicitly resolved ice dynamics and thermodynamics.
At low and mid latitudes in the NH the changes are due to reduced evaporation in response to lower \(SST\) and, at low latitudes, to lower wind speed. The changes in the surface long-wave radiation budget are smaller in magnitude, and amount to an increased net emission of long-wave radiation almost everywhere in the NH except for the GIN seas. This effect has been observed in other model experiments and is connected with the reduced downward long-wave radiation flux overcompensating the decreased black body emission at lower \(SST\)s [37]. The decrease in the downward long-wave flux is an effect of a drier atmosphere, and partly balances the reduced latent heat flux. These changes in heat flux amount to a positive feedback on an AMOC anomaly when the effect of sea-ice is included, favouring a decrease of the surface density in the deep water formation areas of the North Atlantic in connection with weaker overturning circulation.
The patterns of the net evaporation change (figure 4 b) are consistent with the findings of [9] (their figure 9 e, with opposite sign). The AMOC collapse causes a reduction of net evaporation over the tropical and subtropical NH and over the tropical SH, due to lower \(SST\)s (figure 5). In the few areas where an increase in evaporation is observed (basically the north equatorial oceans), this is due to stronger winds. At low latitudes, a significant change of the precipitation patterns also plays a role, with a dipole pattern centred around the equator, and positive to the south. This southward shift of the intertropical convergence zone (ITCZ) produces the strongest precipitation increase over the Amazon river basin. This response of the Hadley cell is connected with the southward shift of the latitude of maximum heating, and has been observed consistently in different climate models [9; 10; 37] and in an idealised framework too [38]. A similar, though weaker, pattern of precipitation change is observed in the Pacific and Indian oceans. The increased precipitation over the entire southern Atlantic more than compensates for the increased evaporation due to higher \(SST\) in this part of the basin. A slowdown of the hydrological cycle over Europe is detected as two negative peaks off the coast of France and in the North sea. On a global scale, the regressions of PULSE residuals determine an evaporation increase of 0.13 \(mm/(day~{}^{\circ}\)C). Therefore, our linear approach does not conserve the ocean water mass and needs a budget closure correction when used as boundary condition for the ocean, as implemented in equation (9).
In the case of wind-stress, the response of the atmosphere is somewhat less straightforward to understand, and it deserves a longer discussion. Regarding the meridional wind-stress, the changes in the low and mid latitudes are driven by the response of the zonally averaged temperature profile to the AMOC collapse. The equator-to-pole temperature difference increases by approximately 4\({}^{\circ}\)C in the NH. In the SH, the opposite is true, with a smaller change. These changes are clearly mirrored in the zonally averaged wind-stress. Stronger southward wind blows on the ocean with a collapsed AMOC in the NH up to 50\({}^{\circ}\)N. The situation is similar in the SH, but with a weaker circulation down to 40\({}^{\circ}\)S, following the opposite change in the zonally averaged temperature. The zonal winds over the Southern Ocean are also reduced. A more peculiar feature is observed in the north Atlantic. A pressure anomaly dipole between Greenland and the northeastern Atlantic develops, with positive sign to the east, in connection with the differential cooling between these two regions (stronger cooling over the eastern Atlantic). This in turn determines an anomalous anticyclonic circulation centred north of Scotland, with impacts on both the meridional and zonal wind-stress. Referring to our regressions, the changes due to the AMOC collapse in the tropical regions are already caught by the local regression parameter (\(p_{1}\), figure 3 c and d). This can be understood considering that the change in \(SST\) due to the AMOC collapse (figure 5) is a dipole centred at the latitude of the southern tropic (at the equator in the Atlantic ocean) and positive to the south of it, with an amplitude of a few degrees. In fact, the changes due to the overturning collapse are overestimated by the _local_ regressions, and \(p_{2}\) (figure 4 c and d) amounts to a correction opposite to \(p_{1}\).
The positive values of \\(p_{2}\\) for meridional wind-stress in the intertropical regions (figure 4 d) signal the southward shift of the ITCZ, that is an anomalous southward wind with decreasing NH average \\(SST\\), not represented by the local regressions. Also the anomalous anticyclonic circulation is reproduced in the large-scale regressions by the dipoles over northeastern Atlantic (positive to the south and to the east). The impact on AMOC stability of wind-stress feedbacks has been investigated in the recent paper by [11], where a simple zonally averaged atmospheric model was used. Even though it is quite difficult to compare their results with the results from a GCM like SPEEDO, the general picture is similar. The atmospheric circulation in the NH is strengthened, while the opposite is true for the SH. The magnitude of the changes in SPEEDO is close to their lowest estimates.
## V HCM Test
The HCM consists of the ocean component of SPEEDO (i.e., CLIO) and the dynamic boundary conditions described in the previous section. It was tested by comparing its results with the original SPEEDO model. The first experiment (regCLIM) starts from the end state of the ocean of the CLIM run. The model is forced only by the local regressions (values of \\(p_{2}\\) set to zero) for 3000 years. Next, all the large-scale regressions are also switched on, and the model runs for 2000 years more.
Results of the regCLIM run are shown in figure 7. In the top panel, the deviation from the CLIM mean value of the global average sea temperature (salinity) is reported in black (red). The area shaded in grey on the left margin of figure 7 marks the (200-year) data of the CLIM run. The light blue area marks the first 3000 years of the regCLIM run of the ocean-only model, with only local regressions active. To estimate the theoretical equilibrium state of the model, we fit the global average sea temperature and salinity from years 1201-5200 of regCLIM with the function:
\\[f(t)=a_{1}{\\rm sin}\\left(\\frac{t+a_{2}}{a_{3}}\\right){\\rm exp}\\left[-\\frac{t+ a_{2}}{a_{4}}\\right]+B, \\tag{13}\\]
where \\(t\\) is time, \\(a_{1},\\ldots,a_{4}\\) are the fit parameters, and \\(B\\) is a constant background that represents the state of the system at infinite time. The theoretical equilibrium state computed from this procedure is \\(0.31^{\\circ}\\)C colder and \\(7.2\\cdot 10^{-4}psu\\) fresher than the coupled CLIMrun. Little drift, but a substantial reduction of the variability due to the restoring term, is observed in the global average \\(SST\\) (figure 7, black line in the bottom panel). The NH average \\(SST\\) increases by \\(0.18^{\\circ}\\)C (difference between last 200 years of regCLIM and CLIM). The maximum of the AMOC is, at the end of regCLIM, approximately 1 Sv weaker than in the CLIM run (bottom panel of figure 7, in red). The AMOC, as the left bottom panel of figure 8 shows, is weaker and approximately 500m shallower in the HCM. The freshwater transport by the AMOC at \\(30^{\\circ}\\)S in the last 200 years of regCLIM (grey shaded area on the right of figure 7) is \\(M_{ov}=-0.06\\:\\mathrm{Sv}\\). To keep \\(M_{ov}<0\\), the freshwater corrections described in section II are 50% stronger than in the fully coupled model.
To investigate the origin of the changes in the AMOC strength, we diagnose the surface fluxes of density for the CLIM and regCLIM runs. The surface density flux \\(\\Phi_{\\rho}\\) can be estimated using the formula [39; 40]:
\\[\\Phi_{\\rho}=-\\frac{\\alpha}{c_{p}}\\Phi_{H}+\\rho_{0}\\beta\\frac{\\Phi_{E}\\cdot SSS }{1-SSS\\cdot 10^{-3}}, \\tag{14}\\]
where \\(\\alpha=-1/\\rho_{0}\\left(\\partial\\rho/\\partial T\\right)\\), \\(\\beta=1/\\rho_{0}\\left(\\partial\\rho/\\partial S\\right)\\), \\(\\Phi_{H}\\) is the total surface heat flux into the ocean (\\(\\Phi_{H}=\\Phi_{Q}+\\Phi_{SW}\\)), \\(\\rho_{0}\\) is the reference water density, \\(SSS\\) is the surface salinity measured in ppt. The density flux into the ocean is shown in figure 9 in units of \\(10^{-6}\\cdot kg/(m^{2}\\ s)\\) for the CLIM run (top panel). The effect of sea-ice cover is taken into account in the computation of the density flux, and the model grid (distorted over north Atlantic and Arctic) is used to avoid interpolation errors. The difference between the fluxes from the regressions in the last 200 years of regCLIM and CLIM is reported in the bottom panel of figure 9. Even if the changes are generally small (note the different colour scales in the figure), when the difference is averaged over the GIN seas and the Arctic Mediterranean (taking as southern boundaries the Bering strait and the latitude of the southern tip of Greenland), we find that the density flux decreases by \\(2\\cdot 10^{-8}kg/(m^{2}\\ s)\\). This value represents a 10% decrease of the average density flux over the same area, that nicely fits the relative change in maximum overturning strength.
The definition of the HCM, which does not include any high-frequency stochastic component, causes a strong reduction of variability, but the low-frequency variability of the system seems to be preserved. To show this, a multi-taper method (MTM) analysis [41] was performed on the time series of the maximum of the overturning streamfunction of the Atlantic. The analysis is performed on the yearly data of CLIM (a longer control run, 1000 years long, is used) and the last 1000 years of regCLIM (figure 10). At the lower end of the spectrum, energy is concentrated at similar frequencies in the two models, below approximately 0.02 year\({}^{-1}\). At higher frequencies, instead, the broad peaks found in the HCM between 0.02 year\({}^{-1}\) and 0.09 year\({}^{-1}\) are not present in the original coupled model, while the peaks found above 0.1 year\({}^{-1}\) in CLIM are lost in the HCM. Also the first empirical orthogonal function of \(SST\) computed from the HCM resembles the one from CLIM only in the northwestern Atlantic. This approach is thus limited when the internal variability of the ocean is of interest, but in the present work the focus is only on the quasi-equilibrium response. Atmospheric noise and lagged correlations are probably needed to better represent and excite the modes of variability of the system.
As a final test, a pulse experiment was performed in the HCM. In this test, referred to as regPULSE, we apply the same freshwater anomaly as in PULSE (see section III), increased by 50% like the corrections already applied in regCLIM. The initial conditions for regPULSE are provided by the final state of regCLIM: year 5200 of figure 7. In regPULSE, as in PULSE, the anomaly is applied for 1000 years, letting the model reach a new equilibrium afterwards. We focus our analysis on the response of the system during the first hundred years of the run, where the regressions are expected to be significant.
The AMOC maximum for regPULSE is reported in figure 11 as a dashed line. The response of the AMOC in regPULSE, when measured by this quantity, follows closely the one in PULSE. The only substantial differences are its lower initial condition and the weaker variability of the regPULSE signal. The weaker variability is no surprise, considering that our regressions do not add any high-frequency variability to the system, depending only on \(SST\).
Looking at the entire overturning streamfunction of the Atlantic, the results are also encouraging. On the right hand side of figure 8, the overturning of the collapsed state established after the first 100 years of the pulse experiment is compared between PULSE and regPULSE. In the top right panel of figure 8, the streamfunction of years 101-110 of the PULSE run is shown as a reference. The difference between regPULSE and PULSE during the same years is reported below. The results of the HCM are in good agreement with those of SPEEDO, showing a reversed cell only slightly weaker than in PULSE. The largest differences are at the southern border of the Atlantic basin, likely in connection with the general underestimation of the density flux over the southern ocean (figure 9). Regarding the barotropic streamfunction during the pulse experiments, the only significant differences are found in the southern ocean (not shown). Over the Pacific sector of the Southern ocean, the underestimation of the barotropic streamfunction represents about 20% of the transport predicted by PULSE. This discrepancy is probably connected with an overestimation of the decrease of the southern westerly winds in the regressed forcing in response to the collapse of the AMOC.
## VI Summary and conclusions
In this paper we described a new technique for developing a global HCM that includes a basic representation of the feedbacks due to the ocean-atmosphere interaction, relevant for the stability of the AMOC. The steady-state feedbacks of the system were represented through linear regression terms depending on the local deviation of \(SST\) from its mean value. The large-scale response of the atmosphere to an externally forced AMOC collapse is included through a regression on the NH average \(SST\).
The results of the regressions give a quantitative representation of the changes in the surface fluxes that is consistent with other model experiments [9; 10; 37]. In particular, we can detect the changes in heat flux at the surface due to the cooling of the NH after an AMOC collapse. Significant changes are observed also in the freshwater flux, in connection with the response of the general circulation in the atmosphere to the changes in the equator-to-pole temperature profile, which determines the response of the winds as well. The boundary conditions computed in section III were then successfully used as a dynamic forcing for an ocean-only model.
This ocean forced by a "minimal atmospheric model" reduces the computation time by a factor of ten to twenty with respect to the original coupled model. The ocean model forced by the regressions, which together form the HCM, reaches a steady state close to that of the original coupled model. Furthermore, an experiment is performed where the AMOC is collapsed in both the fully coupled model and in the ocean forced by the regressions. The two results are in good agreement. This enables us to use the HCM to investigate the impact of the atmospheric feedbacks on the stability of the AMOC. In particular, the formulation of the forcing shown in section III enables us to selectively choose which fluxes are fixed to a climatological value, and which ones are computed dynamically as a function of \(SST\). We can thus investigate the impact of each feedback separately on quantitative grounds, and aim at a deeper understanding of the main physical processes involved in the collapse and recovery of the AMOC. It is also important to analyse the response of the HCM to weaker freshwater anomalies. Reducing the anomaly that forces the AMOC collapse, the atmospheric feedbacks are likely to play an increasingly dominant role.
The model can obviously be extended in many ways. Using higher order (nonlinear) models in the data fit is unlikely to be worth the effort. The study of the role of atmospheric noise and of correlations lagged in space and time, and their inclusion in the HCM, may instead greatly improve the representation of atmosphere-ocean interaction with respect to the variability of the AMOC.
As a final remark, we want to stress that our technique to design the HCM is general. We do not rely on any ad-hoc assumption connected with the nature of the EMIC that was used for this work. For this reason, this technique is potentially interesting for many other problems (apart from the stability of the AMOC) where a computationally efficient, simple representation of the ocean-atmosphere interaction is desired. For instance, instead of using data from the atmospheric component of SPEEDO, the ocean component could be coupled to a statistical atmosphere derived from a state-of-the-art coupled climate model or from reanalysis data, at least for the computation of local regressions.
###### Acknowledgements.
This work is funded by the Netherlands Organisation for Scientific Research. We acknowledge Camiel Severijns (KNMI) for his precious technical support, and Matthijs den Toom (IMAU) for the stimulating discussions.
## References
* (1) H. Stommel, Tellus **13**, 224 (1961).
* (2) W. S. Broecker, Science **278**, 1582 (1997).
* (3) P. U. Clark, N. G. Pisias, T. F. Stocker, and A. J. Weaver, Nature **415**, 863 (2002).
* (4) R. B. Alley, J. Marotzke, W. D. Nordhaus, J. T. Overpeck, D. M. Peteet, R. a. Pielke, R. T. Pierrehumbert, P. B. Rhines, T. F. Stocker, L. D. Talley, et al., Science **299**, 2005 (2003).
* (5) S. Rahmstorf, M. Crucifix, A. Ganopolski, H. Goosse, I. Kamenkovich, R. Knutti, G. Lohmann, R. Marsh, L. A. Mysak, Z. Wang, et al., Geophysical Research Letters **32**, L23605 (2005).
* (6) J. Yin, M. E. Schlesinger, N. G. Andronova, S. Malyshev, and B. Li, Journal of Geophysical Research **111**, D12104 (2006).
* (7) A. Schmittner, M. Latif, and B. Schneider, Geophysical Research Letters **32**, L23710 (2005).
* (8) M. Vellinga, R. A. Wood, and J. M. Gregory, Journal of Climate **15**, 764 (2002).
* (9) R. J. Stouffer, J. Yin, J. M. Gregory, K. W. Dixon, M. J. Spelman, W. Hurlin, A. J. Weaver, M. Eby, G. M. Flato, H. Hasumi, et al., Journal of Climate **19**, 1365 (2006).
* (10) U. Krebs and A. Timmermann, Journal of Climate **20**, 4940 (2007).
* (11) O. Arzel, M. H. England, and O. A. Saenko, Journal of Climate **in press** (2010).
* (12) M. Nakamura, P. H. Stone, and J. Marotzke, Journal of Climate **7**, 1870 (1994).
* (13) M. A. Cane, Annual Review of Earth and Planetary Sciences **14**, 43 (1986).
* (14) T. P. Barnett, M. Latif, N. Graham, M. Flugel, S. Pazan, and W. White, Journal of Climate **6**, 1545 (1993).
* (15) M. Latif, D. Anderson, T. Barnett, M. Cane, R. Kleeman, A. Leetmaa, J. O'Brien, A. Rosati, and E. Schneider, Journal of Geophysical Research **103**, 14375 (1998).
* (16) H.-H. Syu, J. D. Neelin, and D. Gutzler, Journal of Climate **8**, 2121 (1995).
* (17) G. Burgers and G. J. van Oldenborgh, Journal of Climate **16**, 2396 (2003).
* (18) J.-S. von Storch, J. P. Montavez, and B. Saojini Beena, Meteorologische Zeitschrift **14**, 755 (2005).
* (19) J. D. Neelin, I. M. Held, and K. H. Cook, Journal of the Atmospheric Sciences **44**, 2341 (1987).
* (20) N. E. Graham and T. P. Barnett, Science **238**, 657 (1987).
* (21) G. J. Zhang, V. Ramanathan, and M. J. McPhaden, Journal of Climate **8**, 3040 (1995).
* (22) C. A. Severijns and W. Hazeleger, Geoscientific Model Development Discussions **2**, 1115 (2009).
* (23) F. Molteni, Climate Dynamics **20**, 175 (2003).
* (24) F. Kucharski and F. Molteni, Climate Dynamics **21**, 677 (2003).
* (25) A. Bracco, F. Kucharski, R. Kallummal, and F. Molteni, Climate Dynamics **23**, 659 (2004).
* (26) W. Hazeleger and R. J. Haarsma, Climate Dynamics **25**, 387 (2005).
* (27) W.-P. Breugem, W. Hazeleger, and R. J. Haarsma, Journal of Climate **20**, 2691 (2007).
* (28) H. Goosse and T. Fichefet, Journal of Geophysical Research **104**, 23337 (1999).
* (29) P. de Vries and S. L. Weber, Geophysical Research Letters **32**, L09606 (2005).
* (30) S. E. Huisman, M. den Toom, H. A. Dijkstra, and S. Drijfhout, Journal of Physical Oceanography **40**, 551 (2010).
* (31) S. Rahmstorf, Climate Dynamics **12**, 799 (1996).
* (32) The R Development Core Team, _R: A Language and Environment for Statistical Computing_ (R Foundation For Statistical Computing, Vienna, Austria, 2009).
* (33) C. Frankignoul, Reviews of Geophysics **23**, 357 (1985).
* (34) J. M. Wallace, C. Smith, and Q. Jiang, Journal of Climate **3**, 990 (1990).
* (35) M. Nonaka and S.-P. Xie, Journal of Climate **16**, 1404 (2003).
* (36) C. Deser and J. M. Wallace, Journal of Climate **3**, 1254 (1990).
* (37) A. Laurian, S. S. Drijfhout, W. Hazeleger, and R. van Dorland, Geophysical Research Letters **36**, L20708 (2009).
* (38) S. S. Drijfhout, Journal of Climate **23**, 757 (2010).
* (39) S. K. Gulev, B. Barnier, H. Knochel, J.-M. Molines, and M. Cottet, Journal of Climate **16**, 3085 (2003).
* (40) E. Tziperman, Journal of Physical Oceanography **16**, 680 (1986).
* (41) M. Ghil, M. R. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, M. E. Mann, A. W. Robertson, A. Saunders, Y. Tian, F. Varadi, et al., Reviews of Geophysics **40**, 1 (2002).
* (42) The model used in [29] shares the same ocean model component as SPEEDO, but uses ECBilt as the atmospheric model instead of Speedy. In their setup, the basin integrated net evaporation of the Atlantic ocean is underestimated, while the zonal salinity contrast in the southern Atlantic is overestimated. Therefore, their correction has a sign opposite to that here.
* (43) The reduction of evaporation is mainly due to weaker winds.
\\begin{table}
\\begin{tabular}{l|l l l} Name & Model & Freshwater & Total \\\\ & & anomaly & length \\\\ & & & (years) \\\\ \\hline CLIM & EMIC & None & 200 \\\\ PULSE & EMIC & 0.4 \\(Sv\\) & 4000 \\\\ regCLIM & HCM & None & 5000 \\\\ regPULSE & HCM & 0.6 \\(Sv\\) & 5000 \\\\ \\end{tabular}
\\end{table}
Table 1: List of the model runs described in the text.
Figure 1: Top panel: Maximum AMOC for the entire PULSE experiment, in Sv. Bottom panel: Maximum AMOC for CLIM (red full line) and first 200 years of PULSE (black full line) in Sv (left \\(y\\)–axis). NH average \\(SST\\) in \\({}^{\\circ}\\)C (right \\(y\\)–axis) for PULSE (dashed black line).
Figure 2: Average value of the regressed fields from CLIM data (\\(\\overline{\\phi(i,j)}\\) in equation (5)), weighted by the fractional ocean area \\(\\overline{(1-\\varepsilon(i,j))}\\). a) Total heat flux in \\(W/m^{2}\\), positive downwards. b) Net evaporation in \\(mm/day\\). c) and d) are the zonal and meridional components of wind–stress respectively, in \\(10^{-3}\\cdot N/m^{2}\\).
Figure 3: As in figure 2, but for the local regression parameter \(p_{1}\). The units are the same as in figure 2, divided by \({}^{\circ}\)C. In panel a, only the non–solar heat flux is considered.
Figure 4: As in figure 2, but for the large–scale regression parameter \(p_{2}\). The units are the same as in figure 2, divided by \({}^{\circ}\)C. In panel a, only the non–solar heat flux is considered. In panel b, the signal of the freshwater pulse has been removed from the source data.
Figure 5: Difference in \\(SST\\) (\\({}^{\\circ}\\)C) between the years 91–100 of PULSE experiment and the mean state of CLIM.
Figure 6: Effective regression parameters for the heat flux (short wave radiation excluded), computed including the effect of changes in sea–ice (\(p_{1}\) in panel a and \(p_{2}\) in panel b; see text). The change in the heat flux directly diagnosed from the coupled model is shown in panel c, computed as the difference in ice–weighted heat flux between years 91–100 and 1–10 of PULSE. Note that a different color scale is used in the top panel.
Figure 7: Deviation from CLIM average in regCLIM of four quantities: global average sea temperature (top panel, black, left \\(y\\)–axis), global average salinity (top panel, red, right \\(y\\)–axis), global average \\(SST\\) (bottom panel, black, left \\(y\\)-axis) and maximum AMOC (lower panel, red, right \\(y\\)–axis).
Figure 8: Overview of overturning streamfunction in the various models. In the top panels, AMOC for the CLIM mean state (top left) and for years 101 to 110 of PULSE (top right) are shown. The shaded contours are every 2 Sv, the red filling is for positive values, blue for negative. The thick line is the zero contour. In the left bottom panel, the difference of the overturning streamfunction between the last 200 years of regCLIM and the CLIM mean state is shown. In the right bottom panel, the difference of the overturning streamfunction between years 101 to 110 of regPULSE and PULSE runs. The contours in the lower panels are every 1 Sv.
Figure 9: In the top panel, the surface density flux for CLIM is shown in \\(10^{-6}\\cdot kg/(m^{2}~{}s)\\). In the bottom panel, the difference of the same quantity between the last 200 years of regCLIM and CLIM. Different colour scales are used in the two panels. In the figure, the grid of the ocean model is used (distorted in the north Atlantic and Arctic), to avoid interpolation errors.
Figure 10: MTM spectra of the time series of the maximum AMOC (solid lines) for CLIM (top panel) and regCLIM (bottom panel). The dashed smooth lines represent, from the lowest to the highest, the estimated red noise background and the median, 90%, 95% and 99% significance levels associated with it. In both cases, the resolution is \\((5years)^{-1}\\) and 7 tapers were used. Time series are 1000 years long.
Figure 11: Maximum AMOC for first 200 years of PULSE (full line) and regPULSE (dashed line) in Sv. | A global hybrid coupled model is developed, with the aim of studying the effects of ocean-atmosphere feedbacks on the stability of the Atlantic meridional overturning circulation. The model includes a global ocean general circulation model and a statistical atmosphere model. The statistical atmosphere model is based on linear regressions of data from a fully coupled climate model on sea surface temperature both locally and hemispherically averaged, being the footprint of Atlantic meridional overturning variability. It provides dynamic boundary conditions to the ocean model for heat, freshwater and wind-stress. A basic but consistent representation of ocean-atmosphere feedbacks is captured in the hybrid coupled model and it is more than ten times faster than the fully coupled climate model. The hybrid coupled model reaches a steady state with a climate close to the one of the fully coupled climate model, and the two models also have a similar response (collapse) of the Atlantic meridional overturning circulation to a freshwater hosing applied in the northern North Atlantic. | Summarize the following text. | 201 |
# Towards Spatio-Temporal SOLAP
Pablo Bisceglia
1Universidad de Buenos Aires, Argentina [email protected]
Leticia Gomez
2Instituto Tecnologico de Buenos Aires, Argentina [email protected]
Alejandro Vaisman
3Universidad de la Republica, Uruguay [email protected]
## 1 Introduction
In Geographic Information Systems (GIS), spatial data are organized in _thematic layers_, stored in suitable data structures, while associated attributes are usually stored in conventional relational databases. In real-world applications, spatial objects in a layer can be added, removed, split, merged, or their shape may change. Tryfona and Jensen [1] classify spatio-temporal applications according to the kind of support for the changes occurring in the spatial objects. They distinguish between objects with _continuous motion_ (e.g., a car moving on a highway), objects with _discrete changes_ (e.g., parcels changing boundaries), and objects combining _continuous motion and changing shapes_ (e.g., a stain in a river). On the other hand, OLAP (On-Line Analytical Processing) [2] provides a set of tools and algorithms that allow efficiently querying multidimensional repositories called Data Warehouses. OLAP data are organized as a set of _dimension hierarchies_ and _fact tables_, and can be perceived as a _data cube_, where each cell contains a measure or set of measures of interest. The problem of integrating OLAP and GIS systems for decision-making analysis has been called SOLAP [3]. One of the models proposed for SOLAP is Piet [4], a framework that integrates spatial, spatio-temporal, and non-spatial multidimensional data. In this paper we add temporal capabilities to SOLAP, extending Piet-QL (the query language associated with the Piet data model) to support _discrete changes_.
**A Motivating Example.** We present a typical scenario about land property information. Figure 1 (left) shows four parcels of land, P1 through P4, characterized by attributes like type of soil or owner. We assume that parcels are represented in a GIS layer denoted \(L_{land}\). Non-spatial information is stored in a conventional data warehouse. A dimension hierarchy denoted _Land_ stores information related to the parcels. The bottom level of this dimension contains the parcel identifiers (p1 through p4). There is a mapping (not shown in the figure) between spatial objects in \(L_{land}\) and members of the bottom level (_parcelId_) of the dimension _Land_. At a certain moment, parcels P3 and P4 are merged into a single one, P\({}_{3-4}\). Changes must also be performed at the data warehouse, meaning that elements p3 and p4 are deleted and p\({}_{3-4}\) is added, along with the corresponding rollups to region r2. A mapping between p\({}_{3-4}\) and P\({}_{3-4}\) is also defined. This is depicted on the right-hand side of Figure 1. Other changes may also occur. In a _discrete changes_ scenario like this, we may want to know the history of P\({}_{3-4}\), the production of each existing parcel as of the year 2006, or to pose queries like "Production by year per square mile for each parcel of land, for the parcels in Montevideo". Answering these kinds of queries requires extending non-temporal SOLAP data models and query languages (like Piet-QL) with temporal capabilities. This is the problem we address in this paper where, after an overview of related work (Section 2), we define the temporal data model (Section 3). Then (Section 4) we present the syntax and semantics of TPiet-QL and discuss the expressiveness of the language. We conclude in Section 5.
## 2 Related Work
Rivest _et al._[5] introduced the concept of SOLAP (standing for Spatial OLAP), a paradigm aimed at exploring spatial data by drilling on maps in a way analogous to what is performed in OLAP with tables and charts. Piet [4] is a formal model for SOLAP, where the integration between GIS and OLAP is materialized through a function that maps elements in the data warehouse to elements in the GIS layers. Piet comes equipped with a query language, Piet-QL [6], that
Figure 1: Initial situation (left): land partition and Land dimension hierarchy; after merging P3 and P4 (right): changes in spatial objects and in the dimension hierarchy.
supports the operators proposed by the Open Geospatial Consortium\(^{4}\) for SQL, adding the necessary syntax to integrate OLAP operations through MDX\(^{5}\). Piet-QL is designed to support: (a) standard GIS queries; (b) standard OLAP queries; (c) GIS queries filtered using OLAP conditions, like "Name of the cities with total sales higher than 5000 units"; and (d) OLAP queries filtered by spatial conditions, like "Total sales in cities within 100 Km from Montevideo". Filtering is implemented through a predicate denoted IN. The Piet-QL query "Parcels crossed by the 'Uruguay' river, with sales greater than 5000 units" reads:
Footnote 4: [http://www.opengeospatial.org](http://www.opengeospatial.org)
```
SELECT GIS l.id
FROM land l, rivers lr
WHERE intersects(l,lr) AND lr.name = "Uruguay" AND l IN(
  SELECT CUBE filter([Land].[Land parcelId].Members,
                     [Measures].[Parcel Sales] > 5000)
  FROM [Sales]);
```
Here, 'land' and 'rivers' represent two thematic layers containing spatial objects (the parcel subdivision of a given region, and the rivers, respectively). The OLAP subquery (identified with the keyword CUBE) is linked to the outer query by the predicate IN, and returns a collection of identifiers of spatial objects.
The Spatio-Temporal Relational data Model (STRM), introduced by Tryfona and Hadzilacos [7], provides a set of constructs consisting in relations, layers, virtual layers, object classes, and constraints, all with spatial and temporal extent, on top of well-known models. In this model, a _layer_ is a set of geometric figures like points, lines, regions or combinations of them, with associated values. The authors also define a layer algebra, which, based on four operations over layers, provides a semantics to SOLAP.
Other proposals such as SECONDO [8] and Hermes [9] support moving object databases but, like other spatio-temporal models (except Piet), they are not oriented towards addressing the problem of integrating GIS, OLAP and Moving Object data.
## 3 Spatio-Temporal Piet
In the temporal extension to Piet (TPiet), each tuple in a relation is timestamped with its validity interval. Time is introduced as a new sort (domain). For clarity of presentation, in the sequel we work with point-based temporal domains, although we use interval-based domains to implement our ideas [10]. In temporal databases, the concepts of _valid_ and _transaction_ times refer to the instants when data are valid in the real world, and when data are recorded in the database, respectively [11]. We assume _valid_ time support. Also, a distinguished variable _Now_ represents the (moving) current time instant. The _lifespan_ of a GIS layer \\(L\\), _lifespan_\\((L)\\), is the collection of all the time instants where the layer is valid. The _lifespan_ of a set of layers \\(\\mathcal{L}\\), \\(lifespan(\\mathcal{L})\\), is the union of the lifespans of all the layers in \\(\\mathcal{L}\\). Finally, we assume that no structural changes occur at the GIS or at the data warehouses, meaning that a layer containing polygons at its creation instant will contain polygons throughout its lifespan.
Given the above, a _Temporal GIS-OLAP Dimension Schema_ \(\mathrm{TG}_{sch}\) is a tuple \(\langle H,\mathcal{A},\mathcal{D},\mu\rangle\), where \(H\) is a mapping from layers to geometries, \(\mathcal{A}\) is a set of _partial_ functions \(Att\) that map attributes in OLAP dimensions to GIS layers, \(\mathcal{D}\) is a set of dimension hierarchies [12], and \(\mu\) is a dimension level in a standard OLAP Time dimension. Elements in \(\mu\) are in the temporal domain. Further, \(H\), \(\mathcal{A}\), and \(\mathcal{D}\) satisfy the following conditions: (a) A layer is created when the first object is added to it; (b) \(H\) is constant throughout the lifespan of the GIS; (c) For each layer \(L\), the function \(Att\) is defined only in \(lifespan(L)\); (d) The functions \(Att\in\mathcal{A}\) do not change with time, i.e., \(Att_{1}(parcelId,Land)\) will always return \(L_{land}\); (e) The schema of the dimensions in \(\mathcal{D}\) is constant during the lifespan of the GIS. Associated with a dimension schema, we have a dimension instance, which consists of: a set of relations \(r^{t}_{L_{i}}\) such that each tuple \(\langle g_{i},ext(g_{i}),t\rangle\) in \(r^{t}_{L_{i}}\) represents the existence of an object \(g_{i}\) (and its extension) in \(L_{i}\) at the instant \(t\); a collection of functions \(\alpha\) that map elements in OLAP dimension levels to geometric elements in a GIS layer, at a given time; and a collection of dimension instances, one for each dimension schema \(D\in\mathcal{D}\) in \(\mathrm{TG}_{sch}\). We assume that spatial objects have the same attributes throughout their lifespan.
**Temporal Piet Data Structure.** The data structure of TPiet-QL is organized in: (a) Application information: the data warehouse structure, containing dimension and fact tables. (b) GIS information: the data structures for the map layers (one table per layer); temporal attributes FROM and TO indicate the interval of validity of each object in a layer. (c) GIS-OLAP mapping information: stores the relationship between geometric and application information (i.e., the \(\alpha\) functions); temporal attributes are also included here to indicate the temporal validity of a mapping. (d) Auxiliary data structures that store precomputed information containing the overlay of different layers (see [4]).
We briefly explain the update semantics. When a new object is _created_ at instant \\(t_{1}\\), say, in the layer _Land_, a tuple is inserted in the _Land_ table with the corresponding parcel information. Attributes FROM and TO are set to \\(t_{1}\\) and the distinguished value _Now_, respectively. If this parcel, call it \\(p_{1}\\), is _split_ into \\(p_{2}\\) and \\(p_{3}\\) at instant \\(t_{2}\\), the tuple for \\(p_{1}\\) is timestamped with TO=\\(t_{2}-1\\) (i.e., an instant immediately before \\(t_{2}\\) in the object's granularity); in addition, two tuples are _created_ for \\(p_{2}\\) and \\(p_{3}\\), with FROM=\\(t_{2}\\), and TO=_Now_. Later, at \\(t_{4}\\), two parcels, \\(p_{5}\\) and \\(p_{6}\\) are _merged_ into a single one, call it \\(p_{56}\\). The former two tuples are _deleted_ as before (i.e., timestamped with TO=\\(t_{4}-1\\)), and \\(p_{56}\\) is _created_ with FROM=\\(t_{4}\\) and TO=_Now_. The _update_ operation at instant \\(t\\) is equivalent to the _deletion_ of a tuple (i.e., a timestamping with \\(t-1\\)), and the _insertion_, at instant \\(t\\), of a new one (keeping the same identifier). The _reincarnation_ operator is analogous to an update, except for the fact that the instants of deletion and insertion are not consecutive.
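The timestamping discipline just described can be sketched in a few lines of Python (a minimal sketch of ours: the table layout, function names, and the NOW sentinel are illustrative, not part of Piet):

```python
NOW = float("inf")  # sentinel for the moving current instant

def insert(table, oid, geom, t):
    """Create an object valid from instant t onwards."""
    table.append({"id": oid, "geom": geom, "FROM": t, "TO": NOW})

def delete(table, oid, t):
    """Logical deletion: timestamp the current version with TO = t - 1."""
    for row in table:
        if row["id"] == oid and row["TO"] == NOW:
            row["TO"] = t - 1

def split(table, oid, parts, t):
    """Split oid into new objects at instant t."""
    delete(table, oid, t)
    for new_id, new_geom in parts:
        insert(table, new_id, new_geom, t)

def merge(table, oids, new_id, new_geom, t):
    """Merge several objects into a single new one at instant t."""
    for oid in oids:
        delete(table, oid, t)
    insert(table, new_id, new_geom, t)

land = []
insert(land, "p1", "g1", 1)
split(land, "p1", [("p2", "g2"), ("p3", "g3")], 10)  # p1 closed with TO = 9
merge(land, ["p2", "p3"], "p23", "g4", 20)           # p2, p3 closed with TO = 19
```

An update is the same delete/insert sequence with the object identifier kept, and a reincarnation simply relaxes the requirement that the deletion and re-insertion instants be consecutive.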
We now discuss the _data warehouse_ side. When operations on the GIS side require creating new spatial objects, the corresponding objects must be inserted in the warehouse dimensions, also defining new mappings. However, when an _update_ occurs (like a change in an object's shape) the object identifier does not change and no action needs to be taken on the warehouse side. Also note that insertions can be performed without impacting the warehouse or the mapping function, although this could produce incomplete answers to some queries (the ones that involve accessing the warehouse), due to the incomplete mapping (i.e., the object would only be in one of the parts of the system). One of the premises of the Piet data model is to allow autonomous maintenance of warehouse and GIS information. There are at least two possible situations: (a) The data warehouse and associated data cubes are non-temporal, in the sense that only fact tables are updated, and the dimensions are _static_, i.e., only the current state of the dimension data is available; (b) The data warehouse has _temporal_ capabilities, i.e., dimensions are updated and their history is preserved. For example, the notion of _slowly changing dimensions_ can be used [2], where a new dimension tuple is added when an update occurs (dimension tables are extended with FROM/TO attributes). Other solutions can be found in the literature [13, 14].
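For option (b), the type-2-style mechanics of a slowly changing dimension can be sketched as follows (our own illustration; the concrete table layout is an assumption):

```python
NOW = float("inf")  # sentinel for the moving current instant

def scd_update(dim, key, new_attrs, t):
    """Close the current version of `key` and open a new one at instant t,
    preserving the full history of the dimension member (type-2 style)."""
    for row in dim:
        if row["key"] == key and row["TO"] == NOW:
            row["TO"] = t - 1
    dim.append({"key": key, **new_attrs, "FROM": t, "TO": NOW})

def as_of(dim, key, t):
    """Return the version of `key` that was valid at instant t."""
    for row in dim:
        if row["key"] == key and row["FROM"] <= t <= row["TO"]:
            return row
    return None

# Example: p1 initially rolls up to region r1; at instant 50 it is reassigned to r2.
dim = [{"key": "p1", "region": "r1", "FROM": 0, "TO": NOW}]
scd_update(dim, "p1", {"region": "r2"}, 50)
```

With this layout, queries against the warehouse can be answered either against the current state (rows with TO = NOW) or as of any past instant via `as_of`.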
## 4 Query language
Definition 1 (Spatio-temporal object): We denote by spatio-temporal object a tuple of the form \(\langle\mathit{objectId},\mathit{geometry},\mathit{attribute}_{1},\ldots,\mathit{attribute}_{n},\mathit{interval}\rangle\), where geometry is the geometric extension of the object, \(\mathit{attribute}_{i}\) are alphanumeric attributes, and 'interval' is the interval of validity of the object, of the form \([FROM,TO]\).
In Definition 1, _interval_ is a single interval. In temporal databases it is usual to talk about temporal elements, i.e., sets of intervals. For simplicity of presentation, in this paper we work with single intervals instead of temporal elements. This makes the paper easier to read, without reducing its substance. In what follows we refer to spatio-temporal objects as 'objects', and denote \\(\\mathcal{G}\\) a collection of spatio-temporal objects. Based on Allen's interval set of predicates [15], in Figure 2 we specify the syntax and semantics of a collection of predicates over spatio-temporal objects, intervals, and time instants.
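Read at the interval level, with closed intervals \([from,to]\), these predicates can be sketched as below. This is our simplified interpretation (for instance, OVERLAPS is implemented as plain non-empty intersection, and STARTS/FINISHES only compare endpoints); Figure 2 gives the formal definitions:

```python
# Intervals are pairs (frm, to) of instants, with frm <= to.

def before(i, j):    # X < Y in Allen's algebra
    return i[1] < j[0]

def after(i, j):     # Y < X
    return before(j, i)

def meets(i, j):     # symmetric version of Allen's MEETS
    return i[1] == j[0] or j[1] == i[0]

def overlaps(i, j):  # simplified: the intervals share at least one instant
    return max(i[0], j[0]) <= min(i[1], j[1])

def during(i, j):    # X DURING Y: i lies inside j
    return j[0] <= i[0] and i[1] <= j[1]

def covers(i, j):    # inverse of during
    return during(j, i)

def starts(i, j):    # simplified: same left endpoint
    return i[0] == j[0]

def finishes(i, j):  # simplified: same right endpoint
    return i[1] == j[1]
```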
Note that DURING and COVERS represent the predicate X DURING Y in Allen's algebra. OVERLAPS represents X OVERLAPS Y and Y OVERLAPS X. The same for MEETS, STARTS, and FINISHES. BEFORE and AFTER represent X \\(<\\) Y and Y \\(<\\) X, respectively. We also need some functions, namely:
\(\mathtt{IIntersection}(I_{1},I_{2})\): \(T\times T\times T\times T\to T\times T\); returns the interval when \(I_{1}\) and \(I_{2}\) intersect.

\(\mathtt{Coalesce}(\mathcal{G})\): analogously to the 'Coalesce' operator used in temporal databases, it produces groups of objects whose temporal intervals are consecutive and that coincide in all other attributes, returning a collection of spatio-temporal objects.
**Spatio-temporal Joins.** A key operation in any spatio-temporal query language is the _join_. Different kinds of temporal joins have been proposed in the literature [11], and two main classes can be identified: (a) disjoint joins; and (b) overlap joins. In the former, given \(n\) (timestamped) tuples, it is not required that their time intervals overlap. In the latter, the time intervals must overlap, and there are two possibilities: all the time intervals have at least one common time instant, or they are joined in a 'chained' fashion, e.g., \(t_{1}.TO\geq t_{2}.FROM\wedge t_{2}.TO\geq t_{1}.TO\). Disjoint joins provide more expressiveness to a query language than overlap joins, allowing one to query for asynchronous events (e.g., parcels owned by X before a region changed name). Examples (following Allen [15]) are before-join(X,Y) and meet-join(X,Y), with conditions \(X.TO\leq Y.FROM\) and \(X.TO=Y.FROM\), respectively. The joins above are denoted T-joins. When a T-join also requires a collection of non-temporal attributes to satisfy a predicate \(P_{a}\), we say that we are in the presence of a GT-join (standing for generic temporal join). That is, a GT-join corresponds to the expression \(\sigma_{P_{a}\wedge overlap\text{-}join(X,Y)}(X,Y)\): the tuples in the result of a GT-join are those whose time intervals overlap and that satisfy the non-temporal predicate \(P_{a}\). In a spatio-temporal setting we can implement the temporal joins using the operators defined above.
In the presence of spatio-temporal objects, the GT-join can be defined using the standard topological relationships [16], like Touches\\((g_{1},g_{2})\\), or Contains\\((g_{1},g_{2})\\). Consider two layers storing the histories of airports and cities. Figure 3 (left) shows two stages of city \\(c_{1}\\): one in the interval [0,50], and the other in the interval [51,Now]. Airport \\(a_{1}\\) was first relocated at instant 100, and then, due to the
Figure 2: Predicates over spatio-temporal objects, intervals, and instants.
city expansion, it was located well outside the new city limits. Figure 3 (right) shows how the two objects \\(a_{1}\\) and \\(c_{1}\\) interact along their timelines: the airport is within the city limits only in the intervals [51,100] and [101,200]. The relational representations are given below.
\begin{tabular}{|c|c|c|c|} \hline cityId & the\_geom & FROM & TO \\ \hline c1 & g1 & 0 & 50 \\ c1 & g2 & 51 & Now \\ c2 & g3 & 0 & 30 \\ \hline \end{tabular}
We can list the pairs city-airport such that an airport was within the city limits as a GT-join, where the non-temporal predicate _Contains_ is spatial:
\\(\\begin{array}{l}\\sigma_{\\phi}(Airports\\times Cities)\\\\ \\phi=contains(Airports.geom,Cities.geom)\\wedge overlap-join(Airports,Cities) \\end{array}\\)
The result contains the tuples \(\langle a1,c1,51,100\rangle\) and \(\langle a1,c1,101,200\rangle\), representing (see Figure 3) that a1 remained within the city limits of c1 between instants 51 and 100, and again between 101 and 200.
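A small sketch of this GT-join in Python follows (the three extensions of a1 and the pairs satisfying Contains are our reconstruction from Figure 3, since the airports table is not listed in the text):

```python
NOW = float("inf")

# Hypothetical stages of airport a1: relocated at instant 100, and again at 200.
airports = [("a1", "ga", 0, 100), ("a1", "gb", 101, 200), ("a1", "gc", 201, NOW)]
cities   = [("c1", "g1", 0, 50),  ("c1", "g2", 51, NOW)]

# Stand-in for the spatial Contains predicate: following Figure 3, only the
# expanded city extension g2 contains the airport extensions ga and gb.
CONTAINS = {("g2", "ga"), ("g2", "gb")}

def gt_join(xs, ys, pred):
    """GT-join: overlap join plus a non-temporal predicate over geometries."""
    result = []
    for xid, xg, xf, xt in xs:
        for yid, yg, yf, yt in ys:
            lo, hi = max(xf, yf), min(xt, yt)
            if lo <= hi and pred(xg, yg):   # overlapping intervals + predicate
                result.append((xid, yid, lo, hi))
    return result

pairs = gt_join(airports, cities, lambda ag, cg: (cg, ag) in CONTAINS)
# pairs == [("a1", "c1", 51, 100), ("a1", "c1", 101, 200)]
```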
**The TPiet-QL Query Language.** The discussion above sets the basis for defining a temporal extension to the GIS part of Piet-QL, yielding the TPiet-QL language. The general form of a TPiet-QL query is:
```
SELECT GIS [SNAPSHOT][CURRENT] list_of_attributes
FROM [OVERLAP] T1 t1, ..., Tn tn
WHERE Φ
```
T1 through Tn represent thematic layers, t1 through tn range over the spatial or _spatiotemporal objects_ in these layers, and the \\(a_{i}\\)'s represent attributes of these
Figure 3: A city an its airport (left); Interaction of \\(a_{1}\\) and \\(c_{1}\\) along their timelines (right)
objects. The OVERLAP keyword in the FROM clause states that the overlap join semantics must be applied (see below). The list of attributes in the SELECT clause defines the schema of the result: a subset of the union of the attributes of the spatiotemporal objects mentioned in the FROM clause. The SNAPSHOT keyword (analogous to the one in TSQL2 [17]) is used to return a non-temporal relation, eliminating the interval/s associated with each tuple in the query result. CURRENT is the same as SNAPSHOT but selecting the _current_ state of the relation before the projection is performed. That means, the query will return a collection of spatial objects corresponding to the spatiotemporal ones which contain the keyword 'Now' in the attribute TO.
The condition \\(\\Phi\\) is composed of conjunctions and disjunctions of the function and predicates mentioned above, and can also include the Piet-QL predicate IN (and the corresponding OLAP sub-query), to provide compatibility with Piet-QL, and to support OLAP in a spatio-temporal SOLAP scenario. This is why we keep the Piet-QL keyword GIS in the SELECT clause. We show this below by means of some examples.
The semantics of the query is defined by the cartesian product of the geometric objects in all the thematic layers in the FROM clause. If the OVERLAP keyword is specified, only the tuples whose intervals overlap are considered (i.e., the tuples such that \(\cap_{i=1,\ldots,n}\,t_{i}.interval\neq\emptyset\)), and the overlapping interval is included in the result, which is coalesced by default using all the non-temporal attributes in the SELECT clause. We illustrate this semantics extending the city-airport example with a layer containing parcels, described in the table below (on the right we show the distances between cities and parcels, during different time intervals):
\begin{tabular}{|c|c|c|c|} \hline parcelId & the\_geom & FROM & TO \\ \hline p1 & g1 & 10 & 20 \\ p1 & g2 & 21 & 40 \\ p2 & g3 & 30 & 50 \\ p3 & g4 & 40 & 100 \\ \hline \end{tabular}
Consider a query asking for pairs city-parcel such that the distance between them is/was less than 100Km. According to the usual semantics of a temporal join, the query returns tuples of the form \\(\\langle p_{i},c_{j},Interval\\rangle\\), where \\(Interval\\) is the interval when they where closer than 100Km from each other. The query reads in TPiet-QL:
```
SELECT GIS c, p
FROM OVERLAP Parcels p, Cities c
WHERE Distance(c.the_geom, p.the_geom) < 100
```
The result will be (note that this result is coalesced):
\\begin{tabular}{|c|c|c|c|} \\hline cityId & parcelId & FROM & TO \\\\ \\hline c1 & p1 & 10 & 20 \\\\ c1 & p2 & 30 & 50 \\\\ c1 & p3 & 40 & 100 \\\\ \\hline \\end{tabular}
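The evaluation of this query under the OVERLAP semantics can be sketched end to end (geometry names h1..h4 and the concrete distances are our assumptions; the source only fixes which pairs lie within 100 Km):

```python
NOW = float("inf")

cities  = [("c1", "g1", 0, 50), ("c1", "g2", 51, NOW)]
parcels = [("p1", "h1", 10, 20), ("p1", "h2", 21, 40),
           ("p2", "h3", 30, 50), ("p3", "h4", 40, 100)]

# Hypothetical distances in Km between city and parcel extensions.
DIST = {("g1", "h1"): 80, ("g1", "h2"): 120, ("g1", "h3"): 90,
        ("g1", "h4"): 70, ("g2", "h4"): 60}

def overlap_join(xs, ys):
    """Yield the pairs with a non-empty intersection of validity intervals."""
    for xid, xg, xf, xt in xs:
        for yid, yg, yf, yt in ys:
            lo, hi = max(xf, yf), min(xt, yt)
            if lo <= hi:
                yield xid, yid, xg, yg, lo, hi

def coalesce(rows):
    """Default coalescing on the non-temporal attributes of the result."""
    out = []
    for key, frm, to in sorted(rows):
        if out and out[-1][0] == key and frm <= out[-1][2] + 1:
            out[-1] = (key, out[-1][1], max(out[-1][2], to))
        else:
            out.append((key, frm, to))
    return out

rows = [((c, p), lo, hi)
        for c, p, cg, pg, lo, hi in overlap_join(cities, parcels)
        if DIST.get((cg, pg), NOW) < 100]
result = coalesce(rows)
# result == [(("c1","p1"),10,20), (("c1","p2"),30,50), (("c1","p3"),40,100)]
```

Note how the two overlapping pieces of p3 ([40,50] against the first city stage and [51,100] against the second) are coalesced into the single tuple \(\langle c1,p3,40,100\rangle\) of the result table.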
The distances between c1 and each parcel, during the corresponding overlapping intervals, are:

\begin{tabular}{|c|c|c|c|c|} \hline cityId & parcelId & FROM & TO & distance \\ \hline c1 & p1 & 10 & 20 & 80 \\ c1 & p1 & 21 & 40 & 120 \\ c1 & p2 & 30 & 50 & \\ c1 & p3 & 40 & & \\ \hline \end{tabular}

Let us now give an example of a TPiet-QL query returning an OLAP cube filtered with a spatio-temporal sub-query containing a SNAPSHOT clause: "Production cost and parcel sales in 2009, for the parcels crossed by rivers at that time". This query reads:
```
SELECT CUBE [Measures].[Production Cost], [Measures].[Parcel Sales],
       Product.[All_Products] ON ROWS
FROM [Sales]
WHERE [Time].[2009] AND
      [Land].[All Land] IN (
        SELECT GIS SNAPSHOT l.id
        FROM OVERLAP Land l, Rivers r
        WHERE Crosses(r,l) AND
              COVERS(r,[1/1/2009,12/31/2009]) AND
              COVERS(l,[1/1/2009,12/31/2009]) );
```
We conclude with the query: "Parcels crossed by the Uruguay river, with production sales greater than 5000 units" (technically, in TPiet-QL, a GIS-OLAP query):

```
SELECT GIS l
FROM OVERLAP land l, rivers r
WHERE Crosses(l,r) AND r.name = "Uruguay" AND l.id IN(
  SELECT CUBE filter([Land].[Land parcelId].Members,
                     [Measures].[Parcel Sales] > 5000)
  FROM [Sales]);
```
The query returns the spatiotemporal objects containing the parcels with the requested production, their information, and the intervals when each parcel in the result crossed the Uruguay river.
**Expressive Power.** Over the data model described in Section 3, a formal spatio-temporal query language, denoted \(\mathcal{L}_{t}\), has been defined; this query language is studied in detail in [18]. We now show that TPiet-QL is based on this formal query language, and that most queries expressible in \(\mathcal{L}_{t}\) are captured by this temporal extension to Piet-QL. We illustrate these ideas using a very simple GIS-OLAP query, which includes a reference to an external data cube called 'Production', with dimensions Land and Time, and measure 'quantity', representing the production of wheat per year. The query asks for the parcels having an area larger than 100 Ha in 1996, currently larger than they were at that time, and with a production of wheat larger than 1000 Tons in 2009. The formal query in \(\mathcal{L}_{t}\) reads:
\\[Q=\\{p\\ |\\ (\\exists e_{p})(\\exists e_{p_{1}})(\\exists a)(\\exists qty)\\ (r^{t}_{L_{land}}(p,e_{p_{1}},1996)\\ \\wedge\\ r^{t}_{L_{land}}(p,e_{p},Now)\\ \\wedge\\ area(e_{p_{1}})=a\\ \\wedge\\ a>100\\ \\wedge\\ area(e_{p})>a)\\ \\wedge\\ Production(p,2009,qty)\\ \\wedge\\ qty>1000\\}.\\]
Here, \\(Production(p,2009,qty)\\) is a term representing a fact table, \\(area\\) is a function computing the area of a spatial object, and \\(r^{t}_{L_{land}}(p,e_{p},t)\\) are terms representing the parcels and their geometric extensions across time (in a point-based fashion), corresponding to the elements in the model of Section 3. This query can be expressed in TPiet-QL as follows:
SELECT GIS p1.id
FROM land p, land p1
WHERE area(p) \\(>\\) area(p1) AND
COVERS(p,[1996,1996]) AND COVERS(p1,[2010,2010]) AND
p1.id=p.id AND p1.id IN(
SELECT CUBE
filter([Land].[Land parcelId].Members,
[Measures].[qty] \\(>\\) 1000)
FROM [Production]) SLICE [Time].[2009];
The constructs of \\(\\mathcal{L}_{t}\\) are present in the TPiet-QL expression above. The main difference is that instead of using non-temporal functions over the extensions of spatial objects, as in \\(\\mathcal{L}_{t}\\) (where _area_ is applied over a geometry, e.g., \\(e_{p}\\)), TPiet-QL uses temporal functions over spatio-temporal objects (e.g., \\(p\\)). It can be seen that queries expressible in \\(\\mathcal{L}_{t}\\) can be expressed in TPiet-QL, since there is a translation for each of the terms in one language to the other. We omit a term-by-term proof for the sake of space.
Vaisman and Zimanyi [19] recently proposed a comprehensive and formal classification for spatio-temporal data warehousing, defining a taxonomy of queries. For example, the SOLAP class of queries is defined as the class containing the queries that can be expressed in relational calculus with aggregate functions, extended with _spatial_ data types. Analogously, the ST-OLAP class of queries is the class containing the queries that can be expressed in the calculus extended with spatial and the _moving_ types defined in [20]. We can say that our proposal falls somewhere in between the ST-OLAP and ST-TOLAP classes (the latter includes temporal OLAP support).
## 5 Conclusion and Future Work
We have presented a spatio-temporal query language for temporal SOLAP, denoted TPiet-QL, that supports discrete changes in the spatial objects in the thematic layers of a GIS. TPiet-QL extends Piet-QL, a query language for SOLAP. We introduced the syntax and semantics of the language, and discussed its expressive power. Our next step is to produce an implementation, which includes a visualization tool for spatio-temporal data, and the development of efficient methods for query processing.
**Acknowledgements:** The authors of this paper were partially funded by the LACCIR project \"Monitoring Protected Areas using an OLAP-enabled Spatio-temporal GIS\".
## References
* [1] Tryfona, N., Jensen, C.S.: Conceptual data modeling for spatiotemporal applications. GeoInformatica **3** (1999) 245-268
* [2] Kimball, R.: The Data Warehouse Toolkit. J.Wiley and Sons, Inc (1996)
* [4] Gomez, L.I., Haesevoets, S., Kuijpers, B., Vaisman, A.A.: Spatial aggregation: Data model and implementation. Inf. Syst. **34** (2009) 551-576
* [5] Rivest, S., Bedard, Y., Marchand, P.: Towards better support for spatial decision making: Defining the characteristics of spatial on-line analytical processing (SOLAP). Geomatica **55** (2001) 539-555
* [6] Gomez, L., Vaisman, A., Zich, S.: Piet-QL: a query language for GIS-OLAP integration. In: ACM-GIS. (2008)
* [7] Tryfona, N., Hadzilacos, T.: Logical data modelling of spatio temporal applications: Definitions and a model. In: IDEAS. (1998) 14-23
* [8] Guting, R.H., de Almeida, V.T., Ansorge, D., Behr, T., Ding, Z., Hose, T., Hoffmann, F., Spiekermann, M., Telle, U.: SECONDO: An extensible DBMS platform for research prototyping and teaching. In: ICDE. (2005) 1115-1116
* a framework for location-based data management. In: EDBT. (2006) 1130-1134
* [10] Toman, D.: Point vs. interval-based query languages for temporal databases. In: PODS. (1996) 58-67
* [11] Tansel, A., Clifford, J., Gadia (eds.), S.: Temporal Databases: Theory, Design and Implementation. Benjamin/Cummings (1993)
* [12] Hurtado, C.A., Mendelzon, A.O.: OLAP dimension constraints. In: PODS. (2002) 169-179
* [13] Eder, J., Koncilia, C., Morzy, T.: The COMET metamodel for temporal data warehouses. In: CAiSE. (2002) 83-99
* [14] Mendelzon, A.O., Vaisman, A.A.: Temporal queries in OLAP. In: VLDB. (2000) 242-253
* [15] Allen, J.: Maintaining knowledge about temporal intervals. Communications of the ACM 26(11) (1983) 832-843
* [16] Egenhofer, M.J.: Spatial SQL: A query and presentation language. IEEE Trans. Knowl. Data Eng. **6** (1994) 86-95
* [17] Snodgrass, R.T., ed.: The TSQL2 Temporal Query Language. Kluwer (1995)
* [18] Gomez, L.I., Kuijpers, B., Vaisman, A.A.: A data model and query language for spatio-temporal decision support. Geoinformatica DOI:10.1007/s10707-010-0110-7 (2010)
* [19] Vaisman, A., Zimanyi, E.: What is spatio-temporal data warehousing? In: DaWaK. (2009) 9-23
* [20] Guting, R.H., Bohlen, M., Jensen, C., Lorentzos, N., Schneider, M., Vazirgiannis, M.: A foundation for representing and querying moving objects. ACM Transactions on Database Systems **25** (2000) 1-42
# Topography and Displacement of Polar Glaciers from Multi-Temporal SAR Interferograms: Potentials, Error Analysis and Validation
Franz Meyer
Remote Sensing Technology, TU Muenchen, Arcisstrasse 21, D-80333 Munich, Germany - [email protected]
## 1 Introduction
The capability of SAR interferometry (InSAR) in terms of deformation monitoring and topographic mapping has been proven by various case studies during the last decades. In recent years, the focus of investigations has shifted towards a detailed analysis of potential error sources, such as temporal and geometrical decorrelation, atmospheric path delay, surface penetration and orbit uncertainties. The analysis of stable targets, so-called permanent scatterers, identified from a number of interferograms makes it possible to minimize the effect of temporal and geometrical decorrelation and to remove the influence of the atmospheric path delay. Based on this technique, DEMs with meter accuracy and millimeter-level terrain motion detection can be derived. However, due to the lack of stable targets, this method cannot be applied to glaciers and ice sheets, which are a well-known application of InSAR. Thus, the evaluation of possible error sources is still a challenging problem in glacier monitoring.
This paper presents an estimation method to determine topography and motion of polar ice masses from SAR interferograms. The approach is focused on a systematic modeling of all processing steps and their particular stochastic properties. The functional and stochastic description of all influences on the interferometric phase signal serves as basis for a detailed accuracy, robustness and error analysis of the estimated results. Special emphasis is put on the investigation of influences from topography and motion, as well as the effects of orbit errors, atmospheric path delays, and the penetration depth of the signal into the surface.
## 2 Method
### 2.1 Adjustment model
The aim of all adjustment methods is to map a number of \\(n\\) erroneous observations \\(b\\) onto a number of \\(u<n\\) unknown parameters \\(x\\). To make this step possible, it is indispensable to formulate functional relations between observations and unknowns.
_The functional model_ of a least-squares adjustment based on erroneous observations is defined by
\\[b+\\hat{\\varepsilon}=f(\\hat{x}_{1},\\hat{x}_{2},\\hat{x}_{3}, ,\\hat{x}_{u-1}, \\hat{x}_{u}) \\tag{1}\\]
with \\(\\hat{\\varepsilon}\\) being the estimated values of residuals and the estimated unknowns \\(\\hat{x}_{i}\\). If accuracy measures for the observations are available, weighting of the observations may be performed. Observations with high accuracy will get high weights and will therefore have strong influence on the estimated parameters and vice versa. The a priori information about the accuracy of the observations is called _stochastic model_ and is arranged in the so called covariance matrix \\(K_{bb}\\).
Using the Gauss-Markov theory the optimal solution of a over-determined equation system as shown in Equation (1) is derived by minimizing the objective function \\(\\delta\\):
\\[\\delta=\\hat{\\varepsilon}^{T}P_{bb}\\hat{\\varepsilon}\\quad\\rightarrow\\quad min \\tag{2}\\]
with \\(P_{bb}=K_{bb}^{-1}\\). Solving this minimization problem yields the adjusted unknowns \\(\\hat{x}\\) as well as their theoretical accuracies expressed by the \\(Q_{\\hat{x}\\hat{x}}\\) matrix
\\[\\hat{x}=\\Delta\\hat{x}+\\stackrel{{\\circ}}{{x}} = \\left(A^{T}P_{bb}A\\right)^{-1}A^{T}P_{bb}b+\\stackrel{{ \\circ}}{{x}} \\tag{3}\\] \\[Q_{\\hat{x}\\hat{x}} = \\left(A^{T}P_{bb}A\\right)^{-1} \\tag{4}\\]
with \\(A\\) comprising the functional relations and \\(\\stackrel{{\\circ}}{{x}}\\) containing approximate values for the unknowns (Mikhail, 1976).
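As an illustration, the weighted least-squares solution of Equations (3) and (4) can be sketched in a few lines of NumPy. This is a generic sketch, not the paper's processing chain; the function name is illustrative and zero approximate values \\(\\stackrel{{\\circ}}{{x}}=0\\) are assumed:

```python
import numpy as np

def gauss_markov(A, b, sigma_b):
    """Weighted least-squares adjustment of b + eps = A @ x (Eqs. 1-4).

    A       : (n, u) design matrix of the linearized functional relations
    b       : (n,)   observations
    sigma_b : (n,)   a priori standard deviations of the observations
    Returns the adjusted unknowns x_hat and their cofactor matrix Q_xx.
    """
    P = np.diag(1.0 / np.asarray(sigma_b) ** 2)  # weight matrix P_bb = K_bb^-1
    Q_xx = np.linalg.inv(A.T @ P @ A)            # cofactor matrix, Eq. (4)
    x_hat = Q_xx @ A.T @ P @ b                   # adjusted unknowns, Eq. (3)
    return x_hat, Q_xx
```

Observations with small \\(\\sigma\\) receive large weights and dominate the estimate, exactly as described above.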
### 2.2 Observations and unknowns
Based on observations derived from SAR data the unknown topography \\(h\\) and motion \\(v=\\frac{\\hat{x}_{n}}{\\Delta t}\\) of polar glaciers are estimated. Within the adjustment, only that component of the surface movement that lies in the line of sight of the sensor can be determined. Thus, \\(v\\) always corresponds to the line of sight component of surface motion.
SAR SLCs of the ERS C-band SAR serve as primary data source. From these data sets, \\(N\\) SAR interferograms are formed. Unfortunately, the temporal baseline \\(\\Delta t\\) of the interferograms cannot be arbitrarily chosen. \\(\\Delta t\\) is above all limited by temporal decorrelation. Especially in snow-covered polar regions, changing wind conditions, temperature variations, and precipitation result in a strong decrease of correlation with time. To warrant interferograms of sufficient quality, only interferograms originating from the ERS tandem mission are considered, which have a temporal baseline of only a single day. The ambiguous interferometric phase values are unwrapped based on a minimum spanning tree approach before implementing them into the adjustment. In addition, a reference phase screen is subtracted from the interferograms beforehand using ERS D-PAF precision orbit information.
### Functional model
As described above, the functional model comprises the deterministic relations between observations an unknowns. For solving the proposed problem, three different sub-models are necessary. The formulation of the sub-models and their particular characteristics are derived in the following.
#### 2.3.1 Interferometric model
Although the phase \\(\\phi\\) of an interferogram acquired over glaciated terrain is influenced by many parameters, \\(\\phi\\) is dominated by influences from surface topography \\(h\\), coherent surface motion \\(v\\) in the line of sight of the sensor, the difference of the slant-atmospheric delay \\(\\Delta sd\\) between the two acquisitions, and the penetration depth \\(d\\) of the RADAR signal into the glacier surface. The unwrapped interferometric phase at position \\((i,j)\\) of an interferogram can be written as
\\[\\phi^{i,j}_{unw}=-\\frac{4\\pi}{\\lambda}\\left(\\frac{B^{i,j}\\cos(\\theta^{i,j}-\\alpha^{i,j})}{r^{i,j}\\sin(\\theta^{i,j})}h^{i,j}+v^{i,j}+\\Delta sd^{i,j}\\right)-\\arctan\\left(-\\frac{2\\pi\\cdot\\sqrt{\\epsilon^{\\prime}}\\cdot d^{i,j}\\cdot B^{i,j}_{\\perp}}{r^{i,j}\\lambda\\tan(\\theta^{i,j})}\\right) \\tag{5}\\]
The notation used in the equation is in accordance with (Hanssen, 2001). The four different parts of Equation (5) show the mathematical description of the above mentioned influences onto the interferometric phase. The geometric reference phase is already corrected in this representation. According to Equation (5) each interferometric phase observation induces 4 unknown parameters (\\(h\\), \\(v\\), \\(\\Delta sd\\), \\(d\\)). Thus, the inversion of the model is a highly underdetermined problem. A solution can be found if \\(i\\)) additional observations are incorporated on a pixel by pixel basis, or if \\(ii\\)) prior information is integrated into the equation system. The second strategy might be employed if one or more parameters of the equation system are known (e.g. external DEM's, or knowledge about surface deformation). Such information is mostly not available in the arctic environment. Thus, a solution has to be found by a combination of a series of interferograms in consideration of additional assumptions about the time evolution of some parameters.
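A minimal forward model of Equation (5) for a single resolution cell might look as follows. The default values for the wavelength (ERS C-band, 5.66 cm) and for the dielectric constant of dry firn are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def insar_phase(h, v, dsd, d, B, B_perp, theta, alpha, r,
                wavelength=0.0566, eps_r=1.8):
    """Forward model of the unwrapped interferometric phase, Eq. (5).

    h: topographic height, v: line-of-sight motion contribution,
    dsd: slant-atmospheric delay difference, d: penetration depth,
    B/B_perp: (perpendicular) baseline, theta: look angle,
    alpha: baseline orientation, r: slant range. Angles in radians,
    lengths in meters.
    """
    topo = B * np.cos(theta - alpha) / (r * np.sin(theta)) * h
    phi = -4.0 * np.pi / wavelength * (topo + v + dsd)
    # additional phase caused by signal penetration into the surface
    phi = phi - np.arctan(-2.0 * np.pi * np.sqrt(eps_r) * d * B_perp
                          / (r * wavelength * np.tan(theta)))
    return phi
```

Each observed phase value constrains the four unknowns \\(h\\), \\(v\\), \\(\\Delta sd\\), \\(d\\) only jointly, which is why the single-interferogram inversion is underdetermined.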
#### 2.3.2 Temporal model
To guarantee a successful separation of the phase components in Equation (5) functional relations describing the connection between unknowns in different data sets have to be established. Such models are only found for deterministic processes, i.e. signals that do not arise from a stochastic process. In principle, this holds only for the evolution of topography and surface displacement. As topography changes are usually slow, and because of the limited sensitivity of the interferometric phase with respect to topography variations, a time independent description of surface topography \\(h\\) has been chosen. Introducing this model reduces the amount of topography-related unknowns from \\(N\\cdot i\\cdot j\\) unknowns to \\(i\\cdot j\\) unknowns.
As described in (Fatland and Lingle, 1998) and (Frolich and Doake, 1998), the assumption of constant glacier flow is doubtful, especially if ERS tandem interferograms are used. For modeling a time-dependent flow behavior \\(v(t)\\), a mathematical model is employed. We refrain from using physical flow models because of their high complexity, significant non-linearity, and limited quality. As least-squares adjustments are better suited for solving linear problems, linear models for describing the glacier flow are favored. Considering the usually uneven distribution of the data sets over time, a piecewise Lagrange polynomial is selected. The maximum polynomial order \\(\\bar{u}\\) is equal to \\(\\bar{u}=N-u_{v}-1\\), where \\(u_{v}\\) is the number of parameters not related to surface motion. The term \\(-1\\) ensures a redundant equation system. Thus, the surface motion \\(v(t)\\) is modeled by
\\[v(t)=\\sum_{g=1}^{N-\\bar{u}}a_{g}t^{g-1} \\tag{6}\\]
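For illustration, the motion-related columns of the design matrix implied by Equation (6) can be built with a Vandermonde matrix. A single global polynomial is used here for simplicity; the paper itself employs piecewise Lagrange polynomials:

```python
import numpy as np

def motion_columns(t, n_coeff):
    """Design-matrix columns of the polynomial flow model of Eq. (6):
    v(t) = sum_g a_g * t**(g-1), one column per coefficient a_g.

    t       : acquisition times of the interferograms
    n_coeff : number of polynomial coefficients kept
    """
    # columns t^0, t^1, ..., t^(n_coeff-1)
    return np.vander(np.asarray(t, dtype=float), N=n_coeff, increasing=True)
```

Stacking these columns next to the topography-related columns yields the joint design matrix \\(A\\) of the adjustment.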
#### 2.3.3 Spatial model
The unknown parameters are not solved in each pixel but rather in the nodes of a regular spatial grid. This step is allowed if the sampling rate of the digital data sets is higher than necessary for the representation of their information content. The restriction of calculating the desired parameters only in a coarser grid entails several advantages. On one hand, it reduces processing time, on the other hand, it increases redundancy and, by this, the ability of the adjustment to detect gross errors in the observations. The mesh size has to be chosen properly to avoid undersampling. Bilinear planes have been selected for approximating the spatial correlation of topography and motion. The functional relation between an observed phase value in an arbitrary position \\(\\phi^{i,j}\\) and an unknown value in a node of the corresponding bilinear raster \\(\\phi^{k,l}\\) is given by
\\[\\phi^{i,j} = \\phi^{k,l}+(\\phi^{k+1,l}-\\phi^{k,l})dr+(\\phi^{k,l+1}-\\phi^{k,l}) dc+ \\tag{7}\\] \\[(\\phi^{k+1,l+1}-\\phi^{k+1,l}-\\phi^{k,l+1}+\\phi^{k,l})drdc\\]
where \\(dr=\\phi^{i}-\\phi^{k}\\) and \\(dc=\\phi^{j}-\\phi^{l}\\).
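The mapping of Equation (7) is algebraically equivalent to standard bilinear interpolation weights on the four surrounding grid nodes, which can be sketched as:

```python
import numpy as np

def bilinear_weights(dr, dc):
    """Weights of Eq. (7) mapping the four grid-node unknowns onto an
    observation at fractional offsets (dr, dc) inside one mesh.

    Returned order: nodes (k,l), (k+1,l), (k,l+1), (k+1,l+1).
    """
    return np.array([(1 - dr) * (1 - dc),   # phi^{k,l}
                     dr * (1 - dc),         # phi^{k+1,l}
                     (1 - dr) * dc,         # phi^{k,l+1}
                     dr * dc])              # phi^{k+1,l+1}
```

Expanding these weights reproduces Equation (7) term by term; they sum to one, so a constant field is interpolated exactly.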
Although using the proposed models allows to reduce the number of unknowns, the equation system is still underdetermined. This is due to the un-modeled atmospheric artifacts and the unknown penetration depth. In (Hanssen, 2001) a stochastic model for approximating the influence of the atmosphere on SAR interferograms is proposed, which is based on the spatial correlation of the atmospheric signal. As the atmosphere in the arctic area cannot hold much water vapor and is usually characterized by a stable stratification, atmospheric effects are neglected in this study. The penetration depth of C-band SAR signals into firn and ice was studied in detail in (Hoen, 2001). The maximal penetration depth into dry snow is shown to be up to 30 m. In this paper, the penetration depth is considered constant in time. The influence of a constant penetration depth is absorbed into the topography component.
### 2.4 Stochastic model
Weighting of observations is done by considering the coherency of the observed phase values. The probability density function (PDF) of the interferometric phase for each resolution cell is calculated from the coherency using the theory described in (Bamler and Hartl, 1998) and (Lee et al., 1994). The variances of the observed phase values are derived from the PDF by

\\[\\sigma_{\\phi}^{2}=\\int\\limits_{-\\pi}^{\\pi}(\\phi-\\phi_{0})^{2}\\,PDF(\\phi)\\,d\\phi \\tag{8}\\]

with \\(\\phi_{0}\\) being the expectation value of the interferometric phase. From the variances the weight matrix \\(P_{bb}=\\mathrm{diag}(1/\\sigma_{\\phi,i}^{2})\\) is set up. The stochastic model derived in this way implicitly accounts for noise introduced by temporal, thermal and geometric decorrelation as well as errors originating from imperfect interpolation and co-registration procedures.
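A possible numerical evaluation of Equation (8) uses the single-look phase PDF of (Lee et al., 1994); the grid size and the zero expectation value below are illustrative choices:

```python
import numpy as np

def phase_std(gamma, phi0=0.0, n=4096):
    """Standard deviation of the single-look interferometric phase for a
    given coherence gamma (0 <= gamma < 1), obtained by numerically
    integrating the phase PDF over [-pi, pi) as in Eq. (8)."""
    phi = phi0 + np.linspace(-np.pi, np.pi, n, endpoint=False)
    beta = gamma * np.cos(phi - phi0)
    # single-look phase PDF (Lee et al., 1994; Bamler and Hartl, 1998)
    pdf = (1.0 - gamma**2) / (2.0 * np.pi) / (1.0 - beta**2) \
          * (1.0 + beta * np.arccos(-beta) / np.sqrt(1.0 - beta**2))
    dphi = 2.0 * np.pi / n
    return np.sqrt(np.sum((phi - phi0) ** 2 * pdf) * dphi)
```

For \\(\\gamma=0\\) the PDF is uniform and the standard deviation approaches \\(\\pi/\\sqrt{3}\\approx 1.81\\) rad; with increasing coherence the weights \\(1/\\sigma_{\\phi}^{2}\\) grow accordingly.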
Additional variance and covariance values are usually introduced by orbit errors and atmospheric effects. These additional error sources are neglected in this study for the following two reasons: (1) Orbit errors have been significantly reduced using control point information. (2) Atmospheric effects showed to be small in polar regions, as the cold polar atmosphere appears to be very stable and cannot hold much water vapor. A closer look at atmospheric effects is included in Section 3.
## 3 Analysis of the approach
### 3.1 Accuracy analysis
The diagonal of \\(Q_{\\hat{x}\\hat{x}}\\), which contains information about the variances of the adjusted parameters, is used to derive theoretical standard deviations of topography \\(\\sigma_{topo}\\) and surface motion \\(\\sigma_{disp}\\). For analyzing the accuracy of the approach, simulated data sets have been generated on the basis of existing DEM and velocity maps. The coherency estimates of each interferogram account for the local surface slope. The dependency of \\(\\sigma_{topo}\\) and \\(\\sigma_{disp}\\) on the number of independent data sets is shown in Figure 1.
The improvement, which can be attained if more than two interferograms are combined for estimating the unknown parameters, is clearly visible.
Figure 2 shows how the standard deviations of the estimated topography and displacement parameters depend on the observation geometry, which is mainly a function of the interferometric baseline \\(B\\). The presented results are calculated on the basis of 3 simulated interferograms. In 256 simulation runs, the baselines of the interferograms 1 and 2 are varied from 0 m to 400 m each. The baseline of the third interferogram is fixed at 200 m. In general, the standard deviations of both topography and motion show a distinct dependency on the baseline ratio of the involved interferograms. Baselines of similar length result in a weak configuration of the adjustment model and, finally, in increased values for \\(\\sigma_{topo}\\) and \\(\\sigma_{disp}\\). In case of identical baselines the adjustment gets singular (this case is indicated by the cross in Figure 2). If the baselines of the 3 interferograms are well distributed, the topographic height may be estimated with an accuracy of \\(\\sigma_{topo}\\approx 3\\) m and the surface motion with a standard deviation of \\(\\sigma_{disp}\\approx 1-2\\) mm/day. A more detailed inspection of Figure 2 shows, however, that topography and displacement uncertainties are not minimized by the same measurement setup: \\(\\sigma_{topo}\\) decreases for well-distributed long baselines, whereas optimal accuracies of the surface motion arise if all baselines are short.
### 3.2 Sensitivity regarding model errors
The interferometric model presented in Equation (5) has been simplified by neglecting the influences of atmospheric effects and penetration depth. If these non-modeled influences are significant, model errors are introduced resulting in a systematic falsification of the estimated unknowns.
#### 3.2.1 Atmospheric effects
Because of the relative character of an interferogram, the theoretical expectation value of atmospheric effects will be zero for an arbitrary pixel. However, the variance of the signal might be significant depending on the respective weather conditions. If enough observations are combined, the empirical expectation value, which is estimated from the data, converges to the theoretical value. Thus, for a large number of observations, atmospheric effects will cancel out. The effect of non-modeled atmospheric influences on the estimated unknowns has been calculated for varying observation configurations. Atmospheric phase screens have been simulated based on a method presented in (Hanssen, 2001), considering the effect of the polar atmosphere on the interferometric phase as described in (Gray et al., 1997). Figure 3 shows the effect of the polar atmosphere on the unknown topography \\(h\\) and motion \\(v\\) as a function of the number of multi-temporal data sets.
The solid lines in Figure 3 show the systematic error of the estimated topography (Figure 3_a_)) and motion (Figure 3_b_)). For a low number of data sets, the topography may be falsified by up to \\(\\Delta h=10\\) m, the estimated motion by up to \\(\\Delta v=0.4\\) cm/day. With an increasing number of data sets, \\(\\Delta h\\) and \\(\\Delta v\\) converge to zero, as expected. For investigating whether \\(\\Delta h\\) and \\(\\Delta v\\) differ significantly from zero, a significance test is performed. The dashed lines in Figure 3 represent the upper acceptance limit for the null hypothesis. Values lying above the dashed line are significant, values below insignificant. Figure 3 shows that in arctic regions systematic errors of the adjusted unknowns due to atmospheric effects are insignificant for all tested configurations.
#### 3.2.2 Penetration depth
The penetration depth \\(d\\) into the glacier surface depends mainly on its physical properties. As presented in (Hoen, 2001), C-band signals penetrate up to 27\\(\\pm\\)5 m into cold firn. The impact \\(\\phi_{pd}\\) on the interferometric phase increases with the interferometric baseline. Figure 4 shows the dependence of \\(\\phi_{pd}\\) on \\(d\\) and \\(B\\). A time-independent \\(d\\) results in a
Figure 1: Theoretical standard deviations \\(\\sigma_{topo}\\) (black) and \\(\\sigma_{disp}\\) (gray) dependent on the number of data sets.
Figure 3: Effect of non-modeled atmospheric effects on the estimated unknowns topography (_a_)) and motion (_b_)) for polar regions.
Figure 2: _a_) Mean standard deviation of topography (m) and _b_) mean standard deviation of displacement parameters (mm/day) as a function of baseline constellation. The effective baseline length of \\(B_{3}\\) is set to 200 m.
systematic under-estimation of the glacier topography. The topographic height error is given by
\\[\\Delta h_{pd}=\\frac{\\lambda r\\sin(\\theta)}{4\\pi B_{\\perp}}\\phi_{pd} \\tag{9}\\]
Considering a typical baseline of B=100 m, a height error \\(\\Delta h_{pd}\\) of up to 100 m might occur.
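Equation (9) can be evaluated directly. The defaults for slant range, incidence angle and wavelength below are typical ERS values assumed for illustration only:

```python
import numpy as np

def height_bias(phi_pd, B_perp, theta=np.deg2rad(23.0),
                r=850e3, wavelength=0.0566):
    """Topographic height error Delta h_pd caused by the penetration-depth
    phase phi_pd [rad], following Eq. (9). B_perp in meters."""
    return wavelength * r * np.sin(theta) / (4.0 * np.pi * B_perp) * phi_pd
```

With these assumed values and \\(B_{\\perp}=100\\) m, one radian of penetration-depth phase maps to roughly 15 m of height bias, consistent with the order of magnitude quoted above.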
#### 3.2.3 Improperly chosen temporal model
An improperly chosen temporal model may also introduce systematic errors. In common dInSAR approaches the flow of a glacier is usually considered as constant in amplitude and direction for the complete observation period. Errors due to violations of this simplified flow model are analyzed based on simulated data sets. The topography related phase is simulated based on a DEM of a test glacier (Sonklar glacier, Hall-Island, Franz-Josef Land). The motion part of the phase increases linearly with time and is derived from Equation (6). The surface velocity field at time \\(t=0\\) is simulated based on the known velocity field of the same test glacier. Tests with varying acceleration rates showed that the resulting systematic errors are significant for all test scenarios and for both, topography and motion. Increasing the number of data sets allows to significantly reduce the topography error \\(\\Delta h\\), but motion errors appear nearly unchanged.
## 4 Validation
In this section the presented estimation method is validated based on multi-temporal data sets of a test area in the high Russian arctic. Hall Island is one of the largest islands of the Russian archipelago Franz-Josef Land. Approximately situated at 80\\({}^{\\circ}\\)10\\({}^{\\prime}\\) N and 58\\({}^{\\circ}\\)05\\({}^{\\prime}\\) E, its variety of topographical and glaciological forms as well as drastic changes of its environment attracts the attention of glaciologists. An impression of Hall Island is given in Figure 5.
Approximately 90 % of the island is covered by ice (cf. the marked areas in Figure 5). Nearly 50 % of the shoreline is dominated by the calving ice fronts of the large outlet glaciers. Some information about the most important outlet glaciers on Hall Island is included in Table 1. The numbers in the first column of Table 1 correspond to the numbering in Figure 5.
### 4.1 Data sets
During the tandem- and ice-phase mission of the ERS1/2 satellite system several data sets were acquired over the area of interest. 6 ERS SAR SLC's recorded during the ERS1/2 tandem mission were chosen for further processing. Meteorological data has been included in the selection process to warrant comparable atmospheric conditions for all observations. All images are acquired in 1995 and cover the period from summer until winter of the mentioned year. The properties of the data sets are shown in Table 2.
The interferometric processing of the complex SAR images has been accomplished with the _Doris v3.6_ software developed at the _Delft Institute for Earth-oriented Space Research_ of the Technical University Delft, NL. The phase and coherency images of a subset of the September interferogram covering Hall Island are shown in Figure 6. Areas with large displacement-related phase components are marked with white frames. The framed regions correspond to the catchment areas of the outlet glaciers Sonklar, Nr. 7, Nr. 8, Nr. 12 and Nr. 17 (consult Table 1 and Figure 5).
In order to utilize the data for usage in the estimation process, some preprocessing steps are necessary. The most important step is the unwrapping of the initially ambiguous interferometric phase values. In addition the data sets are co-registered and phase ramps originating from orbit errors are removed based on control point information.
\\begin{table}
\\begin{tabular}{|c|c|c|c|} \\hline
**Number** & **Name** & **Area [\\(km^{2}\\)]** & **Length [\\(km\\)]** \\\\ \\hline \\hline
1 & Sonklar & 54.0 & 9.0 \\\\
2 & Nr. 2 & 25.2 & 5.5 \\\\
3 & Nr. 3 & 6.0 & 2.6 \\\\
4 & Nr. 4 & 13.3 & 4.5 \\\\
5 & Nr. 7 & 47.1 & 9.8 \\\\
6 & Nr. 8 & 67.0 & 6.1 \\\\
7 & Nr. 12 & 56.7 & 8.5 \\\\
8 & Nr. 16 & 23.1 & 7.7 \\\\
9 & Nr. 17 & 33.6 & 5.8 \\\\
10 & Moscow & 504.7 & 30.0 \\\\ \\hline
\\end{tabular}
\\end{table}
Table 1: Information about the largest outlet glaciers on Hall Island according to (Vinogradov and Pasyyova, 1965).
Figure 4: Effect of penetration depth on the interferometric phase [rad].
Figure 5: KATE-200 image of Hall Island, Franz-Josef Land superimposed by a coarse information about its coastline and glacier margins taken from (Vinogradov and Pasyyova, 1965).
Figure 6: Phase image (\\(a\\))) and coherency image (\\(b\\))) of the September interferogram of Hall Island. The baseline of the interferogram is \\(\\approx 50\\) m.
### 4.2 Estimation results
The preprocessed data sets form the vector of observations \\(b\\) in the estimation approach. Within the adjustment, the unknown topography and motion parameters are estimated in the nodes of the chosen spatial model (see Section 2.3.3). To reduce the computational load, the area of interest is separated into 14\\(\\times\\)14 tiles, which are evaluated separately and re-merged afterwards. Finally, topography and motion values in all resolution cells \\((i,j)\\) are interpolated from the estimated unknowns in the bilinear grid based on the mapping function of the spatial model. Thus, an area-wide DSM and velocity field are available after the process.
#### 4.2.1 Digital surface model of Hall Island
Figure 7 shows the DSM of Hall Island derived from the interferometric phase using the proposed method.
The topographic height values shown in Figure 7 are referenced to the WGS84 ellipsoid and vary between 0 m and 500 m. The topography gradient is small in the glaciated terrain. Rough terrain only appears in the mountainous regions in the south-western and eastern parts of the island.
Standard deviations for the height values are extracted from the covariance matrix \\(\\hat{K}_{\\hat{x}\\hat{x}}\\), which is estimated within the adjustment. \\(\\hat{K}_{\\hat{x}\\hat{x}}\\) is defined by
\\[\\hat{K}_{\\hat{x}\\hat{x}}=\\hat{\\sigma}_{0}^{2}Q_{\\hat{x}\\hat{x}}=\\hat{\\sigma}_{0}^{2}\\left(A^{T}P_{bb}A\\right)^{-1}\\text{ with }\\hat{\\sigma}_{0}^{2}=\\frac{\\hat{\\varepsilon}^{T}P_{bb}\\hat{\\varepsilon}}{n-u} \\tag{10}\\]
Equation (10) shows that the standard deviations of the adjusted unknowns are a function of the a priori defined functional and stochastic model as well as of the variance factor \\(\\hat{\\sigma}_{0}^{2}\\), which is estimated within the adjustment. \\(\\hat{\\sigma}_{0}^{2}\\) may be interpreted as a link between the implemented models and the real data. Its value indicates how well the chosen functional and stochastic models fit the measured data sets. In the adjustment, the a priori value of \\(\\sigma_{0}^{2}\\) is set to 1. Significant deviations of \\(\\hat{\\sigma}_{0}^{2}\\) from 1 indicate errors in the models or gross errors in the data. A separate variance factor is estimated for each of the 14\\(\\times\\)14 tiles.
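The residual-based variance factor of Equation (10) can be sketched as follows; names and the diagonal weighting are illustrative assumptions:

```python
import numpy as np

def variance_factor(A, b, x_hat, sigma_b):
    """A posteriori variance factor sigma_0_hat^2 of Eq. (10).

    Values far from the a priori value 1 flag model errors or gross
    observation errors, as discussed in the text.
    """
    eps = A @ x_hat - b                           # estimated residuals
    P = np.diag(1.0 / np.asarray(sigma_b) ** 2)   # weight matrix P_bb
    n, u = A.shape
    return float(eps @ P @ eps) / (n - u)         # redundancy n - u
```

Scaling the cofactor matrix \\(Q_{\\hat{x}\\hat{x}}\\) by this factor yields the covariance matrix \\(\\hat{K}_{\\hat{x}\\hat{x}}\\) used for the accuracy maps.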
The standard deviations of the adjusted topographic heights are presented in Figure 8, split into the theoretical standard deviations (diagonal elements of matrix \\(Q_{\\hat{x}\\hat{x}}\\)) and the estimated variance factors for each of the 14\\(\\times\\)14 tiles. This split representation entails several advantages. The theoretical standard deviations offer an insight into the properties of the adjustment's configuration (compare Figure 8_a_)). They illustrate the spatial distribution of the achievable height accuracy assuming a precise functional and stochastic model. Figure 8_a_) shows that the design of the adjustment allows to estimate topography with high accuracy all over the island. The estimated variance factors presented in Figure 8_b_) indicate in which parts of the island the measured data sets are sufficiently described by the implemented models. Low values of \\(\\hat{\\sigma}_{0}^{2}\\), appearing in large parts of the Moscow Ice Dome, indicate good agreement between model and data. Large values, which are visible in mountainous areas and in the catchments of the glaciers Sonklar, Nr. 7 and Nr. 8, indicate model errors. In the mountainous areas these errors are due to phase unwrapping problems. In the glacier catchments these errors are caused by an insufficient flow model.
The real standard deviations of the estimated topographic heights (diagonal of \\(\\hat{K}_{\\hat{x}\\hat{x}}\\)) reach \\(\\sigma_{h}\\approx 1\\) m to \\(\\sigma_{h}\\approx 2\\) m in glaciated terrain and lie between \\(\\sigma_{h}\\approx 2\\) m and \\(\\sigma_{h}\\approx 8\\) m in mountainous areas. In glaciated regions the standard deviations are in the range of the theoretical values estimated in Section 3 on the basis of theoretical data. Due to model errors the standard deviations in mountainous terrain are larger than the simulated ones.
#### 4.2.2 Velocity field of Moscow Ice Dome
To the best of the authors' knowledge, no detailed velocity map of the glaciers on Hall Island has been published to date. Thus, the results of this study provide new insight into the rheology and the physical properties of the ice masses on Hall Island. The estimates for the line-of-sight velocity components of the Moscow Ice Dome are presented in Figure 9.
The surface velocity is given in m/a. As expected, mainly the glaciers Sonklar, Nr. 7, Nr. 8, Nr. 12, Nr. 16 and Nr. 17 are reflected in the interferometric phase, because their flow direction is approximately aligned with the sensor's line of sight. In addition, the movement of glacier Nr. 4 is visible. The transition from stable to moving ice is smooth for all glaciers. The absolute value of the line-of-sight velocity of the Moscow Ice Dome varies from 0 m/a in ice-free areas and in the center of the island up to 43 m/a near the front of some of the outlet glaciers. The velocity of all glaciers increases from the center of the island towards the glacier terminus. The frontal parts of the largest outlet glaciers suffer from strong temporal decorrelation; thus, glacier velocities could not be processed in the frontal parts of some glaciers.
Figure 8: _a_) Theoretical standard deviations of the estimated DSM [m]. _b_) Adjusted variance factors for Hall Island. One variance factor is estimated for each of the 14\\(\\times\\)14 tiles.
Figure 7: DSM of Hall Island derived from the interferometric phase. The black arrow indicates the viewing direction of the sensor. The positions of the largest outlet glaciers are indicated.
Figure 9: Velocity field of Hall Island derived from the interferometric phase. The black arrow indicates the viewing direction of the sensor. The positions of the largest outlet glaciers are indicated.
The standard deviations of the line-of-sight velocity field are shown in Figure 10. The parameters are again separated into two parts, the theoretical standard deviations (Figure 10_a_)) and the a posteriori variance factors (Figure 10_b_)). Figures 8_b_) and 10_b_) are identical; nevertheless, the parameters are presented for the sake of completeness. The distribution of the theoretical standard deviations of the estimated velocities differs from the structure of the corresponding topography values. This is because velocity estimates are mainly determined by interferograms with short baselines, whereas topography is especially influenced by interferograms with long baselines.
### Interpretation of the residuals
During the estimation process several gross errors and model errors may occur that differ in origin and effect. Errors during SAR data acquisition, processing and phase unwrapping, wrongly determined stochastic properties, and insufficient functional relations are the most prominent. Hence, the development of a reliable estimation method that allows gross errors in the data to be revealed is one of the most important goals of system design. The properties of the presented method regarding robustness and reliability are analyzed based on several indicators. All of them are based on the equation
\\[\\Delta\\hat{\\varepsilon}=-\\left(Q_{bb}-A(A^{T}P_{bb}A)^{-1}A^{T}\\right)P_{bb} \\Delta b=-\\Upsilon\\Delta b \\tag{11}\\]
that describes how gross errors in the observations and model errors \\(\\Delta b\\) are reflected in the vector of adjusted residuals \\(\\hat{\\varepsilon}\\). The matrix \\(\\Upsilon\\) that maps \\(\\Delta b\\) onto the vector of adjusted residuals is presented in Figure 11. The structure of matrix \\(\\Upsilon\\) has some convenient properties. The diagonal elements of \\(\\Upsilon\\) are close to unity; thus gross errors have a strong impact on \\(\\hat{\\varepsilon}\\) and are therefore easily detectable. The off-diagonal elements are small. Hence, an error in observation \\(i\\) mainly affects its associated residual, and errors are not dispersed across the residual vector.
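For a small example, \\(\\Upsilon\\) can be formed directly from the design matrix \\(A\\) and weight matrix \\(P_{bb}\\) of Equation (11). The sketch below assumes a full-rank design matrix and an invertible weight matrix; the function name is ours.

```python
import numpy as np

def influence_matrix(A, P):
    """Upsilon = (Q_bb - A (A^T P A)^{-1} A^T) P with Q_bb = P^{-1};
    it maps observation errors Delta b onto the adjusted residuals
    via Delta eps = -Upsilon @ Delta b."""
    Q_bb = np.linalg.inv(P)
    N_inv = np.linalg.inv(A.T @ P @ A)
    return (Q_bb - A @ N_inv @ A.T) @ P
```

The diagonal elements of \\(\\Upsilon\\) are the redundancy numbers; their sum equals the total redundancy \\(n-u\\), and values close to unity are exactly what makes gross errors easy to detect in the residuals.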
Because of these properties of the approach, the vector of residuals \\(\\hat{\\varepsilon}\\) can be used to analyze gross errors in the data and the models. An analysis of \\(\\hat{\\varepsilon}\\) during the estimation process indicates several error sources:
* Phase unwrapping errors, mainly in mountainous regions
* Non-modeled changes of the glacier topography in small isolated areas
* Errors due to an insufficient flow model in the catchment of Sonklar Glacier
* Low frequency phase variations due to atmospheric effects
## 5 Summary
The presented approach allows an improved separation of topography- and displacement-related contributions to the interferometric phase by combining multi-temporal SAR interferograms in a least squares adjustment. The interpretability of the adjusted parameters is significantly increased by a systematic model-based quantification of all influences on the interferometric signal. The capability of the method to improve the accuracy of topography and displacement estimates, as well as its ability to reveal gross errors in the observations, has been demonstrated. A brief analysis of possible error sources has been presented. A validation using real data from an island in the Russian Arctic confirms the approach.
Figure 11: Structure of matrix \\(\\Upsilon\\) for a subset of the area of interest consisting of 10\\(\\times\\)10 pixel size.
Figure 10: _a_) Theoretical standard deviations of the estimated velocity field [m/a]. _b_) Adjusted variance factors for Hall Island. One variance factor is estimated for each of the 14\\(\\times\\)14 tiles.
In glaciated regions the real standard deviations (diagonal of \\(\\hat{K}_{\\hat{x}\\hat{x}}\\)) of the velocity estimates vary between 0.1 m/a and 0.7 m/a and are in the range of the theoretical values estimated in Section 3. Due to model errors the standard deviations in mountainous terrain are larger than the simulated ones.
SAR, Interferometer, Adjustment, Algorithms, Radar, Modelling, Multitemporal, Environment
# Learning Geocentric Object Pose in Oblique Monocular Images
Gordon Christie\\({}^{1,*}\\) Rodrigo Rene Rai Munoz Abujder\\({}^{1,*}\\) Kevin Foster\\({}^{1}\\) Shea Hagstrom\\({}^{1}\\)
Gregory D. Hager\\({}^{2}\\) Myron Z. Brown\\({}^{1}\\)
\\({}^{1}\\)The Johns Hopkins University Applied Physics Laboratory \\({}^{2}\\)The Johns Hopkins University
{gordon.christie,rai.munoz,kevin.foster, shea.hagstrom, myron.brown}@jhuapl.edu
[email protected]
## 1 Introduction
In this paper, we study the problem of rectifying oblique monocular images from overhead cameras to remove observed object parallax with respect to ground, enabling accurate object localization for Earth observation tasks including semantic mapping [6], map alignment [32, 3], change detection [7], and vision-aided navigation [11]. Current state-of-the-art methods for these tasks focus on near-nadir images without the confounding effect of parallax; however, the vast majority of overhead imagery is oblique. For response to natural disasters and other dynamic world events, often only oblique images can be made available in a timely manner. The ability to rectify oblique monocular images to remove parallax will enable a dramatic increase in utility of these methods to address real-world problems.
To address this very challenging problem, we first draw inspiration from Gupta et al. [12] who proposed geocentric pose, or height above ground and orientation with respect to gravity, as a powerful representation to impose real-world structure on object detection, segmentation, and localization tasks using RGBD images. Hand-crafted features based on this idea have even featured prominently in state-of-the-art deep learning methods for indoor semantic segmentation [13, 20, 14, 4, 24, 18, 22, 30]. For close-range vision tasks, height and orientation have been derived directly from stereo-computed depth and more recently from monocular depth predicted by deep networks [17]. For long-range vision tasks such as Earth observation, depth cannot be reliably estimated with monocular images, so we further draw inspiration from recent work in monocular height above ground prediction [26, 21, 10, 1, 2, 16, 15, 33] and optical flow prediction from static images [23, 29, 28, 9]. We develop an encoding of geocentric pose and train a deep network to compute the representation densely. Our model jointly learns to predict height above ground and dense flow vectors mapping surface features to ground level. Figure 1 illustrates the use of our method to rectify overhead images taken from very different oblique viewpoints and then align the rectified height images - for this example, by affine homography. Height and flow for this example were derived from lidar, but example predictions from our model are shown in Section 4.3. While our experimental results are demonstrated for satellite images, we believe our method can also be successfully applied to airborne cameras and even ground-based cameras.
Figure 1: Our method takes monocular RGB images, predicts object height (meters) and geocentric pose, and rectifies height to geospatially accurate 3D models suitable for reliable alignment by a conventional method.
Our contributions are summarized as follows:
* We adopt geocentric pose as a general representation for geometry in oblique monocular images and are the first to report the following: 1) a method to supervise its learning, and 2) a method for prediction without reliance on depth estimates which cannot be reliably determined from monocular images at longer ranges.
* We extend the Urban Semantic 3D (US3D) dataset [2] to include labels for the geocentric pose task, enabling public research and comparative analysis of methods. We further extend US3D to include additional images with a wide range of oblique viewing angles from the SpaceNet 4 (SN4) contest [31] to enable more comprehensive parametric evaluation of this task.
* We demonstrate that our model designed to jointly learn height and orientation performs better than a model trained for each task independently, and increases efficiency through shared weights. We further demonstrate the need for rotation augmentations to overcome bias from severely limited viewpoint diversity due to sun-synchronous satellite orbits.
* We demonstrate the efficacy of our method for image rectification to improve intersection over union (IoU) scores for semantic segmentation with oblique images.
* All of our data and code are publicly available.
## 2 Related Work
Our approach draws inspiration from a large body of work exploiting object height and orientation with respect to ground to improve semantic segmentation and related tasks for RGBD images. Our encoding of this representation in a deep network is inspired by recent progress in predicting height above ground from single images and predicting optical flow from static images. Before introducing the details of our method, we review these motivating works.
### Geocentric Pose
Gupta et al. [12] proposed geocentric pose - height and orientation with respect to ground - as a general feature for object recognition and scene classification. Gupta et al. [13] further proposed to encode horizontal disparity (or depth), height above ground, and orientation with respect to gravity as the popular three-channel HHA representation and demonstrated significant performance improvements for object detection, instance segmentation, and semantic segmentation tasks. Hand-crafted HHA features have since featured prominently even in deep learning state-of-the-art methods for indoor semantic segmentation [20, 14, 4, 24, 18, 22, 30] as well as object detection [20, 25] and semantic scene completion [19]. All of these works involve close-range indoor vision tasks and derive geocentric pose from depth, with height above ground approximated relative to the lowest point in an image [12]. In our work, we learn to predict these attributes directly in complex outdoor environments based on appearance without depth which is difficult to estimate reliably from images captured at long range. We also accurately predict absolute height above ground from monocular images. This is necessary for accurately rectifying the images, removing observed object parallax to improve accuracy of localization and enable accurate alignment of multiple images taken from very different oblique viewpoints.
### Monocular Height Prediction
The successes of deep learning methods for monocular depth prediction [17] have motivated recent work to directly learn to predict height from appearance in a single image. The earliest work to our knowledge was conducted by Srivastava et al. (2017) who proposed a multi-task convolutional neural network (CNN) for joint height estimation and semantic segmentation of monocular aerial images [26]. Mou and Zhu (2018) also proposed a CNN for height estimation and demonstrated its use for instance segmentation of buildings [21]. Each of these early works was evaluated using a single overhead image mosaic from a single city. Ghamisi and Yokoya (2018) proposed a conditional generative adversarial network (cGAN) for image to height translation and reported results with a single image from each of three cities [10]. Amirkolaee and Arefi (2019) proposed a CNN trained with post-earthquake lidar and demonstrated its use to detect collapsed buildings by comparing model predictions for pre- and post-event images [1]. To promote research with larger-scale supervision, Bosch et al. (2019) produced the Urban Semantic 3D (US3D) dataset which includes sixty-nine satellite images over Jacksonville, FL and Omaha, NE, each covering approximately one hundred square kilometers [2]. Le Saux et al. (2019) leveraged this dataset to conduct the 2019 Data Fusion Contest focused on semantic 3D reconstruction, including a novel challenge track for single-view semantic 3D [16]. The winning solutions by Kunwar [15] and Zheng et al. [33] both exploited semantic labels as priors for height prediction. In this work, we demonstrate comparable accuracy without semantic priors. We also show improved height prediction accuracy by jointly learning to predict orientation flow vectors. 
In addition to our experiments, we leverage and extend the US3D dataset using public satellite images from the 2018 SpaceNet 4 (SN4) contest that span a wide range of viewing angles over Atlanta, GA [31], and we demonstrate that our method to predict geocentric pose significantly improves building segmentation accuracy for oblique images.
### Optical Flow Prediction from a Static Image
Our approach to learning geocentric pose is inspired by recently demonstrated methods to predict dense optical flow fields from static images with self-supervision from optical flow methods applied to videos. Pintea et al. (2014) proposed regression of dense optical flow fields from static images using structured random forests [23]. Walker (2015) proposed a CNN for ordinal regression to better generalize over diverse domains [29]. Walker et al. (2016) proposed a generative model using a variational auto-encoder (VAE) for learning motion trajectories from static images [28]. Gao et al. (2018) also explored a generative model using a cGAN but reported state-of-the-art results for optical flow prediction and action recognition with their Im2Flow regression model, a modified U-Net CNN encoder/decoder trained by minimizing both a pixel L2 loss and a motion content loss derived from a separate action recognition network that regularizes the regression network to produce realistic motion patterns [9]. To learn geocentric pose, we employ a similar U-Net architecture and demonstrate improved performance by jointly learning to predict height. We also highlight orientation bias for our task by performing rotation augmentations during training. We produce reference flow fields for supervision automatically using lidar as discussed in Section 3.3.
## 3 Learning Geocentric Pose
### Representation
Our representation of geocentric pose encodes height above ground and flow vectors that map surface features to ground level. A satellite pushbroom sensor model is well-approximated locally by affine projection which preserves the invariant property of parallelism [5]. We exploit this property in representing flow fields with pixel-level magnitudes and image-level orientation. Similar to [9], we represent orientation (\\(\\theta\\)) as a two-element vector, [\\(\\sin(\\theta)\\), \\(\\cos(\\theta)\\)], representing the horizontal and vertical components of the flow vectors. We observe that each feature's height above ground is intrinsic and the magnitude of its flow vector is related to that height by each image's projection. We thus employ height as a prior in our model for learning magnitude.
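Under this encoding, a dense flow field is recovered by scaling a single image-level direction by the per-pixel magnitudes. A minimal numpy sketch of that composition (the function name and the horizontal-then-vertical channel order are our assumptions):

```python
import numpy as np

def flow_from_geocentric_pose(mag, theta):
    """Compose a dense H x W x 2 flow field from per-pixel magnitudes
    and one image-level orientation theta (radians). Affine projection
    makes all flow vectors parallel, so a single angle serves the image."""
    direction = np.array([np.sin(theta), np.cos(theta)])  # horizontal, vertical
    return mag[..., None] * direction
```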
### Model
Our model, illustrated in Figure 2, jointly predicts image-level orientation, as well as dense above-ground-level heights and flow vector magnitudes. The base architecture utilizes a U-Net decoder with a ResNet34 encoder. At the last layer of the encoder, the image-level orientation is predicted as \\(\\sin(\\theta)\\) and \\(\\cos(\\theta)\\). The output of the decoder is used to predict heights, which are concatenated with the decoder output for predicting magnitudes. MSE is used for all output heads (image-level orientation, magnitude, and height), where each loss is weighted equally during training. At test time, flow vectors can be calculated by multiplying the predictions of image-level orientation and per-pixel magnitudes. We present an ablation study where height prediction is removed from the model to show its importance for learning to predict orientation and magnitude. Height is intrinsic to objects in the image, where pixels representing the same physical locations on a building in different images should have the same heights. However, magnitudes for these pixels will vary with changes to viewing geometry. We believe the intrinsic properties of height provide valuable context for predicting magnitude. We also show that the accuracy of our height predictions is comparable to state-of-the-art solutions for a public challenge dataset, and note that our network shares weights for multiple tasks, making it more efficient than having separate networks for each task.
### Supervision
To enable supervised learning of our model, we have developed a pipeline for producing non-overlapping overhead RGB image tiles with lidar-derived attributes projected into each oblique image pixel, as illustrated in Figure 3. We utilized this pipeline to produce training and test datasets for our task, augmenting public data from US3D [2] and SN4 [31]. For each geographic tile, we first align each overhead image with lidar intensity using the mutual information metric and update the image translation terms in the RPC camera metadata [5]. To improve reliability of image matching, we cast shadows in each lidar intensity image using solar angle image metadata to match the shadows observed in the RGB image. Layers produced include UTM geographic coordinates, ground-level height from the Digital Terrain Model (DTM), surface-level height from the Digital Surface Model (DSM), height above ground computed from the difference of the DSM and DTM, the shadow mask produced for image matching, and image flow vectors mapping surface-level feature pixels to their ground-level pixel coordinates. Our representation of geocentric pose is composed of height above ground and orientation with respect to ground as defined by the dense flow vectors. Both rely on knowledge of ground level in the DTM. For the lidar data used in our experiments, DTM layers were produced by professional surveyors with manual edits, but automated methods for ground classification in lidar and even in DSMs produced using satellite images also work well [8].
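The height-above-ground layer is the per-pixel difference of the DSM and DTM. A sketch of that step (the nodata sentinel value and the clipping of negative differences to zero are our own assumptions, not details of the pipeline):

```python
import numpy as np

def height_above_ground(dsm, dtm, nodata=-9999.0):
    """AGL height = DSM - DTM; nodata pixels are propagated and
    small negative differences are clipped to zero."""
    agl = np.where((dsm == nodata) | (dtm == nodata), nodata, dsm - dtm)
    return np.where(agl == nodata, nodata, np.clip(agl, 0.0, None))
```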
For our experiments, we also employ semantic labels derived from public map data to demonstrate the value of our model for rectifying map features in oblique images. We project this map data into each image with the same procedure used for lidar attributes. Layers include semantic label for each pixel and ground-level footprints for buildings. Building facades are labeled separately from roofs.
## 4 Experiments
### Datasets
For our experiments, we extended two publicly-available datasets - US3D [2] and SN4 [31] - using the method described in Section 3.3 and illustrated in Figure 3. We train with the full resolution for each dataset.
* DFC19. We use the same 2,783 training images and 50 testing images of Jacksonville, FL and Omaha, NE from US3D used for the 2019 Data Fusion Contest [16]. We also use an extended test set with 300 images including more view diversity for the same geographic tiles. Images are each 2048x2048 pixels.
* ATL-SN4. We produced 25,500 training images and 17,554 testing images of Atlanta, GA using public un-rectified source images to closely match the rectified image tiles used for SN4, as shown in Figure 4. We used 7,702 training images and 310 testing images, cropped to 1024x1024 pixels, for our experiments.
Viewpoint diversity and pixel resolution for images in the DFC19 and ATL-SN4 datasets are shown in Figure 5. Jacksonville and Omaha images were collected by MAXAR's WorldView-3 satellite on multiple dates with a variety of azimuth angles. ATL-SN4 images were collected by MAXAR's WorldView-2 satellite during a single orbit with very limited azimuth diversity and a wide range of off-nadir angles. Together, these datasets enable thorough evaluation.
Figure 3: Lidar-derived attributes for each RGB image include height above ground, geocentric pose flow vectors, and shadow masks. Map attributes include semantic labels, building facades, and ground-level building footprints.
Figure 2: This shows the architecture of our full approach, which uses a U-Net decoder with a ResNet34 encoder. At the last layer of the encoder, we predict the image-level orientation as \\(\\sin(\\theta)\\) and \\(\\cos(\\theta)\\). At the output of the decoder, we predict per-pixel above-ground-level height values, which are concatenated with the decoder's output and used to predict per-pixel magnitudes. An MSE loss is used for all output heads. At test time, flow vectors can be calculated by multiplying image-level orientation predictions with the per-pixel magnitudes.
### Methods
**Flow Vector Regression** For each test set, we present four sets of results. These include combinations of models trained with and without height supervision, and with and without train-time rotation augmentations. As discussed earlier, our datasets exhibit orientation bias because of the sun-synchronous satellite orbits. To make our model generalizable to unseen orientations, we perform train-time flips and rotations randomly, which can introduce new orientation ground truth for each image at different epochs during training. Our approaches are described as follows:
* **flow** Model with height prediction head removed and trained without augmentations.
* **flow-h** Full model trained without augmentations.
* **flow-a** flow trained with augmentations.
* **flow-h-a** flow-h trained with augmentations.
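Rotating a training image also changes its ground-truth orientation, so the [sin, cos] target must be updated consistently while the flow magnitudes stay fixed. A sketch of that label update (the sign convention of the rotation is an assumption):

```python
import numpy as np

def rotate_orientation_label(sin_t, cos_t, phi):
    """Update the [sin, cos] orientation target after rotating the image
    by phi radians; flow magnitudes are unchanged by rotation."""
    theta = np.arctan2(sin_t, cos_t) + phi
    return np.sin(theta), np.cos(theta)
```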
For completeness, we present image-level orientation (angle) and pixel-level magnitude (mag) errors for our predictions, as they are learned separately during training. Orientation errors are measured in degrees, while magnitude errors are measured in pixels. However, we note that orientation and magnitude are typically not appropriate metrics for this task. As an example, in a nadir image where all pixel magnitudes are zero, predicting the orientation is meaningless. Similarly, in a highly-oblique image where the magnitudes are high, it is extremely important to predict the orientation accurately. We therefore measure per-pixel endpoint errors (EPE), which measure the Euclidean distance between the endpoints of the predicted and ground truth flow vectors. However, note that mag errors are equal to EPE when orientation is known from the sensor metadata, which is sometimes the case with satellite imagery. Therefore, mag errors can be an appropriate metric when orientation is known.
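The EPE metric itself is straightforward to compute from dense flow fields; a numpy sketch (names are ours):

```python
import numpy as np

def endpoint_error(flow_pred, flow_gt):
    """Mean per-pixel endpoint error between predicted and ground-truth
    flow fields, each of shape H x W x 2."""
    return float(np.mean(np.linalg.norm(flow_pred - flow_gt, axis=-1)))
```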
These metrics are calculated with and without test-time rotations to show how models that do not include train-time rotations over-fit to the limited set of orientations in the train set. We also calculate per-category EPE to show how semantics affect performance. Categories from DFC19 are used, as well as a separate layer with shadow masks.
**Building Footprint Extraction** One of the goals of this work is to enable more accurate automated mapping from overhead imagery. With our flow vector predictions, outputs from any segmenter or detector can be input into our model and transformed to ground level. To demonstrate the accuracy of our model, we use building annotations and footprints from the DFC19 and ATL-SN4 test sets. Building annotations consist of the roof and facade labels in the image, while the footprints represent the base of the building identified from top-down lidar. Using our predicted flow vectors, we warp the building annotations to ground level and compare to the ground truth footprints.
We also demonstrate the reverse capability, where we start with footprints and warp them into building annotations using our predicted flow vectors. This is useful in situations where there is a desire to overlay map data (e.g., OpenStreetMap) on imagery as an initial set of annotations. For example, when a new image is captured of an area actively being developed, we may want to pull in existing annotations so annotators do not start from scratch.
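Both directions reduce to moving labeled pixels along (or against) the flow vectors and scoring the overlap. The sketch below uses a simple nearest-neighbor forward warp, which is our simplification — the paper does not specify its resampling scheme — and all names are ours:

```python
import numpy as np

def warp_mask_by_flow(mask, flow):
    """Move each labeled pixel along its flow vector (rounded to the
    nearest pixel) -- a simple nearest-neighbor forward warp."""
    h, w = mask.shape
    out = np.zeros_like(mask)
    ys, xs = np.nonzero(mask)
    dx = np.rint(flow[ys, xs, 0]).astype(int)  # horizontal component
    dy = np.rint(flow[ys, xs, 1]).astype(int)  # vertical component
    xn = np.clip(xs + dx, 0, w - 1)
    yn = np.clip(ys + dy, 0, h - 1)
    out[yn, xn] = 1
    return out

def iou(a, b):
    """Intersection over union of two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0
```

Negating the flow field before warping sketches the reverse direction, pulling ground-level footprints up to the observed roof positions.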
We compare three results for each of the two tasks: 1) transform building annotations to footprints, and 2) transform footprints to building annotations. First, we measure IoU between the building annotations and the footprints to understand what the accuracy is when we do nothing. Second, we warp the source mask (building annotations or footprints) to the target mask using the ground truth flow vectors to get an upper bound for the IoU on what can be achieved if we perfectly predict the flow vectors. Note that we do not get perfect overlap in this case because of occluded ground pixels. Finally, we measure IoU for the warped versions of the source masks using our predicted flow vectors.
Figure 4: Train (blue) and test (red) tiles for the ATL-SN4 unrectified images (right) were selected to closely match the split for SpaceNet 4 orthorectified image tiles (left). Images shown are from Google Earth.
Figure 5: Azimuth angle, off-nadir angle, and resolution (meters) are shown for Jacksonville and Omaha images from DFC19 (top) and ATL-SN4 images (bottom).
### Results
**Height Prediction** We assess our method, which takes the height outputs of flow-h, against two recent strong baselines [15, 33] on the very challenging DFC19 test set [16], measuring mean and root mean square (RMS) error (meters) of height predictions against above-ground height measured from lidar. Results are shown in Table 1. Both baseline methods anchor height predictions using semantic category, and both exploit test-time ensembles to improve performance. While semantic anchors appear to improve accuracy for categories with low height variance, they do not account for the variance observed in urban scenes. Our model performs better overall without semantic priors or test-time ensembles.
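The reported metrics are simple to reproduce given predicted and reference height rasters; a sketch (treating "mean" as mean absolute error is our assumption, and names are ours):

```python
import numpy as np

def height_errors(pred, gt, mask=None):
    """Mean absolute and RMS height error in meters, optionally
    restricted to a category mask (e.g. building pixels)."""
    d = pred - gt
    if mask is not None:
        d = d[mask]
    return float(np.mean(np.abs(d))), float(np.sqrt(np.mean(d ** 2)))
```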
Figure 6 depicts building height statistics for the train and test sets, with some building heights approaching 200 meters. Achieving more reliable predictions for those rare tall objects is a topic for ongoing research. Height prediction performance in the presence of significant terrain relief has also yet to be characterized. Statistics for ground-level terrain height variation in the DFC19 and ATL-SN4 data sets are shown in Figure 7.
**Flow Vector Regression** Our results for each of the approaches on the DFC19 test set without test-time augmentations can be seen in Table 2. The results from the same approaches applied to the test set containing rotation augmentations are shown in Table 3. The per-category results are EPE. Results in shadows, which are a separate layer (i.e., not included as part of the DFC category layer), are also included. Table 4 and Table 5 show similar results for ATL-SN4, but exclude a semantic breakdown, as the same human-validated semantic labels are not available for this dataset. The test sets consist of the original DFC19 and ATL-SN4 test sets along with 9 additional rotation augmentations per image at intervals of 36 degrees.
Two key observations can be made about these results. 1) It is clear from Table 3 and Table 5 that models trained without rotation augmentations over-fit to the orientation bias of the train set, and that train-time rotation augmentations are currently needed to create generalizable models for this task. 2) Jointly learning to predict above-ground-height improves metrics across most categories when test-time rotations are applied. Unsurprisingly, we observe the lowest EPE values for ground pixels, and some of the highest EPE errors on facades, roofs, and elevated roads, where ground truth magnitudes are highest.
We show the importance of train-time rotations qualitatively in Figure 8. In the first column, where no test-time rotation was performed, we observe qualitatively similar performance between flow-h and flow-ha. However, in the second column, when we rotate the image to an orientation not originally represented in the train set, flow-h performs qualitatively worse than flow-ha.
**Building Footprint Extraction** In this section, we demonstrate the ability to transform semantic segmentations in the image space to ground-level map data, as well as to pull map data into imagery. Table 6 and Table 7 show
\\begin{table}
\\begin{tabular}{c c c c c} \\hline \\hline & **mean** & **mean bldgs** & **RMS** & **RMS bldgs** \\\\ \\hline Kunwar [15] & **2.69** & 8.33 & 9.26 & 19.65 \\\\ Zheng et al. [33] & 2.94 & 8.72 & 9.24 & 19.32 \\\\ Ours & 2.98 & **7.73** & **8.23** & **16.87** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Our regression model produces height predictions with lower RMS error (meters) than baseline models that anchor height predictions with semantic category.
Figure 6: Height distributions in train and test sets are comparable, with some buildings approaching 200 meters.
Figure 7: Ground-level terrain height variation statistics.
IoU for DFC19 and ATL-SN4, respectively. Unrectified is the comparison between the building annotations and the footprints without warping. Ours is the comparison between warped versions of the original mask and target mask using the predicted flow vectors. GT follows the same process as Ours, but with the ground truth flow vectors.
As seen from Table 6 and Table 7, our results better capture the footprints in these datasets than the original building annotations. Note that occluded pixels prevent GT from reaching an IoU score of 1. GT represents an upper bound on what can be achieved with perfect flow vector prediction.
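A minimal sketch of the mask IoU computation behind Tables 6 and 7 (a helper of our own, assuming binary masks of equal shape, not the evaluation code released with the paper):

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two binary masks of the same shape."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: define IoU as perfect agreement
    return np.logical_and(a, b).sum() / union

a = np.zeros((4, 4), dtype=bool); a[:2, :2] = True   # 4 pixels
b = np.zeros((4, 4), dtype=bool); b[:2, 1:3] = True  # 4 pixels, 2 shared
iou = mask_iou(a, b)  # intersection 2, union 6
```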
**Map Alignment** Rectifying semantic labels to ground level simplifies the task of aligning maps and oblique im
\\begin{table}
\\begin{tabular}{c c c} \\hline \\hline & **Building to** & **Footprint to** \\\\ & **Footprint** & **Building** \\\\ \\hline Unrectified & 0.74 (89.2\\%) & 0.78 (90.7\\%) \\\\ Ours & 0.83 (98.8\\%) & 0.82 (95.3\\%) \\\\ GT & 0.84 & 0.86 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 6: IoU and percentage of GT for transforming building annotations to footprints and vice versa for DFC19.
\\begin{table}
\begin{tabular}{c|c c c|c c c c c c|c} \hline \hline
**Method** & **mag** & **angle** & **EPE** & **ground** & **veg** & **roof** & **water** & **elevated roads** & **facade** & **shadow** \\\\ \\hline flow & 2.71 & 16.11 & 3.08 & 1.39 & **3.68** & **5.44** & 1.78 & 6.86 & **7.11** & **4.03** \\\\ flow-h & **2.40** & 16.14 & **2.92** & **0.92** & 3.86 & 5.70 & 1.54 & **6.42** & 7.37 & 3.98 \\\\ flow-a & 2.91 & 17.52 & 3.24 & 1.15 & 4.04 & 6.17 & 1.57 & 7.66 & 8.32 & 4.42 \\\\ flow-ha & 2.69 & **15.09** & 3.04 & 1.06 & 4.06 & 5.89 & **1.41** & 6.89 & 7.83 & 4.25 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Results **without** test-time rotations for DFC19. Lower is better for all numbers. Per-category values are all end point errors (EPE). This table highlights that models trained to generalize perform worse than models that learn the orientation bias of the train set. However, we note that the model trained without rotation augmentations and with height supervision has the best overall EPE.
\\begin{table}
\begin{tabular}{c|c c c} \hline \hline
**Method** & **mag** & **angle** & **EPE** \\\\ \\hline flow & 3.88 & 9.64 & 4.17 \\\\ flow-h & **3.78** & **7.38** & **3.99** \\\\ flow-a & 5.37 & 15.76 & 6.03 \\\\ flow-ha & 4.79 & 16.57 & 5.38 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 4: Results **without** test-time rotations for ATL-SN4. Similar to Table 2, we see that the model trained without rotation augmentations, but with height supervision, performs best when the test set contains orientation bias.
\\begin{table}
\\begin{tabular}{c|c c c} \\hline \\hline
**Method** & **mag** & **angle** & **EPE** \\\\ \\hline flow & 6.04 & 77.31 & 8.79 \\\\ flow-h & 6.30 & 81.34 & 9.04 \\\\ flow-a & 4.81 & **15.77** & 5.39 \\\\ flow-ha & **4.22** & 23.19 & **5.15** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 5: Results **with** test-time rotations for ATL-SN4. Similar to Table 3, we see that train-time rotations and height supervision are important when test-time rotations are applied.
ages as shown in Figure 1. To demonstrate this, we apply the MATLAB imregdemons function, an efficient implementation of non-parametric image registration [27], to estimate dense displacement fields between pairs of images in the DFC19 test set. We do this for aligning RGB images as a baseline and then for rectified height images to demonstrate improved alignment. Table 8 shows mean IoU scores for reference building segmentation labels rectified to ground level and compared with the reference footprints after alignment. Both the mean IoU and the fraction of images with IoU greater than 0.5 are significantly improved.
## 5 Discussion
In this paper, we have introduced the novel task of learning geocentric pose, defined as height above ground and orientation with respect to gravity, for above-ground objects in oblique monocular images. While we have shown the value of this representation for rectifying above-ground features in oblique satellite images, we believe that with minor modifications our method can also be successfully applied to airborne cameras and even ground-based cameras to address a broad range of outdoor mapping, change detection, and vision-aided navigation tasks for which a single ground plane cannot be assumed.
Much of the prior work on geocentric pose has focused on its exploitation as hand-crafted features for semantic segmentation. In this work, we have focused on its exploitation to rectify building segmentations to ground level, enabling geospatially accurate mapping with oblique images. Similar to much prior work with the HHA representation, we expect that our representation will also provide an effective prior for regularizing semantic segmentation predictions.
While our current results clearly indicate the efficacy of the proposed method, much remains unexplored. We expect that more explicitly employing intuitive cues such as shadows and building facades will help reduce prediction error for the height variation observed in urban scenes. Further, while our rotation augmentations help account for orientation bias in satellite images, we expect that more fully accounting for true geometry and appearance variation will help address current observed failure cases. We plan to explore these ideas in future work, and we will publicly release all of our code and data.
## Acknowledgments
This work was supported by the Intelligence Advanced Research Projects Activity (IARPA) contract no. 2017-17032700004. This work was further supported by the National Geospatial-Intelligence Agency (NGA) and approved for public release, 20-316, with distribution statement A - approved for public release; distribution is unlimited. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, NGA, or the U.S. Government.
Figure 8: Orientation augmentations in training our model help to reduce bias in the satellite viewing angles. Height and flow vector ground truth and predictions from models trained with and without augmentations are shown for an example from ATL-SN4.
\\begin{table}
\\begin{tabular}{c c c} \\hline \\hline & **Mean** & **IoU \\(>\\) 0.5** \\\\ \\hline Unaligned & 0.46 & 0.40 \\\\ RGB aligned & 0.66 & 0.85 \\\\ flow-ha & 0.69 & 0.93 \\\\ flow-ha fixed angle & **0.69** & **0.94** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 8: IoU values for transforming per-pixel building annotations to footprints in other overlapping images.
## References
* [1] Hamed Amini Amirkolae and Hossein Arefi. CNN-based estimation of pre-and post-earthquake height models from single optical images for identification of collapsed buildings. _Remote Sensing Letters_, 2019.
* [2] Marc Bosch, Kevin Foster, Gordon Christie, Sean Wang, Gregory D Hager, and Myron Brown. Semantic Stereo for Incidental Satellite Images. In _WACV_, 2019.
* [3] Honglie Chen, Weidi Xie, Andrea Vedaldi, and Andrew Zisserman. AutoCorrect: Deep Inductive Alignment of Noisy Geometric Annotations. _BMVC_, 2019.
* [4] Yanhua Cheng, Rui Cai, Zhiwei Li, Xin Zhao, and Kaiqi Huang. Locality-Sensitive Deconvolution Networks with Gated Fusionfor RGB-D Indoor Semantic Segmentation. In _CVPR_, 2017.
* [5] Carlo de Franchis, Enric Meinhardt-Llopis, Julien Michel, J-M Morel, and Gabriele Facciolo. On stereo-rectification of pushbroom images. In _ICIP_, 2014.
* [6] Ilke Demir, Krzysztof Koperski, David Lindenbaum, Guan Pang, Jing Huang, Saikat Basu, Forest Hughes, Devis Tuia, and Ramesh Raska. DeepGlobe 2018: A Challenge to Parse the Earth through Satellite Images. In _CVPRW_, 2018.
* [7] Jigar Doshi, Saikat Basu, and Guan Pang. From Satellite Imagery to Disaster Insights. _NeurIPS Workshops_, 2018.
* [8] Liuyun Duan, Mathieu Desbrun, Anne Giraud, Frederic Trastour, and Lionel Laurore. Large-Scale DTM Generation From Satellite Data. In _CVPRW_, 2019.
* [9] Ruohan Gao, Bo Xiong, and Kristen Grauman. Im2Flow: Motion Hallucination from Static Images for Action Recognition. In _CVPR_, 2018.
* [10] Pedram Ghamisi and Naoto Yokoya. IMG2DSM: Height Simulation From Single ImageryUsing Conditional Generative Adversarial Net. _IEEE Geoscience and Remote Sensing Letters_, 2018.
* [11] Hunter Goforth and Simon Lucey. GPS-Denied UAV Localization using Pre-existing Satellite Imagery. In _ICRA_, 2019.
* [12] Saurabh Gupta, Pablo Arbelaez, and Jitendra Malik. Perceptual organization and recognition of indoor scenes from RGB-D images. In _CVPR_, 2013.
* [13] Saurabh Gupta, Ross Girshick, Pablo Arbelaez, and Jitendra Malik. Learning rich features from RGB-D images for object detection and segmentation. In _ECCV_, 2014.
* [14] Saurabh Gupta, Judy Hoffman, and Jitendra Malik. Cross Modal Distillation for Supervision Transfer. In _CVPR_, 2016.
* [15] Saket Kunwar. U-Net Ensemble for Semantic and Height Estimation Using Coarse-Map Initialization. In _IGARSS_, 2019.
* [16] Bertrand Le Saux, Naoto Yokoya, Ronny Hansch, Myron Brown, and Greg Hager. 2019 Data Fusion Contest [technical committees]. _IEEE Geoscience and Remote Sensing Magazine_, 2019.
* [17] Zhengqi Li and Noah Snavely. MegaDepth: Learning Single-View Depth Prediction from Internet Photos. In _CVPR_, 2018.
* [18] Di Lin, Guangyong Chen, Daniel Cohen-Or, Pheng-Ann Heng, and Hui Huang. Cascaded Feature Network for Semantic Segmentation of RGB-D Images. In _ICCV_, 2017.
* [19] Shice Liu, Yu Hu, Yiming Zeng, Qiankun Tang, Beibei Jin, Yinhe Han, and Xiaowei Li. See and Think: Disentangling Semantic Scene Completion. In _NeurIPS_, 2018.
* [20] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully Convolutional Networks for Semantic Segmentation. In _CVPR_, 2015.
* [21] Lichao Mou and Xiao Xiang Zhu. IM2HEIGHT: Height Estimation from Single Monocular Imagery via Fully Residual Convolutional-Deconvolutional Network. _arXiv:1802.10249_, 2018.
* [22] Seong-Jin Park, Ki-Sang Hong, and Seungyong Lee. RDFNet: RGB-D Multi-level Residual Feature Fusion forIndoor Semantic Segmentation. In _ICCV_, 2017.
* [23] Silvia L Pintea, Jan C van Gemert, and Arnold WM Smeulders. Deja vu. In _ECCV_, 2014.
* [24] Xiaojuan Qi, Renjie Liao, Jiaya Jia, Sanja Fidler, and Raquel Urtasun. 3D Graph Neural Networks for RGBD Semantic Segmentation. In _ICCV_, 2017.
* [25] Max Schwarz, Anton Milan, Arul Selvam Periyasamy, and Sven Behnke. RGB-D Object Detection and Semantic Segmentation for Autonomous Manipulation in Clutter. _IJRR_, 2018.
* [26] Shivangi Srivastava, Michele Volpi, and Devis Tuia. Joint height estimation and semantic labeling of monocular aerial images with cnns. In _IGARSS_, 2017.
* [27] Tom Vercauteren, Xavier Pennec, Aymeric Perchant, and Nicholas Ayache. Diffeomorphic demons: Efficient non-parametric image registration. _NeuroImage_, 2009.
* [28] Jacob Walker, Carl Doersch, Abhinav Gupta, and Martial Hebert. An Uncertain Future: Forecasting from Static Images using Variational Autoencoders. In _ECCV_, 2016.
* [29] Jacob Walker, Abhinav Gupta, and Martial Hebert. Dense Optical Flow Prediction from a Static Image. In _ICCV_, 2015.
* [30] Weiyue Wang and Ulrich Neumann. Depth-aware CNN for RGB-D Segmentation. In _ECCV_, 2018.
* [31] Nicholas Weir, David Lindenbaum, Alexei Bastidas, Adam Van Etten, Sean McPherson, Jacob Shermeyer, Varun Kumar, and Hanlin Tang. SpaceNet MVOI: a Multi-View Overhead Imagery Dataset. In _ICCV_, 2019.
* [32] Armand Zampieri, Guillaume Charpiat, Nicolas Girard, and Yuliya Tarabalka. Multimodal image alignment through a multiscale chain of neural networks with application to remote sensing. In _ECCV_, 2018.
* [33] Zhuo Zheng, Yanfei Zhong, and Junjue Wang. PopNet: Encoder-Dual Decoder for Semantic Segmentation and Single-View Height Estimation. In _IGARSS_, 2019. | An object's geocentric pose, defined as the height above ground and orientation with respect to gravity, is a powerful representation of real-world structure for object detection, segmentation, and localization tasks using RGBD images. For close-range vision tasks, height and orientation have been derived directly from stereo-computed depth and more recently from monocular depth predicted by deep networks. For long-range vision tasks such as Earth observation, depth cannot be reliably estimated with monocular images. Inspired by recent work in monocular height above ground prediction and optical flow prediction from static images, we develop an encoding of geocentric pose to address this challenge and train a deep network to compute the representation densely, supervised by publicly available airborne lidar. We exploit these attributes to rectify oblique images and remove observed object parallax to dramatically improve the accuracy of localization and to enable accurate alignment of multiple images taken from very different oblique viewpoints. We demonstrate the value of our approach by extending two large-scale public datasets for semantic segmentation in oblique satellite images. All of our data and code are publicly available1.
Footnote 1: [https://github.com/pubgeo/monocular-geocentric-pose](https://github.com/pubgeo/monocular-geocentric-pose)
* denotes equal contribution | Provide a brief summary of the text. | 260 |
arxiv-format/2106_11239v2.md | # 2D vs. 3D LiDAR-based Person Detection on Mobile Robots
Dan Jia
Alexander Hermans
Bastian Leibe
*All authors are with the Visual Computing Institute, RWTH Aachen: {jia, hermans, leibe}@vision.rwth-aachen.de. An earlier version with additional experiments can be found at [https://arxiv.org/abs/2106.11239v1](https://arxiv.org/abs/2106.11239v1).
## I Introduction
Person detection is an important task in many robotic applications, including safe autonomous navigation in human-populated environments and human-robot interaction. LiDAR sensors are well-suited for person detection, thanks to their accurate depth measurements, long sensing range, and large field of view. There are many successful object detection methods using 3D LiDAR sensors in driving scenarios [53, 46, 48, 23, 40, 38, 47, 49], where the pedestrian class is typically one of the classes to be detected. Meanwhile, in mobile robot scenarios, persons have successfully been detected using range scans from 2D LiDAR sensors [2, 32, 24, 5, 20, 21].
In this work, we present a comparative study on 2D and 3D LiDAR-based person detection. Properly understanding the performance differences (detection accuracy and runtime) between these two sensor types plays an important role for well-informed robot design [28]. We focus specifically on scenarios that are encountered by _e.g._ social robots or service robots, which are becoming increasingly relevant in recent years.1 Using the state-of-the-art CenterPoint [49] and DR-SPAAM [20] as representatives for methods based on 3D and 2D LiDAR sensors, we conduct a series of experiments on the large-scale, publicly available JackRabbot dataset [31], comparing the performance differences between these two sensor types.
Footnote 1: These scenarios differ significantly from driving scenarios in terms of density and proximity of surrounding persons, sensor height, encountered objects, _etc._
Our experiments reveal that the 3D LiDAR-based CenterPoint provides superior detection accuracy compared to the 2D LiDAR-based DR-SPAAM, but when only considering persons visible in the sensor data, these two methods perform on par. In applications where detecting nearby persons (which are most likely to be visible) is the primary requirement, 2D LiDAR sensors, which are often readily available on many mobile robots for mapping and safety purposes, can be a sufficient detection source. Additionally, the 2D LiDAR-based DR-SPAAM has the advantage of higher inference speed, which is beneficial for mobile robots with limited onboard compute or power. General purpose person detection, however, is better accomplished with 3D LiDAR sensors, which scan a 3D sensing volume (rather than a single scan plane, as 2D LiDAR sensors do) and are thus more robust towards occlusion. Our experiments also show that both methods can deliver well-localized detections and have similar robustness against dense crowds. A closer examination of positive and negative predictions shows that a non-negligible number of persons are only detected by one of the detectors, implying that one should opt for an ensemble-based approach when maximum reliability is the primary design objective.
## II Related Work
**Person detectors based on 3D LiDAR data** estimate 3D bounding boxes of persons in a scene. Most such detectors have been designed for multi-class detection relying on autonomous driving datasets [16, 6, 42, 22], in which the pedestrian class is one of the classes to be detected. Existing deep-learning-based methods can largely be grouped into two categories. Single-stage detectors [14, 53, 46, 23, 52, 29, 44, 9, 47, 50, 49, 51, 18] use backbone
Fig. 1: Scans from a 3D and a 2D LiDAR sensor with person bounding boxes annotations. The vast difference between the scans are apparent, but how do these two sensors compare, when it comes to person detection? Understanding how these sensors affect this task is crucial for designing robots that are intended to operate around humans.
networks, either based on points [34, 43] or voxels [53, 23, 17, 11] to process the scene and generate bounding boxes directly from the extracted features. Two-stage detectors [39, 33, 48, 40, 10, 38] additionally introduce a bounding box refinement stage, which pools features within the box proposals and generates refined predictions. There exist, in addition, person detection methods that use hand-crafted features [45], which typically do not perform as well as the end-to-end learned detectors.
The KITTI dataset [16] is a forerunner in providing a standardized benchmark for testing LiDAR-based 3D object detectors, but it is now succeeded by newer and larger datasets, including the nuScenes dataset [6] and the Waymo Open Dataset [42]. However, these driving datasets differ from the mobile robot scenarios, where the robot moves in close proximity to humans (_e.g._ the JackRabbot dataset [31]).
In this work we use CenterPoint [49] as the representative for 3D LiDAR-based detectors. At the time of writing, it ranks second on the nuScenes benchmark [6] (the source-code for the higher ranking method [18] is not available), and thus represents the current available state-of-the-art in the area of 3D detection.
There are, in addition, 3D methods developed primarily on datasets collected by scanning static indoor scenes with RGB-D cameras [41, 8, 12, 1]. While in theory some of these approaches could be adapted for detecting persons in LiDAR data, we do not experiment with them in this work.
**Person detectors based on 2D LiDAR data** focus on estimating centroids of persons in the scene, parametrized as \\(x,y\\) coordinates on the LiDAR scan plane. It has long been a relevant task in the robotics community, given it is crucial in order for a mobile agent to autonomously navigate in human-populated environments. Early approaches [15, 36, 37] detect blobs with manually engineered heuristics, and track these blobs in sequential scans, leveraging motion as a clue for detection. Later works [2, 32, 24] improved the detection stage, replacing the heuristics with learned classifiers (_e.g._ AdaBoost) on hand-crafted features, and still rely on motion-based filtering to obtain high quality detections [24, 32].
Most recent developments [4, 5, 20, 21] resorted to deep-learning techniques by applying 1D CNNs to range data, and no longer require motion-based filtering. The first of these approaches is the DROW detector [4]. It was originally designed to detect walking aids, and was later extended to also detect persons [5]. The current state-of-the-art method is the DR-SPAAM detector [20, 21], which augments the DROW detector with a temporal aggregation paradigm, incorporating information over sequential LiDAR scans in order to improve the detection performance.
**Most similar to our work** is the recent study of Linder _et al_. [28], who compare a range of different 2D, 3D, and RGB-D-based person detectors. While they compare a large number of older and more recent detectors, we limit our experiments to the two current state-of-the-art (and open-source) approaches and perform our experiments on the larger, public, and more diverse JackRabbot dataset [31]. Their findings with respect to 2D and 3D LiDAR-based person detection are in line with ours; however, we delve deeper into the differences, yielding interesting additional insights into how these two types of sensors compare for the task of person detection.
## III Person Detectors
The experiments in this paper are conducted with CenterPoint [49] and DR-SPAAM [20], state-of-the-art detectors based on 3D and 2D LiDAR sensors respectively. While we here very briefly recap the main ideas of the two detectors, we refer the interested reader to the main publications for a detailed overview.
**CenterPoint**[49] takes a voxelized 3D point cloud as input and uses either a VoxelNet [53] or PointPillars [23] to extract a 2D feature map on the bird's-eye-view (BEV) plane. From the extracted features, a center head is used to produce heatmaps, corresponding to \\(x,y\\) locations of bounding box centers on the BEV plane. This center head is supervised with 2D Gaussians produced by projecting bounding box centers onto the BEV plane, together with the focal loss [26]. In addition, regression heads are used to obtain \\(x,y\\) center refinement, center elevation, box dimensions, and orientation (encoded with sine and cosine values). These regression heads are supervised at the ground truth centers with an \\(L1\\) loss. In this work, we use the CenterPoint with a VoxelNet backbone for our experiments.
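To illustrate the center-head supervision described above, the following sketch renders a BEV heatmap target with one 2D Gaussian per box center. The actual CenterPoint implementation derives the Gaussian radius per box from its size; we use a fixed sigma here for brevity, and the function name is our own.

```python
import numpy as np

def render_center_heatmap(shape, centers, sigma=2.0):
    """Render a BEV heatmap target with a 2D Gaussian per box center.

    shape:   (H, W) of the BEV feature map
    centers: list of (row, col) center locations in feature-map cells
    sigma:   Gaussian spread (a fixed value; CenterPoint computes a
             per-box radius from the box dimensions)"""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    heatmap = np.zeros(shape, dtype=np.float64)
    for cy, cx in centers:
        g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        heatmap = np.maximum(heatmap, g)  # overlapping persons: keep max
    return heatmap

hm = render_center_heatmap((8, 8), [(4, 4)])
```

The heatmap peaks at 1 at each annotated center and decays smoothly, which is what the focal-loss classification target expects.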
**DR-SPAAM**[20] takes as input the range component of a 2D LiDAR scan encoded using polar coordinates. A preprocessing step is used to extract points within small angular windows (called _cutouts_), and each window is normalized and passed into a 1D CNN. The network has two branches: a classification branch, which classifies if the window center is in the proximity of a person, and a regression branch, which regresses an offset from the window center to the \\(x,y\\) center location of the person. The classification branch is supervised with a binary cross-entropy loss, whereas the regression branch is supervised only for positive windows, using an \\(L2\\) loss. Finally, the predictions from all windows are post-processed with a distance-based non-maximum-suppression to obtain the final detections.
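A simplified sketch of the cutout preprocessing follows. The released DROW/DR-SPAAM code additionally resamples each window to a fixed point count and sets the angular window width based on distance; the helper below is our own approximation with a fixed window size.

```python
import numpy as np

def extract_cutouts(ranges, window_pts=11, depth_clip=1.0):
    """For each beam of a 2D LiDAR scan, gather a fixed-size angular
    window of neighboring range values, clip them to +-depth_clip meters
    around the center range, and normalize to [-1, 1]."""
    ranges = np.asarray(ranges, dtype=np.float64)
    half = window_pts // 2
    padded = np.pad(ranges, half, mode="edge")  # replicate scan borders
    cutouts = np.stack([padded[i:i + window_pts]
                        for i in range(len(ranges))])
    centers = ranges[:, None]
    cutouts = np.clip(cutouts, centers - depth_clip, centers + depth_clip)
    return (cutouts - centers) / depth_clip

# toy scan: two close-range returns (legs) in front of a wall at 5 m
scan = np.array([5.0, 5.1, 0.9, 1.0, 1.1, 5.0])
cut = extract_cutouts(scan, window_pts=3)
```

Each normalized window is what the 1D CNN classifies and regresses from; the clipping makes the representation invariant to the absolute distance of the person.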
## IV Experimental Setup
### _Datasets_
Our main experiments are conducted with the JackRabbot Dataset and Benchmark (JRDB) [31], however, we additionally use the nuScenes dataset [6] for some pretraining experiments.
**JRDB**[31] contains 54 sequences collected with a mobile robot (the _JackRabbot_) moving in both indoor and outdoor environments on university campuses. These sequences are split into 27 sequences for training and validation, and 27 for testing. The robot is equipped with two 3D LiDAR sensors (Velodyne 16 Puck LITE), on the upper and lower part of the robot respectively, each producing scans with approximately 18,000 points. Persons in the environment are annotated with 3D bounding boxes, with a total of roughly 1.8M annotationsacross the dataset. At the moment of writing, JRDB is the only large-scale dataset that focuses on mobile robot scenarios, while also featuring LiDAR point cloud with 3D person annotations.
In addition, the JackRabbot is equipped with two 2D LiDAR sensors (SICK LMS500-20000), front and rear facing respectively. They are mounted at the height of the lower legs, and their scans are merged to a single 360\\({}^{\\circ}\\) scan, having 1091 points. Jia _et al_. [21] used these scans to evaluate 2D LiDAR-based person detectors by synchronizing and aligning them to the 3D annotations. In this work, we use 2D LiDAR scans from JRDB, following the same data preparation procedure. All our reported numbers are based on the JRDB validation set obtained from the standard train-validation split.
**The nuScenes dataset**[6] contains 1,000 short sequences (approximately 20 seconds each) collected from driving vehicles. These sequences are split into a train, validation, and test set, having 700, 150, 150 sequences respectively. The dataset is captured using a 32 beam LiDAR, producing scans with approximately 30,000 points at 20 Hz. Compared to JRDB, these scans cover a larger area and have significantly sparser point clouds. Every tenth frame (0.5 seconds apart) is annotated with 3D bounding boxes, with a total of 10 classes, one of which being pedestrian. A common practice is to combine the unlabeled scans to obtain a denser point cloud [6, 54, 47, 49] during training and inference.
### _Training Setup_
**For CenterPoint**, we mostly use the same hyperparameters from [49]. We follow the same training procedures and data augmentation scheme, with the AdamW optimizer [30], but train the network for 40 epochs with batch size 32, max learning rate \(1e-3\), weight decay 0.01, and momentum 0.85 to 0.95. For the voxelization of the input point cloud, we found that the default settings for outdoor scenarios are not optimal for JRDB. We experiment with two settings: one using the nuScenes parameters: (0.1m, 0.1m, 0.2m) voxel grids and a detection range of [-51.2m, 51.2m] for the \(x\) and \(y\) axes, and [-5m, 3m] for the \(z\) axis. Additionally, we use a more fine-grained voxelization using a (0.05m, 0.05m, 0.2m) grid and a limited detection range of [-25.6m, 25.6m] for the \(x\) and \(y\) axes, and [-2m, 8m] for the \(z\) axis. This reduction in voxel size and detection range is motivated by the fact that scenes in JRDB are typically of a smaller scale compared to those in nuScenes.
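To make the voxelization settings concrete, here is a toy sketch of mapping points to voxel indices under the fine-grained JRDB configuration quoted above. Real voxel backbones also aggregate per-voxel point features; this helper (our own) only computes which voxels are occupied.

```python
import numpy as np

def voxelize(points, voxel_size, pc_range):
    """Map points (N, 3) to integer voxel indices, discarding points
    outside the detection range; returns unique occupied voxel coords.

    pc_range is (x_min, y_min, z_min, x_max, y_max, z_max)."""
    points = np.asarray(points, dtype=np.float64)
    size = np.asarray(voxel_size, dtype=np.float64)
    lo = np.asarray(pc_range[:3], dtype=np.float64)
    hi = np.asarray(pc_range[3:], dtype=np.float64)
    keep = np.all((points >= lo) & (points < hi), axis=1)
    idx = np.floor((points[keep] - lo) / size).astype(np.int64)
    return np.unique(idx, axis=0)

# fine-grained JRDB setting: 0.05 m x 0.05 m x 0.2 m voxels
vox = voxelize([[0.02, 0.02, 0.05], [0.03, 0.04, 0.1], [30.0, 0.0, 0.0]],
               voxel_size=(0.05, 0.05, 0.2),
               pc_range=(-25.6, -25.6, -2.0, 25.6, 25.6, 8.0))
```

The first two points fall into the same voxel, and the third lies outside the 25.6 m range and is discarded, illustrating how the range limit truncates the scene.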
When fine-tuning a network pretrained on nuScenes, we reduce the training duration to 10 epochs. This resulted in the same performance as a complete 40 epoch training schedule on top of the pretrained network. Since nuScenes provides additional time increments and reflectance features for every point, we cannot directly fine-tune checkpoints provided by the authors. Instead we retrain the network on nuScenes, using only the \(x,y,z\) coordinates of the points as input features, with the default training hyperparameters from [49], and fine-tune this network.
**To train DR-SPAAM**, we follow the procedures from [21], and train the network for 20 epochs with a batch size of 6, using the same pre and postprocessing hyperparameters. The original DR-SPAAM only predicts \\(x,y\\) locations of the person, providing no information related to bounding boxes. In order to compare with 3D-LiDAR-based methods, we experiment with modifying the regression branch, additionally predicting the bounding box width, length, and orientation. The width and length are parameterized as \\(log(w/\\bar{w})\\) and \\(log(l/\\bar{l})\\), where \\(\\bar{w}=0.5\\) and \\(\\bar{l}=0.9\\), which represent the average box width and length in the JRDB training set. The orientation is parameterized by its sine and cosine as in [49]. Both the size and orientation regression are supervised with an \\(L2\\) loss, with a weighting factor of 0.2 for the orientation. In addition, assuming persons do not vary significantly in height, we experiment with generating 3D bounding boxes, using the predicted BEV boxes and the average height from the training set. We did not experiment with regressing box height from the 2D LiDAR data.
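The size and orientation parameterization described above can be sketched as an encode/decode pair. The function names are our own; the normalization constants are the JRDB training-set averages quoted in the text, and decoding the orientation uses atan2 on the regressed sine/cosine pair.

```python
import numpy as np

W_BAR, L_BAR = 0.5, 0.9  # average box width/length on the JRDB train set

def encode_box(w, l, theta):
    """Regression targets: log-normalized size plus sin/cos orientation."""
    return np.array([np.log(w / W_BAR), np.log(l / L_BAR),
                     np.sin(theta), np.cos(theta)])

def decode_box(t):
    """Invert encode_box; the orientation is recovered with atan2, which
    avoids the discontinuity of regressing the angle directly."""
    w = W_BAR * np.exp(t[0])
    l = L_BAR * np.exp(t[1])
    theta = np.arctan2(t[2], t[3])
    return w, l, theta

w, l, theta = decode_box(encode_box(0.6, 1.0, 0.3))
```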
### _Metrics_
Following the standard for object detection, we use the Average Precision (AP) as the main evaluation metric. We use three variants, AP\\({}_{\\text{box}}\\), AP\\({}_{\\text{BEV}}\\), and AP\\({}_{\\text{centroid}}\\), each having different criteria for assigning detections to ground truth boxes. For AP\\({}_{\\text{box}}\\), a detection can be assigned to a ground truth, if their 3D IoU is above \\(0.3\\) (following the JRDB convention [31]). AP\\({}_{\\text{BEV}}\\) relaxes the requirement to a 2D IoU criterion on the bird's-eye-view boxes, discarding the requirement related to box height and elevation. AP\\({}_{\\text{centroid}}\\) focuses on the center localization only, assigning a detection to a ground truth if their \\(x,y\\) location difference is smaller than \\(0.5\\)m [24, 5, 20], dropping the requirements on box size and orientation. In all these three variants, a ground truth can only be matched with one detection.
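A simplified sketch of the AP\({}_{\text{centroid}}\) evaluation: greedy one-to-one matching of score-sorted detections to ground truth within 0.5 m, followed by the area under the resulting precision-recall curve. The benchmark's exact interpolation scheme may differ; this helper is our own.

```python
import numpy as np

def ap_centroid(dets, gts, radius=0.5):
    """dets: list of (x, y, score); gts: list of (x, y).
    A detection matches the nearest unclaimed ground truth within
    `radius`; each ground truth can be matched at most once."""
    dets = sorted(dets, key=lambda d: -d[2])  # descending confidence
    claimed = [False] * len(gts)
    tp = np.zeros(len(dets))
    for i, (x, y, _) in enumerate(dets):
        best, best_d = -1, radius
        for j, (gx, gy) in enumerate(gts):
            d = np.hypot(x - gx, y - gy)
            if not claimed[j] and d < best_d:
                best, best_d = j, d
        if best >= 0:
            claimed[best] = True
            tp[i] = 1
    cum_tp = np.cumsum(tp)
    recall = cum_tp / max(len(gts), 1)
    precision = cum_tp / (np.arange(len(dets)) + 1)
    # area under the precision-recall curve via rectangle integration
    return float(np.sum(np.diff(np.concatenate(([0.0], recall))) * precision))

ap = ap_centroid(dets=[(0.0, 0.0, 0.9), (5.0, 5.0, 0.8)],
                 gts=[(0.1, 0.0), (9.0, 9.0)])
```

In the toy example, one detection is a true positive and one is a false positive, with one ground truth missed, giving an AP of 0.5.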
Since ground truth boxes have different visibility to 3D and 2D LiDAR sensors, we compute all three APs under two settings. In the first setting, we evaluate detections against ground truth boxes that contain at least 10 points from the 3D LiDAR. This corresponds to the default evaluation procedure of JRDB and we thus refer to it as the _default_ evaluation. In the second setting, which we term _2D-visible_, we evaluate detections against ground truth boxes which have at least 5 points from 2D LiDAR within a \\(0.5\\)m radius from the box \\(x,y\\) centroid. These two evaluation settings enable us to examine the detector performance from both types of sensors, independent of factors that cause persons to be completely invisible to the sensor (thus being impossible to detect).
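The _2D-visible_ filtering can be sketched as follows (a helper of our own, flagging ground-truth boxes with at least 5 scan points within 0.5 m of the box centroid, mirroring the criterion stated above):

```python
import numpy as np

def visible_to_2d_lidar(box_centroids, scan_xy, radius=0.5, min_pts=5):
    """Return a boolean flag per ground-truth box: True if at least
    `min_pts` 2D LiDAR points lie within `radius` meters of the box
    (x, y) centroid."""
    centroids = np.asarray(box_centroids, dtype=np.float64)  # (B, 2)
    scan_xy = np.asarray(scan_xy, dtype=np.float64)          # (P, 2)
    d = np.linalg.norm(centroids[:, None, :] - scan_xy[None, :, :], axis=-1)
    return (d < radius).sum(axis=1) >= min_pts

pts = [[1.0 + 0.01 * i, 0.0] for i in range(6)]  # 6 returns near (1, 0)
vis = visible_to_2d_lidar([[1.0, 0.0], [4.0, 0.0]], pts)
```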
## V Results
### _2D vs. 3D LiDAR-based Detection Performance_
The performance of CenterPoint and DR-SPAAM is compared in Table I. Under the default evaluation setting, DR-SPAAM trails CenterPoint trained on JRDB (with fine-grained voxelization) by a significant 31.9% AP\({}_{\text{box}}\) and 26.3% AP\({}_{\text{BEV}}\) (the gap on AP\({}_{\text{box}}\) is greater since DR-SPAAM cannot estimate the bounding box height). However, when evaluated against ground truth that is visible to 2D LiDAR sensors, this gap shrinks to 9.2% AP\({}_{\text{BEV}}\). For AP\({}_{\text{centroid}}\), DR-SPAAM and CenterPoint differ by 3.3%. A DR-SPAAM that regresses centroids only (as it originally does in [20]) has further improved performance, leaving a small gap of 0.9% AP\({}_{\text{centroid}}\) to CenterPoint trained only on JRDB.
These numbers show that, when only the centroid locations of visible persons are concerned, detectors using 2D or 3D LiDAR sensors have a similar performance, despite the vast information gap between the detector input (a simple planar scan of the legs _vs._ a full-body surface scan). Thus, for tasks like person avoidance or following, a 2D LiDAR sensor is sufficient. General purpose detection, however, is better carried out with 3D LiDAR-based detectors, since it is easy for persons to be fully occluded, and thus impossible to detect, from the scan plane of 2D LiDAR sensors. When designing robots, the choice between 2D and 3D LiDAR sensors should be made with task requirements in view.
The first row of Table I furthermore shows that a quite big domain gap exists between autonomous driving and mobile robot scenarios. There are many pre-trained 3D LiDAR-based detectors for driving scenarios, but it is unlikely that they will perform well on robotic tasks involving close contact with persons. Fortunately, this domain gap can be bridged by fine-tuning and adapting suited voxel sizes. These numbers highlight the importance of training on data similar to that will be encountered during deployment. For 2D LiDAR sensors, estimates about this domain gap are not directly possible, since they are rarely used in outdoor driving scenarios.
### _Detailed Detector Comparisons_
The remainder of this section focuses on comparison of CenterPoint and DR-SPAAM in specific aspects. The best variants of each detector (CenterPoint pretrained on nuScenes, fine-tuned on JRDB with fine-grained voxelization, and centroid-only DR-SPAAM) are used for these comparisons with AP\\({}_{\\text{centroid}}\\) as the evaluation metric.
#### Iv-B1 Localization Accuracy
The default ground truth association radius of 0.5m used by AP\\({}_{\\text{centroid}}\\) does not capture how well localized the different detections are. To further evaluate the localization accuracy, we run the evaluation with a varying association radius and plot the development of the performance in Fig. 2. Note that for this evaluation the gradient of a curve is interesting and not so much the absolute performance.
After a quick increase, the performances slowly saturate, suggesting that most detections are made with a reasonable accuracy. The main difference between CenterPoint and DR-SPAAM is the slope of the curves for small association radii. Here the performance for CenterPoint increases faster, suggesting a slightly better localization of its detections. Furthermore, the CenterPoint curves are almost saturated after an association radius of \\(\\sim\\)0.3m, indicating an upper bound for its localization inaccuracy. Here the DR-SPAAM curves are still slightly increasing, meaning that some of the detections are less well localized. Note that small inaccuracies exist in the annotations. Thus, detection scores obtained with an overly small association radius (_e.g._ lower than 0.1m) do not provide meaningful information.
#### Iv-B2 Distance-based Evaluation
The density of LiDAR data varies significantly with the distance to the sensor, which can affect the performance of detectors. Because of that we run an additional sliding-window evaluation, where we only consider ground truth annotations within a 2m window at varying distances from the sensor. For a meaningful evaluation we also constrain the detections to be in or near
\\begin{table}
\\begin{tabular}{l l c c c c c c} \\hline \\hline & & \\multicolumn{2}{c}{AP\\({}_{\\text{box}}\\)} & \\multicolumn{2}{c}{AP\\({}_{\\text{BEV}}\\)} & \\multicolumn{2}{c}{AP\\({}_{\\text{centroid}}\\)} \\\\ \\cline{3-8} & & Default & 2D-visible & Default & 2D-visible & Default & 2D-visible \\\\ \\hline \\multirow{4}{*}{CenterPoint} & trained on nuScenes & 26.3 & 26.0 & 30.9 & 28.5 & 35.3 & 32.1 \\\\ & + JRDB fine-tuning & 58.2 & 64.2 & 61.2 & 67.1 & 69.5 & 75.0 \\\\ & trained on JRDB & 60.0 & 68.2 & 62.6 & 69.9 & 67.9 & 75.1 \\\\ & trained on JRDB (fine-grained voxelization) & 66.0 & 75.0 & 67.1 & 76.5 & 70.1 & 78.1 \\\\ & + nuScenes pretraining & 70.0 & 78.6 & 71.4 & 80.7 & 74.9 & 82.7 \\\\ \\hline \\multirow{2}{*}{DR-SPAAM} & BEV & 34.1\\({}^{*}\\) & 58.2\\({}^{*}\\) & 40.8 & 67.3 & 46.6 & 74.8 \\\\ & Centroid only & – & – & – & – & – & 47.6 & 77.2 \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE I: Performance of different CenterPoint and DR-SPAAM variants on the JRDB validation set. \\({}^{*}\\):3D Bounding boxes are obtained using predicted BEV boxes and the average box height from the training set.
Fig. 2: AP\\({}_{\\text{centroid}}\\) evaluated at different ground truth association radii. The CenterPoint curves are slightly steeper at the start, suggesting a better localization of the detections.
the evaluation window, as to not report unnecessary false positives. We do this by extending the window by 0.5m (the association radius) to the front and back and only consider detections within this larger window. This will still introduce additional false positives but also increase the true positive count, effectively being a trade-off between precision and recall. For this reason the overall performance is lower than reported in Table I, but here the relative performance between the two detectors is the interesting part.
The top and middle plots in Fig. 3 shows how the detector performances behave for this experiment. In all evaluations the performance is best within the first few meters and drops at higher distances. For the default evaluation, CenterPoint performs significantly more stable across the whole range, whereas the DR-SPAAM performance decreases more at higher ranges. This means that CenterPoint is better equipped to deal with the large variation of point densities, potentially due to the fact that objects are still visible to multiple LiDAR beams, whereas the 2D LiDAR only uses a single beam. For the 2D-visible evaluation, the performance of both detectors is surprisingly similar across the whole range, suggesting that for visible persons, both detectors have similar range-robustness.
The bottom plot in Fig. 3 shows how many ground truth annotations are present within the windows for both types of evaluations. Here it becomes apparent that, from 2m and onward, a significant number of the persons are invisible to 2D LiDAR scans, and thus impossible to be detected. The largest chunk of annotations is found within the first 12m. After that only few annotations remain, for both the default and 2D-visible evaluation, and the evaluation becomes less meaningful. This highlights another difference between JRDB and autonomous driving datasets, where persons are regularly perceived at much greater distances.
#### Iii-B3 The Influence of Crowds
People are often perceived while interacting amongst each other, typically resulting in them being quite close together. This results in many occlusions and could potentially lead to a more difficult task of detecting the separate persons properly. To investigate how the detection performance changes with the object density of a scene, we look at a radius of 1.5m around every ground truth annotation and count how many other annotations can be found. We categorize all annotations by this \"group size\" and perform a separate evaluation for all the different sizes. Fig. 4 shows the result of this experiment. Even though the overall detection performance seems to be reducing with larger group sizes, the detectors do not completely fail for larger group sizes and the effect seems to be very similar for both detectors. What Fig. 4 also shows, is that a significant amount of people in the JRDB dataset are observed in the
Fig. 4: Evaluation split up by how crowded it is around annotations. While the performance for larger group sizes decreases a little, the effect is less pronounced than might be expected. We additionally plot how many groups of a specific size are present in the data, showing that JRDB is indeed fairly crowded.
Fig. 3: AP\\({}_{\\text{centroid}}\\) evaluated within a 2m window at different distances from the sensor. The top and middle plot show how the performance develops across the distance for the default and 2D-visible evaluation respectively. The bottom plot shows the number of annotations within the windows for both evaluations.
vicinity of at least one other person making it significantly more realistic than some older datasets where one would often only observe a single person in an empty scene.
#### V-B4 Runtime
The runtime is a typical practical constraint when deploying detectors. Especially in robotic applications, computational resources are often limited and we cannot rely on powerful desktop GPUs. Table II shows the runtime of the evaluated detectors on three different platforms: a powerful desktop machine, a laptop equipped with a decent GPU, and the lower-powered Jetson AGX Xavier. Preprocessing (_e.g_. computing _cutouts_, or voxelization) and postprocessing (_e.g_. non-maximum-suppression) steps are included in the measurement, and no batching is used for inference. In other words, these numbers reflect the end-to-end runtime when the detector is deployed. Both DR-SPAAM and CenterPoint achieve real-time performance on strong desktop or laptop GPUs, but not on embedded GPUs. In general, DR-SPAAM is roughly twice as fast as CenterPoint, with the margin being smaller on the Jetson AGX.2
Footnote 2: We additionally measured the memory usage on the Jetson AGX, 5311MB for DR-SPAAM, 6318MB for CenterPoint. These numbers are likely to vary depending on specific system setups (we used L4T 32.4.4 and PyTorch 1.6).
DR-SPAAM can, thanks to its _cutout_-based design, obtain higher runtime by subsampling the scan, without significantly lowering the detection performance [20]. Table III shows the performance and runtime of DR-SPAAM with different subsampling factors on the Jetson AGX. With 3 times subsampling, DR-SPAAM can run at 26.6 FPS, while losing only 1.4% AP. In applications where the onboard computation is limited, DR-SPAAM presents a more favorable trade-off between performance and runtime, compared to the more accurate, yet slower CenterPoint.
### _Qualitative Results_
Fig. 5 shows several qualitative results of both detectors. These results are obtained by using a detection threshold resulting in an equal error rate (EER), meaning the precision is equal to the recall for that threshold. We show cases where both detectors, one of the two, or neither detected a person. We specifically focus on the true positives and false negatives as these cases should contain a person. The predicted centroids, either coming from CenterPoint or DR-SPAAM, are well-localized to the actual person. The error in orientation estimation sometimes leads to misaligned boxes for CenterPoint, but the overlap with ground truth boxes is often sufficiently high.
Even though the numbers so far suggested that CenterPoint typically outperforms DR-SPAAM, we can here see that there are also cases where only one of the two detectors is able to detect a person. While Fig. 5 shows some cases where it clearly makes sense that a detector fails, _e.g_. due to missing points or partial occlusions. Most of the cases where only one of the detectors fails clearly show a person though and it is unclear why it was not detected. Interestingly, many ground truth bounding boxes missed by both detectors indeed do not look like they contain a person, potentially suggesting annotation errors.
To show these are not only a small set of cherry picked cases, Table IV shows exactly how often these four different cases happen. While CenterPoint is more often the only detector to detect a person, there is a small but significant number of people only detected by DR-SPAAM, too. This suggests that an ensemble of both methods could be an interesting approach when both sensors are available.
## VI Discussion
We performed all experiments with two state-of-the-art LiDAR-based detectors, while we could have tried to adapt CenterPoint to run on 2D LiDAR data, allowing for a more direct comparison. However, this would require a significant amount of tuning to make sure CenterPoint yields a representative performance on 2D LiDAR and our goal is not to develop a new 2D LiDAR-based detector. Instead, we here rely on an existing and well-tuned state-of-the-art method. With our detector evaluations, we have set a lower-bound for the person detection performance on JRDB using 2D and 3D LiDAR sensors, which will be further improved by future developments.
Apart from LiDAR-based person detection, one could consider other sensor modalities. In particular, image-based person detectors have a long history. While the person class is now typically seen as one of many classes by most deep-learning-based detectors [35, 7], a whole range of person-specific detectors existed before [13, 3]. Furthermore, RGB-D based person detectors are frequently used in robotics, which have the additional advantage that a person's position can be estimated [19, 25, 27]. While both types of image-based detectors can perform robust person detection, they have the drawback that the field-of-view of most cameras is fairly limited, as also shown in [28]. Image-based detectors will likely remain a viable source for person detections, however, a single LiDAR sensor can be sufficient to cover the complete surrounding, which would
\\begin{table}
\\begin{tabular}{l c c c} \\hline \\hline Subsampling factor & \\begin{tabular}{c} \\(\\mathrm{AP}_{\\mathrm{centroid}}\\) \\\\ (Default) \\\\ \\end{tabular} &
\\begin{tabular}{c} \\(\\mathrm{AP}_{\\mathrm{centroid}}\\) \\\\ (2D-visible) \\\\ \\end{tabular} & FPS \\\\ \\hline
1 & 47.6 & 77.3 & 8.9 \\\\
2 & 46.5 & 76.5 & 18.2 (\\(\\times 2.0\\)) \\\\
3 & 45.2 & 75.9 & 26.6 (\\(\\times 3.0\\)) \\\\
4 & 43.7 & 74.9 & 32.5 (\\(\\times 3.7\\)) \\\\
5 & 42.0 & 73.3 & 36.4 (\\(\\times 4.1\\)) \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE III: DR-SPAAM performance and inference speed (frames per second) on Jetson AGX Xavier with different spatial subsampling.
\\begin{table}
\\begin{tabular}{l c c c} \\hline \\hline & Desktop & \\begin{tabular}{c} Laptop \\\\ (TTAN RTX) \\\\ \\end{tabular} &
\\begin{tabular}{c} Jetson AGX \\\\ Xavier \\\\ \\end{tabular} \\\\ \\hline \\hline CenterPoint & 32.4 & 19.8 & 6.0 \\\\ DR-SPAAM & 59.1 & 37.1 & 8.9 \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE IV: Detector inference speed (frames per second) on three different platforms.
typically require multiple cameras, significantly increasing compute requirements.
## VII Conclusion
In this paper we investigated differences between 2D and 3D LiDAR-based person detection for mobile robots. For this we perform direct comparisons betweeen the state-of-the-art CenterPoint [49] and DR-SPAAM [20] detectors on the JackRabbot dataset [31]. We found that, when only visible persons are considered, detectors from both 2D and 3D LiDAR sensors perform on a similar level, but in 2D LiDAR sensors persons are more prone to being invisible to the sensor (and thus impossible to detect). While 3D LiDAR sensors overall provide a more robust solution to person detection, 2D LiDAR sensors provide better a trade-off between performance and runtime, a favorable trait for mobile robots with limited onboard computations.
Further analysis of the detectors showed that overall both their detections are well-localized and they are not significantly affected by crowds. Since a significant number of people are only detected by one of the two detectors, if computational resources allow it, an ensemble of the detectors could be an interesting possibility to further boost the detection performance. While one might have assumed the superiority of 3D LiDAR-based person detection, the similar performance and robustness on visible persons for these two sensor types comes as somewhat of a surprise. We believe these insights are valuable during sensor and detector selection for robots that will navigate around persons.
**Acknowledgements:** We would like to thank Timm Linder and Jen Jen Chung for valuable feedback. This project was funded by the EU H2020 project \"CROWDBOT\" (779942). Most experiments were performed on the RWTH Aachen University CLAIX 2018 GPU Cluster (rwth0485).
## References
* [1] Iro Armeni, Sasha Sax, Amir R Zamir, and Silvio Savarese. Joint 2D-3D-Semantic Data for Indoor Scene Understanding. _arXiv:1702.01105_, 2017.
* [2] Kai O Arras, Oscar Martinez Mozos, and Wolfram Burgard. Using Boosted Features for the detection of People in 2D Range Data. In _ICRA_, 2007.
* [3] Rodrigo Benenson, Markus Mathias, Radu Timofte, and Luc Van Gool. Pedestrian detection at 100 frames per second. In _CVPR_, 2012.
* [4] Lucas Beyer, Alexander Hermans*, and Bastian Leibe. DROW: Real-Time Deep Learning based Wheelchair Detection in 2D Range Data. _RA-L_, 2(2):585-592, 2016.
* [5] Lucas Beyer, Alexander Hermans, Timm Linder, Kai Oliver Arras, and Bastian Leibe. Deep Person Detection in 2D Range Data. _RA-L_, 3(3):2726-2733, 2018.
\\begin{table}
\\begin{tabular}{l c c c c c c c} \\hline \\hline & \\multicolumn{6}{c}{Detected by} \\\\ \\cline{3-8} & Total GT & \\multicolumn{2}{c}{Both} & \\multicolumn{2}{c}{CenterPoint} & \\multicolumn{2}{c}{DR-SPAAM} & \\multicolumn{2}{c}{None} \\\\ \\hline Default & 189 579 & 96 039 & (50.7\\%) & 95 711 & (50.5\\%) & 53 136 & (28.0\\%) & 7 212 & (3.8\\%) & 33 520 & (17.7\\%) \\\\
2D-visible & 96 039 & (50.7\\%) & 71 619 & (74.6\\%) & 12 432 & (12.9\\%) & 5 263 & (5.5\\%) & 6 725 & (7.0\\%) \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE IV: Detection statistics obtained using an equal error rate threshold for both CenterPoint and DR-SPAAM. The majority of persons is detected by both detectors, whereas a significant amount is only detected by one of the two.
Fig. 5: Qualitative results from both detectors evaluated at equal error rates. Green boxes represent the ground truth, yellow boxes/spheres are detections by CenterPoint and DR-SPAAM respectively. The blue and red spheres are the 3D and 2D LiDAR points. These examples are picked from the less crowded areas to avoid visual clutter. Note that in many cases where one of the detectors fails, the data seems to be sufficient for a detection to be possible.
* [6] Holger Caesar, Varam Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuScenes: A multimodal dataset for autonomous driving. In _CVPR_, 2020.
* [7] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-End Object Detection with Transformers. In _ECCV_, 2020.
* [8] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3D: Learning from RGB-D Data in Indoor Environments. In _3DV_, 2017.
* [9] Q. Chen, L. Sun, Z. Wang, Kui Jia, and A. Yuille. Object as Hotspots: An Anchor-Free 3D Object Detection Approach via Firing of Hotspots. In _ECCV_, 2020.
* [10] Yilun Chen, Shu Liu, Xiaoyong Shen, and Jiaya Jia. Fast Point R-CNN. In _ICCV_, 2019.
* [11] Christopher Choy, JunYoung Gwak, and Silvio Savarese. 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. In _CVPR_, 2019.
* [12] Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Niessner. ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. In _CVPR_, 2017.
* [13] Navnet Dalal and Bill Triggs. Histograms of Oriented Gradients for Human Detection. In _CVPR_, 2005.
* [14] Martin Engelcke, Dushyant Rao, D. Wang, C. Tong, and I. Posner. Vote3Deep: Fast object detection in 3D point clouds using efficient convolutional neural networks. In _ICRA_, 2017.
* [15] Ajo Fod, Andrew Howard, and Maja J. Mataric. A laser-based people tracker. In _ICRA_, 2002.
* [16] A Geiger, P Lenz, C Stiller, and R Urtasun. Vision meets robotics: The KITTI dataset. _URR_, 32(11):1231-1237, 2013.
* [17] Benjamin Graham, Martin Engelcke, and Laurens van der Maaten. 3D Semantic Segmentation with Submanifold Sparse Convolutional Networks. In _CVPR_, 2018.
* [18] Yihan Hu, Zhuangzhuang Ding, Runzhou Ge, Wenxin Shao, Li Huang, Kun Li, and Qiang Liu. AFDetV2: Rethinking the Necessity of the Second Stage for Object Detection from Point Clouds. In _AAAI_, 2022.
* [19] Omid Hosseini Jafari, Dennis Mitzel, and Bastian Leibe. Real-Time RGB-D based People Detection and Tracking for Mobile Robots and Head-Worn Cameras. In _ICRA_, 2014.
* [20] Dan Jia, Alexander Hermans, and Bastian Leibe. DR-SPAAM: A Spatial-Attention and Auto-regressive Model for Person Detection in 2D Range Data. In _IROS_, 2020.
* [21] Dan Jia, Matsu Steinweg, Alexander Hermans, and Bastian Leibe. Self-Supervised Person Detection in 2D Range Data using a Calibrated Camera. In _ICRA_, 2021.
* [22] R. Kesten, M. Usman, J. Houston, T. Pandya, K. Nadhammui, A. Ferreira, M. Yuan, B. Low, A. Jain, P. Ondruska, S. Omari, S. Shah, A. Kulkarni, A. Kazakova, C. Tao, L. Platinsky, W. Jiang, and V. Shet. Lyft Level 5 Perception Dataset 2020. [https://level5.lyft.com/dataset/](https://level5.lyft.com/dataset/), 2019.
* [23] Alex H. Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, and Oscar Beijbom. PointPillars: Fast Encoders for Object Detection From Point Clouds. In _CVPR_, 2019.
* [24] Angus Leigh, Joelle Pineau, Nicolas Olmedo, and Hong Zhang. Person tracking and Following with 2D Laser Scanners. In _ICRA_, 2015.
* [25] Benjamin Lewandowski, Jonathan Liebner, Tim Wengefeld, Steffen Mueller, and Horst Michael Gross. A Fast and Robust 3D Person Detector and Posture Estimator for Mobile Robotic Application. In _ICRA_, 2019.
* [26] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Kaiming He. Focal Loss for Dense Object Detection. In _ICCV_, 2017.
* [27] Timm Linder, Kilian Y Pfeiffer, Narunas Vaskevicius, Robert Schirmer, and Kai O Arras. Accurate detection and 3D localization of humans using a novel YOLO-based RGB-D fusion approach and synthetic training data. In _ICRA_, 2020.
* [28] Timm Linder, Narunas Vaskevicius, Robert Schirmer, and Kai O Arras. Cross-Modal Analysis of Human Detection for Robotics: An Industrial Case Study. In _IROS_, 2021.
* [29] Zhe Liu, Xin Zhao, Tengteng Huang, Ruolan Hu, Yu Zhou, and Xiang Bai. TANet: Robust 3D Object Detection from Point Clouds with Triple Attention. In _AAAI_, 2020.
* [30] Ilya Loshchilov and Frank Hutter. Decoupled Weight Decay Regularization. In _ICLR_, 2019.
* [31] Roberto Martin-Martin\\({}^{\\circ}\\), Mihir Patel\\({}^{\\circ}\\), Hamid Rezatofighi\\({}^{\\circ}\\), Abhijeet Shenol, JunYoung Gwak, Eric Frankel, Amir Sadeghian, and Silvio Savarese. JIRD: A Dataset and Benchmark for Visual Perception for Navigation in Human Environments. _PAMI_, 2021.
* [32] Caroline Pantoraru. ROS leg-detector package. [https://wiki.ros.org/leg_detector](https://wiki.ros.org/leg_detector), 2010. Accessed 2018-02-22.
* [33] Charles R Qi, Or Litany, Kaiming He, and Leonidas J Guibas. Deep Hough Voting for 3D Object Detection in Point Clouds. In _ICCV_, 2019.
* [34] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J. Guibas. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In _NeurIPS_, 2017.
* [35] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In _NeurIPS_, 2015.
* [36] Matthias Scheutz, J. McRaven, and Gyorgy Cserey. Fast, reliable, adaptive, bimodal people tracking for indoor environments. In _IROS_, 2004.
* [37] Dirk Schulz, Wolfram Burgard, Dieter Fox, and Armin B. Cremers. People Tracking with Mobile Robots Using Sample-Based Joint Probabilistic Data Association Filters. _IJRR_, 22(2):99-116, 2003.
* [38] Shaoshuai Shi, Chaoxu Guo, L. Jiang, Zhe Wang, Jianping Shi, Xiaogang Wang, and Hongbingg Li. PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection. In _CVPR_, 2020.
* [39] Shaoshuai Shi, Xiaogang Wang, and Hongbingg Li. PointRCNN: 3D Object Proposal Generation and Detection From Point Cloud. In _CVPR_, 2019.
* [40] Shaoshuai Shi, Zhe Wang, Jianping Shi, Xiaogang Wang, and Hongsheng Li. From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network. _PAMI_, 2020.
* [41] Shurna Song, Samuel P. Lichtenberg, and J. Xiao. SUN RGB-D: A RGB-D scene understanding benchmark suite. In _CVPR_, 2015.
* [42] Pei Sun, Henrik Kretzschmar, Xerseis Dotiwalla, Aurelien Chouard, Vijayasi Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, et al. Scalability in perception for autonomous driving: Waymo open dataset. In _CVPR_, 2020.
* [43] H. Thomas, C. Qi, Jean-Emmanuel Deschaud, B. Marcotegui, F. Goulette, and L. Guibas. KPConv: Flexible and Deformable Convolution for Point Clouds. In _ICCV_, 2019.
* [44] Yue Wang, A. Fathi, Abhijit Kundu, David A. Ross, C. Pantofaru, T. Funkhouser, and J. Solomon. Pillar-based Object Detection for Autonomous Driving. In _ECCV_, 2020.
* [45] Zhi Yan, Tom Duckett, and Nicola Bellotto. Online Learning for 3D LiDAR-based Human Detection: Experimental Analysis of Point Cloud Clustering and Classification Methods. _Autonomous Robots_, 44(2):147-164, 2020.
* [46] Bo Li Yan Yan, Yuxing Mao. SECOND: Sparsely Embedded Convolutional Detection. _Sensors_, 2018.
* [47] Zetong Yang, Yanan Sun, Shu Liu, and Jiaya Jia. 3DSSD: Point-based 3D Single Stage Object Detector. In _CVPR_, 2020.
* [48] Zetong Yang, Yanan Sun, Shu Liu, Xiaoyong Shen, and Jiaya Jia. STD: Sparse-to-Dense 3D Object Detector for Point Cloud. In _ICCV_, 2019.
* [49] Tianwei Yin, Xingyi Zhou, and Philipp Krahenbuhl. Center-based 3D Object Detection and Tracking. In _CVPR_, 2021.
* [50] Wu Zheng, Weiliang Tang, Sijin Chen, Li Jiang, and Chi-Wing Fu. CIA-SSD: Confident IoU-Aware Single-Stage Object Detector From Point Cloud. In _AAAI_, 2021.
* [51] Wu Zheng, Weiliang Tang, Li Jiang, and Chi-Wing Fu. SE-SSD: Self-Ensembling Single-Stage Object Detector From Point Cloud. In _CVPR_, 2021.
* [52] Y. Zhou, P Sun, Y. Zhang, Dragomir Anguelov, J. Gao, Tom Ouyang, J. Guo, J. Ngiam, and Vijay Vasudevan. End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds. In _CoRL_, 2019.
* [53] Yin Zhou and Oncel Tuzel. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. In _CVPR_, 2017.
* [54] Benjin Zhu, Zhengkai Jiang, Xiangxin Zhou, Zeming Li, and Gang Yu. Class-balanced Grouping and Sampling for Point Cloud 3D Object Detection. _arXiv:1908.09492_, 2019. | Person detection is a crucial task for mobile robots navigating in human-populated environments. LiDAR sensors are promising for this task, thanks to their accurate depth measurements and large field of view. Two types of LiDAR sensors exist: the 2D LiDAR sensors, which scan a single plane, and the 3D LiDAR sensors, which scan multiple planes, thus forming a volume. How do they compare for the task of person detection? To answer this, we conduct a series of experiments, using the public, large-scale JackRabbot dataset and the state-of-the-art 2D and 3D LiDAR-based person detectors (DR-SPAAM and CenterPoint respectively). Our experiments include multiple aspects, ranging from the basic performance and speed comparison, to more detailed analysis on localization accuracy and robustness against distance and scene clutter. The insights from these experiments highlight the strengths and weaknesses of 2D and 3D LiDAR sensors as sources for person detection, and are especially valuable for designing mobile robots that will operate in close proximity to surrounding humans (e.g. service or social robot). | Summarize the following text. | 230 |
Protim Bhattacharjee, Martin Burger, Anko Borner, and Veniamin I. Morgenshtern
P. Bhattacharjee is with the Department of Optical Sensor Systems, German Aerospace Center (DLR), Berlin, Germany (e-mail: [email protected]). M. Burger is with the Department of Mathematics, Friedrich Alexander Universitat Erlangen-Nurnberg, Erlangen, Germany (e-mail: [email protected]). A. Borner is with the Department of Optical Sensor Systems, German Aerospace Center (DLR), Berlin, Germany (e-mail: [email protected]). V. I. Morgenshtern is with the Department of Electrical-Electronic-Communication Engineering, Friedrich Alexander Universitat Erlangen-Nurnberg, Erlangen, Germany (e-mail: [email protected]).
## I Introduction
Image acquisition in resource constrained environments is a challenging task. Activities such as space exploration and investigation of disaster sites are carried out by robotic platforms that have limited electrical power and payload capabilities. Being far away from the ground-station/human operator, exploration platforms, e.g., rovers and copters, require autonomous operation for achieving mission objectives. This necessitates new on-board sensing protocols, object and event detection algorithms, and data processing frameworks [1, 2]. The imaging systems on autonomous exploration platforms are required to acquire maximum information about the scene with minimum resources. Limitations on electrical power impact the number of measurements that can be acquired and the number of computations that can be performed by the imaging system. For such constrained systems efficient utilization of their limited resources is important. The main purpose of the exploring rovers is to provide a survey of the scene, so as to recognise areas of importance, and possibly, to provide initial data for further examination by more precise instruments. This reduces the requirement of acquiring the entire scene at the highest resolution of the camera on-board the robotic platform. Regions-of-Interest (RoIs) can be recognized and acquired at the resolution of the camera leaving the background at lower resolutions.
To ensure that measurements are expended on RoIs with more information it is necessary to prioritise RoIs for acquisition on the basis of their information content. Moreover, this should be performed in an online manner. However, existing methods of acquisition and reconstruction in constrained systems either try to reconstruct the entire scene at low resolutions [3, 4] or require pre-defined resolutions for each RoI [5]. These methods do not dynamically distribute their measurements among RoIs based on their information content. Also, they may require prior information (human intervention or some other source) to decide on the spatial resolution of each RoI.
Dynamic real-time multi-resolution RoI acquisition based on the information content of the RoI is an important step forward for introducing mission oriented autonomy to exploration systems (E4 level autonomy) [6]. There are various ways of characterising the information content of a scene or RoI. In this work an adaptive multi-scale approach is used. An estimator, called the Refinement Indicator (RI), is developed to estimate changes in the information content of a RoI at consecutive spatial resolutions. The RoI with the largest change is refined to a finer spatial resolution. Combining the RI with the basic sampling framework for measurement-constrained systems suggested in [7], an algorithm for multi-scale sampling is proposed and named RoI Prioritised Sampling (RPS). The RI provides a structured way to prioritise RoIs for acquisition at the cost of a small number of overhead measurements at each spatial resolution. These additional measurements are also used for reconstruction of the RoIs using compressed sensing [8, 9].
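The prioritisation step can be sketched as follows. The snippet below is a minimal stand-in, not the RI estimator derived later: it scores each RoI by the residual energy between its reconstruction at the current scale and a probe at the next finer scale, and selects the RoI with the largest score for refinement. The function names and the nearest-neighbour upsampling are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def refinement_indicator(coarse, fine):
    # Stand-in proxy for the change in information content between two
    # consecutive spatial scales of a RoI: energy of the residual after
    # a nearest-neighbour upsampling of the coarser reconstruction.
    # (The RI derived in Sec. IV differs; this is illustrative only.)
    up = np.kron(coarse, np.ones((2, 2)))
    return float(np.sum((fine - up) ** 2))

def prioritise(coarse_recons, fine_probes):
    # Rank RoIs by the estimated information change across scales and
    # return the index of the RoI to refine next, plus all scores.
    scores = [refinement_indicator(c, f)
              for c, f in zip(coarse_recons, fine_probes)]
    return int(np.argmax(scores)), scores
```

In RPS the fine-scale probe would come from the small number of overhead measurements acquired at each resolution, so the ranking costs only a few extra measurements per RoI.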
The main contributions of this work are (1) the development of an online estimator for information change across spatial scales for RoIs, (2) the design of an acquisition algorithm for constrained systems that incorporates this estimator to prioritise RoIs for acquisition and accordingly distributes the limited measurement budget, and (3) the proposal of multi-scale acquisition methodologies based on classical [10, 11] and multi-level compressed sensing [12, 11, 13]. The algorithm is tested on images from airborne Earth observation sensor platforms and the Mars Science Laboratory (MSL) on-board the Curiosity rover on Mars. The proposed algorithm is found to perform better at reconstructing informative RoIs than existing algorithms at the considered sub-sampling rates.
The next section reviews existing work on autonomous space exploration and compressed sensing based acquisition in constrained systems. Sec. III discusses the camera model used in this work and the sampling methodology suggested in [7]. The RI is motivated and derived in Sec. IV. Sec. V describes the datasets and presents the results obtained on them. The article ends with a conclusion and directions for future work.
## II Literature Review
### _Autonomous Space Exploration_
Autonomous Exploration for Gathering Increased Science (AEGIS) [2] was developed for the Opportunity rover [14] on Mars to increase the scientific output of the mission by allowing autonomous acquisition of mission relevant data. The aim of the AEGIS system is to detect rocks by analysing images from the navigation camera of the rover and to direct scientific instruments, like the ChemCam [15], to acquire scientific data from the surroundings of the rover without consultation with Earth scientists. Earlier all images were downlinked and analysed on Earth and then the rovers were instructed to perform extra measurements or revisit an area from which they had moved away. However, with the AEGIS system new scientific information could be gathered autonomously by the rover leading to increased scientific throughput [16]. The current work builds on the idea of autonomous scientific data collection and provides a method for prioritising (ranking) the RoIs based on spatial frequency variations without any prior knowledge about the surrounding terrain.
### _Compressed Sensing for Constrained Systems_
Multi-resolution techniques have previously been used to address concerns of constrained systems. In [3] the aim is to provide low resolution previews in computationally-constrained and data-streaming systems where computational machinery for compressed sensing reconstruction is not available. The authors propose an orthogonal sampling matrix known as the Sum-To-One (STOne) transform. This matrix allows one to recover low resolution previews at the Nyquist-rate in real-time. The sampling matrix contains only \\(\\pm 1\\) entries and is suitable for implementation on a bi-stable Digital Micromirror Device (DMD) used in Single Pixel Cameras (SPCs). The nature of sampling is such that when computational resources are available (at the ground/base-station) the measurements used for the preview can also be used to generate a high resolution image at sub-Nyquist rates. The previews can be generated at any resolution that is a power of 2 limited by the size of the DMD. The STOne transform thus tries to acquire and reconstruct the entire scene in a computationally constrained system. This scenario can also be thought of as a measurement-constrained problem where the limited measurements are designed such that the exploration system can produce low resolution images for autonomous functions and high resolution recovery is possible only at a ground-station. Authors in [4] cast the measurement-constrained reconstruction problem into a Multi-Resolution Approximate Message Passing (MR-AMP) framework. The measurement model is transformed in such a way that the low resolution image is an optimization variable. Properly designed down-sampling and up-sampling matrices are required to map the image from the native resolution of the sensor to the lower resolution and vice-versa. A down-sampling factor can be used to choose the down-sampling ratio according to the available number of measurements. 
As in the case of the STOne transform, MR-AMP tries to reconstruct the complete scene at the desired resolution. Multi-resolution in RoIs in the context of compressed sensing camera architectures was proposed in [5]. Measurements for the entire scene are acquired at the native resolution of the DMD. The original image is then split into a number of pre-defined RoIs, and with the help of down-sampling and up-sampling matrices, different RoIs are reconstructed at different resolutions. The RoIs need to be chosen off-line and are reconstructed from the same measurements. The three techniques described model the reconstruction problem in a manner that enables multi-resolution recovery. In [7], the authors propose a different acquisition approach for measurement-constrained systems. A three-step methodology for acquiring RoIs in a scene is proposed that includes a \"Low resolution acquisition and reconstruction\" step, \"RoI detection and segmentation\" step, and a \"Multi-level sampling\" step. The entire process is online; it is performed in-situ without any external guidance or human intervention. The segmentation procedure is performed on the basis of mission objectives and multi-level compressed sensing is used to recover the RoIs. Each RoI is assigned a measurement budget on the basis of its size and is acquired individually. However, the RoI selection procedure is empirical and the algorithm acquires RoIs based on their sizes and tries to reconstruct each RoI at the native resolution of the DMD in one shot with no regard to the information content of the RoI. We propose to estimate the change in information content across spatial scales in the RoIs in an online fashion and distribute the measurement budget accordingly. A step-by-step increase in resolution is obtained in the RoIs by acquiring them at different resolutions instead of reconstructing them directly from measurements at the native resolution of the DMD.
### _Bregman Distance_
In this work, Bregman distance [17] will be used to calculate error estimates. For a convex functional \\(J\\) with a subgradient \\(p\\in\\partial J(x)\\), the (generalized) Bregman distance between two vectors \\(z\\) and \\(x\\) is defined as \\(D_{J}^{p}(z,x)=J(z)-J(x)-\\langle p,z-x\\rangle,\\) where \\(\\partial J(x)\\) is the subdifferential of \\(J\\) at \\(x\\). The Bregman distance is the distance between \\(J(z)\\) and the tangent to \\(J\\) at \\(x\\) evaluated at \\(z\\). This is shown in Fig. 1. Non-negativity follows directly from the convexity of \\(J\\). However, the Bregman distance is not necessarily symmetric; a symmetric Bregman distance [17] with respect to two subgradients of \\(J\\), \\(p\\in\\partial J(z)\\) and \\(q\\in\\partial J(x)\\), is defined as \\(D_{J}^{p,q}(z,x)=D_{J}^{p}(x,z)+D_{J}^{q}(z,x)=\\langle p-q,z-x\\rangle\\). This symmetric form of the Bregman distance will be used to estimate the error in reconstruction of RoIs at various spatial scales. Similar methods for deriving error estimates based on Bregman distances have been studied previously for regularisation problems [18] and image restoration problems [19]. For \\(l_{1}\\)-regularised problems, common in compressed sensing, the Bregman distance is related to the sparsity of the signal [19, Proposition 8.2].
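As a concrete illustration, for \\(J=\\|\\cdot\\|_{1}\\) the symmetric Bregman distance can be computed directly from the sign patterns of the two vectors. A minimal numpy sketch, taking \\(p=sign(z)\\) and \\(q=sign(x)\\) as subgradients (function name and data are illustrative):

```python
import numpy as np

def symmetric_bregman_l1(z, x):
    """Symmetric Bregman distance D_J^{p,q}(z, x) = <p - q, z - x>
    for J = ||.||_1, using p = sign(z) and q = sign(x) as subgradients."""
    p, q = np.sign(z), np.sign(x)
    return float(np.dot(p - q, z - x))

# Entries with matching signs contribute nothing; entries with
# differing signs contribute 2 * |z_i - x_i|.
z = np.array([1.0, -2.0, 3.0])
x = np.array([0.5, -1.0, -3.0])
# Only the last entry differs in sign: contribution 2 * |3 - (-3)| = 12.
print(symmetric_bregman_l1(z, x))  # 12.0
```

The distance vanishes whenever the two vectors share the same sparsity/sign pattern, which is exactly the property exploited later in Sec. IV.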
## III Single Pixel Camera Model and Acquisition Methodology
### _Single Pixel Camera_
Different applications require acquisition of the scene at different wavelengths of the electromagnetic spectrum. For example, fire or temperature detection of objects require near-to-mid infrared imaging [20]. Terahertz radiation is used in security applications [21]. Building large sensor arrays for imaging in longer wavelengths is difficult and expensive. The alternative is to use a single detector appropriate for the incident wavelength. Such a detector when calibrated is known as a radiometer [22, 23, 24]. A mechanical system can be used to raster scan the scene with the radiometer to generate a 2D image of the scene. However, in space applications mechanical systems are susceptible to damage due to stress and vibrations during rocket launch and landing. SPCs [25] are best suited for such exploration scenarios as they avoid large sensor arrays and mechanical scanning. In this work, the SPC architecture is used as the model for the imaging system. It consists of a camera lens that focusses the incoming radiation onto the spatial light modulator that is placed at the virtual imaging plane of the camera lens. The DMD [26] is used as the spatial light modulator. It provides the necessary \"software\" scan of the scene replacing mechanical raster scanning. The modulated radiation from the DMD may be filtered through a wavelength selective filter or a colour filter. The collective optics focusses the modulated filtered radiation onto the single pixel detector. At each measurement step, the DMD implements a measurement mask and the incoming radiation is encoded with the mask. This encoded radiation is received by the detector and generates one measurement. A sequence of such masks is displayed on the DMD with each mask generating a new measurement. A bi-stable DMD where the micromirrors can only have an 'ON' or 'OFF' state is used. 
By convention, the 'ON' state directs the incident radiation towards the detector and the 'OFF' state deflects radiation away from the detector. For a measurement mask consisting of \\(0/1\\) entries, the \"0\" pixels are mapped to the 'OFF' state and the \"1\" to the 'ON' state. Such masks are used to perform random-macro-pixel sampling as discussed in the next subsection. To implement \\(\\pm 1\\) entries of a measurement mask, two physical measurement cycles are required on the DMD. The first cycle maps all \\(+1\\)s to the 'ON' state and the \\(-1\\)s to the 'OFF' state. In the next measurement cycle all the \\(+1\\)s are mapped to the 'OFF' state and the \\(-1\\)s to the 'ON' state. Subtracting the two measurements thus obtained leads to one measurement generated by the measurement mask with \\(\\pm 1\\) entries. Walsh transforms [27] require such implementations on the DMD. The basic single pixel camera model is shown in Fig. 2.
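The two-cycle realisation of a \\(\\pm 1\\) mask on a bi-stable DMD can be sketched as follows; the scene and mask here are random stand-ins, not data from a real device:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random(64)                    # flattened scene seen by the DMD
mask_pm1 = rng.choice([-1.0, 1.0], 64)    # desired +/-1 measurement mask

# Cycle 1: +1 entries in the 'ON' state, -1 entries 'OFF'.
on1 = (mask_pm1 > 0).astype(float)
# Cycle 2: the complement mapping.
on2 = (mask_pm1 < 0).astype(float)
m1 = on1 @ scene   # detector reading, first cycle
m2 = on2 @ scene   # detector reading, second cycle

# Subtracting the two physical measurements yields the +/-1 measurement.
assert np.isclose(m1 - m2, mask_pm1 @ scene)
```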
The scene is acquired using the three step acquisition methodology proposed in [7]. The first step is the low resolution acquisition that is performed using macro pixels formed by binning micromirrors of the DMD. A \\(8\\times 8\\) macro pixel is formed by binning 64 micromirrors underlying a \\(8\\times 8\\) window on the DMD and assigning them the same 'ON' or 'OFF' state. A \\(256\\times 256\\) DMD has 1024 non-overlapping \\(8\\times 8\\) macro pixels. Similarly, it has 4096 non-overlapping \\(4\\times 4\\) macro pixels. For each measurement, these macro pixels are randomly assigned the value of \\(0/1\\) or \\(\\pm 1\\) to simulate random-macro-pixel sampling. Details on the low resolution reconstruction will follow in Sec. III-D. The second step is the detection and segmentation based on the low resolution reconstruction. This process depends on the mission objectives. For example, in infrared imaging, one may use temperature as a parameter for detection and segmentation of RoIs; in visible wavelength imaging, one may use the contrast of the scene. The process is same as in [7]. First, the low resolution reconstruction is resized to its macro pixel image dimensions. For example, a \\(256\\times 256\\) image with a macro pixel size of \\(8\\times 8\\) would be resized to a \\(32\\times 32\\) image by averaging each \\(8\\times 8\\) block in the low resolution reconstruction. A seed pixel is assigned to be the brightest pixel in the resized image. A square region is grown around the seed pixel by increasing the region size by one pixel in each direction until the contrast of the region is higher than a user defined threshold. This forms one RoI. A new seed pixel is assigned to be the brightest pixel outside the already selected region. Regions within a merge radius of each other are coalesced forming a larger RoI. This seed pixel selection and region growing is continued until the user defined total number of RoIs is reached. 
Once all the RoIs are selected, the segmented image is resized to the original size of the low resolution reconstruction, equal to the DMD size. Thereafter,
Fig. 1: Bregman Distance is the distance, displayed in red, between \\(J(z)\\) and the tangent to \\(J(\\cdot)\\) at \\(x\\) evaluated at \\(z\\).
Fig. 2: Single Pixel Camera model.
each RoI is expanded to its immediate higher dyadic size. If this leads to more than 99% overlap between RoIs, they are merged once more. The total number of regions to be selected, the contrast cut-off, the merge radius, and the overlap in the dyadic sizes are user defined. Fig. 1 in the supplementary material describes the process. The third step of the algorithm in [7] is the sequential acquisition of RoIs detected in the second step at the full resolution of the DMD using multi-level sampling [12]. The RoIs are arranged in a decreasing manner according to their size and are resolved to their full resolution in a one-shot manner. In this work we propose to estimate the change of information content in spatial scales of the RoIs and develop a procedure for step-by-step increase in the resolution of the RoIs. We propose two ways in which this step-by-step increase can be performed. One is through multi-scale random-macro-pixel sampling and the other is through the Walsh transform. The RoIs are acquired sequentially based on their information content and the number of measurements needed at each spatial scale depends on the RoI size and the current spatial resolution. The workflow of the proposed acquisition methodology is shown in Fig. 3. We first discuss the methods of acquisition of higher spatial frequencies as it will be required for the development of the estimator.
### _Multi-scale Random-macro-pixel Sampling_
The process of binning DMD micromirrors to form macro pixels was discussed in the previous section. Each macro pixel sums the incoming radiation from its field of view and the reciprocal of the size of the macro pixel acts as a cut-off for the spatial frequencies acquired by the measurement masks of that macro pixel size. By reducing the macro pixel size one acquires higher spatial frequencies. The macro pixel size of \\(1\\times 1\\) defines the native resolution of the DMD. Examples of DMD mirrors binned by different macro pixel size are shown in Fig. 4. This method of multi-scale random-macro-pixel sampling will be used to acquire RoIs at different spatial resolutions.
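Binning micromirrors into macro pixels amounts to drawing one random value per block and expanding it to the micromirror grid, e.g. via a Kronecker product. A minimal sketch (function name illustrative):

```python
import numpy as np

def macro_pixel_mask(dmd_size, macro, rng, entries=(0.0, 1.0)):
    """Random measurement mask on a dmd_size x dmd_size DMD whose
    micromirrors are binned into macro x macro blocks, each block
    sharing one randomly drawn 'ON'/'OFF' state."""
    n = dmd_size // macro
    coarse = rng.choice(entries, size=(n, n))        # one value per macro pixel
    return np.kron(coarse, np.ones((macro, macro)))  # expand to micromirrors

rng = np.random.default_rng(1)
mask = macro_pixel_mask(256, 8, rng)
assert mask.shape == (256, 256)
# Every 8x8 block is constant, so 1/8 of the mirror pitch is the
# effective spatial-frequency cut-off of this mask.
assert np.all(mask[:8, :8] == mask[0, 0])
```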
### _Multi-scale Multi-level Sampling with Walsh Transform_
An alternative way to sample in a multi-scale manner is to use structured matrices like the Walsh transform. Structured matrices have been used in compressed sensing acquisition as they can be implemented on bi-stable DMDs, and have fast forward and inverse transform implementations [11, 28]. Another advantage of using Walsh transforms and other structured matrices is that their multi-scale decomposition can be adapted to best leverage the sparsity structure of the signal to be acquired. Multi-level sampling was suggested in [11] and was further developed in [12, 13, 29]. In multi-level sampling the frequency space of the measurement matrix is divided into a number of levels or regions based on its coherence structure, and the number of measurements is assigned to these levels in an asymptotically decreasing manner. The Transform Point Spread Function (TPSF) [29] is used as a metric to generate sampling maps for different sized RoIs. In this work, square regions (as shown in Fig. 5) were chosen instead of the regular circular regions as they allowed easier decomposition of the frequency space into multiple levels. The Walsh frequency grid for each RoI is divided into three levels corresponding to low, mid, and high spatial frequencies. To perform multi-scale sampling we start from the low frequency region, followed by level-wise spatial Walsh frequency measurements, to achieve a step-by-step increase in resolution. The term multi-scale is a misnomer for multi-level sampling, as multi-level sampling itself is defined in a multi-scale manner; however, we use it in the current context to make the stepwise multi-scale acquisition of the scene explicit. Sampling maps for three different sizes of RoIs are shown in Fig. 5. The spatial frequency regions may be divided into more than three regions depending upon the application at hand. More information about the number of measurements performed in each spatial frequency region is provided in Sec. V.
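A square three-level decomposition of the frequency grid can be sketched as below; the level edges and per-level sampling fractions are illustrative placeholders, not the TPSF-derived values used in this work:

```python
import numpy as np

def square_level_map(n, edges=(0.125, 0.5), fracs=(1.0, 0.3, 0.05), rng=None):
    """Three-level square sampling map on an n x n Walsh frequency grid,
    DC at (0, 0). `edges` are the side fractions of the low and mid
    squares; `fracs` are per-level sampling fractions (illustrative)."""
    rng = rng or np.random.default_rng(0)
    i, j = np.indices((n, n))
    radius = np.maximum(i, j)            # square "rings" around the DC corner
    lo, mid = int(edges[0] * n), int(edges[1] * n)
    level = np.where(radius < lo, 0, np.where(radius < mid, 1, 2))
    pick = np.zeros((n, n), dtype=bool)
    for lev, f in enumerate(fracs):      # fully sample low, subsample the rest
        idx = np.flatnonzero(level == lev)
        chosen = rng.choice(idx, size=int(f * idx.size), replace=False)
        pick.flat[chosen] = True
    return pick

m = square_level_map(64)
assert m[:8, :8].all()   # low-frequency square is fully sampled
```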
Fig. 4: Illustration of macro pixels. _Top panel_: Example of macro pixel measurement masks formed by binning DMD micromirrors. The macro pixels are randomly assigned 0 or 1 values. _Bottom panel_: Macro pixel representation of the Cuprite Vis dataset obtained by replacing each macro pixel block with the mean of the corresponding block from the original image. Macro pixel size (a) \\(1\\times 1\\) (Original), (b) \\(2\\times 2\\), (c) \\(4\\times 4\\), and (d) \\(8\\times 8\\).
Fig. 3: Proposed acquisition methodology.
### _Compressed Sensing Algorithms_
To complete the discussion of the acquisition methodology, the compressed sensing reconstruction algorithms used in this work are now described. Two algorithms are used: Analysis BPDN [30, 31] and Analysis+TV [29, 30]. The Analysis BPDN can be written as
\\[\\underset{x}{\\mathrm{minimize}}\\quad\\|Wx\\|_{1}+\\gamma\\|y-Ax\\|_{2}^{2}, \\tag{1}\\]
where \\(A\\) is the measurement matrix with normalized columns, \\(W\\) is the sparsity basis, \\(y\\) is the vector of measurements, \\(x\\) is the signal to be recovered, and \\(\\gamma\\) is the regularisation parameter. For a signal \\(x=[x_{1},x_{2},\\ldots,x_{N}]^{T}\\) of length \\(N\\), the \\(l_{1}\\) norm is defined as \\(\\|x\\|_{1}=\\sum_{i\\in N}|x_{i}|\\) and the \\(l_{2}\\) norm is defined as \\(\\|x\\|_{2}=\\sqrt{\\sum_{i\\in N}|x_{i}|^{2}}\\). Algorithm (1) is used for reconstruction of the low resolution image in the first step of the acquisition process with 2D-DCT as the sparsity basis.
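As an illustration of problem (1), a minimal FISTA-style proximal-gradient sketch is given below for the special case of an orthonormal \\(W\\), where the analysis problem reduces to a synthesis problem in the coefficients \\(u=Wx\\). This is not the solver used by the authors, and all names are illustrative:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def analysis_bpdn_fista(A, y, W, gamma, n_iter=1000):
    """FISTA sketch for min_x ||W x||_1 + gamma * ||y - A x||_2^2,
    assuming W is orthonormal so the problem becomes synthesis in u = W x."""
    M = A @ W.T                                  # sensing matrix on coefficients
    L = 2.0 * gamma * np.linalg.norm(M, 2) ** 2  # Lipschitz constant of data term
    u = np.zeros(M.shape[1])
    z, t = u.copy(), 1.0
    for _ in range(n_iter):
        u_new = soft(z - 2.0 * gamma * (M.T @ (M @ z - y)) / L, 1.0 / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = u_new + ((t - 1.0) / t_new) * (u_new - u)
        u, t = u_new, t_new
    return W.T @ u                               # back to the image domain

# Toy example: recover a 2-sparse signal from 20 random measurements.
rng = np.random.default_rng(2)
n, m = 32, 20
W = np.eye(n)
x_true = np.zeros(n); x_true[[3, 10]] = [2.0, -1.5]
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = analysis_bpdn_fista(A, A @ x_true, W, gamma=50.0)
```

The constrained Analysis+TV problem (2) additionally requires handling of the TV term, typically through an operator-splitting scheme, and is not sketched here.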
The Analysis+TV algorithm
\\[\\underset{x}{\\mathrm{minimize}}\\quad\\beta_{1}\\|Wx\\|_{1}+\\beta_{2}\\|x\\|_{TV} \\tag{2}\\] \\[\\mathrm{subject\\ to}\\quad\\|y-Ax\\|_{2}^{2}\\leq\\eta,\\]
adds a Total Variation (TV) regularisation term to the Analysis BPDN. The TV norm for images is calculated along the horizontal and vertical directions [32], \\(\\|x\\|_{TV}=\\sum_{i,j\\in N}\\sqrt{|x_{i+1,j}-x_{i,j}|^{2}+|x_{i,j+1}-x_{i,j}|^{2}}\\). Such a model is used when the smoothness, promoted by the TV norm, and sparsity constraints, promoted by the \\(W\\) basis, are required simultaneously. The terms \\(\\beta_{1}\\) and \\(\\beta_{2}\\) are used to balance the contributions of the two prior terms, and \\(\\eta\\) is used to control the size of the noise.
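The isotropic TV norm above can be sketched as follows; replicating the last row/column at the boundary is one common convention, assumed here:

```python
import numpy as np

def tv_norm(img):
    """Isotropic TV norm with forward differences; the last row/column
    is replicated so boundary differences are zero (one convention)."""
    dx = np.diff(img, axis=1, append=img[:, -1:])  # horizontal differences
    dy = np.diff(img, axis=0, append=img[-1:, :])  # vertical differences
    return float(np.sqrt(dx ** 2 + dy ** 2).sum())

flat = np.ones((8, 8))
step = np.zeros((8, 8)); step[:, 4:] = 1.0
assert tv_norm(flat) == 0.0
assert tv_norm(step) == 8.0   # one unit jump per row
```

A flat image has zero TV norm while each edge contributes its jump height times its length, which is why the TV term promotes piecewise-smooth reconstructions.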
In this section the camera model and the methods for step-by-step increase in the acquisition of spatial frequencies were discussed. The next section motivates and derives the information change estimator, which is used to develop the proposed RoI prioritisation and acquisition algorithm.
## IV RoI Prioritised Sampling for Constrained Systems
### _Refinement Indicator_
After the \"RoI detection and segmentation\" step in the proposed acquisition scheme shown in Fig. 3, we must distribute the limited measurement budget among the RoIs based on their information content or, in other words, prioritise the RoIs for sequential acquisition. Let \\(x\\in\\mathbf{R}^{N}\\) be the RoI to be acquired at the finest resolution. Let us start by acquiring some coarse measurements. The system of equations for the coarse measurements is
\\[y^{C}=A^{C}x\\ +\\ \\eta^{C}, \\tag{3}\\]
where \\(A^{C}\\in\\mathbf{R}^{m\\times N}\\), with rows \\((a_{i}^{C})_{i=1,\\ldots,m}\\in\\mathbf{R}^{N}\\), is the coarse resolution measurement matrix, the coarse measurements are \\(y^{C}\\), and \\(\\eta^{C}\\) is an additive noise component. Denoting the solution to the regularised problem as \\(x^{C}\\), we have
\\[x^{C}=\\ \\underset{z}{\\mathrm{arg\\ min}}\\quad\\quad\\frac{1}{2}\\|A^{C}z-y^{C}\\|_{2}^ {2}\\ +\\ \\alpha J(z), \\tag{4}\\]
where \\(J\\) is a regularisation functional like total variation or a wavelet norm and \\(\\alpha\\geq 0\\) is a regularisation parameter. The optimality condition for (4) [33, 34] is given by
\\[(A^{C})^{*}(A^{C}x^{C}-y^{C})\\ +\\ \\alpha p^{C}=0,\\ p^{C}\\in\\partial J(x^{C}), \\tag{5}\\]
where \\(p^{C}\\) is a subgradient [34] of \\(J\\) at \\(x^{C}\\) and \\(\\partial J(x^{C})\\) denotes the subdifferential [34] of \\(J\\) at \\(x^{C}\\). The adjoint of \\(A^{C}\\) is denoted by \\((A^{C})^{*}\\). Let us ignore the noise term for the moment so that \\(y^{C}=A^{C}x\\). To improve the resolution we acquire extra measurements with a refinement matrix \\(B\\). This matrix is composed of sampling patterns that acquire finer spatial frequencies than \\(A^{C}\\). For example, if measurements in \\(A^{C}\\) have a macro pixel size of \\(8\\times 8\\) then \\(B\\) would contain random measurements with macro pixel size of \\(4\\times 4\\). The measurement matrix with the additional refined measurements can be written as
\\[A^{R}=\\left(\\begin{array}{c}A^{C}\\\\ B\\end{array}\\right). \\tag{6}\\]
The refined measurements are given by \\(y^{R}=A^{R}x\\). Again the refined solution \\(x^{R}\\) satisfies the optimality conditions
\\[(A^{R})^{*}(A^{R}x^{R}-y^{R})\\ +\\ \\alpha p^{R}\\ =\\ 0,\\ p^{R}\\in\\partial J(x^{R}), \\tag{7}\\]
where \\(p^{R}\\) is a subgradient of \\(J\\) at \\(x^{R}\\) and \\(\\partial J(x^{R})\\) is the subdifferential of \\(J\\) at \\(x^{R}\\). To be explicit the refined solution \\(x^{R}\\) contains spatial frequencies unique to the acquisition through \\(B\\) along with those acquired previously through \\(A^{C}\\). To estimate the change in information of the RoI when acquired at different resolutions we would like to derive an _a-posteriori_ error estimate between the coarse solution, \\(x^{C}\\), and the refined solution, \\(x^{R}\\), without the evaluation of the latter. This would quantify the information change across scales for the RoI. We follow the procedure for calculating error estimates in [17, 18]. From (6) we get,
\\[(A^{R})^{*}A^{R}=(A^{C})^{*}A^{C}+B^{*}B. \\tag{8}\\]
Subtracting (7) from (5) we get,
\\[(A^{C})^{*}A^{C}x^{C} - (A^{C})^{*}y^{C}\\ +\\ \\alpha(p^{C}-p^{R}) \\tag{9}\\] \\[- (A^{R})^{*}A^{R}x^{R}\\ +\\ (A^{R})^{*}y^{R}=0.\\]
Using \\(y^{C}=A^{C}x\\), \\(y^{R}=A^{R}x\\), and the value of \\((A^{C})^{*}A^{C}\\) from (8) we get,
\\[(A^{R})^{*}A^{R}(x^{C}-x^{R})\\ +\\ \\alpha(p^{C}-p^{R})=B^{*}B(x^{C}-x). \\tag{10}\\]
Fig. 5: Walsh transform multi-level sampling maps for RoI of size (a) \\(32\\times 32\\), (b) \\(64\\times 64\\), and (c) \\(128\\times 128\\). Top left corner denotes the DC frequency. Number of measurements for each size is 20% of the total number of pixels in the RoI. The Walsh frequency grids are scaled to the same size for visualisation.
Taking a scalar product of the above with \\((x^{C}-x^{R})\\) yields
\\[\\|A^{R}(x^{C}-x^{R})\\|_{2}^{2}~{}+~{}\\alpha D_{J}(x^{C},x^{R})\\\\ =\\langle B(x^{C}-x),B(x^{C}-x^{R})\\rangle, \\tag{11}\\]
where \\(D_{J}=\\langle p^{C}-p^{R},x^{C}-x^{R}\\rangle\\) is the symmetric Bregman distance [17]. Applying Young's Inequality, \\(\\langle u,v\\rangle\\leq\\frac{1}{2}\\|u\\|_{2}^{2}+\\frac{1}{2}\\|v\\|_{2}^{2}\\), to the right hand side of (11) yields
\\[\\|A^{R}(x^{C}-x^{R})\\|_{2}^{2}~{}+~{}\\alpha D_{J}(x^{C},x^{R})\\\\ \\leq\\frac{1}{2}\\|B(x^{C}-x)\\|_{2}^{2}~{}+~{}\\frac{1}{2}\\|B(x^{C}- x^{R})\\|_{2}^{2}. \\tag{12}\\]
Using the fact that, \\(\\|Bv\\|_{2}^{2}\\leq\\|Bv\\|_{2}^{2}+\\|A^{C}v\\|_{2}^{2}=\\|A^{R}v\\|_{2}^{2}\\) for any \\(v\\), (12) becomes
\\[\\frac{1}{2}\\|A^{R}(x^{C}-x^{R})\\|_{2}^{2}~{}+~{}\\alpha D_{J}(x^{C},x^{R})\\\\ \\leq\\frac{1}{2}\\|y^{B}-Bx^{C}\\|_{2}^{2}, \\tag{13}\\]
where \\(y^{B}=Bx\\) are the novel measurements. The second term on the left-hand side, \\(D_{J}(x^{C},x^{R})\\), is the error between the two solutions at coarse and fine resolutions with respect to the functional \\(J\\). For \\(J=\\|.\\|_{1}\\), the symmetric Bregman distance, \\(D_{J}(x^{C},x^{R})=2\\Sigma_{sign(x^{C}_{i})\\neq sign(x^{R}_{i})}|x^{R}_{i}-x^{C}_{i}|\\), measures the deviation between entries of the two solutions that differ in their signs [17, 18]. For a wavelet norm, the symmetric Bregman distance measures the deviation in the sparsity pattern of the wavelet representations of the two solutions [19]: if the entries of the wavelet representations have the same sign, \\(D_{J}(x^{C},x^{R})=0\\), and it is positive if the entries differ in their signs. Further, the first term on the left-hand side of (13) measures how well we estimate the magnitudes of the entries of each of the two solutions, i.e., the residual between the two solutions. The behaviour of the Bregman distance with respect to the parameter \\(\\alpha\\) and its asymptotics are well understood and not discussed further here; see [17]. The right-hand side of (13) is an a-posteriori estimator for the change between \\(x^{C}\\) and \\(x^{R}\\) given only the coarse solution \\(x^{C}\\). It is an a-posteriori error estimator in the _computational_ sense: it can be computed without solving the fine (refined) scale problem. It is not a-posteriori in the _measurement_ sense, since we need to collect at least some fine scale measurements (refined measurements). Taking into consideration that \\(B\\) is random, we can take expectations on both sides of (13)
\\[\\frac{1}{2}\\mathbb{E}[\\,\\|A^{R}(x^{C}-x^{R})\\|_{2}^{2}\\,]~{}+~{} \\alpha\\mathbb{E}[\\,D_{J}(x^{C},x^{R})\\,]\\\\ \\leq\\frac{1}{2}\\mathbb{E}[\\,\\|y^{B}-Bx^{C}\\|_{2}^{2}\\,]. \\tag{14}\\]
This shows that in expectation the maximum error between \\(x^{C}\\) and \\(x^{R}\\) occurs when the expected deviation between \\(y^{B}\\) and \\(Bx^{C}\\) is maximal. Thus, the Refinement Indicator (RI) can be defined as \\(\\|y^{B}-Bx^{C}\\|_{2}^{2}\\). The RI can be calculated for each RoI in the scene and the RoI with the largest value of RI can be refined. Explicitly, to refine an RoI is to calculate the higher spatial resolution solution from the measurements \\(y^{R}\\) obtained through the measurement matrix \\(A^{R}\\). Intuitively, for a flat region, \\(x^{C}\\) will be a good approximation of the underlying region, therefore, the novel measurements, \\(y^{B}\\), and the _simulated_ measurements obtained through \\(B\\) by assuming \\(x^{C}\\) to be the groundtruth; i.e. \\(Bx^{C}\\), will be similar and the value of RI will be small. On the other hand, if the underlying region is textured, \\(x^{C}\\) will be a poor approximation to the region and the novel measurements and _simulated_ measurements will differ leading to a larger value of RI. Thus, the RI can be used to prioritise RoIs for acquisition.
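The RI and its behaviour on flat versus textured regions can be sketched as follows; the regions and the "coarse solution" (here simply the region mean) are toy stand-ins for the actual reconstructions:

```python
import numpy as np

def refinement_indicator(B, y_B, x_coarse):
    """RI = ||y_B - B x_coarse||_2^2: discrepancy between the novel refined
    measurements and measurements simulated from the coarse solution."""
    return float(np.sum((y_B - B @ x_coarse) ** 2))

rng = np.random.default_rng(3)
n = 64
flat = np.full(n, 0.5)                      # flat region
textured = rng.random(n)                    # textured region
B = rng.choice([0.0, 1.0], size=(16, n))    # finer-scale random masks

# Coarse stand-ins: the mean value of each region, which is exact for
# the flat patch and a poor model for the textured one.
ri_flat = refinement_indicator(B, B @ flat, np.full(n, flat.mean()))
ri_text = refinement_indicator(B, B @ textured, np.full(n, textured.mean()))
assert ri_flat < 1e-9 < ri_text             # textured region wins refinement
```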
### _Adaptive Refinement of Measurements_
The RI developed in the previous section is used to develop an acquisition algorithm for measurement-constrained systems. The three step sampling procedure discussed in [7] and further developed in Sec. III is used as the basic framework for acquisition. The inputs to the algorithm are the initial macro pixel size of the low resolution acquisition in the first step and a fixed number of measurements, called the measurement budget. After the "Low resolution acquisition and reconstruction" and "RoI detection and segmentation" steps we obtain the acquisition masks for each RoI. Coarse measurements for each RoI are performed and the coarse solution \\(x^{C}\\) is calculated. These coarse measurements differ from the ones acquired in the "Low resolution acquisition and reconstruction" step because the low resolution measurements are multiplexed measurements from the entire scene, and recovering individual RoIs from multiplexed measurements is not possible. Additional random measurements \\(y^{B}\\), refining the previous coarse measurements, are acquired for each RoI. The RI is calculated for all the RoIs; the RoI with the largest RI is chosen, its refined solution \\(x^{R}\\) is calculated, and this becomes the new coarse level for that RoI. New refinement measurements are acquired at a higher spatial resolution for the refined RoI and its new RI is calculated from them. RI values for the other RoIs are carried forward, and again the RoI with the largest value of RI is chosen for refinement. This process continues until all the RoIs are at the native resolution of the DMD or until the measurement budget is exhausted. This results in prioritisation of RoIs in terms of the information content change across spatial resolutions: the limited measurements are spent on RoIs with greater change in information content.
If measurements are not available for refining a particular RoI then it is removed from the prioritisation list and the RIs of the remaining RoIs are compared for refinement. The complete algorithm is shown in Fig. 6 and is named RoI Prioritised Sampling (RPS).
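The refinement loop above can be sketched with a runnable toy in which block averaging stands in for compressed sensing reconstruction and random 0/1 masks for DMD measurements; the overhead and refinement costs are illustrative, not the paper's:

```python
import numpy as np

def block_avg(x, s):
    """Piecewise-constant stand-in for the reconstruction at macro size s."""
    return np.repeat(x.reshape(-1, s).mean(axis=1), s)

def rps_toy(signals, budget, overhead=8, refine_cost=16, rng=None):
    """Toy RPS loop on 1-D 'RoIs': only the prioritisation logic is real."""
    rng = rng or np.random.default_rng(0)
    n = signals[0].size
    scale = [8] * len(signals)                    # current macro size per RoI
    recon = [block_avg(x, 8) for x in signals]
    ri = []
    for j, x in enumerate(signals):               # initial overhead measurements
        B = rng.choice([0.0, 1.0], size=(overhead, n))
        ri.append(float(np.sum((B @ x - B @ recon[j]) ** 2)))
        budget -= overhead
    while budget >= refine_cost and max(scale) > 1:
        cand = [r if scale[j] > 1 else -1.0 for j, r in enumerate(ri)]
        k = int(np.argmax(cand))                  # RoI with largest RI wins
        scale[k] //= 2                            # refine to the next scale
        recon[k] = block_avg(signals[k], scale[k])
        budget -= refine_cost
        if scale[k] > 1:                          # overhead for its next RI
            B = rng.choice([0.0, 1.0], size=(overhead, n))
            ri[k] = float(np.sum((B @ signals[k] - B @ recon[k]) ** 2))
            budget -= overhead
        else:
            ri[k] = -1.0                          # fully refined: retire RoI
    return scale

rng = np.random.default_rng(4)
flat = np.full(64, 0.3)
textured = rng.random(64)
scales = rps_toy([flat, textured], budget=80, rng=rng)
assert scales[1] < scales[0]   # the textured RoI is refined first
```

With the small budget in this example, the textured RoI is driven to the finest scale while the flat RoI (whose RI stays at zero) is never refined, mirroring the intended behaviour of RPS.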
### _Discussion_
One can consider a naive approach in which one acquires RoIs at the coarse resolution, performs coarse reconstructions and acquires further refined measurements for all the RoIs. One can reconstruct all the RoIs at this refined resolution and then select the RoI that produced maximum change in information content between the coarse and the refined image of the RoI for further refinement, expecting the trend of increase in information content over scales to continue. However, the decision on which RoI should be refined further can only be made _after the reconstruction of_ all RoIs at the refined resolution. The extra reconstructions may be expensive for a constrained system. This problem is alleviated with the RPS, as RI allows us to make the same decision _without_ calculation of the refined solution for _all_ RoIs. In limited measurement settings, like remote exploration, acquired refined measurements can be sent to a ground-station where resources would be available for reconstructions. Thus, we focus specifically on a measurement-constrained setting. Also, the RPS is concerned with utilising the limited measurement budget optimally and not specifically with reducing the number of measurements required to acquire the scene/RoI.
## V Results
To demonstrate that the RI does in fact quantify information content change at different spatial scales, three different regions of the cameraman image were selected with varying amount of textures. The RoIs and their three-level Haar wavelet decompositions are shown in Fig. 7. Random-macro-pixel measurements with \\(0/1\\) entries are performed on each RoI starting from a macro pixel size of \\(8\\times 8\\) and the \\(8\\times 8\\) reconstruction is calculated. Thereafter, a refinement matrix \\(B\\) is generated by using random-macro-pixel sampling with a macro pixel size of \\(4\\times 4\\) and novel measurements are generated for each RoI. _Simulated_ measurements are also generated using the reconstruction of the \\(8\\times 8\\) macro pixel acquisition and the refinement matrix \\(B\\). The two measurements are subtracted and the \\(l_{2}\\) norm of the error vector is calculated to form the RI. The process is repeated for macro pixel size of \\(2\\times 2\\). The RI values for the RoIs are recorded in Table I. The RI for spatial resolution \\(8\\times 8\\) denotes the information change in a RoI when measurements are refined from a resolution of \\(8\\times 8\\) to \\(4\\times 4\\). The last row in the table defines the RI value of refining a RoI from \\(2\\times 2\\) to \\(1\\times 1\\), the native resolution of the DMD, and there is no RI for spatial resolution of \\(1\\times 1\\) macro pixel. The largest change in the value of the RI is consistently obtained for the RoI that has the largest number of high frequency components, i.e., for RoI (c) in Fig. 7. The smallest change in RI value is observed for RoI (a) that has the smallest number of high frequency components. Thus, the RI estimates the change in information content across spatial scales and is a relevant metric that can be used to prioritise RoIs for acquisition in measurement-constrained systems.
Further, the RPS algorithm developed in the previous section was tested on images derived from different sensors. The experiments are designed to reflect remote exploration scenarios from Earth observation applications and extra-terrestrial exploration. After a short overview of the datasets, the results of applying RPS on the datasets will be discussed and the efficacy of the proposed method in prioritising RoIs in measurement-constrained systems will be verified. The algorithm was implemented in MATLAB 2018a and simulated
Fig. 6: RPS Algorithm for RoI prioritisation using RI in measurement-constrained autonomous exploration systems.
Fig. 7: _Top Panel_: Three different RoIs from the cameraman image with different amount of textures. _Bottom Panel_: The three-level Haar wavelet decomposition of the corresponding RoIs. (a) RoI with minimal textures (flat region), (b) RoI with some textures, and (c) RoI with rich textures.
on a Windows system with 16 GB RAM and an Intel(R) i7-6700 CPU @ 3.4GHz. Code for reproducing the results can be found at [https://github.com/protim1191/RoI-Prioritised-Sampling.git](https://github.com/protim1191/RoI-Prioritised-Sampling.git).
### _Datasets_
#### Iv-A1 Cuprite
The Cuprite geological dataset is a snapshot of the cuprite ores in the state of Nevada, USA. The hyperspectral datacube was acquired by the AVIRIS [35] sensor that collects data in the wavelength range of 400 to 2500 nm with a nominal channel bandwidth of 10 nm. The ground sampling distance is 20 m and the radiometric resolution is 16 bits. The datacube is divided into two parts: the visible and the infrared. The visible section consists of wavelengths 400 to 800 nm and is called the "Cuprite Vis" dataset; the infrared section consists of wavelengths from 900 to 2500 nm and is called the "Cuprite IR" dataset.
#### Iv-A2 Gulf of Mexico
The Gulf of Mexico dataset is a part of the baseline datasets provided by SpecTIR [36]. The sensor acquires data in the wavelength range of 395 to 2450 nm with a nominal channel bandwidth of \\(5\\) nm. The ground sampling distance is 2 m. As before, the datacube is divided into visible and infrared regions. The visible region consists of wavelengths from 400 to 800 nm and is called the "Gulf Vis" dataset. The infrared region consists of wavelengths from 900 to 2400 nm and is called the "Gulf IR" dataset. Only the Gulf Vis dataset is used for experiments as the spatial variations in the two datasets are similar.
#### Iv-A3 University of Pavia
This scene was acquired by the ROSIS sensor [37] during a flight campaign over Pavia in northern Italy. The detector is sensitive to wavelengths ranging from 430 to 860 nm with a nominal bandwidth of 5 nm. Certain pixels did not contain any information and were discarded. The entire dataset is considered to be a part of the visible wavelengths and is called \"Pavia\" in this work.
#### Iv-A4 MSL ChemCam Images
Two images from the Mars Science Laboratory (MSL) [38] on the Curiosity rover _0133_crc_ccam01133_rowatt.png_ and _0121_crc_ccam01121_stanbridge.png_ have been used to test the RPS algorithm. The test images are called \"Mars1\" and \"Mars2\" respectively. These are shown in Fig. 9.
### _Preprocessing of Datasets and Algorithm Parameters_
For all the datasets a \\(256\\times 256\\) section was used. After removing corrupted bands, pixel values in each spectral band were converted to the range 0 to 1 by min-max normalisation. A single channel image was formed from the normalised spectral image by averaging across the spectral dimension of each pixel. The SPC architecture described in Sec. III-A was used to simulate the acquisition process. The measurement budget was different for each dataset and is specified along with the respective results. The number of low resolution measurements in the first step was fixed at 1000 and the macro pixel size used was \\(8\\times 8\\). The number of regions selected during the "RoI detection and segmentation" procedure was fixed at 10. Regions detected within a 1 pixel radius of each other were merged. Thus, the final number of RoIs available for refinement is different for each dataset. The acquisition and recovery of each RoI are independent of
\\begin{table}
\\begin{tabular}{|c|c|c|c|} \\hline Spatial & RI for RoI & RI for RoI & RI for RoI \\\\ Resolution & Fig. 7(a) & Fig. 7(b) & Fig. 7(c) \\\\ \\hline \\(8\\times 8\\) & 9.74 & 97.91 & 218.11 \\\\ \\hline \\(4\\times 4\\) & 5.79 & 61.58 & 110.37 \\\\ \\hline \\(2\\times 2\\) & 5.10 & 39.34 & 46.99 \\\\ \\hline \\end{tabular}
\\end{table} TABLE I: RI values for RoIs from Fig. 7 calculated at spatial resolutions of \\(8\\times 8\\), \\(4\\times 4\\), and \\(2\\times 2\\) macro pixels. Random-macro-pixel measurements with \\(0/1\\) entries were used for each spatial resolution. The number of refined measurements at each macro pixel resolution for each RoI is 409.
Fig. 8: Remote sensing datasets used for evaluation of RPS. (a) Cuprite Vis dataset, (b) Cuprite IR dataset, (c) Gulf Vis dataset, and (d) Pavia dataset.
Fig. 9: Datasets from the Mars Science Laboratory on the Curiosity rover used for evaluation of RPS. (a) Mars1 and (b) Mars2.
the other RoIs; therefore, the overlap regions common to two RoIs are sampled twice at the same resolution. However, if, due to budget considerations, only one of the RoIs can be acquired, the overlap region will be available at the higher resolution for the unacquired RoI as well. As the algorithm keeps track of the resolution of each RoI, further visualization or analysis of the overlap region can be restricted to the higher resolution RoI. The number of measurements used to calculate the RI was fixed at 10% of the total number of pixels in the RoI for random-macro-pixel sampling and at 20% for multi-scale multi-level sampling. The RoI with the largest value of RI is chosen for refinement at each step.
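The preprocessing steps described above (corrupted-band removal, per-band min-max normalisation, spectral averaging) can be sketched as follows. The function name `preprocess` is ours, not from the paper's MATLAB code, and the toy datacube is purely illustrative.

```python
import numpy as np

def preprocess(cube, corrupted_bands=()):
    """Min-max normalise each spectral band of an (H, W, B) datacube,
    then average across the spectral dimension to form a single
    channel image, dropping corrupted bands first."""
    keep = [b for b in range(cube.shape[2]) if b not in set(corrupted_bands)]
    cube = cube[:, :, keep].astype(float)
    lo = cube.min(axis=(0, 1), keepdims=True)
    hi = cube.max(axis=(0, 1), keepdims=True)
    # Guard against constant bands to avoid division by zero.
    cube = (cube - lo) / np.where(hi > lo, hi - lo, 1.0)
    return cube.mean(axis=2)

# Toy 4x4 datacube with 3 bands; band 1 plays the corrupted band.
cube = np.stack([np.arange(16).reshape(4, 4),
                 np.full((4, 4), -1),            # "corrupted" band
                 2 * np.arange(16).reshape(4, 4)], axis=2)
img = preprocess(cube, corrupted_bands=[1])
```

After normalisation every retained band spans [0, 1], so the averaged single-channel image does as well.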
Upon experimentation, algorithm (1) was found to be better at the initial low resolution reconstruction of the entire scene with the 2D-DCT as the sparsity basis. For subsequent reconstruction of individual RoIs using RPS, algorithm (2) was used with Daubechies-8 wavelets [39] as the sparsity basis. The algorithms were implemented in the TFOCS [32] toolbox. The maximum number of iterations was fixed at 20000, the algorithm used was 'N07', continuation was set to 'True' and the number of continuation iterations was set to 3; the regularisation parameters \\(\\beta_{1}\\) and \\(\\beta_{2}\\) from (2) were set to 1 and 0.4 respectively for random-macro-pixel sampling and to \\(\\beta_{1}=1\\) and \\(\\beta_{2}=0.6\\) for Walsh measurements. The measurement and sparsity operators were implemented in the SPOT toolbox [40]. The results of the "Low resolution acquisition and reconstruction" and "RoI detection and segmentation" steps for the Cuprite Vis dataset are shown in Fig. 10. Results after segmentation are discussed next.
### _Results on Remote Sensing Datasets_
This section discusses the results for the remote sensing datasets, i.e., Cuprite Vis, Cuprite IR, Gulf Vis, and Pavia, shown in Fig. 8. For the following results, multi-scale random-macro-pixel sampling with \\(0/1\\) entries was used. The results of applying the RPS algorithm on the Cuprite Vis dataset are shown in Fig. 11. The first image from the left in the top row is the low resolution reconstruction. The second image from the left in the top row shows the coarse reconstruction of all the detected RoIs. Continuing from left to right and top to bottom, the prioritisation of the RoIs and their refined reconstructions with the progress of the RPS is shown. Table II records the change in the RI values and the choice of RoI made at each iteration of the algorithm along with the number of available measurements at each iteration that decides whether further refinement is possible for a RoI. For the Cuprite Vis dataset a total of 9600 measurements was used for acquisition, which is 14.6% of the total number of pixels in the scene. The number of low resolution measurements is 1000 and these are used to acquire the entire scene at \\(8\\times 8\\) macro pixel resolution. Three RoIs of size \\(128\\times 128\\), \\(64\\times 64\\), and \\(32\\times 32\\) are selected after the "RoI detection and segmentation" process (Fig. 10). A total of 2149 (1638 + 409 + 102) coarse measurements were used.
The number of available measurements is checked to decide whether enough measurements are available for calculation of the new RI for the selected RoI, which estimates the change in information one could expect when the RoI is refined from the current macro pixel resolution of \\(4\\times 4\\) to \\(2\\times 2\\). As there are 4302 available measurements at iteration 1 and 1638 measurements are required for the selected RoI, refined measurements are performed on the selected RoI and the new RI is calculated. At the second iteration the same sequence of steps is followed: the RoI with the largest value of RI is selected for refinement and the new RI is calculated, subject to the availability of measurements. The last RI calculation estimates the change in information when refining from \\(2\\times 2\\) to \\(1\\times 1\\) macro pixel resolution. If the number of available measurements allows this acquisition, the \\(1\\times 1\\) solution is always calculated; this step may also be performed at the ground-station, as the measurements for this reconstruction have already been acquired for the calculation of the RI. Once a RoI is resolved to \\(1\\times 1\\) macro pixel resolution it is replaced with a '-' in the table. This can be seen in iterations 6 and 7 in Table II: at iteration 6 the red RoI is selected for refinement to a macro pixel resolution of \\(1\\times 1\\) and is replaced with a '-' at iteration 7, but no measurements are utilised as no further refinement is possible for this RoI. As a result, the same number of measurements is available at iteration 7 for RI calculations as at iteration 6.
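The bookkeeping just described — refine the RoI with the largest RI if the budget allows, otherwise mark it with a '-' — can be summarised in a small greedy loop. This is only a schematic, not the paper's implementation: halving the RI after each refinement is a placeholder for recomputing it from fresh measurements, and all names and numbers are illustrative.

```python
def rps_schedule(initial_ri, roi_pixels, budget, frac=0.10,
                 levels=("8x8", "4x4", "2x2", "1x1")):
    """Greedy schedule: at each step refine the RoI with the largest RI,
    provided the refinement measurements (frac of the RoI's pixels) fit
    in the remaining budget; exhausted RoIs are marked None ('-')."""
    ri = dict(initial_ri)
    level = {k: 0 for k in ri}      # index into `levels` (0 = 8x8 coarse)
    order = []
    while any(v is not None for v in ri.values()):
        cand = {k: v for k, v in ri.items() if v is not None}
        k = max(cand, key=cand.get)               # largest RI wins
        cost = int(frac * roi_pixels[k])
        if cost > budget or level[k] + 1 >= len(levels):
            ri[k] = None                          # '-' : no further refinement
            continue
        budget -= cost
        level[k] += 1
        order.append((k, levels[level[k]]))
        ri[k] = cand[k] / 2.0       # placeholder for the recomputed RI
    return order, budget

# Two hypothetical RoIs; the budget permits only one refinement step.
order, left = rps_schedule({"A": 300.0, "B": 140.0},
                           {"A": 128 * 128, "B": 64 * 64},
                           budget=2000)
```

With these illustrative numbers, RoI "A" is refined once to \\(4\\times 4\\) (1638 measurements), after which neither RoI fits in the remaining budget and both are retired.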
The refinement measurements used to calculate the RI are also used for calculating the refined solution. Table III records the quality of reconstruction of the selected RoIs at different macro pixel resolutions. The quality of recovery for each RoI improves with acquisition of higher spatial frequency components. Two error metrics are used, namely the Normalised Mean Squared Error (NMSE) and the Structural SIMilarity Index (SSIM) [41]. NMSE between a vector \\(x\\) and its estimate \\(\\hat{x}\\) is defined as \\(\\|\\hat{x}-x\\|_{2}^{2}/\\|x\\|_{2}^{2}\\). SSIM is a full-reference perceptual visual quality metric that considers local luminance, contrast, and structural variance to calculate similarity between the estimate and the reference image. The visual improvements in the reconstructions can be seen in Fig. 2 in the supplementary material.
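The NMSE definition above is a direct ratio of squared norms; the sketch below transcribes it (the example vectors are ours).

```python
import numpy as np

def nmse(x_hat, x):
    """Normalised mean squared error ||x_hat - x||_2^2 / ||x||_2^2."""
    x_hat, x = np.ravel(x_hat), np.ravel(x)
    return np.sum((x_hat - x) ** 2) / np.sum(x ** 2)

x = np.array([3.0, 4.0])       # ||x||^2 = 25
x_hat = np.array([3.0, 3.0])   # error (0, -1), ||e||^2 = 1
err = nmse(x_hat, x)           # -> 0.04
```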
Results of applying RPS on the Cuprite IR dataset are shown in Fig. 12. As shown in Table IV, the online distribution of measurements is more evident here. After coarse reconstruction the RI values for each RoI are calculated. The RoI with the largest RI value, marked in red, is selected for refinement. The available number of measurements allows the RI to be calculated for a macro pixel resolution of \\(2\\times 2\\). In the second iteration, the RoI of size \\(64\\times 64\\), marked in orange in Fig. 12, has the largest value of RI and is refined to a macro pixel resolution of \\(4\\times 4\\). In the third iteration, the red RoI has the largest RI value and is refined to a macro pixel resolution of \\(2\\times 2\\). However, there are not enough measurements to calculate the RI for a refinement to a macro pixel resolution of \\(1\\times 1\\). Therefore, the red RoI is removed from the list of RoIs due to unavailability of measurements and replaced with a '-'. The same number of measurements is available at iteration 4 as at iteration 3 because no refinement measurements were used in iteration 3. Thereafter, the process of selecting the RoI with the largest value of RI, calculation of the refined solution, and calculation of the RI subject to available measurements continues till iteration 7. At iteration 7 the yellow RoI uses the last measurements to calculate the RI for a refinement to a macro pixel resolution of \\(1\\times 1\\). At iteration 8 the orange RoI is resolved to \\(1\\times 1\\) macro pixel resolution as the required measurements were already acquired at iteration 5. At iteration 9 the green RoI is resolved to a macro pixel resolution of \\(2\\times 2\\) but further refinement is not possible as the measurement budget has been exhausted. Therefore, the final macro pixel resolution of the green RoI is \\(2\\times 2\\). Finally, the yellow RoI is resolved to the native resolution of the DMD by using the refinement measurements collected at iteration 7.
This shows the dynamic distribution of the measurements across the RoIs by the RPS algorithm. For acquisition of all the RoIs at the native resolution of the DMD, one would need 10000 measurements. Thus, the RPS algorithm selects the most informative RoI at each iteration and better utilises the limited measurement budget (8060 measurements).
Due to space limitations the application of RPS on the Gulf Vis dataset is shown in Fig. 3 and Table A in the supplementary material. The results for Pavia are provided in Fig. 4 and Table B in the supplementary material. In the case of Pavia we can see that smaller RoIs (RoI 1 and RoI 4) with more information change across the spatial scales are prioritised
\\begin{table}
\\begin{tabular}{|c|c|c|c|c|c|c|} \\hline \\multirow{4}{*}{Resolution of RoI} & \\multicolumn{2}{c|}{RoI 1} & \\multicolumn{2}{c|}{RoI 2} & \\multicolumn{2}{c|}{RoI 3} \\\\ & \\multicolumn{2}{c|}{(Red)} & \\multicolumn{2}{c|}{(Orange)} & \\multicolumn{2}{c|}{(Yellow)} \\\\ & \\multicolumn{2}{c|}{(\\(128\\times 128\\))} & \\multicolumn{2}{c|}{(\\(64\\times 64\\))} & \\multicolumn{2}{c|}{(\\(32\\times 32\\))} \\\\ \\cline{2-7} & NMSE & SSIM & NMSE & SSIM & NMSE & SSIM \\\\ \\hline \\(8\\times 8\\) & 0.032 & 0.40 & 0.041 & 0.36 & 0.024 & 0.38 \\\\ \\hline \\(4\\times 4\\) & 0.015 & 0.64 & 0.016 & 0.64 & 0.015 & 0.56 \\\\ \\hline \\(2\\times 2\\) & 0.008 & 0.79 & 0.009 & 0.79 & 0.006 & 0.78 \\\\ \\hline \\(1\\times 1\\) & 0.007 & 0.83 & 0.007 & 0.83 & 0.005 & 0.82 \\\\ \\hline \\end{tabular}
\\end{table} TABLE III: Error metrics for the RoIs for the Cuprite Vis dataset at various macro pixel resolutions. Resolution is in macro pixel size.
Fig. 12: Prioritisation of RoIs with RI for Cuprite IR dataset. The total number of measurements is 8060 (12.3% of the total number of pixels).
over a larger sized RoI (RoI 3).
### _Experiments with Different Measurement Matrices_
The RPS is agnostic to the nature of the measurement matrix. The only requirement is that the measurement ensemble can be deployed in a multi-scale manner. Fig. 13 shows the result of application of the RPS on the Cuprite Vis dataset with Rademacher (\\(\\pm 1\\) with equal probability) [42] random-macro-pixel measurements. The RoIs and their order of prioritisation change, though the overall result is similar to \\(0/1\\) random-macro-pixel sampling. The corresponding error metrics are shown in Table V. The first step of low resolution acquisition for detecting RoIs is still performed with \\(0/1\\) random-macro-pixel sampling. Implementation of Rademacher random-macro-pixel measurements requires two acquisition cycles on the DMD due to the \\(\\pm 1\\) nature of the Rademacher random variable. The total number of physical measurements required for Rademacher measurements is twice the number of realisations of the measurement vectors. Let the number of measurements required for calculation of the RI at each spatial scale for the \\(i^{th}\\) RoI be \\(p_{i}\\)% of the total number of pixels in the RoI. Then, for \\(M\\) RoIs with \\(R\\) resolution levels, the total number of \\(0/1\\) random-macro-pixel measurements will be \\(CM+R\\sum_{i=1}^{M}\\left\\lfloor\\frac{p_{i}}{100}n_{i}\\right\\rfloor\\), where \\(CM\\) is the number of low resolution measurements common for all RoIs and \\(n_{i}\\) denotes the total number of pixels in each RoI. For Rademacher random-macro-pixel measurements, the total number of physical measurements, i.e., the number of acquisition cycles performed on the DMD, will be equal to \\(CM+2R\\sum_{i=1}^{M}\\left\\lfloor\\frac{p_{i}}{100}n_{i}\\right\\rfloor\\).
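These measurement-count formulas can be checked numerically. With the paper's 10% rule and the three RoI sizes from the Cuprite Vis experiment, the per-RoI refinement counts come out to 1638, 409, and 102, matching the coarse-measurement total (2149) quoted earlier; the helper name below is ours.

```python
from math import floor

def total_measurements(cm, R, p, n, cycles_per_meas=1):
    """CM + R * sum_i floor(p_i/100 * n_i). Rademacher (+/-1) patterns
    need two DMD cycles per refinement measurement, so pass
    cycles_per_meas=2 to count physical acquisition cycles."""
    refine = R * sum(floor(pi / 100 * ni) for pi, ni in zip(p, n))
    return cm + cycles_per_meas * refine

# Three RoIs, 10% refinement measurements each, R = 3 resolution levels.
n = [128 * 128, 64 * 64, 32 * 32]
p = [10, 10, 10]
zero_one = total_measurements(1000, 3, p, n)        # 0/1 patterns
rademacher = total_measurements(1000, 3, p, n, 2)   # physical DMD cycles
```

The \\(0/1\\) budget is \\(1000 + 3\\times 2149 = 7447\\), and the Rademacher cycle count doubles only the refinement term.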
One can also use Walsh transforms in a multi-scale manner. The Walsh sampling masks from Fig. 5 are used. The number of Walsh measurements in each spatial frequency band is provided in Table VI. For each RoI size the total number of Walsh measurements is fixed beforehand while determining the sampling map. The number of measurements required for RPS with the Walsh ensemble is equal to \\(CM+\\sum_{i=1}^{M}\\left\\lfloor\\frac{p_{i}}{100}n_{i}\\right\\rfloor\\). The total number of physical measurement cycles on the DMD is twice the second term, as the Walsh ensemble consists of \\(\\pm 1\\) measurements, plus the low resolution measurements for the entire scene. The low resolution acquisition is performed with \\(0/1\\) random-macro-pixel measurements. For RoI reconstruction only the Walsh measurements are used. For calculating the initial RI values, the low frequency Walsh measurements are used as the refined measurements and the low resolution regions corresponding to the RoIs from the low resolution reconstruction are used as the coarse reconstruction for the _simulated_ measurements. For further refinement the Walsh reconstructions at different spatial scales are used as the coarse reconstructions for the _simulated_ measurements. The results with multi-level Walsh measurements are shown in Fig. 14 and in Table VII. The reconstruction metrics are recorded in Table VIII. The advantage of combining multi-level sampling with the RI is evident from comparing the results in Table V and Table VIII. Multi-level Walsh measurements with RI achieve similar performance as random-macro-pixel Rademacher measurements with a smaller measurement budget.
\\begin{table}
\\begin{tabular}{|c|c|c|c|c|c|c|} \\hline \\multirow{4}{*}{Resolution of RoI} & \\multicolumn{2}{c|}{RoI 1} & \\multicolumn{2}{c|}{RoI 2} & \\multicolumn{2}{c|}{RoI 3} \\\\ & \\multicolumn{2}{c|}{(Red)} & \\multicolumn{2}{c|}{(Orange)} & \\multicolumn{2}{c|}{(Yellow)} \\\\ & \\multicolumn{2}{c|}{(\\(128\\times 128\\))} & \\multicolumn{2}{c|}{(\\(64\\times 64\\))} & \\multicolumn{2}{c|}{(\\(32\\times 32\\))} \\\\ \\cline{2-7} & NMSE & SSIM & NMSE & SSIM & NMSE & SSIM \\\\ \\hline \\(8\\times 8\\) & 0.032 & 0.39 & 0.043 & 0.35 & 0.022 & 0.40 \\\\ \\hline \\(4\\times 4\\) & 0.015 & 0.63 & 0.017 & 0.63 & 0.015 & 0.54 \\\\ \\hline \\(2\\times 2\\) & 0.008 & 0.78 & 0.009 & 0.79 & 0.007 & 0.77 \\\\ \\hline \\(1\\times 1\\) & 0.006 & 0.83 & 0.007 & 0.83 & 0.005 & 0.83 \\\\ \\hline \\end{tabular}
\\end{table} TABLE V: Error metrics for the RoIs for the Cuprite Vis dataset with random-macro-pixel Rademacher measurements (Fig. 13). Resolution is in macro pixel size.
\\begin{table}
\\begin{tabular}{|c|c|c|c|c|c|c|} \\hline Iteration & RoI 1 (Red) & RoI 2 (Orange) & RoI 3 (Yellow) & RoI 4 (Green) & Refined Macro & Available \\\\ No. & (\\(128\\times 128\\)) & (\\(64\\times 64\\)) & (\\(32\\times 32\\)) & (\\(32\\times 32\\)) & Resolution & Measurements \\\\ \\hline 1 & \\(27\\times 1\\) & 139.14 & 8.39 & 29.55 & \\(4\\times 4\\) & 2762 \\\\ \\hline 2 & 41.43 & 39.14 & 8.39 & 29.55 & \\(4\\times 4\\) & 1124 \\\\ \\hline 3 & 41.43 & 14.04 & 8.39 & 29.55 & \\(2\\times 2\\) & 715 \\\\ \\hline 4 & - & 14.04 & 8.39 & 29.55 & \\(4\\times 4\\) & 715 \\\\ \\hline 5 & - & 14.04 & 8.39 & 4.13 & \\(2\\times 2\\) & 613 \\\\ \\hline 6 & - & 3.44 & 8.90 & 4.13 & \\(4\\times 4\\) & 204 \\\\ \\hline 7 & - & 3.44 & 1.80 & 4.13 & \\(2\\times 2\\) & 102 \\\\ \\hline 8 & - & **3.44** & 1.80 & 1.10 & \\(1\\times 1\\) & 0 \\\\ \\hline 9 & - & - & **1.80** & 1.10 & \\(2\\times 2\\) & 0 \\\\ \\hline 10 & - & - & - & **1.10** & \\(1\\times 1\\) & 0 \\\\ \\hline 11 & - & - & - & - & - & 0 \\\\ \\hline \\end{tabular}
\\end{table} TABLE IV: Evolution of RI for the Cuprite IR dataset in Fig. 12. The highlighted RoI indicates the one selected at each iteration. The refined macro pixel resolution of the selected RoI is tabulated. The total number of measurements is 8060.
Fig. 13: Prioritisation of RoIs with RI for Cuprite Vis dataset with random-macro-pixel Rademacher measurements. The total number of measurements is 9188 (14% of the total number of pixels). The number of physical measurements is 18376 (28% of the total number of pixels).
### _Results on MSL Images_
This section reports the results of using RPS to reconstruct the Mars1 and Mars2 images from Walsh measurements. The prioritisation of RoIs for the Mars1 image is shown in Fig. 15. The evolution of the RI values and RoI prioritisation are recorded in Table IX. A limited budget experiment is also conducted with the Mars1 image, where the number of measurements is not enough to resolve each RoI to the native resolution of the DMD. Fig. 16 and Table X show the prioritisation of the RoIs and the evolution of the RI values respectively. The orange RoI is resolved using only the low frequency Walsh measurements denoted by the green region in Fig. 5(c) and the corresponding number of measurements is given in Table VI. As there are not enough measurements for further refinement of the orange RoI, it is not resolved using the higher frequency Walsh measurements and is removed from the list. Only the red RoI is resolved to the native resolution of the DMD. Thus, the RI is able to prioritise RoIs on the basis of the information change across spatial scales for Mars rover camera images as well. Error metrics for both measurement budget scenarios of the Mars1 image are provided in Table C and Table D in the supplementary material.
output of the \"detection and segmentation\" step may lead to different RoI selections with the noise realisations, but we expect the first ones to remain more stable. Deviations from an idealised solution are rather a model error than due to our adaptive algorithm, the issue of simultaneously treating modelling errors is beyond the scope of the paper and left to future research. Examples of RPS with noisy observations using the Cuprite Vis dataset are provided in Fig. V, VI, VII, and VIII in the supplementary material. In case of low light conditions, if the photon count is not too low then the noise can be modelled as a Gaussian with a variable variance and the RI is applicable. Analysing scenarios dominated by Poisson noise is beyond the scope of the article, it will be considered in future work along the lines of [43].
The required processing time depends on the underlying compressed sensing solver. This can be tailored to the required application and may be iterative or learned in nature. If the application admits structured measurement matrices, such as the Walsh or Fourier matrices, then the FFT algorithm can be exploited to accelerate the reconstruction. The complexity analysis of compressed sensing solvers can be found in [32] and the references therein. Calculation of the RI involves a vector subtraction that takes \\(N\\) FLOPs and a calculation of a squared \\(l_{2}\\) norm that takes \\(2N-1\\) FLOPs, where \\(N\\) is the total number of pixels in the RoI. The total number of FLOPs required for RI calculation is \\(3N-1\\).
### _Comparison with Other Techniques_
As discussed previously, for exploration scenarios region based acquisition is preferred over acquisition of the complete scene due to limited resources. A region based approach also provides the opportunity to acquire the RoIs with greater precision. A comparison of the RPS with classical compressed sensing [8, 9] and multi-level compressed sensing [13] with Walsh measurements is presented in Table XI. The measurement budget is 5300 for all three acquisition methods and algorithm (2) is used with Daubechies-8 wavelets as the sparsity basis.
Rademacher measurements are used for classical compressed sensing. On their own, classical compressed sensing and multi-level compressed sensing recover the entire scene at the native resolution of the DMD. As expected, the RoIs are
\\begin{table}
\\begin{tabular}{|c|c|c|c|} \\hline Iteration & \\multicolumn{2}{c|}{RI} & Available \\\\ \\cline{2-3} & RoI 1 (Red) & RoI 2 (Orange) & Measurements \\\\ & (\\(128\\times 128\\)) & (\\(128\\times 128\\)) & \\\\ \\hline 1 & 293.72 & 147.39 & 1251 \\\\ \\hline 2 & 64.84 & 147.39 & 562 \\\\ \\hline 3 & 64.84 & - & 562 \\\\ \\hline 4 & 14.97 & - & 562 \\\\ \\hline 5 & - & - & 0 \\\\ \\hline \\end{tabular}
\\end{table} TABLE X: Evolution of RI for Mars1 image with Walsh measurements using multi-level sampling (Fig. 16) with limited measurement budget. The highlighted RoI indicates the one selected at each iteration. The total number of measurements is 6301.
Fig. 16: Prioritisation of RoIs with RI for Mars1 image with limited measurement budget. The total number of measurements is 6301 (9.6% of the total number of pixels).
\\begin{table}
\\begin{tabular}{|c|c|c|c|} \\hline Iteration & \\multicolumn{2}{c|}{RI} & Available \\\\ \\cline{2-3} & RoI 1 (Red) & RoI 2 (Orange) & Measurements \\\\ & (\\(128\\times 128\\)) & (\\(128\\times 128\\)) & \\\\ \\hline 1 & 293.72 & 147.39 & 2502 \\\\ \\hline 2 & 64.84 & 147.39 & 1813 \\\\ \\hline 3 & 64.84 & 49.33 & 1124 \\\\ \\hline 4 & 14.37 & 19.83 & 562 \\\\ \\hline 5 & 14.37 & 13.75 & 562 \\\\ \\hline 6 & - & 14.31 & 0 \\\\ \\hline 7 & - & - & 0 \\\\ \\hline \\end{tabular}
\\end{table} TABLE IX: Evolution of RI for Mars1 image with Walsh measurements using multi-level sampling (Fig. 15). The highlighted RoI indicates the one selected at each iteration. The total number of measurements is 7552.
Fig. 17: Prioritisation of RoIs with RI for Mars2 image. The total number of measurements is 5299 (8.1% of the total number of pixels).
best reconstructed by the RPS, as in that case the RoIs are recovered in a dedicated manner. As the background is left at a lower resolution in the RPS algorithm, the recovery of the background is best with multi-level compressed sensing. Table XI shows that in an exploration scenario with a limited measurement budget, region based acquisition, like the RPS, is able to distribute the measurement budget intelligently and acquire better quality RoIs, as measured by NMSE and SSIM. Fig. IX in the supplementary material shows the reconstruction comparison for the three considered methods.
Fig. 18 and Fig. 19 show the multi-resolution compressed sensing reconstructions from MR-AMP [44, 4] and the STOne transform [45, 3] respectively for the Cuprite Vis dataset. MR-AMP used the same number of measurements for reconstruction at different down-sampling factors as the RPS algorithm illustrated in Fig. 11. For the STOne transform the number of measurements must be a power of 2; therefore, we use 1024 and 4096 measurements to reconstruct two different low resolution images. These comparisons show that at the considered sub-sampling rates, a step-wise resolution refinement procedure, like the RPS, is preferable.
* [7] P. Bhattacharjee and A. Borner, \"A plug-n-play framework and acquisition methodology for remote exploration systems with single pixel cameras,\" in _Proc. SPIE 11062, Digit. Opt. Technol. 2019_, Munich, Germany, Jun., pp. 228-243.
* [8] E. J. Candes, J. Romberg, and T. Tao, \"Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,\" _IEEE Trans. Inf. Theory_, vol. 52, no. 2, pp. 489-509, Feb. 2006.
* [9] D. L. Donoho, \"Compressed sensing,\" _IEEE Trans. Inf. Theory_, vol. 52, no. 4, pp. 1289-1306, Apr. 2006.
* [10] E. J. Candes and J. K. Romberg, \"Signal recovery from random projections,\" in _Proc. SPIE 5674, Comput. Imag. III_, San Jose, CA, USA, Mar. 2005, pp. 76-86.
* [11] Y. Tsaig and D. L. Donoho, "Extensions of compressed sensing," _Signal Process._, vol. 86, no. 3, pp. 549-571, Mar. 2006.
* [12] B. Roman, A. Hansen, and B. Adcock, \"On asymptotic structure in compressed sensing,\" 2014, arXiv:1406.4178.
* [13] B. Adcock, A. C. Hansen, C. Poon, and B. Roman, \"Breaking the coherence barrier: A new theory for compressed sensing,\" _Forum of Mathematics, Sigma_, vol. 5, p. e4, 2017.
* [14] NASA, \"Mars exploration rovers,\" Available at [https://mars.nasa.gov/mer/mission/overview](https://mars.nasa.gov/mer/mission/overview).
* [15] S. Maurice _et al._, \"The ChemCam instrument suite on the Mars Science Laboratory (MSL) rover: Science objectives and mast unit description,\" _Space Sci. Rev._, pp. 95-166, Sep. 2012.
* [16] R. Francis _et al._, \"AEGIS autonomous targeting for ChemCam on mars science laboratory: Deployment and results of initial science team use,\" _Sci. Robot._, p. eaam4582, Jun. 2017.
* [17] M. Burger, \"Bregman distances in inverse problems and partial differential equations,\" in _Advances in Mathematical Modeling, Optimization and Optimal Control_. Switzerland: Springer International Publishing, 2016, pp. 3-33.
* [18] M. Benning and M. Burger, \"Modern regularization methods for inverse problems,\" _Acta Numerica_, vol. 27, pp. 1-111, May 2018.
* [19] M. Burger, E. Resmerita, and L. He, \"Error estimation for Bregman iterations and inverse scale space methods in image restoration,\" _Comput._, vol. 81, pp. 109-135, Nov. 2007.
* [20] E. Lorenz, S. Mitchell, T. Sauberlich, C. Paproth, W. Halle, and O. Frauenberger, \"Remote sensing of high temperature events by the FireBird mission,\" _Int. Arch. Photogram. Remote Sens. Spatial Inf. Sci._, vol. XL-7/W3, pp. 461-467, Apr. 2015.
* [21] M. C. Kemp, P. F. Taday, B. E. Cole, J. A. Cluff, A. J. Fitzgerald, and W. R. Tribe, "Security applications of terahertz technology," in _Proc. SPIE 5070, Terahertz for Military and Security Applications_, Orlando, FL, USA, Jul. 2003, pp. 44-52.
* [22] E. Kopp _et al._, "HP3-RAD: A compact radiometer design with on-site calibration for in-situ exploration," in _Proc. SPIE 9973, Infrared Remote Sens. Instrum. XXIV_, San Diego, CA, USA, Sep. 2016, pp. 249-254.
* [23] M. Grott _et al._, \"The MASCOT Radiometer MARA for the Hayabusa 2 mission,\" _Space Sci. Rev._, vol. 208, no. 1, pp. 413-431, Aug. 2016.
* [24] S. Augustin, J. Hieronymus, P. Jung, and H.-W. Hübers, "Compressed sensing in a fully non-mechanical 350 GHz imaging setting," _J. Infrared Millim. Terahertz Waves_, vol. 36, no. 5, pp. 496-512, May 2015.
* [25] M. F. Duarte _et al._, \"Single-pixel imaging via compressive sampling,\" _IEEE Signal Process. Mag._, vol. 25, no. 2, pp. 83-91, Mar. 2008.
* [26] B. Lee, _Introduction to +/- 12 Degree Orthogonal Digital Micromirror Devices (DMDs)_, Texas Instruments, Dallas, Texas, United States, 2008.
* [27] N. Ahmed and K. R. Rao, _Orthogonal Transforms for Digital Signal Processing_. Berlin Heidelberg: Springer-Verlag, 1975.
* [28] E. Candes and J. Romberg, "Sparsity and incoherence in compressive sampling," _Inverse Problems_, vol. 23, no. 3, pp. 969-985, Apr. 2007.
* [29] M. Lustig, D. Donoho, and J. M. Pauly, \"Sparse MRI: the application of compressed sensing for rapid MR imaging,\" _Magnetic Reson. Medicine_, vol. 58, no. 6, pp. 1182-1195, Oct. 2007.
* [30] A. Majumdar and R. K. Ward, \"On the choice of compressed sensing priors and sparsifying transforms for MR image reconstruction: An experimental study,\" _Signal Process. Image Commun._, vol. 27, no. 9, pp. 1035-1048, Oct. 2012.
* [31] E. Candes, Y. Eldar, D. Needell, and P. Randall, \"Compressed sensing with coherent and redundant dictionaries,\" _Appl. Comput. Harmon. Anal._, vol. 31, pp. 59-73, Jul. 2011.
* [32] S. R. Becker, E. J. Candes, and M. C. Grant, \"Templates for convex cone problems with applications to sparse signal recovery,\" _Math. Prog. Comp._, vol. 3, no. 3, pp. 165-218, Aug. 2011.
* [33] S. Boyd, J. Duchi, and L. Vandenberghe, Subgradients. Available at [https://web.stanford.edu/class/ee364b/lectures/subgradients_notes.pdf](https://web.stanford.edu/class/ee364b/lectures/subgradients_notes.pdf).
* [34] D. P. Bertsekas, _Convex Optimization Theory_. Belmont, MA, USA: Athena Scientific, 2009.
* [35] NASA, \"Airborne visible infrared spectrometer (AVIRIS),\" Available at [https://aviris.jpl.nasa.gov/](https://aviris.jpl.nasa.gov/).
* [36] SPECTIR, \"Free data samples,\" Available at [http://www.spectir.com/free-data-samples/](http://www.spectir.com/free-data-samples/).
* [37] A. A. Mueller, A. Hausold, and P. Strobl, \"HySens-DAIS/ROSIS imaging spectrometers at DLR,\" in _Proc. SPIE 4545, Remote Sens. Environ. Monit. GIS Appl. Geol._, Toulouse, France, Jan. 2002, pp. 225-235.
* [38] NASA, \"PDS geosciences node data and services: MSL ChemCam,\" Available at [https://pdfs-geosciences.wustl.edu/msl/ml-mcm-chem-libs-4.5-rdr-vtl/msccm_1xxx/extras/mri_contours_in_mean_mages/](https://pdfs-geosciences.wustl.edu/msl/ml-mcm-chem-libs-4.5-rdr-vtl/msccm_1xxx/extras/mri_contours_in_mean_mages/).
* [39] I. Daubechies, _Ten Lectures on Wavelets_. Philadelphia, PA, USA: SIAM, 1992.
* [40] E. van den Berg and M. P. Friedlander, \"SPOT-A linear operator toolbox,\" Available at [https://github.com/mpf/spot/](https://github.com/mpf/spot/).
* [41] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, \"Image quality assessment: from error visibility to structural similarity,\" _IEEE Trans. Image Process._, vol. 13, no. 4, pp. 600-612, Apr. 2004.
* [42] S. Foucart and H. Rauhut, _A Mathematical Introduction to Compressive Sensing_. New York, NY, USA: Birkhauser, 2013.
* [43] M. Benning and M. Burger, \"Error estimates for general fidelities,\" _Electron. Trans. Numer. Anal._, vol. 38, pp. 44-68, Mar. 2011.
* [44] X. Wang, \"Multi-resolution approximate message passing algorithm for multi-resolution compressed sensing problem,\" Available at [https://github.com/xingwangsfu/MR-AMP/](https://github.com/xingwangsfu/MR-AMP/).
* [45] T. Goldstein, \"Implementation of the STOne transform for compressed sensing video,\" Available at [https://github.com/tomgoldstein/stone/](https://github.com/tomgoldstein/stone/).
* [46] N. Vaswani and M. Lu, \"Modified-CS: Modifying compressive sensing for problems with partially known support,\" _IEEE Trans. Signal Process._, vol. 58, no. 9, pp. 4595-4607, Sep. 2010.
\\begin{tabular}{c c} & Profim Bhattacharjee was born in New Delhi, India, in 1991. He studied electronics and communications engineering at IIIT Delhi, New Delhi, India. He currently works at the German Aerospace Center (DLR), Berlin, Germany and is a doctoral student at the department of electrical engineering at the Friedrich-Alexander-University Erlangen-Nurnberg (FAU). His research interests include computational imaging, statistical estimation and signal processing methods for remote sensing. \\\\ \\end{tabular}
\\begin{tabular}{c c} & Martin Burger was born in Wels, Austria, in 1976. He studied mathematics at Johannes Kepler University Linz and University of Milano, and finished his PhD in 2000. After working as a postdoc at Johannes Kepler University, CAM Assistant Professor at UCLA, and scientific adviser of the Johann Radon Institute for Computational and Applied Mathematics, he accepted an offer from the Westfalische Wilhelms-University in Munster, Germany, where he worked as full professor from 2006 to 2018. Since 2018 he has been a full professor in applied mathematics at Friedrich-Alexander-University Erlangen-Nurnberg (FAU). His research interests include inverse problems, mathematical imaging, and theoretical foundations of deep learning. Dr. Burger received numerous awards and honours for his work in applied mathematics and inverse problems such as the Calderon Prize of the Inverse Problems International Association and an ERC Consolidator Grant in 2013. He serves in the editorial board of several journals in applied mathematics including Inverse Problems, Inverse Problems and Imaging, and Mathematics of Computation. Since 2017 he has been one of the editors-in-chief of the European Journal of Applied Mathematics. \\\\ \\end{tabular}
\\begin{tabular}{c c} & Anko Borner was born in Berlin, Germany. He studied electrical engineering at the Technical University Ilmenau. He joined the German Aerospace Center (DLR) in 1996 for his PhD studies on data processing on-board satellites. After receiving the doctoral degree in 1999 he accepted a PostDoc position at Zurich University, Switzerland. In 2000 he came back to Berlin to become a scientific researcher at the German Aerospace Center. Since 2003 he is head of department in different institutes. His research interests are about modelling and simulation of optical systems, computer vision and sensor artificial intelligence. He was involved in several ESA and NASA space missions. In 2011 Dr. Borner was awarded with the DLR research grant, which allows him doing a research stay at Auckland, New Zealand. In 2018 he and his team won the Innovation Award Berlin/ Brandenburg for developing an optical GPS-free navigation system. \\\\ \\end{tabular}
\\begin{tabular}{c c} & Veniamin I. Morgenshtern was born in Leningrad, Russia, in 1982. He received the Specialist degree in mathematics and software engineering from Saint-Petersburg State University, Russia in 2004 and the Ph.D. degree in electrical engineering from ETH Zurich, Switzerland, in 2010. From 2010 to 2012, he was a Postdoctoral Researcher with the Electrical Engineering Department at ETH Zurich. From 2012 to 2016, he was a Postdoctoral Researcher with the Statistics Department at Stanford University. In 2017 he was a Researcher at Helm.AI, a self-driving car startup. From 2017 to 2018 he was the Chief Scientist at Mentality.AI, an algorithmic consulting company. Since 2018, he has been a Professor in Machine Learning and Signal Processing at Friedrich-Alexander-University Erlangen-Nurnberg (FAU), Germany. His research interests include mathematical signal processing, statistical machine learning, and deep learning. Dr. Morgenshtern was a recipient of the ETH Zurich Medal for outstanding doctoral thesis in 2011, and of the Fellowship for Advanced Researchers from the Swiss National Science Foundation in 2012. His other awards include the Second Prize in the Thomson Reuters Eikon Tear Tagging Challenge, a machine learning and natural language processing competition, and the teaching award for the Best Laboratory Course at the Technical Faculty at FAU. \\\\ \\end{tabular}
The RoI selections may change with the noise realisations but we expect the first ones to remain more stable. Deviations from idealised solutions are due to model errors rather than our adaptive algorithm.
Fig. IX shows the comparison between classical compressed sensing, multi-level compressed sensing, and RPS. The error metrics are tabulated in Table XI in the main article.
Fig. 1: Method for RoI detection and segmentation from the low resolution reconstruction. (a) Low resolution reconstruction at original size (\\(256\\times 256\\)). (b) Resized to macro pixel image dimension (\\(32\\times 32\\)), the considered macro pixel size is \\(8\\times 8\\). (c) Seed (brightest) pixel (Red). (d) Increase the region around the seed pixel by one pixel in all directions. Stop after contrast is above a threshold. (e) Second seed pixel (Pink). (f)-(h) Repeat steps 2-4 in the algorithm description. The total number of RoIs in this example is 3. (i) Segmented Image (\\(32\\times 32\\)). (j) Segmented Image resized to original dimensions (\\(256\\times 256\\)). (k) RoIs extended to the next dyadic size. (l) Representation of RoI on low resolution reconstruction.
Fig. 3: Prioritisation of RoIs with RI for Gulf Vis dataset. The total number of measurements is 8370 (12.8% of the total number of pixels).
Fig. 2: Reconstruction of the Cuprite Vis RoIs (Fig. 11 in the main article) at various macro pixel resolutions. All RoIs are scaled to the same size for visualization. _Top panel_: RoI 1 (Red), _middle panel_: RoI 2 (Orange), and _bottom panel_: RoI 3 (yellow).
Figure 4: Prioritisation of RoIs with RI for Pavia dataset. The total number of measurements is 5910 (9% of the total number of pixels).
Figure 7: Prioritisation of RoIs with RI for Cuprite Vis dataset with Walsh measurements and additive Gaussian noise. Noise standard deviation is \\(0.1\\).
Figure 8: Prioritisation of RoIs with RI for Cuprite Vis dataset with Walsh measurements and additive Gaussian noise. Noise standard deviation is \\(1\\).

Goal-oriented autonomous operation of space rovers has been known to increase the scientific output of a mission. In this work we present an algorithm, called RoI Prioritised Sampling (RPS), that prioritises Regions-of-Interest (RoIs) in an exploration scenario in order to utilise the limited resources of the imaging instrument on the rover effectively. This prioritisation is based on an estimator that evaluates the change in information content at consecutive spatial scales of the RoIs without calculating the finer scale reconstruction. The estimator, called the Refinement Indicator (RI), is motivated and derived. Multi-scale acquisition approaches, based on classical and multi-level compressed sensing, with respect to the single pixel camera architecture are discussed. The performance of the algorithm is verified on remote sensing images and compared with the state-of-the-art multi-resolution reconstruction algorithms. At the considered sub-sampling rates the RPS is shown to better utilise the system resources for reconstructing the RoIs.
Autonomous systems, image acquisition, spatial resolution, space exploration, compressed sensing | Condense the content of the following passage. | 206 |
ieee/5d88c4df_5ab8_461b_99d4_0528bbdd1120.md | # Detection of Radio-Frequency Interference
Signal Over Land From FY-3B
Microwave Radiation Imager (MWRI)
Xiaolei Zou, Juan Zhao, Fuzhong Weng, and Zhengkun Qin
Manuscript received November 29, 2011; revised February 13, 2012; accepted March 9, 2012. Date of publication May 4, 2012; date of current version November 22, 2012. This work was supported in part by Chinese Ministry of Science and Technology Project 2010CB0951600, in part by Chinese Ministry of Finance Project GYHY200906006, and in part by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) project. X. Zou is with the Department of Earth, Ocean and Atmospheric Sciences, Florida State University, Tallahassee, FL 32306-4520 USA (e-mail: [email protected]). J. Zhao is with the China Meteorological Administration Training Centre, Beijing 100081, China. F. Weng is with the Satellite Meteorology and Climatology Division, Center for Satellite Applications and Research, National Environmental Satellite, Data and Information Service, National Oceanic and Atmospheric Administration, Washington, DC 20233 USA. Z. Qin is with the Center of Data Assimilation for Research and Application, Nanjing University of Information Science and Technology, Nanjing 210044, China, and also with the Department of Earth, Ocean and Atmospheric Sciences, Florida State University, Tallahassee, FL 32306-4520 USA. Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TGRS.2012.2191792
## I Introduction
The MicroWave Radiation Imager (MWRI) is on board the second-generation Chinese afternoon-configured polar-orbiting satellite (FY-3B), which was launched on November 5, 2010. MWRI has most of the frequencies of the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) on board the Aqua satellite, which was launched on May 4, 2002. The AMSR-E C-band (6.9 GHz) and X-band (10.7 GHz) channels are primarily designed to enhance surface-sensing capabilities [1]. In 2003, the U.S. Naval Research Laboratory launched the WindSat radiometer on the Department of Defense Coriolis satellite [2]. WindSat also provides measurements at frequencies ranging from 6 to 37 GHz, including the third and fourth components of the Stokes vector [3]. The successor of AMSR-E, i.e., AMSR-2, will be on board the Global Change Observation Mission 1st-Water (GCOM-W1) satellite, which is planned for launch in spring of 2012. The mission of AMSR-2 is similar to that of AMSR-E but is enhanced for radio-frequency interference (RFI) detection by adding channels at 7.3 GHz, adjacent to 6.9 GHz. It is expected that a combination of MWRI and AMSR-2 can provide some improved capability of monitoring the water cycle on Earth and in the atmosphere.
Today, the X-band is occupied and used for both passive and active remote sensing. In particular, radar technologies are used in civil, military, and government institutes for weather monitoring, air traffic control, garage remote control, Global Positioning System (GPS) on the highway, defense tracking, and vehicle speed detection for law enforcement. Thus, the AMSR-E C- and X-bands and the MWRI X-band operate in unprotected frequency bands, and the natural thermal emission of the Earth can easily be obscured by signals from these active microwave transmitters [3, 4, 5, 6]. Satellite microwave thermal emission mixed with signals from the active sensors is referred to as radio-frequency interference (RFI). RFI is an increasingly serious problem for microwave sensing. Early examinations of AMSR-E and WindSat data showed evidence of extensive RFI signals in brightness temperature measurements at low-frequency (e.g., C-band and X-band) channels [4, 5, 6, 7, 8]. Low-frequency microwave observations are mainly used for retrievals of surface parameters, such as soil moisture, vegetation water content and surface temperature [9, 10, 11, 12], and snow cover [13]. These retrievals are best carried out with RFI-filtered data to minimize retrieval errors. In this paper, the RFI characteristics at X-band, including the magnitude, extent, and location, are further investigated using MWRI on board the Chinese FY-3B satellite. Emphasis is given to the identification of RFI signals over land throughout the seasons.
This paper is organized as follows: Section II provides a brief description of FY-3B MWRI channel characteristics. Two RFI identification methods are described in Section III. Numerical results are presented in Section IV. Section V gives a summary and some conclusions.
## II MWRI Channel Characteristics
The MWRI instrument and its calibration are described in [14] and [15]. The calibrated and geolocated brightness temperatures are produced jointly by the National Satellite Meteorological Center of China Meteorological Administration and the Shanghai Institute of Space Communication Technology.
MWRI scans the Earth conically with a viewing angle of 45\\({}^{\\circ}\\) and a swath of 1400 km. It completes a scan within a period of 1.8 s, during which the spacecraft (the satellite velocity is 7 km/s) travels about 12 km at its subsatellite point, which equals the resolution of the 89-GHz antenna beam projected on the Earth's surface. The sampling interval for the Earth view is 2.08 ms, providing a total of 254 samples from Earth observations during each scanning cycle. There are some overlaps between the antenna fields of view (FOVs) for all the channels, except for the 89-GHz channel. The MWRI frequencies, polarization, bandwidths, FOV resolution, and Noise Equivalent Temperature Sensitivity (NE\\(\\Delta\\)T) for each of the ten channels are listed in Table I. The main beam efficiency is greater than 90%. The dynamic range for the MWRI brightness temperature is 3-340 K.
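The scan-geometry numbers above can be cross-checked with a few lines of arithmetic (the variable names are illustrative; the ~12-km figure quoted in the text is the rounded along-track displacement per scan cycle):

```python
# Quick consistency check of the MWRI scan-geometry numbers quoted above.
scan_period_s = 1.8          # one conical scan cycle
ground_speed_km_s = 7.0      # subsatellite-point velocity
sample_interval_s = 2.08e-3  # Earth-view sampling interval
n_samples = 254              # Earth-view samples per scan cycle

# Along-track displacement per scan: 7 km/s * 1.8 s = 12.6 km (quoted as ~12 km).
along_track_km = ground_speed_km_s * scan_period_s
# Portion of each 1.8-s cycle spent on Earth views: 254 * 2.08 ms ~ 0.53 s.
earth_view_s = n_samples * sample_interval_s
print(round(along_track_km, 1), round(earth_view_s, 3))
```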
## III RFI Detection Methods
### _Spectral Difference Method_
Over most land surfaces, brightness temperatures at 18.7 GHz (channels 3 and 4) are higher than those at 10.65 GHz, i.e., the spectral differences \\(\\rm{TB_{10V}-TB_{18V}<0}\\) and \\(\\rm{TB_{10H}-TB_{18H}<0}\\). This is because the dielectric constant of water in soils and vegetation is dependent on frequency, and the resulting surface emissivity increases with frequency. Natural phenomena such as flooding and wet surfaces decrease the brightness temperatures, particularly at lower microwave frequencies. An RFI signal at 10.65 GHz increases the brightness temperature at 10.65 GHz, resulting in a reversed spectral gradient: \\(\\rm{TB_{10V}-TB_{18V}>0}\\) and \\(\\rm{TB_{10H}-TB_{18H}>0}\\)[4]. RFI signals typically originate from a wide variety of coherent point target sources, e.g., radiating devices and antennas, and are often directional and narrow-banded, being isolated in space and persistent in time.
Following the work in [6], the RFI over land in summer can be identified for MWRI at the 10.65-GHz vertical and horizontal polarization channels when the following inequalities for the RFI-sensitive spectral difference indices (differences between brightness temperatures at two different frequencies for a given polarization) computed from the brightness temperatures hold true:
\\[\\rm{TB_{10H}-TB_{18H}}>5\\ {\\rm K} \\tag{1}\\]
\\[\\rm{TB_{10V}-TB_{18V}}>5\\ {\\rm K}. \\tag{2}\\]
This 5-K threshold is set based on the study in [14], in which a microwave land emissivity model is used for simulating AMSR-E brightness temperatures based on some mean emissivity spectral characteristics over various land types.
It is found that the average value of \\(\\rm{TB_{10}-TB_{18}}\\) increases significantly when snow cover is present in high latitudes. The latitude dependence is more significant in winter than in summer, and the 5-K threshold is therefore modified into a latitude-dependent threshold, i.e.,
\\[\\rm{RFI^{winter}_{H1-threshold}=f(\\varphi)+c} \\tag{3}\\]
where \\(f(\\varphi)=a\\varphi+b\\) is a linear fit equation derived from all data in the domain, and \\(c\\) is a parameter whose exact value depends on channel sensitivity and natural scene in considerations.
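The latitudinal baseline \\(f(\\varphi)\\) can be obtained by a least-squares fit of the 10-18 GHz spectral differences against latitude over the domain. A minimal sketch (the function name and interface are illustrative; a simple degree-1 polynomial fit is assumed, with \\(c\\) left as a tunable offset as in the text):

```python
import numpy as np

def winter_rfi_threshold(lat, spectral_diff, c=5.0):
    """Fit the latitude-dependent baseline f(phi) = a*phi + b of Eq. (3)
    to TB10 - TB18 spectral differences over a domain, and return a
    function giving the winter RFI threshold f(phi) + c at any latitude.
    `c` is the channel/scene-dependent offset mentioned in the text."""
    a, b = np.polyfit(np.asarray(lat, dtype=float),
                      np.asarray(spectral_diff, dtype=float), deg=1)
    return lambda phi: a * np.asarray(phi, dtype=float) + b + c
```

A pixel would then be flagged in winter when its 10-18 GHz difference exceeds the returned threshold at its latitude.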
The scattering from natural targets (e.g., snow and ice) can substantially decrease the brightness temperatures at high frequencies, so a reversed spectral difference can also be related to surface snow and ice. To avoid misidentifying snow and ice as RFI with the aforementioned indices, the following scattering-sensitive spectral difference indices are added:
\\[\\rm{TB_{89H}-TB_{18H}}\\geq-10\\ {\\rm K} \\tag{4}\\]
\\[\\rm{TB_{89V}-TB_{18V}}\\geq-10\\ {\\rm K}. \\tag{5}\\]
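The summer spectral-difference test can be sketched as follows, combining the 5-K threshold of (1)-(2) with the scattering screen of (4)-(5) for one polarization (the function name and array interface are illustrative, not part of the MWRI processing chain):

```python
import numpy as np

def detect_rfi_summer(tb10, tb18, tb89, threshold=5.0, scatter_floor=-10.0):
    """Flag RFI at 10.65 GHz (one polarization) via spectral differences.

    tb10, tb18, tb89 : brightness temperatures (K) at 10.65, 18.7 and
    89 GHz for the SAME polarization (V or H).
    A pixel is flagged when TB10 - TB18 exceeds `threshold` (Eqs. 1-2)
    while TB89 - TB18 stays above `scatter_floor` (Eqs. 4-5), which
    screens out snow/ice scattering that also reverses the spectrum.
    """
    tb10, tb18, tb89 = (np.asarray(t, dtype=float) for t in (tb10, tb18, tb89))
    rfi_like = (tb10 - tb18) > threshold
    not_scattering = (tb89 - tb18) >= scatter_floor
    return rfi_like & not_scattering
```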
### _PCA Method_
Different from the RFI signatures, the Earth's surfaces often produce smooth and ultrawideband microwave radiation. The multichannel correlations of radiometer data from natural radiations are often high [see Fig. 9(a) and (b)]. The spectral difference technique previously described does not explicitly utilize this channel correlation information from natural radiations. On the other hand, the channel correlations between 10 GHz and the remaining MWRI channels for RFI signatures are low since RFI increases brightness temperatures significantly only at a particular frequency. Li _et al._[3] extended the spectral difference technique using principal component analysis (PCA) of RFI indices, which linearly transforms a set of correlated RFI indices into a smaller set of uncorrelated variables to effectively separate RFI from natural radiations.
Specifically, a vector of five-component RFI indices is defined as
\\[\\vec{R}_{\\rm indices}=\\begin{pmatrix}\\text{TB}_{\\rm 10H}-\\text{TB}_{\\rm 18H}\\\\ \\text{TB}_{\\rm 18V}-\\text{TB}_{\\rm 23V}\\\\ \\text{TB}_{\\rm 18H}-\\text{TB}_{\\rm 23H}\\\\ \\text{TB}_{\\rm 23V}-\\text{TB}_{\\rm 37V}\\\\ \\text{TB}_{\\rm 23H}-\\text{TB}_{\\rm 37H}\\end{pmatrix}. \\tag{6}\\]
The data matrix for identifying RFI at 10.65-GHz horizontal polarization using PCA collects the index vector of (6) for each observation point,

\\[\\mathbf{A}_{5\\times N}=\\left[\\vec{R}_{\\rm indices}^{\\,(1)},\\vec{R}_{\\rm indices}^{\\,(2)},\\ldots,\\vec{R}_{\\rm indices}^{\\,(N)}\\right] \\tag{7}\\]

where \\(N\\) is the total number of data points over a specified region.
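Assembling the 5 \\(\\times\\) N index matrix from the brightness temperatures of (6) might look like this (the dictionary keys are illustrative channel labels, not MWRI file fields):

```python
import numpy as np

def build_rfi_index_matrix(tb):
    """Assemble the 5 x N RFI-index data matrix A of Eq. (6) from a dict
    of brightness-temperature arrays keyed by channel name, e.g.
    tb['10H'], tb['18V'], ...  Each row of A is one spectral-difference
    index evaluated at all N observation points."""
    t = {k: np.asarray(v, dtype=float) for k, v in tb.items()}
    return np.vstack([t['10H'] - t['18H'],
                      t['18V'] - t['23V'],
                      t['18H'] - t['23H'],
                      t['23V'] - t['37V'],
                      t['23H'] - t['37H']])
```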
The covariance matrix \\(\\mathbf{S}_{5\\times 5}=\\mathbf{AA}^{T}\\) is then constructed, whose eigenvalues \\(\\lambda_{i}\\)\\((i=1,2,\\ldots,5)\\) and eigenvectors \\(\\widetilde{e}_{i}=[e_{1,i},e_{2,i},\\ldots e_{5,i}]^{T}\\) are found to satisfy the following equation:
\\[\\mathbf{S}\\widetilde{e}_{i}=\\lambda_{i}\\widetilde{e}_{i},\\qquad i=1,2,\\ldots,5 \\tag{8}\\]
where \\(\\widetilde{e}_{i}\\) is the \\(i\\)th principal component (PC) mode \\((i=1,2,\\ldots,5)\\), and \\(\\lambda_{i}\\) indicates the contribution of the \\(i\\)th PC mode to the total variance of the data.
By expressing the eigenvalues and eigenvectors in matrix form
\\[\\Lambda=\\begin{pmatrix}\\lambda_{1}&\\cdots&0\\\\ \\vdots&\\ddots&\\vdots\\\\ 0&\\cdots&\\lambda_{5}\\end{pmatrix}\\quad\\mathbf{E}=[\\,\\widetilde{e}_{1}, \\widetilde{e}_{2},\\,\\ldots,\\widetilde{e}_{5}]\\]
(8) can be equivalently written as
\\[\\mathbf{SE}=\\mathbf{E}\\Lambda\\quad\\text{or}\\quad\\mathbf{S}=\\mathbf{E}\\Lambda \\mathbf{E}^{T}. \\tag{9}\\]
(Notice that \\(\\mathbf{E}^{-1}=\\mathbf{E}^{T}\\) since \\(\\mathbf{E}\\) is an orthogonal matrix.)
Projecting the data matrix \(\mathbf{A}\) onto the orthonormal space spanned by the set of basis vectors \(\widetilde{e}_{1},\widetilde{e}_{2},\ldots\widetilde{e}_{5}\) gives the so-called PC coefficients
\\[\\mathbf{U}_{5\\times N}=\\mathbf{E}^{T}\\mathbf{A}=\\begin{pmatrix}\\vec{u}_{1} \\\\ \\vec{u}_{2}\\\\ \\vdots\\\\ \\vec{u}_{5}\\end{pmatrix} \\tag{10}\\]
where \\(\\vec{u}_{i}=[u_{i,1},u_{i,2},\\ldots,u_{i,N}]\\) is the PC coefficient for the \\(i\\)th PC mode. In this new data space, the first basis vector (to be called the first PC mode) spans in the direction of the maximum variance in the data, and the second basic vector (e.g., the second PC mode) spans the direction of the largest variance not accounted for by the first vector.
Data matrix \\(\\mathbf{A}\\) can be reconstructed from the total five PC modes
\\[\\mathbf{A}=\\mathbf{E}\\mathbf{U}=\\sum_{i=1}^{5}\\widetilde{e}_{i}\\vec{u}_{i}. \\tag{11}\\]
High values of the PC coefficient for the first PC mode, i.e., \(\vec{u}_{1}\), indicate the presence of RFI.
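The chain from (6) to (11) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's code: the channel array names are assumptions, and each argument is a 1-D array of \(N\) gridded brightness temperatures for one MWRI channel.

```python
# Sketch of the PCA steps in Eqs. (6)-(11). The tb* argument names are
# illustrative; each is a 1-D array of N brightness temperatures (K).
import numpy as np

def pca_rfi_coefficient(tb10h, tb18h, tb18v, tb23h, tb23v, tb37h, tb37v):
    # Eq. (6)/(7): 5 x N matrix whose columns are the RFI index vectors.
    A = np.vstack([tb10h - tb18h,
                   tb18v - tb23v,
                   tb18h - tb23h,
                   tb23v - tb37v,
                   tb23h - tb37h])
    # Eq. (8): eigendecomposition of S = A A^T (symmetric, so use eigh).
    S = A @ A.T
    lam, E = np.linalg.eigh(S)
    order = np.argsort(lam)[::-1]        # sort modes by decreasing variance
    lam, E = lam[order], E[:, order]
    # Eq. (10): PC coefficients U = E^T A; row 0 is u_1, the first PC mode.
    U = E.T @ A
    # Note: the sign of each eigenvector is arbitrary, so in practice the
    # magnitude (or a fixed sign convention) is used when thresholding u_1.
    return U[0], lam / lam.sum()
```

Pixels whose first-mode coefficient is anomalously large would then be flagged as RFI; the detection threshold is chosen empirically.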
Reference [3] applied the aforementioned PCA method for the identification of RFI in WindSat data over land in summer. In this paper, the PCA method is applied to both summer and winter cases. To effectively identify RFI signals in MWRI winter data using the aforementioned PCA method, the RFI indices must be normalized, i.e.,
\\[\\vec{R}_{\\rm indices}=\\begin{pmatrix}\\frac{\\text{TB}_{\\rm 10H}-\\text{TB}_{\\rm 18H}- \\mu}{\\text{TB}_{\\rm 23V}-\\mu}\\\\ \\frac{\\text{TB}_{\\rm 18V}-\\text{TB}_{\\rm 23H}-\\mu}{\\text{TB}_{\\rm 23H}-\\mu}\\\\ \\frac{\\text{TB}_{\\rm 23H}-\\text{TB}_{\\rm 23H}}{\\text{TB}_{\\rm 23H}-\\mu} \\\\ \\frac{\\text{TB}_{\\rm 23H}-\\text{TB}_{\\rm 23H}-\\mu}{\\text{TB}_{\\rm 23H}-\\mu} \\\\ \\frac{\\text{TB}_{\\rm 23H}-\\text{TB}_{\\rm 23H}-\\mu}{\\text{TB}_{\\rm 23H}-\\mu} \\\\ \\end{pmatrix} \\tag{12}\\]
where \\(\\mu\\) and \\(\\sigma\\) are the mean and standard deviation of the five RFI indices.
## IV Numerical Results
### _MWRI Brightness Temperatures at 10.65 and 18.7 GHz_
Fig. 1 shows brightness temperatures at 10.65 and 18.7 GHz with dual polarizations over eastern China and Japan on February 24, 2011, from FY-3B MWRI. The differences in brightness temperature between these two channels (i.e., 10.65 GHz minus 18.7 GHz) with the same polarization are also shown. The brightness temperatures at 10.65 GHz [Fig. 1(a) and (b)] are generally lower than those at 18.7 GHz [Fig. 1(c) and (d)]. The differences exceed a few degrees (\(\sim\)3 K) in most land areas [Fig. 1(e) and (f)]. Places where 10.65 GHz is much higher than 18.7 GHz (\(>\)10 K) are scattered over Japan and large cities in China (e.g., Beijing and Shanghai). Brightness temperatures at 10.65 and 18.7 GHz with dual polarizations over Europe on February 7, 2011, are shown in Fig. 2. RFI appears most pronounced over Europe and least pronounced over the United States.
The linear correlations between the 10.65- and 18.7-GHz channels are revealed in the scatter plots of Fig. 3 for data within the domains shown in Figs. 1 and 2. Data points satisfying \\(\\rm{TB}_{10H}-\\rm{TB}_{18H}\\geq 5\\) K or \\(\\rm{TB}_{10V}-\\rm{TB}_{18V}\\geq 5\\) K are indicated in black. \\(\\rm{TB}_{10H}-\\rm{TB}_{18H}\\) and \\(\\rm{TB}_{10V}-\\rm{TB}_{18V}\\) are negatively biased and linearly correlated when \\(\\rm{TB}_{10H}-\\rm{TB}_{18H}<0\\) or \\(\\rm{TB}_{10V}-\\rm{TB}_{18V}<0\\). The MWRI-observed brightness temperatures at 10.65 GHz could be more than 100 K larger than the brightness temperatures at 18.7 GHz over Europe. Some values exceed the upper limit (340 K) of the specified dynamic range of the MWRI.
Another way to examine the similarities and differences among MWRI channels 1-4 is to plot \(\rm{TB}_{10H}\), \(\rm{TB}_{18H}\), \(\rm{TB}_{10V}\), and \(\rm{TB}_{18V}\) as functions of longitude, with the latitudes of data points satisfying \(\rm{TB}_{10H}-\rm{TB}_{18H}>5\) K or \(\rm{TB}_{10V}-\rm{TB}_{18V}>5\) K indicated in color, and \(\rm{TB}_{10H}-\rm{TB}_{18H}\leq 5\) K or \(\rm{TB}_{10V}-\rm{TB}_{18V}\leq 5\) K indicated in black (Figs. 4 and 5). There appears to be a nearly constant upper bound on the brightness temperatures of each of the four channels, except for channels 1 and 2 at the lowest frequency. The range of observed brightness temperatures for the horizontal polarization is much larger than that for the vertical polarization, confirming that vertical polarization is much less sensitive to surface variations than horizontal polarization. For the 10.65-GHz channels, data satisfying \(\rm{TB}_{10H}-\rm{TB}_{18H}>5\) K or \(\rm{TB}_{10V}-\rm{TB}_{18V}>5\) K form several mountainlike distributions in Fig. 4(a) and (b), or Fig. 5(a) and (b), respectively. Are they the RFI signals, which typically
Fig. 1: Brightness temperatures at (a) and (b) 10.65 GHz and (c) and (d) 18.7 GHz with (left panels) horizontal and (right panels) vertical polarizations over Asia on February 24, 2011. (e) and (f) Differences in brightness temperature between 10.65 and 18.7 GHz with the same polarization (e.g., (e) = (a) \\(-\\) (c), (f) = (b) \\(-\\) (d)).
originate from a wide variety of coherent point target sources and are often directional, isolated, narrowband, and coherent? The following two sections describe the MWRI RFI results obtained using the spectral difference method and the PCA method.
### _RFI Distributions Detected by the Spectral Difference Method_
The scattering effects from snow and ice can lower the brightness temperatures at 18.7 GHz more than those at 10.65 GHz and thereby change the surface spectra. In addition, the magnitude of \(\rm{TB_{10}-TB_{18}}\) due to the presence of snow can exceed 5 K. This makes it problematic to apply the spectral difference method, which works for summer data, directly to winter cases. For example, Fig. 6 presents the spatial distributions of all the data points with differences between the 10- and 18-GHz channels greater than 5 K at horizontal and vertical polarizations over larger domains than Figs. 1 and 2. The spectral difference \(\rm{TB_{10H}-TB_{18H}>5}\) K over a large area in the north [Fig. 6(a) and (b)] results from the presence of snow, not RFI. Since the scattering effects from snow and ice are much stronger at the high-frequency channels 9 and 10 (89 GHz) than at the low-frequency channels 1-4 (10.65-18.7 GHz), an additional criterion \(\rm{TB_{89}-TB_{18}>-10}\) K is added for winter cases. In Fig. 6, data points with \(\rm{TB_{10}-TB_{18}\geq 5}\) K are divided into two groups: one with \(\rm{TB_{89}-TB_{18}\leq-10}\) K (green) and the other with \(\rm{TB_{89}-TB_{18}>-10}\) K (red).
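The winter screening just described reduces to two array tests; a sketch with illustrative channel array names (one brightness temperature per pixel):

```python
# Split spectral-difference candidates (TB10 - TB18 >= 5 K) into snow/ice
# pixels (strong 89-GHz scattering, TB89 - TB18 <= -10 K) and likely RFI.
import numpy as np

def classify_candidates(tb10, tb18, tb89):
    candidate = (tb10 - tb18) >= 5.0
    snow = candidate & ((tb89 - tb18) <= -10.0)  # scattering surface, not RFI
    rfi = candidate & ~snow
    return rfi, snow
```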
Fig. 7 illustrates how the RFI signals are identified. The scatter plot of \\(\\rm{TB_{10H}-TB_{18H}}\\) for the first ten-day average data in summer (July 1-10, 2011) over Asia (\\(10^{\\circ}\\) N\\(-80^{\\circ}\\) N,\\(70^{\\circ}\\) E\\(-150^{\\circ}\\) E) is presented in Fig. 7(a), and that for winter (February 1-10, 2011) is presented in Fig. 7(b) and (c), with Fig. 7(b) including those data points with either \\(\\rm{TB10-TB18}<5\\) K or \\(\\rm{TB10-TB18}\\geq 5\\) K and \\(\\rm{TB89-TB18}>-10\\) K, and Fig. 7(c) including data points satisfying \\(\\rm{TB10-TB18}\\geq 5\\) K and \\(\\rm{TB89-TB18}\\leq-10\\) K. The criteria used in the spectral difference method for RFI detection are indicated by the solid line, which is defined by
Fig. 2: Same as Fig. 1, except for Europe (\\(35^{\\circ}\\) N\\(-65^{\\circ}\\) N, \\(10^{\\circ}\\) W\\(-40^{\\circ}\\) E, right panels) on February 7, 2011.
\\(\\text{TB}_{\\text{10H}}-\\text{TB}_{\\text{18H}}=5\\) K in Fig. 7(a) and \\(f(\\varphi)=a\\varphi+b+c\\) in Fig. 7(b) and (c). The values of \\(a\\), \\(b\\), and \\(c\\) in Fig. 7(b) and (c) are 0.308, \\(-\\)14.836, and 10.602, respectively. It is clearly seen that the criterion (1) [dashed line in Fig. 7(b) and (c)] does not apply to the winter case. The latitudinal-dependent criteria (4) [solid line in Fig. 7(b) and (c)] effectively remove the residual false RFIs.
Fig. 8 presents the RFI distributions at 10.65 GHz for horizontal polarization using the spectral difference method over Asia [Fig. 8(a) and (b)] and Europe [Fig. 8(c) and (d)] during July 1-10, 2011, and February 1-10, 2011, respectively. Significant RFI is found over Japan and Europe. Very little RFI is found over China, except over the cities of Beijing and Shanghai.
Since microwave radiation from the same Earth scene is measured by each channel, different MWRI channels are highly correlated. On the other hand, RFI signals are independent from the natural variations and increase brightness temperatures
Fig. 4: Brightness temperatures at (a) and (b) 10.65 GHz and (c) and (d) 18.7 GHz with (left panels) horizontal and (right panels) vertical polarizations over an area shown in Fig. 1 on February 24, 2011. Latitudes of observations are indicated in color. Data points satisfying \\(\\text{TB}_{\\text{10H}}-\\text{TB}_{\\text{18H}}\\leq 5\\) K or \\(\\text{TB}_{\\text{10V}}-\\text{TB}_{\\text{18V}}\\leq 5\\) K are in black (else in color with latitudes indicated by the color bar).
Fig. 5: Same as Fig. 4, except for Europe (see Fig. 2) on February 7, 2011.
Fig. 3: Scatter plots of (a) and (b) brightness temperatures of channel 2 (10.65 GHz, horizontal polarization) and channel 4 (18.7 GHz, horizontal polarization, left panels). (c) and (d) Same as (a) and (b), except for vertical polarization over eastern Asia (25\\({}^{\\circ}\\) N–45\\({}^{\\circ}\\) N, 110\\({}^{\\circ}\\) E-145\\({}^{\\circ}\\) E, left panels) on February 2, 2011, and Europe (35\\({}^{\\circ}\\) N–65\\({}^{\\circ}\\) N, 10\\({}^{\\circ}\\) W–40\\({}^{\\circ}\\) E, right panels) on February 7, 2011. Data points satisfying \\(\\text{TB}_{\\text{10H}}-\\text{TB}_{\\text{18H}}\\geq 5\\) K or \\(\\text{TB}_{\\text{10V}}-\\text{TB}_{\\text{18V}}\\geq 5\\) K are indicated in black.
Fig. 6: Spatial distributions of all the data points with the differences between 10- and 18-GHz channels greater than 5 K at (left panels) horizontal and (right panels) vertical polarizations over (a) and (b) Asia on February 24, 2011, and (c) and (d) Europe on February 7, 2011. Data points with \(\text{TB}10-\text{TB}18\geq 5\) K and \(\text{TB}89-\text{TB}18\leq-10\) K are indicated in green. Data points with \(\text{TB}10-\text{TB}18\geq 5\) K and \(\text{TB}89-\text{TB}18>-10\) K are indicated in red.
only at the X-band channels. The channel correlations between X-band channels and other channels will be lower for RFI signals. To confirm this, the correlation matrices of horizontal and vertical polarization channels from all data over Europe during February 1-10, 2011, are provided in Fig. 9. For MWRI, all the channels, except the 89-GHz one, are positively and highly correlated, and correlations are higher for horizontal polarization [Fig. 9(a)] than for vertical polarization [Fig. 9(b)]. The differences between the correlations calculated for all data and those calculated for RFI-contaminated data are provided in Fig. 9(c) and (d). It is seen that the channel correlations between X-band channels and 18.7- and 23.8-GHz channels are smaller for horizontal polarization data. For vertical polarization data, the correlations between X-band channels and all other channels, except the 89-GHz channel, are smaller.
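The correlation diagnostic in Fig. 9 can be reproduced schematically: compute the correlation of the 10.65-GHz channel with the other channels once over all pixels and once over RFI-flagged pixels only. The array layout (channels in rows) is an assumption for illustration.

```python
# Correlations between the X-band channel (row 0 of tb) and all channels,
# computed for all pixels and for RFI-flagged pixels only; RFI lowers the
# X-band correlations because it perturbs a single frequency.
import numpy as np

def xband_correlations(tb, rfi_mask):
    corr_all = np.corrcoef(tb)[0]           # row 0: 10.65 GHz vs each channel
    corr_rfi = np.corrcoef(tb[:, rfi_mask])[0]
    return corr_all, corr_rfi
```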
### _RFI Distributions Detected by the PCA Method_
Ten-day composite 20-km gridded data were generated for MWRI from February 1-10, February 19-28, and July 1-10, 2011, respectively. The ten days of MWRI data are used to remove data-void areas associated with orbital gaps of the MWRI instrument. The RFI index vectors defined by (6) and (12) are first calculated based on these ten-day composite data. The statistical technique PCA is then applied. PCA is used here for cluster identification and classifications. In other words, the RFI-related principal component in multidimensional radiometer data is extracted for RFI detection. Promises of this approach for RFI detection over nonscattering surfaces were shown in [3].
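As a sketch of the compositing step (the grid spacing of roughly \(0.2^{\circ}\approx 20\) km and the simple cell-averaging scheme are assumptions; the paper does not detail its gridding beyond the 20-km resolution):

```python
# Average ten days of swath observations (lat, lon, tb) onto a regular
# lat-lon grid to fill MWRI orbital gaps; cells with no data become NaN.
import numpy as np

def composite(lat, lon, tb, lat0, lon0, nlat, nlon, res=0.2):
    acc = np.zeros((nlat, nlon))
    cnt = np.zeros((nlat, nlon))
    i = np.floor((lat - lat0) / res).astype(int)
    j = np.floor((lon - lon0) / res).astype(int)
    ok = (i >= 0) & (i < nlat) & (j >= 0) & (j < nlon)
    np.add.at(acc, (i[ok], j[ok]), tb[ok])  # accumulate sums per cell
    np.add.at(cnt, (i[ok], j[ok]), 1)       # and observation counts
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), np.nan)
```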
Fig. 10 shows the percentage of the total accumulative variances for the five principal components of RFI indices, calculated from the first ten-day average data in February 2011 without and with normalization, at both horizontal and
vertical polarizations over both Asia and Europe. Over Asia and Europe, the first PC mode explains more than 85% and 60% of the total data variances, respectively. The first three to four PCs capture the majority of data variance in both areas. The horizontal polarization channels have larger variances than the vertical polarization ones. The differences of variances between Asia and Europe are significantly smaller when the normalized RFI index is used.
The first three PCs (i.e., eigenvectors) of the RFI indices corresponding to the eigenvalues shown in Fig. 10 are provided in Fig. 11. When the RFI indices are not normalized, the first PC remains positive, reflecting the overall magnitude of all radiometer channels; the second and third PC coefficients for horizontal and vertical polarization channels have opposite signs over Asia, indicating the polarization difference. The major difference occurs in the first PC when the RFI indices are normalized. After normalization, the first PC coefficients of all four cases (Asia and Europe, horizontal and vertical) decrease monotonically with frequency, reflecting the spectral gradient of the MWRI channels. The disparities in PC coefficients between Asia and Europe and between horizontal and vertical polarizations calculated from unnormalized RFI indices [Fig. 11(a), (c), and (e)] are clearly reduced after normalization [Fig. 11(b), (d), and (f)].
The PCA-based RFI distributions over Asia at 10.65 GHz for horizontal polarization during summer and winter 2011 are provided in Fig. 12 for both RFI index vectors (6) and (12). Without normalization [Fig. 12(a) and (b)], the PCA method successfully detects RFI in MWRI summer data [Fig. 12(a)] but not in winter data [Fig. 12(b)]; snow at high latitudes is mixed together with the RFI signals [Fig. 12(b)]. After normalization, the PCA effectively detects winter RFI signals over not only Japan and Beijing but also Shanghai and a few more cities along the eastern coast [Fig. 12(d)]. The normalization of the RFI indices does not alter the PCA-detected RFI results for the summer data. The RFI distributions in Fig. 12(c) and (d) are close to those obtained using the spectral difference method [Fig. 8(a) and (b)].
Similar results are obtained over Europe (Fig. 13). By comparing Fig. 13(b) and (d), it is seen that the snow scattering effects are removed by normalizing the RFI indices. RFI signals can be detected by using either (6) [Fig. 13(a)] or (12) [Fig. 13(c)]. The PCA-detected RFI distributions [Fig. 13(c)
Fig. 11: (a) and (b) First, (c) and (d) second, and (e) and (f) third principal components of RFI indices calculated from the first ten-day average in February 2011 (right panels) with and (left panels) without normalization at (solid) horizontal and (dashed) vertical polarizations over (circle and dot) Asia and (cross and star) Europe during February 1–10, 2011.
Fig. 10: Percentage of the total accumulative variances for the five principal components of RFI indices calculated from the first ten-day average data in February 2011 (a) without and (b) with normalization at (solid) horizontal and (dashed) vertical polarizations over (circle and dot) Asia and (cross and star) Europe.
and (d)] resemble those obtained using the spectral difference methods [Fig. 8(c) and (d)].
To illustrate the robustness of the two modified RFI detection methods, Fig. 14 shows the RFI distributions over Europe during the last ten days of February (February 19-28, 2011). Again, the RFI distributions obtained by the spectral difference method [Fig. 14(a) and (b)] are very similar to those detected by the PCA method [Fig. 14(c) and (d)]. Compared with the RFI distributions detected in the first ten days of MWRI data [Fig. 13(c) and (d)], the RFI over Europe is persistent in time.
## V Summary and Conclusions
In this paper, we have presented the results of a spectral difference method and a statistical PCA method for detecting RFI signals over continental Asia and Europe, using two months of data collected from the X-band channels of FY-3B MWRI. Accurate RFI detection will not only enhance geophysical retrievals over land but also provide evidence for the much-needed protection of microwave frequency bands for satellite remote-sensing technologies.
The PCA method takes advantage of the multichannel correlations, as well as the spectral indices, for RFI detection. The methodologies developed by Wu and Weng [6] and Li _et al._[3] for the C- and X-band channels of AMSR-E and WindSat are slightly modified for application to MWRI RFI identification. Strong RFI is visible in the MWRI X-band vertical and horizontal channels. A comparison of results using different methods and different periods of data suggests that many RFI sources are present in MWRI data and are persistent in time. Geographically, RFI is observed primarily over the European continent and Japan, with much less evidence over China and the United States. The consistency of the results obtained in this study with two independent methods gives confidence in the RFI signals identified over land, in the absence of a reliable validation data set of the "true" RFI against which to evaluate the merits of different classification schemes.
## References
* [1] T. Kawanishi, T. Sezai, Y. Ito, K. Imaoka, T. Takeshima, Y. Ishido, A. Shibata, M. Miura, H. Inahata, and W. Spencer, \"The Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E), NASDA's contribution to the EOS for global energy and water cycle studies,\" _IEEE Trans. Geosci. Remote Sens._, vol. 41, no. 2, pp. 184-194, Feb. 2003.
* [2] P. W. Gaiser, K. M. St. Germain, E. M. Twarog, G. A. Poe, W. Purdy, D. Richardson, W. Grossman, W. L. Jones, D. Spencer, G. Golba, J. Cleveland, L. Choy, R. M. Bevilacqua, and P. S. Chang, \"The WindSat spaceborne polarimetric microwave radiometer: Sensor description and early orbit performance,\" _IEEE Trans. Geosci. Remote Sens._, vol. 42, no. 11, pp. 2347-2361, Nov. 2004.
* [3] L. Li, P. W. Gaiser, M. Bettenhausen, and W. Johnston, \"WindSat radio-frequency interference signature and its identification over land and ocean,\" _IEEE Trans. Geosci. Remote Sens._, vol. 44, no. 3, pp. 530-539, Mar. 2006.
* [4] L. Li, E. Njoku, E. Im, P. Chang, and K. S. Germain, \"A preliminary survey of radio-frequency interference over the U.S. in Aqua AMSR-E data,\" _IEEE Trans. Geosci. Remote Sens._, vol. 42, no. 2, pp. 380-390, Feb. 2004.
* [5] E. Njoku, P. Asheroft, T. Chan, and L. Li, \"Global survey and statistics of radio-frequency interference in AMSR-E land observations,\" _IEEE Trans. Geosci. Remote Sens._, vol. 43, no. 5, pp. 938-947, May 2005.
* [6] Y. Wu and F. Weng, \"Detection and correction of AMSR-E radio-frequency interference (RFI),\" _Acta Meteor. Sinica_, vol. 25, no. 5, pp. 669-681, Oct. 2011.
* [7] S. W. Ellingson and J. T. Johnson, \"A polarimetric survey of radio-frequency interference in C- and X-bands in the continental United States using WindSat radiometry,\" _IEEE Trans. Geosci. Remote Sens._, vol. 44, no. 3, pp. 540-548, Mar. 2006.
* [8] D. McKague, J. J. Puckett, and C. Ruf, \"Characterization of K-band radio frequency interference from AMSR-E, WindSat and SSMM,\" in _Proc. IGARSS_, 2010, pp. 2492-2494.
* [9] E. Njoku and L. Li, \"Retrieval of land surface parameters using passive microwave measurements at 6 to 18 GHz,\" _IEEE Trans. Geosci. Remote Sens._, vol. 37, no. 1, pp. 79-93, Jan. 1999.
Fig. 14: RFI distributions at 10.65 GHz for (left panels) horizontal and (right panels) vertical polarizations using (a) and (b) the spectral difference method, and (c) and (d) the PCA method with normalization over Europe during February 19–28, 2011.
Fig. 13: Same as Fig. 12, except for Europe.
* [10] E. Njoku, T. Koike, T. Jackson, and S. Paloscia, \"Retrieval of soil moisture from AMSR data,\" in _Microwave Radiometry and Remote Sensing of the Earth's Surface and Atmosphere_, P. Pampaloni and S. Paloscia, Eds. Utrecht, the Netherlands: VSP Publ., 2000, pp. 525-533.
* [11] L. Li, P. W. Gaiser, B. C. Gao, R. M. Bevilacqua, T. J. Jackson, E. G. Njoku, C. Rudiger, J.-C. Calvet, and R. Bindlish, \"WindSat global soil moisture algorithm and validation,\" _IEEE Trans. Geosci. Remote Sens._, vol. 48, no. 5, pp. 2224-2241, May 2010.
* [12] T. J. Jackson, R. Bindlish, A. I. Gasiewski, B. Stankov, M. Klein, E. G. Njoku, D. Bosch, T. L. Coleman, C. A. Laymon, and P. Starks, "Polarimetric scanning radiometer C- and X-band microwave observations during SMEX03," _IEEE Trans. Geosci. Remote Sens._, vol. 43, no. 11, pp. 2418-2430, Nov. 2005.
* [13] R. Kelly, A. Chang, L. Tsang, and J. Foster, \"A prototype AMSR-E global snow area and snow depth algorithm,\" _IEEE Trans. Geosci. Remote Sens._, vol. 41, no. 2, pp. 230-242, Feb. 2003.
* [14] H. Yang, F. Weng, L. Lv, N. Lu, G. Liu, M. Bai, Q. Qian, J. He, and H. Xu, "The FengYun-3 microwave radiation imager on-orbit verification," _IEEE Trans. Geosci. Remote Sens._, vol. 49, no. 11, pp. 4552-4560, Nov. 2011.
* [15] F. Weng, B. Yan, and N. Grody, \"A microwave land emissivity model,\" _J. Geophys. Res._, vol. 106, no. D17, pp. 20 115-20 123, 2001.
Xiaolei Zou received the Ph.D. degree from the Institute of Atmospheric Physics, Academia Sinica, Beijing, China, in 1988. She is currently a Professor with the Department of Earth, Ocean and Atmospheric Science, Florida State University (FSU), Tallahassee. During 1989-1993, she developed the 4D-Var system of the NMC (now NCEP) medium-range forecast model with full physics. She then took a Scientist position at NCAR, where she developed an MM5 4D-Var system and has worked on Global Positioning System RO data assimilation ever since. After four years (1993-1997) at NCAR, she returned to FSU in 1997 as a Professor in the Department of Earth, Ocean and Atmospheric Sciences and has worked mainly on satellite data assimilation and its applications to hurricane forecast and climate prediction studies. She has published over 80 papers in peer-reviewed journals. Dr. Zou was the recipient of the 2008 AMS Fellow Award for her outstanding contributions to the application of satellite data in numerical weather prediction models and education in data assimilation.

Juan Zhao received the Ph.D. degree in meteorology from the Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing, China, in 2010. She is currently with the Chinese Meteorological Administration Training Centre, Beijing. From 2011 to 2012, she was a Visiting Scholar with the Department of Earth, Ocean and Atmospheric Science, Florida State University (FSU), Tallahassee. At FSU, she conducted research in radio-frequency interference detection of satellite data from the Chinese FengYun-3B MicroWave Radiation Imager (MWRI), NASA Aqua Advanced Microwave Scanning Radiometer-EOS (AMSR-E), and Navy Coriolis WindSat.

Fuzhong Weng received the Ph.D. degree from Colorado State University, Fort Collins, in 1992. He is the Chief of the Satellite Meteorology and Climatology Division, Center for Satellite Applications and Research, National Environmental Satellite, Data and Information Service, National Oceanic and Atmospheric Administration (NOAA), Washington, DC. In the past years, he has led the development of NOAA operational satellite microwave products and algorithms from the Special Sensor Microwave Imager and AMSU. He is the Sensor Science Chair of the NPP/JPSS program. He is a science lead in developing the community radiative transfer model, which has been successfully used in several operational data assimilation systems in the United States. He also directly contributed to the development of microwave land, snow, and sea ice emissivity models, which have significantly improved the use of satellite sounding data in numerical weather prediction (NWP) models and impacted high-latitude weather forecasts. He is currently developing new innovative techniques to advance the use of satellite measurements in cloudy and precipitating areas in NWP models. He has published more than 80 papers in international peer-reviewed journals. Dr. Weng was the first recipient of the 2000 NOAA David Johnson Award for his outstanding contributions to satellite microwave remote sensing and the utilization of satellite data in NWP models, and the recipient of the 2002 SPIE Scientific Achievement Award for Excellence in Developing Operational Satellite Microwave Products and Algorithms, a Bronze Medal from the U.S. Department of Commerce in 2004 for his development of operational microwave products to improve weather and climate predictions, the Department of Commerce Gold Medal Award in 2005 for his achievement in satellite data assimilation, and a NOAA Bronze Medal for leading a successful NOAA-18 instrument calibration.
Zhengkun Qin received the B.S. degree in mathematics and the Ph.D. degree in meteorology from Nanjing University of Information Science and Technology (NUIST), Nanjing, China, in 2001 and 2007, respectively. He is currently an Assistant Professor with the Center of Data Assimilation for Research and Application, NUIST. Since 2009, he has been a Visiting Scholar with the Department of Earth, Ocean and Atmospheric Science, Florida State University, Tallahassee. His research interests include GOES imager radiance data assimilation, surface data quality control, and the study of nonlinear climate trends using satellite data.