Column schema (ranges are the observed minimum and maximum):

ID                     int64    1 to 16.8k
TITLE                  string   7 to 239 characters
ABSTRACT               string   7 to 2.59k characters
Computer Science       int64    0 or 1
Physics                int64    0 or 1
Mathematics            int64    0 or 1
Statistics             int64    0 or 1
Quantitative Biology   int64    0 or 1
Quantitative Finance   int64    0 or 1

In each record below, the six binary category flags are summarized on a single "Labels:" line listing the categories flagged 1.
16,501
Special Lagrangian submanifolds and cohomogeneity one actions on the complex projective space
We construct examples of cohomogeneity one special Lagrangian submanifolds in the cotangent bundle over the complex projective space, whose Calabi-Yau structure was given by Stenzel. For each example, we describe the special Lagrangian condition as an ordinary differential equation. Our method is based on a moment map technique and on Takagi's classification of cohomogeneity one actions on the complex projective space.
Labels: Mathematics
16,502
Linear Regression with Sparsely Permuted Data
In regression analysis of multivariate data, it is tacitly assumed that response and predictor variables in each observed response-predictor pair correspond to the same entity or unit. In this paper, we consider the situation of "permuted data" in which this basic correspondence has been lost. Several recent papers have considered this situation without further assumptions on the underlying permutation. In applications, the latter is often known to have additional structure that can be leveraged. Specifically, we herein consider the common scenario of "sparsely permuted data" in which only a small fraction of the data is affected by a mismatch between response and predictors. However, an adverse effect already observed for sparsely permuted data is that the least squares estimator, as well as other estimators not accounting for such partial mismatch, is inconsistent. One approach studied in detail herein is to treat permuted data as outliers, which motivates the use of robust regression formulations to estimate the regression parameter. The resulting estimate can subsequently be used to recover the permutation. A notable benefit of the proposed approach is its computational simplicity, given the general lack of procedures for the above problem that are both statistically sound and computationally appealing.
Labels: Mathematics, Statistics
16,503
Matrix Scaling and Balancing via Box Constrained Newton's Method and Interior Point Methods
In this paper, we study matrix scaling and balancing, which are fundamental problems in scientific computing, with a long line of work on them that dates back to the 1960s. We provide algorithms for both these problems that, ignoring logarithmic factors involving the dimension of the input matrix and the size of its entries, run in time $\widetilde{O}\left(m\log \kappa \log^2 (1/\epsilon)\right)$, where $\epsilon$ is the amount of error we are willing to tolerate. Here, $\kappa$ represents the ratio between the largest and the smallest entries of the optimal scalings. This implies that our algorithms run in nearly-linear time whenever $\kappa$ is quasi-polynomial, which includes, in particular, the case of strictly positive matrices. We complement our results by providing a separate algorithm that uses an interior-point method and runs in time $\widetilde{O}(m^{3/2} \log (1/\epsilon))$. In order to establish these results, we develop a new second-order optimization framework that enables us to treat both problems in a unified and principled manner. This framework identifies a certain generalization of linear system solving that we can use to efficiently minimize a broad class of functions, which we call second-order robust. We then show that in the context of the specific functions capturing matrix scaling and balancing, we can leverage and generalize the work on Laplacian system solving to make the algorithms obtained via this framework very efficient.
Labels: Computer Science
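For intuition about the problem this record's paper accelerates: matrix scaling seeks positive diagonal matrices $X$ and $Y$ such that $XAY$ is doubly stochastic. Below is a minimal sketch of the classical first-order Sinkhorn-Knopp baseline, assuming a strictly positive square input; it is illustrative only and is not the paper's second-order box-constrained Newton method.

```python
import numpy as np

def sinkhorn_scaling(A, eps=1e-8, max_iter=10_000):
    """Classical Sinkhorn-Knopp iteration: find positive vectors x, y
    such that diag(x) @ A @ diag(y) is doubly stochastic. A first-order
    baseline for illustration; the paper instead takes Newton-type steps
    backed by fast Laplacian-style linear solvers."""
    n = A.shape[0]
    x, y = np.ones(n), np.ones(n)
    for _ in range(max_iter):
        y = 1.0 / (A.T @ x)              # normalize column sums to 1
        x = 1.0 / (A @ y)                # normalize row sums to 1
        S = x[:, None] * A * y[None, :]
        if np.abs(S.sum(axis=0) - 1.0).max() < eps:
            break
    return x, y

# Usage: strictly positive matrices always admit such a scaling.
A = np.random.rand(5, 5) + 0.1
x, y = sinkhorn_scaling(A)
S = x[:, None] * A * y[None, :]
print(S.sum(axis=0), S.sum(axis=1))      # row and column sums all close to 1
```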
16,504
AP17-OLR Challenge: Data, Plan, and Baseline
We present the data profile and the evaluation plan of the second oriental language recognition (OLR) challenge AP17-OLR. Compared to the event last year (AP16-OLR), the new challenge involves more languages and focuses more on short utterances. The data is offered by SpeechOcean and the NSFC M2ASR project. Two types of baselines are constructed to assist the participants, one is based on the i-vector model and the other is based on various neural networks. We report the baseline results evaluated with various metrics defined by the AP17-OLR evaluation plan and demonstrate that the combined database is a reasonable data resource for multilingual research. All the data is free for participants, and the Kaldi recipes for the baselines have been published online.
Labels: Computer Science
16,505
First results from the IllustrisTNG simulations: the stellar mass content of groups and clusters of galaxies
The IllustrisTNG project is a new suite of cosmological magneto-hydrodynamical simulations of galaxy formation performed with the Arepo code and updated models for feedback physics. Here we introduce the first two simulations of the series, TNG100 and TNG300, and quantify the stellar mass content of about 4000 massive galaxy groups and clusters ($10^{13} \leq M_{\rm 200c}/M_{\rm sun} \leq 10^{15}$) at recent times ($z \leq 1$). The richest clusters have half of their total stellar mass bound to satellite galaxies, with the other half being associated with the central galaxy and the diffuse intra-cluster light. The exact ICL fraction depends sensitively on the definition of a central galaxy's mass and varies in our most massive clusters between 20 and 40% of the total stellar mass. Haloes of $5\times 10^{14}M_{\rm sun}$ and above have more diffuse stellar mass outside 100 kpc than within 100 kpc, with power-law slopes of the radial mass density distribution as shallow as the dark matter's ($-3.5 < \alpha_{\rm 3D} < -3$). Total halo mass is a very good predictor of stellar mass, and vice versa: at $z=0$, the 3D stellar mass measured within 30 kpc scales as $\propto (M_{\rm 500c})^{0.49}$ with a $\sim 0.12$ dex scatter. This is possibly too steep in comparison to the available observational constraints, even though the abundance of less massive TNG galaxies ($< 10^{11}M_{\rm sun}$ in stars) is in good agreement with the measured galaxy stellar mass functions at recent epochs. The 3D sizes of massive galaxies also fall on a tight ($\sim$0.16 dex scatter) power-law relation with halo mass, with $r^{\rm stars}_{\rm 0.5} \propto (M_{\rm 500c})^{0.53}$. Even more fundamentally, halo mass alone is a good predictor for the whole stellar mass profiles beyond the inner few kpc, and we show how on average these can be precisely recovered given a single mass measurement of the galaxy or its halo.
Labels: Physics
16,506
A Fast Implementation of Singular Value Thresholding Algorithm using Recycling Rank Revealing Randomized Singular Value Decomposition
In this paper, we present a fast implementation of the Singular Value Thresholding (SVT) algorithm for matrix completion. A rank-revealing randomized singular value decomposition (R3SVD) algorithm is used to adaptively carry out partial singular value decomposition (SVD) to rapidly approximate the SVT operator given a desired, fixed precision. We extend the R3SVD algorithm to a recycling rank-revealing randomized singular value decomposition (R4SVD) algorithm by reusing the left singular vectors obtained from the previous iteration as the approximate basis in the current iteration, which significantly reduces the computational cost of the partial SVD at each SVT iteration. A simulated annealing style cooling mechanism is employed to adaptively adjust the low-rank approximation precision threshold as SVT progresses. Our fast SVT implementation is effective for both large and small matrices, which is demonstrated in matrix completion applications including image recovery and movie recommendation systems.
Labels: Computer Science
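For context, each SVT step soft-thresholds the singular values of an auxiliary matrix, and the SVD inside that step is exactly the cost the R4SVD scheme attacks. A minimal sketch of standard SVT matrix completion, with a plain full SVD standing in for the paper's recycled randomized partial SVD and with illustrative parameter defaults, is:

```python
import numpy as np

def svt_complete(M_obs, mask, tau=None, delta=1.2, max_iter=300, tol=1e-4):
    """Standard singular value thresholding for matrix completion.
    M_obs holds the observed entries (zeros elsewhere); mask is boolean.
    The full SVD below is the step the paper replaces with R4SVD."""
    m, n = M_obs.shape
    tau = tau if tau is not None else 5 * np.sqrt(m * n)   # common heuristic
    Y = np.zeros_like(M_obs)
    for _ in range(max_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt            # SVT operator
        resid = mask * (M_obs - X)
        Y += delta * resid                                 # dual ascent step
        if np.linalg.norm(resid) <= tol * np.linalg.norm(M_obs):
            break
    return X

# Usage: recover a rank-2 matrix from half of its entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
mask = rng.random(M.shape) < 0.5
X = svt_complete(M * mask, mask)
print(np.linalg.norm(X - M) / np.linalg.norm(M))           # small relative error
```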
16,507
Surface magnetism of gallium arsenide nanofilms
Gallium arsenide (GaAs) is the most widely used second-generation semiconductor with a direct band gap and is increasingly used in nanofilm form. However, the magnetic properties of GaAs nanofilms have never been studied. Here we find, by comprehensive density functional theory calculations, that GaAs nanofilms cleaved along the <111> and <100> directions become intrinsically metallic films with strong surface magnetism and a strong magnetoelectric (ME) effect. The surface magnetism and electrical conductivity are realized via a combined effect of charge transfer, induced by spontaneous electric polarization through the film thickness, and spin-polarized surface states. The surface magnetism of <111> nanofilms can be significantly and linearly tuned by a vertically applied electric field, endowing the nanofilms with unexpectedly high ME coefficients, which are tens of times higher than those of ferromagnetic metals and transition metal oxides.
Labels: Physics
16,508
Monte Carlo study of the Coincidence Resolving Time of a liquid xenon PET scanner, using Cherenkov radiation
In this paper we use detailed Monte Carlo simulations to demonstrate that liquid xenon (LXe) can be used to build a Cherenkov-based TOF-PET with an intrinsic coincidence resolving time (CRT) in the vicinity of 10 ps. This extraordinary performance is due to three facts: a) the abundant emission of Cherenkov photons by liquid xenon; b) the fact that LXe is transparent to Cherenkov light; and c) the fact that the fastest photons in LXe have wavelengths longer than 300 nm, therefore making it possible to separate the detection of scintillation and Cherenkov light. The CRT in a Cherenkov LXe TOF-PET detector is, therefore, dominated by the resolution (time jitter) introduced by the photosensors and the electronics. However, we show that for sufficiently fast photosensors (e.g., an overall 40 ps jitter, which can be achieved by current micro-channel plate photomultipliers) the overall CRT varies between 30 and 55 ps, depending on the detection efficiency. This is still one order of magnitude better than the CRT of commercial devices and improves by a factor of 3 the best CRT obtained with small laboratory prototypes.
Labels: Physics
16,509
Detecting Hierarchical Ties Using Link-Analysis Ranking at Different Levels of Time Granularity
Social networks contain implicit knowledge that can be used to infer hierarchical relations that are not explicitly present in the available data. Interaction patterns are typically affected by users' social relations. We present an approach to inferring such information that applies a link-analysis ranking algorithm at different levels of time granularity. In addition, a voting scheme is employed for obtaining the hierarchical relations. The approach is evaluated on two datasets: the Enron email data set, where the goal is to infer manager-subordinate relationships, and the Co-author data set, where the goal is to infer PhD advisor-advisee relations. The experimental results indicate that the proposed approach outperforms more traditional approaches to inferring hierarchical relations from social networks.
Labels: Computer Science, Physics
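The pipeline described in the abstract above can be sketched end to end: run a link-analysis ranking on the interaction graph of each time window, then let the windows vote on the orientation of each pair. PageRank is used below as a hypothetical stand-in for the ranking algorithm (the abstract does not name one), and `interactions`, a mapping from window id to (sender, receiver) edges, is an assumed input format.

```python
import networkx as nx
from collections import Counter
from itertools import combinations

def infer_hierarchy(interactions):
    """Vote across time windows on who outranks whom. Returns the set of
    ordered pairs (u, v) meaning "u is ranked above v" that win a majority
    of windows. An illustrative sketch, not the paper's exact procedure."""
    votes = Counter()
    windows = list(interactions)
    for w in windows:
        g = nx.DiGraph(interactions[w])
        rank = nx.pagerank(g)                    # link-analysis ranking
        for u, v in combinations(g.nodes, 2):
            pair = (u, v) if rank[u] > rank[v] else (v, u)
            votes[pair] += 1
    return {p for p, c in votes.items() if c > len(windows) / 2}

# Usage on a toy two-window log of (sender, receiver) messages.
log = {"week1": [("alice", "bob"), ("carol", "bob")],
       "week2": [("alice", "bob"), ("alice", "carol")]}
print(infer_hierarchy(log))
```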
16,510
Optimal stimulation protocol in a bistable synaptic consolidation model
Consolidation of synaptic changes in response to neural activity is thought to be fundamental for memory maintenance over a timescale of hours. In experiments, synaptic consolidation can be induced by repeatedly stimulating presynaptic neurons. However, the effectiveness of such protocols depends crucially on the repetition frequency of the stimulations and the mechanisms that cause this complex dependence are unknown. Here we propose a simple mathematical model that allows us to systematically study the interaction between the stimulation protocol and synaptic consolidation. We show the existence of optimal stimulation protocols for our model and, similarly to LTP experiments, the repetition frequency of the stimulation plays a crucial role in achieving consolidation. Our results show that the complex dependence of LTP on the stimulation frequency emerges naturally from a model which satisfies only minimal bistability requirements.
Labels: Quantitative Biology
16,511
Parametric Polynomial Preserving Recovery on Manifolds
This paper investigates gradient recovery schemes for data defined on discretized manifolds. The proposed method, parametric polynomial preserving recovery (PPPR), does not require the tangent spaces of the exact manifolds, which have been assumed by some significant gradient recovery methods in the literature. Another advantage is that superconvergence is guaranteed for PPPR without the symmetry condition required by existing techniques. There is also numerical evidence that the superconvergence of PPPR is stable under high curvature, which distinguishes it from the other methods. As an application, we show its capability of constructing an asymptotically exact \textit{a posteriori} error estimator. Several numerical examples on two-dimensional surfaces are presented to support the theoretical results and make comparisons with state-of-the-art methods.
Labels: Mathematics
16,512
Integral points on the complement of plane quartics
Let $Y$ be the complement of a plane quartic curve $D$ defined over a number field. Our main theorem confirms the Lang-Vojta conjecture for $Y$ when $D$ is a generic smooth quartic curve, by showing that its integral points are confined in a curve except for a finite number of exceptions. The required finiteness will be obtained by reducing it to the Shafarevich conjecture for K3 surfaces. Some variants of our method confirm the same conjecture when $D$ is a reducible generic quartic curve which consists of four lines, two lines and a conic, or two conics.
Labels: Mathematics
16,513
Modelling collective motion based on the principle of agency
Collective motion is an intriguing phenomenon, especially considering that it arises from a set of simple rules governing local interactions between individuals. In theoretical models, these rules are normally \emph{assumed} to take a particular form, possibly constrained by heuristic arguments. We propose a new class of models, which describe the individuals as \emph{agents}, capable of deciding for themselves how to act and learning from their experiences. The local interaction rules do not need to be postulated in this model, since they \emph{emerge} from the learning process. We apply this ansatz to a concrete scenario involving marching locusts, in order to model the phenomenon of density-dependent alignment. We show that our learning agent-based model can account for a Fokker-Planck equation that describes the collective motion and, most notably, that the agents can learn the appropriate local interactions, requiring no strong previous assumptions on their form. These results suggest that learning agent-based models are a powerful tool for studying a broader class of problems involving collective motion and animal agency in general.
Labels: Statistics
16,514
Cosmic Microwave Background constraints for global strings and global monopoles
We present the first CMB power spectra from numerical simulations of the global O(N) linear $\sigma$-model with N = 2,3, which have global strings and monopoles as topological defects. In order to compute the CMB power spectra we compute the unequal time correlators (UETCs) of the energy-momentum tensor, showing that they fall off at high wave number faster than naive estimates based on the geometry of the defects, indicating non-trivial (anti-)correlations between the defects and the surrounding Goldstone boson field. We obtain source functions for Einstein-Boltzmann solvers from the UETCs, using a recent method that improves the modelling at the radiation-matter transition. We show that the interpolation function that mimics the transition is similar to other defect models, but not identical, confirming the non-universality of the interpolation function. The CMB power spectra for global strings and monopoles have the same overall shape as those obtained using the non-linear $\sigma$-model approximation, which is well captured by a large-N calculation. However, the amplitudes are larger than the large-N calculation predicts, and in the case of global strings much larger: a factor of 20 at the peak. Finally we compare the CMB power spectra with the latest CMB data to put limits on the allowed contribution to the temperature power spectrum at multipole $\ell$ = 10 of 1.7% for global strings and 2.4% for global monopoles. These limits correspond to symmetry-breaking scales of $2.9\times10^{15}$ GeV ($6.3\times10^{14}$ GeV with the expected logarithmic scaling of the effective string tension between the simulation time and decoupling) and $6.4\times10^{15}$ GeV respectively. The bound on global strings is a significant one for the ultra-light axion scenario with axion masses $m_a \sim 10^{-28}$ eV. These upper limits indicate that gravitational waves from global topological defects will not be observable at the GW observatory LISA.
Labels: Physics
16,515
A parallel implementation of the Synchronised Louvain method
Community detection in networks is a very active and important field of research with applications in many areas. However, given that the amount of processed data increases more and more, existing algorithms need to be adapted for very large graphs. The objective of this project was to parallelise the Synchronised Louvain Method, a community detection algorithm developed by Arnaud Browet, in order to improve its performance in terms of computation time and thus be able to detect communities in very large graphs faster. To reach this goal, we used the OpenMP API to parallelise the algorithm and then carried out performance tests. We studied the computation time and speedup of the parallelised algorithm and were able to bring out some qualitative trends. We obtained a good speedup compared with the theoretical prediction of Amdahl's law. To conclude, using the parallel implementation of Browet's algorithm on large graphs seems to give good results, both in terms of computation time and speedup. Further tests should be carried out in order to obtain more quantitative results.
Labels: Computer Science
16,516
Index coding with erroneous side information
In this paper, new index coding problems are studied, where each receiver has erroneous side information. Although side information is a crucial part of index coding, the existence of erroneous side information has not yet been considered. We study an index code with receivers that have erroneous side information symbols in the error-free broadcast channel, which is called an index code with side information errors (ICSIE). The encoding and decoding procedures of the ICSIE are proposed, based on syndrome decoding. Then, we derive bounds on the optimal codelength of the proposed index code with erroneous side information. Furthermore, we introduce a special graph for the proposed index coding problem, called a $\delta_s$-cycle, whose properties are similar to those of the cycle in the conventional index coding problem. Properties of the ICSIE are also discussed for the $\delta_s$-cycle and clique. Finally, the proposed ICSIE is generalized to an index code for the scenario having both additive channel errors and side information errors, called a generalized error correcting index code (GECIC).
Labels: Computer Science
16,517
HATS-36b and 24 other transiting/eclipsing systems from the HATSouth - K2 Campaign 7 program
We report on the results of a campaign to monitor 25 HATSouth candidates using the K2 space telescope during Campaign 7 of the K2 mission. We discover HATS-36b (EPIC 215969174b), a hot Jupiter with a mass of 2.79$\pm$0.40 M$_J$ and a radius of 1.263$\pm$0.045 R$_J$ which transits a solar-type G0V star (V=14.386) in a 4.1752 d period. We also refine the properties of three previously discovered HATSouth transiting planets (HATS-9b, HATS-11b, and HATS-12b) and search the K2 data for TTVs and additional transiting planets in these systems. In addition, we report on a further three systems that remain Jupiter-radius transiting exoplanet candidates. These candidates do not have determined masses; however, they pass all of our other vetting observations. Finally, we report on the 18 candidates which we are now able to classify as eclipsing binary or blended eclipsing binary systems, based on a combination of the HATSouth data, the K2 data, and follow-up ground-based photometry and spectroscopy. These range in period from 0.7 days to 16.7 days, and down to 1.5 mmag in eclipse depth. Our results show the power of combining ground-based imaging and spectroscopy with higher precision space-based photometry, and serve as an illustration of what will be possible when combining ground-based observations with TESS data.
Labels: Physics
16,518
The Carnegie-Chicago Hubble Program: Discovery of the Most Distant Ultra-faint Dwarf Galaxy in the Local Universe
Ultra-faint dwarf galaxies (UFDs) are the faintest known galaxies and, due to their incredibly low surface brightness, it is difficult to find them beyond the Local Group. We report the serendipitous discovery of a UFD, Fornax UFD1, in the outskirts of NGC 1316, a giant galaxy in the Fornax cluster. The new galaxy is located at a projected radius of 55 kpc to the south-east of NGC 1316. This UFD is found as a small group of resolved stars in the Hubble Space Telescope images of a halo field of NGC 1316, obtained as part of the Carnegie-Chicago Hubble Program. Resolved stars in this galaxy are consistent with being mostly metal-poor red giant branch (RGB) stars. Applying the tip of the RGB method to the mean magnitude of the two brightest RGB stars, we estimate the distance to this galaxy to be 19.0 ± 1.3 Mpc. Fornax UFD1 is probably a member of the Fornax cluster. The color-magnitude diagram of these stars is matched by a 12 Gyr isochrone with low metallicity ([Fe/H] ~ -2.4). The total magnitude and effective radius of Fornax UFD1 are M_V ~ -7.6 ± 0.2 mag and r_eff = 146 ± 9 pc, which are similar to those of Virgo UFD1, discovered recently in the intracluster field of Virgo by Jang & Lee (2014). Fornax UFD1 is the most distant known UFD that is confirmed by resolved stars. This indicates that UFDs are ubiquitous and that more UFDs remain to be discovered in the Fornax cluster.
Labels: Physics
16,519
An Exploratory Study on the Implementation and Adoption of ERP Solutions for Businesses
Enterprise Resource Planning (ERP) systems have been covered in both mainstream Information Technology (IT) periodicals and in academic literature, as a result of extensive adoption by organisations in the last two decades. Some of the past studies have reported operational efficiency and other gains, while other studies have pointed out the challenges. ERP systems continue to evolve, moving into the cloud-hosted sphere, and being implemented by relatively smaller and regional companies. This project has carried out an exploratory study into the use of ERP systems within Hawke's Bay, New Zealand. ERP systems make up a major investment and undertaking by those companies. Therefore, research and lessons learned in this area are very important. In addition to a significant initial literature review, this project has conducted a survey on the local users' experience with Microsoft Dynamics NAV (a popular ERP brand). As a result, this study will contribute new and relevant information to the literature on business information systems and on ERP systems in particular.
Labels: Computer Science
16,520
Selective inference for the problem of regions via multiscale bootstrap
A general approach to selective inference is considered for hypothesis testing of a null hypothesis represented as an arbitrarily shaped region in the parameter space of a multivariate normal model. This approach is useful for hierarchical clustering, where confidence levels of clusters are calculated only for those appearing in the dendrogram and are thus subject to heavy selection bias. Our computation is based on a raw confidence measure, called the bootstrap probability, which is easily obtained by counting how many times the same cluster appears in bootstrap replicates of the dendrogram. We adjust the bias of the bootstrap probability by utilizing the scaling law in terms of geometric quantities of the region in the abstract parameter space, namely, signed distance and mean curvature. Although this idea has been used for non-selective inference of hierarchical clustering, its selective inference version has not been discussed in the literature. Our bias-corrected $p$-values are asymptotically second-order accurate in the large sample theory of smooth boundary surfaces of regions, and they are also justified for nonsmooth surfaces such as polyhedral cones. The $p$-values are asymptotically equivalent to those of the iterated bootstrap but require less computation.
Labels: Mathematics, Statistics
16,521
Arbitrary Beam Synthesis of Different Hybrid Beamforming Systems
For future mmWave mobile communication systems, the use of analog/hybrid beamforming is envisioned to be a key aspect. The synthesis of beams is a key technology to enable the best possible operation during beamsearch, data transmission, and MU MIMO operation. The developed method for synthesizing beams is based on previous work in radar technology considering only phased array antennas. With this technique it is possible to generate a desired beam of any shape within the constraints of the desired target transceiver antenna frontend. It is not constrained to a certain antenna array geometry, but can handle 1D, 2D, and even 3D antenna array geometries such as cylindrical arrays. The numerical examples show that the method can synthesize beams by considering a user-defined tradeoff between gain, transition width, and passband ripples.
Labels: Computer Science
16,522
The emergence of the concept of filter in topological categories
In all approaches to convergence where the concept of filter is taken as primary, the usual motivation is the notion of neighborhood filter in a topological space. However, these approaches often lead to spaces more general than topological ones, thereby calling into question the need to use filters in the first place. In this note we overturn the usual view and take as primary the notion of convergence in the most general context of centered spaces. In this setting, the notion of filterbase emerges from the concept of germ of a function, while the concept of filter emerges from an amnestic modification of the subcategory of centered spaces admitting germs at each point.
Labels: Mathematics
16,523
Memory-augmented Neural Machine Translation
Neural machine translation (NMT) has achieved notable success in recent times; however, it is also widely recognized that this approach has limitations with handling infrequent words and word pairs. This paper presents a novel memory-augmented NMT (M-NMT) architecture, which stores knowledge about how words (usually infrequently encountered ones) should be translated in a memory and then utilizes this memory to assist the neural model. We use this memory mechanism to combine the knowledge learned from a conventional statistical machine translation system and the rules learned by an NMT system, and also propose a solution for out-of-vocabulary (OOV) words based on this framework. Our experiments on two Chinese-English translation tasks demonstrated that the M-NMT architecture outperformed the NMT baseline by $9.0$ and $2.7$ BLEU points on the two tasks, respectively. Additionally, we found this architecture resulted in a much more effective OOV treatment compared to competitive methods.
Labels: Computer Science
16,524
RFID Localisation For Internet Of Things Smart Homes: A Survey
The Internet of Things (IoT) enables numerous business opportunities in fields as diverse as e-health, smart cities, and smart homes, among many others. The IoT incorporates multiple long-range, short-range, and personal area wireless networks and technologies into the designs of IoT applications. Localisation in indoor positioning systems plays an important role in the IoT. Location-based IoT applications range from tracking objects and people in real-time, assets management, agriculture, assisted monitoring technologies for healthcare, and smart homes, to name a few. Radio-frequency-based systems for indoor positioning, such as Radio Frequency Identification (RFID), are a key enabler technology for the IoT due to their cost-effectiveness, high readability rates, automatic identification and, importantly, energy efficiency. This paper reviews the state-of-the-art RFID technologies in IoT smart home applications. It presents several comparable studies of RFID-based projects in smart homes and discusses the applications, techniques, algorithms, and challenges of adopting RFID technologies in IoT smart home systems.
Labels: Computer Science
16,525
Symplectic capacities from positive S^1-equivariant symplectic homology
We use positive S^1-equivariant symplectic homology to define a sequence of symplectic capacities c_k for star-shaped domains in R^{2n}. These capacities are conjecturally equal to the Ekeland-Hofer capacities, but they satisfy axioms which allow them to be computed in many more examples. In particular, we give combinatorial formulas for the capacities c_k of any "convex toric domain" or "concave toric domain". As an application, we determine optimal symplectic embeddings of a cube into any convex or concave toric domain. We also extend the capacities c_k to functions of Liouville domains which are almost but not quite symplectic capacities.
Labels: Mathematics
16,526
Synthesis versus analysis in patch-based image priors
In global models/priors (for example, using wavelet frames), there is a well known analysis vs synthesis dichotomy in the way signal/image priors are formulated. In patch-based image models/priors, this dichotomy is also present in the choice of how each patch is modeled. This paper shows that there is another analysis vs synthesis dichotomy, in terms of how the whole image is related to the patches, and that all existing patch-based formulations that provide a global image prior belong to the analysis category. We then propose a synthesis formulation, where the image is explicitly modeled as being synthesized by additively combining a collection of independent patches. We formally establish that these analysis and synthesis formulations are not equivalent in general and that both formulations are compatible with analysis and synthesis formulations at the patch level. Finally, we present an instance of the alternating direction method of multipliers (ADMM) that can be used to perform image denoising under the proposed synthesis formulation, showing its computational feasibility. Rather than showing the superiority of the synthesis or analysis formulations, the contribution of this paper is to establish the existence of both alternatives, thus closing the corresponding gap in the field of patch-based image processing.
Labels: Computer Science
16,527
Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks
We prove new upper and lower bounds on the VC-dimension of deep neural networks with the ReLU activation function. These bounds are tight for almost the entire range of parameters. Letting $W$ be the number of weights and $L$ be the number of layers, we prove that the VC-dimension is $O(W L \log(W))$, and provide examples with VC-dimension $\Omega( W L \log(W/L) )$. This improves both the previously known upper bounds and lower bounds. In terms of the number $U$ of non-linear units, we prove a tight bound $\Theta(W U)$ on the VC-dimension. All of these bounds generalize to arbitrary piecewise linear activation functions, and also hold for the pseudodimensions of these function classes. Combined with previous results, this gives an intriguing range of dependencies of the VC-dimension on depth for networks with different non-linearities: there is no dependence for piecewise-constant, linear dependence for piecewise-linear, and no more than quadratic dependence for general piecewise-polynomial.
Labels: Computer Science
16,528
Benchmarking Numerical Methods for Lattice Equations with the Toda Lattice
We compare performances of well-known numerical time-stepping methods that are widely used to compute solutions of the doubly-infinite Fermi-Pasta-Ulam-Tsingou (FPUT) lattice equations. The methods are benchmarked according to (1) their accuracy in capturing the soliton peaks and (2) in capturing highly-oscillatory parts of the solutions of the Toda lattice resulting from a variety of initial data. The numerical inverse scattering transform method is used to compute a reference solution with high accuracy. We find that benchmarking a numerical method on pure-soliton initial data can lead one to overestimate the accuracy of the method.
Labels: Physics
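As a concrete starting point for such a benchmark, the doubly-infinite lattice can be truncated to a finite segment with free ends and fed to a standard adaptive time-stepper. The sketch below uses illustrative initial data (a localized momentum bump, not the paper's pure-soliton or oscillatory data) and SciPy's RK45; the paper's reference solution comes instead from the numerical inverse scattering transform.

```python
import numpy as np
from scipy.integrate import solve_ivp

def toda_rhs(t, y):
    """Toda lattice with Hamiltonian H = sum p_n^2/2 + sum exp(q_n - q_{n+1}),
    truncated to N sites with free ends:
        dq_n/dt = p_n,
        dp_n/dt = exp(q_{n-1} - q_n) - exp(q_n - q_{n+1})."""
    n = y.size // 2
    q, p = y[:n], y[n:]
    e = np.exp(q[:-1] - q[1:])        # bond terms exp(q_n - q_{n+1})
    dp = np.zeros(n)
    dp[:-1] -= e                      # -exp(q_n - q_{n+1}) on the left site
    dp[1:] += e                       # +exp(q_{n-1} - q_n) on the right site
    return np.concatenate([p, dp])

n = 200
q0 = np.zeros(n)
p0 = np.exp(-0.1 * (np.arange(n) - n // 2) ** 2)   # localized momentum bump
sol = solve_ivp(toda_rhs, (0.0, 50.0), np.concatenate([q0, p0]),
                method="RK45", rtol=1e-9, atol=1e-12)
print(np.ptp(sol.y[:n, -1]))          # spread of positions at the final time
```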
16,529
Can Planning Images Reduce Scatter in Follow-Up Cone-Beam CT?
Due to its wide field of view, cone-beam computed tomography (CBCT) is plagued by large amounts of scatter, where attenuated photons hit the detector and corrupt the linear models used for reconstruction. If one can generate a good estimate of the scatter, however, image accuracy can be retained. In the context of adaptive radiotherapy, one usually has a low-scatter planning CT image of the same patient from an earlier time. Correcting for scatter in the subsequent CBCT scan can either be self-consistent with the new measurements or exploit the prior image, and several recent methods report high accuracy with the latter. In this study, we will look at the accuracy of various scatter estimation methods and how they can be effectively incorporated into a statistical reconstruction algorithm, along with introducing a method for matching off-line Monte-Carlo (MC) prior estimates to the new measurements. The conclusions we draw from testing on a neck cancer patient are: statistical reconstruction that incorporates the scatter estimate significantly outperforms analytic and iterative methods with pre-correction; and although the most accurate scatter estimates can be made from the MC on the planning image, they only offer a slight advantage over the measurement-based scatter kernel superposition (SKS) in reconstruction error.
Labels: Physics
16,530
Proactive Defense Against Physical Denial of Service Attacks using Poisson Signaling Games
While the Internet of things (IoT) promises to improve areas such as energy efficiency, health care, and transportation, it is highly vulnerable to cyberattacks. In particular, distributed denial-of-service (DDoS) attacks overload the bandwidth of a server. But many IoT devices form part of cyber-physical systems (CPS). Therefore, they can be used to launch "physical" denial-of-service attacks (PDoS) in which IoT devices overflow the "physical bandwidth" of a CPS. In this paper, we quantify the population-based risk to a group of IoT devices targeted by malware for a PDoS attack. In order to model the recruitment of bots, we develop a "Poisson signaling game," a signaling game with an unknown number of receivers, which have varying abilities to detect deception. Then we use a version of this game to analyze two mechanisms (legal and economic) to deter botnet recruitment. Equilibrium results indicate that 1) defenders can bound botnet activity, and 2) legislating a minimum level of security has only a limited effect, while incentivizing active defense can decrease botnet activity arbitrarily. This work provides a quantitative foundation for proactive PDoS defense.
Labels: Computer Science
16,531
The weakly compact reflection principle need not imply a high order of weak compactness
The weakly compact reflection principle $\text{Refl}_{\text{wc}}(\kappa)$ states that $\kappa$ is a weakly compact cardinal and every weakly compact subset of $\kappa$ has a weakly compact proper initial segment. The weakly compact reflection principle at $\kappa$ implies that $\kappa$ is an $\omega$-weakly compact cardinal. In this article we show that the weakly compact reflection principle does not imply that $\kappa$ is $(\omega+1)$-weakly compact. Moreover, we show that if the weakly compact reflection principle holds at $\kappa$ then there is a forcing extension preserving this in which $\kappa$ is the least $\omega$-weakly compact cardinal. Along the way we generalize the well-known result which states that if $\kappa$ is a regular cardinal then in any forcing extension by $\kappa$-c.c. forcing the nonstationary ideal equals the ideal generated by the ground model nonstationary ideal; our generalization states that if $\kappa$ is a weakly compact cardinal then after forcing with a `typical' Easton-support iteration of length $\kappa$ the weakly compact ideal equals the ideal generated by the ground model weakly compact ideal.
Labels: Mathematics
16,532
Large scale distributed neural network training through online distillation
Techniques such as ensembling and distillation promise model quality improvements when paired with almost any base model. However, due to increased test-time cost (for ensembles) and increased complexity of the training pipeline (for distillation), these techniques are challenging to use in industrial settings. In this paper we explore a variant of distillation which is relatively straightforward to use as it does not require a complicated multi-stage setup or many new hyperparameters. Our first claim is that online distillation enables us to use extra parallelism to fit very large datasets about twice as fast. Crucially, we can still speed up training even after we have already reached the point at which additional parallelism provides no benefit for synchronous or asynchronous stochastic gradient descent. Two neural networks trained on disjoint subsets of the data can share knowledge by encouraging each model to agree with the predictions the other model would have made. These predictions can come from a stale version of the other model so they can be safely computed using weights that only rarely get transmitted. Our second claim is that online distillation is a cost-effective way to make the exact predictions of a model dramatically more reproducible. We support our claims using experiments on the Criteo Display Ad Challenge dataset, ImageNet, and the largest to-date dataset used for neural language modeling, containing $6\times 10^{11}$ tokens and based on the Common Crawl repository of web data.
Labels: Statistics
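To make the training objective above concrete: each worker in online distillation minimizes the usual label loss plus a term pulling its predictions toward those of a peer model trained on a disjoint shard, where the peer's logits may come from stale, rarely transmitted weights. A minimal PyTorch-style sketch (the mixing weight `alpha` and all names here are illustrative assumptions, not the paper's):

```python
import torch
import torch.nn.functional as F

def codistillation_loss(logits, peer_logits, targets, alpha=0.5):
    """Label loss plus distillation toward a (possibly stale) peer's
    predictions. peer_logits is detached: the peer acts as a teacher
    signal, and in a distributed run it would be computed from rarely
    synchronized weights, keeping communication cheap."""
    label_loss = F.cross_entropy(logits, targets)
    distill = F.kl_div(F.log_softmax(logits, dim=-1),
                       F.softmax(peer_logits.detach(), dim=-1),
                       reduction="batchmean")
    return (1.0 - alpha) * label_loss + alpha * distill

# Usage with dummy tensors (batch of 8, 10 classes).
logits = torch.randn(8, 10, requires_grad=True)
peer_logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
codistillation_loss(logits, peer_logits, targets).backward()
```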
16,533
New method to design stellarator coils without the winding surface
Finding an easy-to-build coil set has been a critical issue in stellarator design for decades. Conventional approaches assume a toroidal "winding" surface. We investigate whether the existence of a winding surface unnecessarily constrains the optimization, and we present a new method to design coils for stellarators. Each discrete coil is represented as an arbitrary, closed, one-dimensional curve embedded in three-dimensional space. A target function to be minimized that covers both physical requirements and engineering constraints is constructed. The derivatives of the target function are calculated analytically. A numerical code, named FOCUS, has been developed. Applications to a simple configuration, the W7-X, and LHD plasmas are presented.
Labels: Physics
16,534
An Analytical Framework for Modeling a Spatially Repulsive Cellular Network
We propose a new cellular network model that captures both deterministic and random aspects of base station deployments. Namely, the base station locations are modeled as the superposition of two independent stationary point processes: a random shifted grid with intensity $\lambda_g$ and a Poisson point process (PPP) with intensity $\lambda_p$. Grid and PPP deployments are special cases with $\lambda_p \to 0$ and $\lambda_g \to 0$, with actual deployments in between these two extremes, as we demonstrate with deployment data. Assuming that each user is associated with the base station that provides the strongest average received signal power, we obtain the probability that a typical user is associated with either a grid or PPP base station. Assuming Rayleigh fading channels, we derive the expression for the coverage probability of the typical user, resulting in the following observations. First, the association and the coverage probability of the typical user are fully characterized as functions of intensity ratio $\rho_\lambda = \lambda_p/\lambda_g$. Second, the user association is biased towards the base stations located on a grid. Finally, the proposed model predicts the coverage probability of the actual deployment with great accuracy.
Labels: Computer Science, Mathematics
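The grid/PPP association probability in the abstract above is straightforward to estimate by simulation. Under a common path-loss law and equal transmit powers, strongest-average-received-power association reduces to nearest-base-station association, which the sketch below uses (intensities are illustrative; boundary effects are avoided by sampling users away from the window edge):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_base_stations(lam_g, lam_p, side):
    """Superposition model: a randomly shifted square grid of intensity
    lam_g plus an independent PPP of intensity lam_p on a side x side window."""
    spacing = 1.0 / np.sqrt(lam_g)
    shift = rng.uniform(0.0, spacing, size=2)
    xs = np.arange(0.0, side, spacing) + shift[0]
    ys = np.arange(0.0, side, spacing) + shift[1]
    grid = np.array([(x, y) for x in xs for y in ys])
    ppp = rng.uniform(0.0, side, size=(rng.poisson(lam_p * side**2), 2))
    return grid, ppp

grid, ppp = sample_base_stations(lam_g=1.0, lam_p=0.5, side=30.0)
users = rng.uniform(5.0, 25.0, size=(2_000, 2))    # keep away from edges
d_grid = np.linalg.norm(users[:, None] - grid[None], axis=2).min(axis=1)
d_ppp = np.linalg.norm(users[:, None] - ppp[None], axis=2).min(axis=1)
print("P(associate with a grid station) ~", (d_grid < d_ppp).mean())
```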
16,535
Glycolaldehyde in Perseus young solar analogs
Aims: In this paper we focus on the occurrence of glycolaldehyde (HCOCH2OH) in young solar analogs by performing the first homogeneous and unbiased study of this molecule in the Class 0 protostars of the nearby Perseus star forming region. Methods: We obtained sub-arcsec angular resolution maps at 1.3 mm and 1.4 mm of glycolaldehyde emission lines using the IRAM Plateau de Bure (PdB) interferometer in the framework of the CALYPSO IRAM large program. Results: Glycolaldehyde has been detected towards 3 Class 0 and 1 Class I protostars out of the 13 continuum sources targeted in Perseus: NGC1333-IRAS2A1, NGC1333-IRAS4A2, NGC1333-IRAS4B1, and SVS13-A. The NGC1333 star forming region appears particularly rich in glycolaldehyde, with a rate of occurrence up to 60%. The glycolaldehyde spatial distribution overlaps with the continuum one, tracing the inner 100 au around the protostar. A large number of lines (up to 18), with upper-level energies Eu from 37 K up to 375 K, have been detected. We derived column densities > 10^15 cm^-2 and rotational temperatures Trot between 115 K and 236 K, imaging hot corinos around NGC1333-IRAS4B1 and SVS13-A for the first time. Conclusions: In multiple systems, glycolaldehyde emission is detected in only one component. The case of the SVS13-A+B and IRAS4-A1+A2 systems supports the idea that the detection of glycolaldehyde (at least in the present Perseus sample) indicates older protostars (i.e. SVS13-A and IRAS4-A2), evolved enough to develop the hot-corino region (i.e. 100 K in the inner 100 au). However, two systems alone do not allow us to firmly conclude whether the primary factor leading to the detection of glycolaldehyde emission is the environment hosting the protostars, evolution (e.g. a low value of Lsubmm/Lint), or accretion luminosity (a high Lint).
Labels: Physics
16,536
Comprehension-guided referring expressions
We consider generation and comprehension of natural language referring expression for objects in an image. Unlike generic "image captioning" which lacks natural standard evaluation criteria, quality of a referring expression may be measured by the receiver's ability to correctly infer which object is being described. Following this intuition, we propose two approaches to utilize models trained for comprehension task to generate better expressions. First, we use a comprehension module trained on human-generated expressions, as a "critic" of referring expression generator. The comprehension module serves as a differentiable proxy of human evaluation, providing training signal to the generation module. Second, we use the comprehension module in a generate-and-rerank pipeline, which chooses from candidate expressions generated by a model according to their performance on the comprehension task. We show that both approaches lead to improved referring expression generation on multiple benchmark datasets.
Labels: Computer Science
16,537
Modeling the Multiple Sclerosis Brain Disease Using Agents: What Works and What Doesn't?
The human brain is one of the most complex living structures in the known Universe. It consists of billions of neurons and synapses. Due to its intrinsic complexity, it can be a formidable task to accurately depict the brain's structure and functionality. In the past, numerous studies have been conducted on modeling brain disease, structure, and functionality. Some of these studies have employed agent-based approaches, including multiagent-based simulation models as well as brain complex networks. While these models have all been developed using agent-based computing, to the best of our knowledge none of them has employed Agent-Oriented Software Engineering (AOSE) methodologies in developing the brain or disease model. This is a problem because, without due process, developed models can miss out on important requirements. AOSE has the unique capability of merging concepts from multiagent systems, agent-based modeling, and artificial intelligence, as well as concepts from distributed systems. AOSE applies tested software engineering principles across the phases of model development, ranging from analysis and design to implementation and testing. In this paper, we employ three different AOSE methodologies for modeling the Multiple Sclerosis brain disease, namely GAIA, TROPOS, and MASE. After developing the models, we further employ Exploratory Agent-based Modeling (EABM) to develop an actual model replicating previous results as a proof of concept. The key objective of this study is to demonstrate and explore the viability and effectiveness of AOSE methodologies in the development of complex brain structure and cognitive process models. Our key findings include the demonstration that AOSE methodologies can be considerably helpful in modeling various living complex systems in general, and the human brain in particular.
Labels: Computer Science, Physics
16,538
Improved NN-JPDAF for Joint Multiple Target Tracking and Feature Extraction
Feature-aided tracking can often yield improved tracking performance over standard multiple target tracking (MTT) algorithms that use only kinematic measurements. However, in many applications, the feature signal of the targets consists of sparse Fourier-domain signals. It changes quickly and nonlinearly in the time domain, and the feature measurements are corrupted by missed detections and mis-associations. These two factors make it hard to extract the feature information to be used in MTT. In this paper, we develop a feature-aided nearest neighbour joint probabilistic data association filter (NN-JPDAF) for joint MTT and feature extraction in dense target environments. To estimate the rapidly varying feature signal from incomplete and corrupted measurements, we use the atomic norm constraint to formulate the sparsity of the feature signal and use the $\ell_1$-norm to formulate the sparsity of the corruption induced by mis-associations. Based on this sparse representation, the feature signal is estimated by solving a semidefinite program (SDP), which is convex. We also provide an iterative method for solving this SDP via the alternating direction method of multipliers (ADMM), where each iteration involves closed-form computation. With the estimated feature signal, re-filtering is performed to estimate the kinematic states of the targets, where the association makes use of both kinematic and feature information. Simulation results are presented to illustrate the performance of the proposed algorithm in a radar application.
Labels: Computer Science, Statistics
16,539
KINETyS: Constraining spatial variations of the stellar initial mass function in early-type galaxies
The heavyweight stellar initial mass function (IMF) observed in the cores of massive early-type galaxies (ETGs) has been linked to formation of their cores in an initial swiftly-quenched rapid starburst. However, the outskirts of ETGs are thought to be assembled via the slow accumulation of smaller systems in which the star formation is less extreme; this suggests the form of the IMF should exhibit a radial trend in ETGs. Here we report radial stellar population gradients out to the half-light radii of a sample of eight nearby ETGs. Spatially resolved spectroscopy at 0.8-1.35{\mu}m from the VLT's KMOS instrument was used to measure radial trends in the strengths of a variety of IMF-sensitive absorption features (including some which are previously unexplored). We find weak or no radial variation in some of these which, given a radial IMF trend, ought to vary measurably, e.g. for the Wing-Ford band we measure a gradient of +0.06$\pm$0.04 per decade in radius. Using stellar population models to fit stacked and individual spectra, we infer that the measured radial changes in absorption feature strengths are primarily accounted for by abundance gradients which are fairly consistent across our sample (e.g. we derive an average [Na/H] gradient of -0.53$\pm$0.07). The inferred contribution of dwarf stars to the total light typically corresponds to a bottom heavy IMF, but we find no evidence for radial IMF variations in the majority of our sample galaxies.
Labels: Physics
16,540
Multi-Agent Deep Reinforcement Learning with Human Strategies
Deep learning has enabled traditional reinforcement learning methods to deal with high-dimensional problems. However, one of the disadvantages of deep reinforcement learning methods is the limited exploration capacity of learning agents. In this paper, we introduce an approach that integrates human strategies to increase the exploration capacity of multiple deep reinforcement learning agents. We also report the development of our own multi-agent environment, called Multiple Tank Defence, to simulate the proposed approach. The results show a significant performance improvement of multiple agents that have learned cooperatively with human strategies. This implies that there is a critical need for human intellect teamed with machines to solve complex problems. In addition, the success of this simulation indicates that our developed multi-agent environment can be used as a testbed platform to develop and validate other multi-agent control algorithms. Details of the environment implementation can be found at this http URL
Labels: Statistics
16,541
Deep into the Water Fountains: The case of IRAS 18043-2116
(Abridged) The formation of large-scale (hundreds to a few thousand AU) bipolar structures in the circumstellar envelopes (CSEs) of post-Asymptotic Giant Branch (post-AGB) stars is poorly understood. The shape of these structures, traced by emission from fast molecular outflows, suggests that the dynamics at the innermost regions of these CSEs does not depend only on the energy of the radiation field of the central star. Deep into the Water Fountains is an observational project based on the results of programs carried out with three telescope facilities: the Karl G. Jansky Very Large Array (JVLA), the Australia Telescope Compact Array (ATCA), and the Very Large Telescope (SINFONI-VLT). Here we report the results of the observations towards the WF nebula IRAS 18043$-$2116: detection of radio continuum emission in the frequency range 1.5 GHz - 8.0 GHz; H$_{2}$O maser spectral features and radio continuum emission detected at 22 GHz; and H$_{2}$ ro-vibrational emission lines detected in the near infrared. The high-velocity H$_{2}$O maser spectral features and the shock-excited H$_{2}$ emission detected could be produced in molecular layers which are swept up as a consequence of the propagation of a jet-driven wind. Using the derived H$_{2}$ column density, we estimated a molecular mass-loss rate of the order of $10^{-9}$M$_{\odot}$yr$^{-1}$. On the other hand, if the radio continuum flux detected is generated as a consequence of the propagation of a thermal radio jet, the mass-loss rate associated with the outflowing ionized material is of the order of 10$^{-5}$M$_{\odot}$yr$^{-1}$. The presence of a rotating disk could be a plausible explanation for the mass-loss rates estimated.
Labels: Physics
16,542
QAOA for Max-Cut requires hundreds of qubits for quantum speed-up
Computational quantum technologies are entering a new phase in which noisy intermediate-scale quantum computers are available, but are still too small to benefit from active error correction. Even with a finite coherence budget to invest in quantum information processing, noisy devices with about 50 qubits are expected to experimentally demonstrate quantum supremacy in the next few years. Defined in terms of artificial tasks, current proposals for quantum supremacy, even if successful, will not help to provide solutions to practical problems. Instead, we believe that future users of quantum computers are interested in actual applications and that noisy quantum devices may still provide value by approximately solving hard combinatorial problems via hybrid classical-quantum algorithms. To lower bound the size of quantum computers with practical utility, we perform realistic simulations of the Quantum Approximate Optimization Algorithm and conclude that quantum speedup will not be attainable, at least for a representative combinatorial problem, until several hundreds of qubits are available.
Labels: Computer Science
16,543
Mixed-Effect Modeling for Longitudinal Prediction of Cancer Tumor
In this paper, a mixed-effect modeling scheme is proposed to construct a predictor for different features of a cancer tumor. For this purpose, a set of features is extracted from two groups of patients with the same type of cancer but with two medical outcomes: 1) survived and 2) passed away. The goal is to build different models for the two groups, where in each group the patient-specific behavior of individuals can be characterized. These models are then used as predictors to forecast the future state of patients with a given history or initial state. To this end, a leave-one-out cross-validation method is used to measure the prediction accuracy of each patient-specific model. Experiments show that, compared to fixed-effect modeling (regression), mixed-effect modeling has a superior performance on some of the extracted features and similar or worse performance on the others.
Labels: Statistics
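A minimal sketch of the modeling idea, using statsmodels' linear mixed-effects model on synthetic longitudinal data (all column names, parameters, and the data-generating process here are illustrative, not the paper's features):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_patients, n_visits = 20, 6

# Synthetic longitudinal data: one tumor-feature value per patient per visit,
# with a patient-specific growth slope around a shared population trend.
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_patients), n_visits),
    "time": np.tile(np.arange(n_visits), n_patients),
})
slopes = rng.normal(0.5, 0.2, n_patients)
df["feature"] = slopes[df["patient"]] * df["time"] + rng.normal(0.0, 0.1, len(df))

# Fixed effect of time plus a random intercept and slope per patient:
# the random effects capture the patient-specific behavior being modeled.
fit = smf.mixedlm("feature ~ time", df, groups=df["patient"],
                  re_formula="~time").fit()
print(fit.summary())
```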
16,544
Distributed Exact Shortest Paths in Sublinear Time
The distributed single-source shortest paths problem is one of the most fundamental and central problems in message-passing distributed computing. The classical Bellman-Ford algorithm solves it in $O(n)$ time, where $n$ is the number of vertices in the input graph $G$. Peleg and Rubinovich (FOCS'99) showed a lower bound of $\tilde{\Omega}(D + \sqrt{n})$ for this problem, where $D$ is the hop-diameter of $G$. Whether or not this problem can be solved in $o(n)$ time when $D$ is relatively small is a major notorious open question. Despite intensive research \cite{LP13,N14,HKN15,EN16,BKKL16} that yielded near-optimal algorithms for the approximate variant of this problem, no progress was reported for the original problem. In this paper we answer this question in the affirmative. We devise an algorithm that requires $O((n \log n)^{5/6})$ time, for $D = O(\sqrt{n \log n})$, and $O(D^{1/3} \cdot (n \log n)^{2/3})$ time, for larger $D$. This running time is sublinear in $n$ in almost the entire range of parameters, specifically, for $D = o(n/\log^2 n)$. For the all-pairs shortest paths problem, our algorithm requires $O(n^{5/3} \log^{2/3} n)$ time, regardless of the value of $D$. We also devise the first algorithm with non-trivial complexity guarantees for computing exact shortest paths in the multipass semi-streaming model of computation. From the technical viewpoint, our algorithm computes a hopset $G''$ of a skeleton graph $G'$ of $G$ without first computing $G'$ itself. We then conduct a Bellman-Ford exploration in $G' \cup G''$, while computing the required edges of $G'$ on the fly. As a result, our algorithm computes exactly those edges of $G'$ that it really needs, rather than computing approximately the entire $G'$.
Labels: Computer Science
16,545
Fingerprints of angulon instabilities in the spectra of matrix-isolated molecules
The formation of vortices is usually considered to be the main mechanism of angular momentum disposal in superfluids. Recently, it was predicted that a superfluid can acquire angular momentum via an alternative, microscopic route -- namely, through interaction with rotating impurities, forming so-called `angulon quasiparticles' [Phys. Rev. Lett. 114, 203001 (2015)]. The angulon instabilities correspond to transfer of a small number of angular momentum quanta from the impurity to the superfluid, as opposed to vortex instabilities, where angular momentum is quantized in units of $\hbar$ per atom. Furthermore, since conventional impurities (such as molecules) represent three-dimensional (3D) rotors, the angular momentum transferred is intrinsically 3D as well, as opposed to a merely planar rotation which is inherent to vortices. Herein we show that the angulon theory can explain the anomalous broadening of the spectroscopic lines observed for CH$_3$ and NH$_3$ molecules in superfluid helium nanodroplets, thereby providing a fingerprint of the emerging angulon instabilities in experiment.
Labels: Physics
16,546
Considering Multiple Uncertainties in Stochastic Security-Constrained Unit Commitment Using Point Estimation Method
Security-Constrained Unit Commitment (SCUC) is one of the most significant problems in the secure and optimal operation of modern electricity markets. New sources of uncertainty, such as wind speed volatility and price-sensitive loads, impose additional challenges on this large-scale problem. This paper proposes a new stochastic SCUC that uses the point estimate method to model the power system uncertainties more efficiently. Conventional scenario-based stochastic SCUC approaches rely on the Monte Carlo method, which adds a considerable computational burden to this large-scale problem. In this paper we use point estimation instead of scenario generation to reduce the computational burden of the problem. The proposed approach is implemented on a six-bus system and on a modified IEEE 118-bus system with 94 uncertain variables. The efficacy of the proposed algorithm is confirmed, especially in the last case, with a notable reduction in computational burden without considerable loss of precision.
0
1
1
0
0
0
16,547
Gluing Delaunay ends to minimal n-noids using the DPW method
We construct constant mean curvature surfaces in Euclidean space by gluing n half Delaunay surfaces to a non-degenerate minimal n-noid, using the DPW method.
0
0
1
0
0
0
16,548
Some Remarks on the Hyperkähler Reduction
We consider a hyperkähler reduction and describe it via frame bundles. Tracing the connection through the various reductions, we recover the results of Gocho and Nakajima. In addition, we show that the fibers of such a reduction are necessarily totally geodesic. As an independent result, we describe O'Neill's submersion tensors on principal bundles.
0
0
1
0
0
0
16,549
Image Labeling Based on Graphical Models Using Wasserstein Messages and Geometric Assignment
We introduce a novel approach to Maximum A Posteriori inference based on discrete graphical models. By utilizing local Wasserstein distances for coupling assignment measures across edges of the underlying graph, a given discrete objective function is smoothly approximated and restricted to the assignment manifold. A corresponding multiplicative update scheme combines in a single process (i) geometric integration of the resulting Riemannian gradient flow and (ii) rounding to integral solutions that represent valid labelings. Throughout this process, local marginalization constraints known from the established LP relaxation are satisfied, whereas the smooth geometric setting results in rapidly converging iterations that can be carried out in parallel for every edge.
1
0
0
0
0
0
16,550
Operator algebraic approach to inverse and stability theorems for amenable groups
We prove an inverse theorem for the Gowers $U^2$-norm for maps $G\to\mathcal M$ from a countable, discrete, amenable group $G$ into a von Neumann algebra $\mathcal M$ equipped with an ultraweakly lower semi-continuous, unitarily invariant (semi-)norm $\Vert\cdot\Vert$. We use this result to prove a stability result for unitary-valued $\varepsilon$-representations $G\to\mathcal U(\mathcal M)$ with respect to $\Vert\cdot \Vert$.
0
0
1
0
0
0
16,551
Collective irregular dynamics in balanced networks of leaky integrate-and-fire neurons
We extensively explore networks of weakly unbalanced, leaky integrate-and-fire (LIF) neurons for different coupling strengths and connectivities, varying the degree of refractoriness as well as the delay in spike transmission. We find that the neural network exhibits not only a microscopic (single-neuron) stochastic-like evolution, but also a collective irregular dynamics (CID). Our analysis is based on the computation of a suitable order parameter, typically used to characterize synchronization phenomena, and on a detailed scaling analysis (i.e. simulations of different network sizes). As a result, we can conclude that CID is a true thermodynamic phase, intrinsically different from the standard asynchronous regime.
0
0
0
0
1
0
16,552
Measurement of human activity using velocity GPS data obtained from mobile phones
Human movement is used as an indicator of human activity in modern society. The velocity of moving humans is calculated based on position information obtained from mobile phones. The level of human activity, as recorded by velocity, varies throughout the day; therefore, velocity can be used to identify the intervals of highest and lowest activity. More specifically, we obtained mobile-phone GPS data from people around Shibuya station in Tokyo, which has the highest population density in Japan. From these data, we observe that velocity varies consistently with changes in social activity. For example, during the earthquake in Kumamoto Prefecture in April 2016, the activity on that day was much lower than usual. In this research, we focus on natural disasters such as earthquakes owing to their significant effects on human activities in developed countries like Japan. In the event of a natural disaster in another developed country, examining the change in human behavior at the time of the disaster (e.g., the 2016 Kumamoto Great Earthquake) from the viewpoint of velocity allows us to improve planning for mitigation measures. Thus, we analyze the changes in human activity through velocity calculations in Shibuya, Tokyo, and compare times of disaster with normal times.
1
1
0
0
0
0
16,553
Concurrent Segmentation and Localization for Tracking of Surgical Instruments
Real-time instrument tracking is a crucial requirement for various computer-assisted interventions. In order to overcome problems such as specular reflections and motion blur, we propose a novel method that takes advantage of the interdependency between localization and segmentation of the surgical tool. In particular, we reformulate the 2D instrument pose estimation as heatmap regression and thereby enable a concurrent, robust and near real-time regression of both tasks via deep learning. As demonstrated by our experimental results, this modeling leads to significantly improved performance compared with directly regressing the tool position and allows our method to outperform the state of the art on a Retinal Microsurgery benchmark and the MICCAI EndoVis Challenge 2015.
1
0
0
0
0
0
16,554
Bobtail: A Proof-of-Work Target that Minimizes Blockchain Mining Variance (Draft)
Blockchain systems are designed to produce blocks at a constant average rate. The most popular systems currently employ a Proof of Work (PoW) algorithm as a means of creating these blocks. Bitcoin produces, on average, one block every 10 minutes. An unfortunate limitation of all deployed PoW blockchain systems is that the time between blocks has high variance. For example, 5% of the time, Bitcoin's inter-block time is at least 40 minutes. This variance impedes the consistent flow of validated transactions through the system. We propose an alternative process for PoW-based block discovery that results in an inter-block time with significantly lower variance. Our algorithm, called Bobtail, generalizes the current algorithm by comparing the mean of the k lowest order statistics to a target. We show that the variance of inter-block times decreases as k increases. If our approach were applied to Bitcoin, about 80% of blocks would be found within 7 to 12 minutes, and nearly every block would be found within 5 to 18 minutes; the average inter-block time would remain at 10 minutes. Further, we show that low-variance mining significantly thwarts doublespend and selfish mining attacks. For Bitcoin and Ethereum currently (k=1), an attacker with 40% of the mining power will succeed with 30% probability when the merchant sets up an embargo of 8 blocks; however, when k>=20, the probability of success falls to less than 1%. Similarly, for Bitcoin and Ethereum currently, a selfish miner with 40% of the mining power will claim about 66% of blocks; however, when k>=5, the same miner will find that selfish mining is less successful than honest mining. The cost of our approach is a larger block header.
1
0
0
0
0
0
16,555
Mobility Edges in 1D Bichromatic Incommensurate Potentials
We theoretically study a one-dimensional (1D) mutually incommensurate bichromatic lattice system which has been implemented in ultracold atoms to study quantum localization. It has been universally believed that the tight-binding version of this bichromatic incommensurate system is represented by the well-known Aubry-Andre model. Here we establish that this belief is incorrect and that the Aubry-Andre model description, which applies only in the extreme tight-binding limit of very deep primary lattice potential, generically breaks down near the localization transition due to the unavoidable appearance of single-particle mobility edges (SPME). In fact, we show that the 1D bichromatic incommensurate potential system manifests generic mobility edges which disappear in the tight-binding limit, leading to the well-studied Aubry-Andre physics. We carry out an extensive study of the localization properties of the 1D incommensurate optical lattice without making any tight-binding approximation. We find that, for the full lattice system, an intermediate phase between completely localized and completely delocalized regions appears due to the existence of the SPME, making the system qualitatively distinct from the Aubry-Andre prediction. Using the Wegner flow approach, we show that the SPME in the real lattice system can be attributed to significant corrections of higher-order harmonics in the lattice potential which are absent in the strict tight-binding limit. We calculate the dynamical consequences of the intermediate phase in detail to guide future experimental investigations for the observation of 1D SPME and the associated intermediate phase. We consider effects of interaction numerically, and conjecture the stability of SPME to weak interaction effects, thus leading to the exciting possibility of an experimentally viable nonergodic extended phase in interacting 1D optical lattices.
0
1
0
0
0
0
16,556
Privacy Preserving Identification Using Sparse Approximation with Ambiguization
In this paper, we consider a privacy preserving encoding framework for identification applications covering biometrics, physical object security and the Internet of Things (IoT). The proposed framework is based on a sparsifying transform, which consists of a trained linear map, an element-wise nonlinearity, and privacy amplification. The sparsifying transform and privacy amplification are not symmetric for the data owner and data user. We demonstrate that the proposed approach is closely related to sparse ternary codes (STC), a recent information-theoretic concept proposed for fast approximate nearest neighbor (ANN) search in high-dimensional feature spaces which, being machine-learning based in nature, also offers significant benefits in comparison to sparse approximation and binary embedding approaches. We demonstrate that the privacy of the database outsourced to a server, as well as the privacy of the data user, is preserved at low computational cost and with low storage and communication burdens.
1
0
0
1
0
0
16,557
Manifold Learning Using Kernel Density Estimation and Local Principal Components Analysis
We consider the problem of recovering a $d$-dimensional manifold $\mathcal{M} \subset \mathbb{R}^n$ when provided with noiseless samples from $\mathcal{M}$. There are many algorithms (e.g., Isomap) that are used in practice to fit manifolds and thus reduce the dimensionality of a given data set. Ideally, the estimate $\mathcal{M}_\mathrm{put}$ of $\mathcal{M}$ should be an actual manifold of a certain smoothness; furthermore, $\mathcal{M}_\mathrm{put}$ should be arbitrarily close to $\mathcal{M}$ in Hausdorff distance given a large enough sample. Generally speaking, existing manifold learning algorithms do not meet these criteria. Fefferman, Mitter, and Narayanan (2016) have developed an algorithm whose output is provably a manifold. The key idea is to define an approximate squared-distance function (asdf) to $\mathcal{M}$. Then, $\mathcal{M}_\mathrm{put}$ is given by the set of points where the gradient of the asdf is orthogonal to the subspace spanned by the largest $n - d$ eigenvectors of the Hessian of the asdf. As long as the asdf meets certain regularity conditions, $\mathcal{M}_\mathrm{put}$ is a manifold that is arbitrarily close in Hausdorff distance to $\mathcal{M}$. In this paper, we define two asdfs that can be calculated from the data and show that they meet the required regularity conditions. The first asdf is based on kernel density estimation, and the second is based on estimation of tangent spaces using local principal components analysis.
0
0
0
1
0
0
16,558
Unsupervised Learning-based Depth Estimation aided Visual SLAM Approach
The RGB-D camera has a limited working range and struggles to accurately measure depth information at far distances. Besides, the RGB-D camera is easily influenced by strong lighting and other external factors, which leads to poor accuracy of the acquired environmental depth information. Recently, deep learning technologies have achieved great success in the visual SLAM area; they can directly learn high-level features from the visual inputs and improve the estimation accuracy of depth information. Therefore, deep learning technologies have the potential to extend the source of depth information and improve the performance of the SLAM system. However, the existing deep learning-based methods are mainly supervised and require a large amount of ground-truth depth data, which is hard to acquire because of practical constraints. In this paper, we first present an unsupervised learning framework which not only uses image reconstruction as a supervisory signal, but also exploits pose estimation to enhance the supervisory signal and add training constraints for the task of monocular depth and camera motion estimation. Furthermore, we successfully exploit our unsupervised learning framework to assist the traditional ORB-SLAM system when the initialization module of the ORB-SLAM method cannot match enough features. Qualitative and quantitative experiments show that our unsupervised learning framework performs the depth estimation task comparably to supervised methods and outperforms the previous state-of-the-art approach by $13.5\%$ on the KITTI dataset. Besides, our unsupervised learning framework can significantly accelerate the initialization process of the ORB-SLAM system and effectively improve the accuracy of environmental mapping in strong-lighting and weak-texture scenes.
1
0
0
0
0
0
16,559
On the Properties of the Power Systems Nodal Admittance Matrix
This letter provides conditions determining the rank of the nodal admittance matrix, and arbitrary block partitions of it, for connected AC power networks with complex admittances. Furthermore, some implications of these properties concerning Kron Reduction and Hybrid Network Parameters are outlined.
1
0
1
0
0
0
16,560
Isolation and connectivity in random geometric graphs with self-similar intensity measures
Random geometric graphs consist of randomly distributed nodes (points), with pairs of nodes within a given mutual distance linked. In the usual model the distribution of nodes is uniform on a square, and in the limit of infinitely many nodes and shrinking linking range, the number of isolated nodes is Poisson distributed, and the probability of no isolated nodes is equal to the probability that the whole graph is connected. Here we examine these properties for several self-similar node distributions, including smooth and fractal, uniform and nonuniform, and finitely ramified or otherwise. We show that nonuniformity can break the Poisson distribution property, but it strengthens the link between isolation and connectivity. It also stretches out the connectivity transition. Finite ramification is another mechanism for lack of connectivity. The same considerations apply to fractal distributions as to smooth ones, with some technical differences in the evaluation of the integrals and analytical arguments.
0
1
0
0
0
0
16,561
Multi-SpaM: a Maximum-Likelihood approach to Phylogeny reconstruction based on Multiple Spaced-Word Matches
Motivation: Word-based or `alignment-free' methods for phylogeny reconstruction are much faster than traditional approaches, but they are generally less accurate. Most of these methods calculate pairwise distances for a set of input sequences, for example from word frequencies, from so-called spaced-word matches or from the average length of common substrings. Results: In this paper, we propose the first word-based approach to tree reconstruction that is based on multiple sequence comparison and Maximum Likelihood. Our algorithm first samples small, gap-free alignments involving four taxa each. For each of these alignments, it then calculates a quartet tree and, finally, the program Quartet MaxCut is used to infer a supertree topology for the full set of input taxa from the calculated quartet trees. Experimental results show that trees calculated with our approach are of high quality. Availability: The source code of the program is available at this https URL Contact: [email protected]
0
0
0
0
1
0
16,562
Zermelo deformation of Finsler metrics by Killing vector fields
We show how geodesics, Jacobi vector fields and flag curvature of a Finsler metric behave under Zermelo deformation with respect to a Killing vector field. We also show that Zermelo deformation with respect to a Killing vector field of a locally symmetric Finsler metric is also locally symmetric.
0
0
1
0
0
0
16,563
Robust estimation of mixing measures in finite mixture models
In finite mixture models, apart from the underlying mixing measure, the true kernel density function of each subpopulation in the data is, in many scenarios, unknown. Perhaps the most popular approach is to choose some kernel functions that we empirically believe our data are generated from and use these kernels to fit our models. Nevertheless, as long as the chosen kernel and the true kernel are different, statistical inference of the mixing measure under this setting will be highly unstable. To overcome this challenge, we propose flexible and efficient robust estimators of the mixing measure in these models, inspired by the idea of the minimum Hellinger distance estimator, model selection criteria, and the superefficiency phenomenon. We demonstrate that our estimators consistently recover the true number of components and achieve the optimal convergence rates of parameter estimation under both the well- and mis-specified kernel settings for any fixed bandwidth. These desirable asymptotic properties are illustrated via careful simulation studies with both synthetic and real data.
0
0
1
1
0
0
16,564
Phase Noise and Jitter in Digital Electronics
This article explains phase noise, jitter, and some slower phenomena in digital integrated circuits, focusing on highly demanding, noise-critical applications. We introduce the concepts of phase-type and time-type phase noise. The rules for scaling the noise with frequency are chiefly determined by the spectral properties of these two basic types, by the aliasing phenomenon, and by the input and output circuits. Then, we discuss parameter extraction from experimental data and we report on the measured phase noise in some selected devices of different node size and complexity. We observed flicker noise between -80 and -130 dBrad^2/Hz at 1 Hz offset, and white noise down to -165 dBrad^2/Hz in some fortunate cases and using the appropriate tricks. It turns out that flicker noise is proportional to the reciprocal of the volume of the transistor. This unpleasant conclusion is supported by a gedanken experiment. Further experiments provide understanding of: (i) the interplay between noise sources in the internal PLL, often present in FPGAs; (ii) the chattering phenomenon, which consists of multiple bouncing at transitions; and (iii) thermal time constants, and their effect on phase wander and on the Allan variance.
0
1
0
0
0
0
16,565
Advanced Soccer Skills and Team Play of RoboCup 2017 TeenSize Winner NimbRo
In order to pursue the vision of the RoboCup Humanoid League of beating the soccer world champion by 2050, new rules and competitions are added or modified each year, fostering novel technological advances. In 2017, the number of players in the TeenSize class soccer games was increased to 3 vs. 3, which allowed for more team play strategies. Improvements in individual skills were also demanded through a set of technical challenges. This paper presents the latest individual skills and team play developments used in RoboCup 2017 that led our team NimbRo to win the 2017 TeenSize soccer tournament, the technical challenges, and the drop-in games.
1
0
0
0
0
0
16,566
Phase partitioning in a novel near equi-atomic AlCuFeMn alloy
A novel low-cost, near equi-atomic alloy comprising Al, Cu, Fe and Mn is synthesized using the arc-melting technique. The cast alloy possesses a dendritic microstructure where the dendrites consist of disordered FCC and ordered FCC phases. The inter-dendritic region consists of an ordered FCC phase and spinodally decomposed BCC phases. Cu segregation is observed in the inter-dendritic region while the dendritic region is rich in Fe. The bulk hardness of the alloy is ~ 380 HV, indicating significant yield strength.
0
1
0
0
0
0
16,567
Overview of Project 8 and Progress Towards Tritium Operation
Project 8 is a tritium endpoint neutrino mass experiment utilizing a phased program to achieve sensitivity to the range of neutrino masses allowed by the inverted mass hierarchy. The Cyclotron Radiation Emission Spectroscopy (CRES) technique is employed to measure the differential energy spectrum of decay electrons with high precision. We present an overview of the Project 8 experimental program, from first demonstration of the CRES technique to ultimate sensitivity with an atomic tritium source. We highlight recent advances in preparation for the first measurement of the continuous tritium spectrum with CRES.
0
1
0
0
0
0
16,568
Proper efficiency and cone efficiency
In this report, two general concepts for proper efficiency in vector optimization are studied. Properly efficient elements can be defined as minimizers of functionals with certain monotonicity properties or as weakly efficient elements with respect to sets that contain the domination set. Interdependencies between both concepts are proved in topological vector spaces by means of Gerstewitz functionals. The investigation includes proper efficiency notions introduced by Henig and by Nehse and Iwanow. In contrast to Henig's notion, proper efficiency in the sense of Nehse and Iwanow is defined as efficiency with respect to certain convex sets which are not necessarily cones. For the finite-dimensional case, we turn to Geoffrion's proper efficiency as a special case of Henig's proper efficiency. It is characterized as efficiency with regard to subclasses of the set of polyhedral cones. Conditions for the existence of Geoffrion's properly efficient points are proved. For closed feasible point sets, Geoffrion's properly efficient point set is empty or coincides with that of Nehse and Iwanow. Properly efficient elements in the sense of Nehse and Iwanow are the minimizers of continuous convex functionals with certain monotonicity properties. Henig's proper efficiency can be described by means of minimizers of continuous sublinear functionals with certain monotonicity properties.
0
0
1
0
0
0
16,569
Sequential Skip Prediction with Few-shot in Streamed Music Contents
This paper provides an outline of the algorithms submitted for the WSDM Cup 2019 Spotify Sequential Skip Prediction Challenge (team name: mimbres). In the challenge, complete information including acoustic features and user interaction logs for the first half of a listening session is provided. Our goal is to predict whether the individual tracks in the second half of the session will be skipped or not, given only acoustic features. We proposed two different kinds of algorithms based on metric learning and sequence learning. The experimental results showed that the sequence learning approach performed significantly better than the metric learning approach. Moreover, we conducted additional experiments and found that a significant performance gain can be achieved using complete user log information.
1
0
0
0
0
0
16,570
Genetic interactions from first principles
We derive a general statistical model of interactions, starting from probabilistic principles and elementary requirements. Prevailing interaction models in biomedical research diverge both mathematically and practically. In particular, genetic interaction inquiries are formulated without an obvious mathematical unity. Our model reveals theoretical properties unnoticed so far, particularly valuable for genetic interaction mapping, where mechanistic details are mostly unknown, the distribution of gene variants differs between populations, and genetic susceptibilities are spuriously propagated by linkage disequilibrium. When applied to data from the largest interaction mapping experiment on Saccharomyces cerevisiae to date, our results imply less aversion to positive interactions, detection of well-documented hubs, and partial remapping of functional regions of the currently known genetic interaction landscape. Assessment of divergent annotations across functional categories further suggests that positive interactions have a more important role in ribosome biogenesis than previously realized. The unity of the arguments elaborated here enables the analysis of dissimilar interaction models and experimental data within a common framework.
0
0
0
1
0
0
16,571
On the Log Partition Function of Ising Model on Stochastic Block Model
A sparse stochastic block model (SBM) with two communities is defined by the community probabilities $\pi_0,\pi_1$, and the connection probabilities between communities $a,b\in\{0,1\}$, namely $q_{ab} = \frac{\alpha_{ab}}{n}$. When $q_{ab}$ is constant in $a,b$, the random graph is simply the Erdős-Rényi random graph. We evaluate the log partition function of the Ising model on the sparse SBM with two communities. As an application, we give consistent parameter estimation of the sparse SBM with two communities in a special case. More specifically, let $d_0,d_1$ be the average degrees of the two communities, i.e., $d_0\overset{def}{=}\pi_0\alpha_{00}+\pi_1\alpha_{01}, d_1\overset{def}{=}\pi_0\alpha_{10}+\pi_1\alpha_{11}$. We focus on the regime $d_0=d_1$ (the regime $d_0\ne d_1$ is trivial). In this regime, there exist $d,\lambda$ and $r\geq 0$ with $\pi_0=\frac{1}{1+r}, \pi_1=\frac{r}{1+r}$, $\alpha_{00}=d(1+r\lambda), \alpha_{01}=\alpha_{10} = d(1-\lambda), \alpha_{11} = d(1+\frac{\lambda}{r})$. We give a consistent estimator of $r$ when $\lambda<0$. The estimator of $\lambda$ given by \citep{mossel2015reconstruction} is valid in the general situation. We also provide a random clustering algorithm which does not require knowledge of the parameters and which is positively correlated with the true community labels when $\lambda<0$.
0
0
0
1
0
0
16,572
Radial transonic shock solutions of Euler-Poisson system in convergent nozzles
Given constant data of density $\rho_0$, velocity $-u_0{\bf e}_r$, pressure $p_0$ and electric force $-E_0{\bf e}_r$ for supersonic flow at the entrance, and constant pressure $p_{\rm ex}$ for subsonic flow at the exit, we prove that Euler-Poisson system admits a unique transonic shock solution in a two dimensional convergent nozzle, provided that $u_0>0$, $E_0>0$, and that $E_0$ is sufficiently large depending on $(\rho_0, u_0, p_0)$ and the length of the nozzle.
0
0
1
0
0
0
16,573
X-rays from Green Pea Analogs
X-ray observations of two metal-deficient luminous compact galaxies (LCGs) (SHOC~486 and SDSS J084220.94+115000.2) with properties similar to the so-called Green Pea galaxies were obtained using the {\emph{Chandra X-ray Observatory}}. Green Pea galaxies are relatively small, compact (a few kpc across) galaxies that get their green color from strong [OIII]$\lambda$5007\AA\ emission, an indicator of intense, recent star formation. These two galaxies were predicted to have the highest observed count rates, using the X-ray luminosity -- star formation rate ($L_X$--SFR) relation for X-ray binaries, from a statistically complete sample drawn from optical criteria. We determine the X-ray luminosity relative to star-formation rate and metallicity for these two galaxies. Neither exhibits any evidence of an active galactic nucleus, and we suspect the X-ray emission originates from unresolved populations of high-mass X-ray binaries. We discuss the $L_X$--SFR--metallicity plane for star-forming galaxies and show that the two LCGs are consistent with the prediction of this relation. This is the first detection of Green Pea analogs in X-rays.
0
1
0
0
0
0
16,574
A deeper view of the CoRoT-9 planetary system. A small non-zero eccentricity for CoRoT-9b likely generated by planet-planet scattering
CoRoT-9b is one of the rare long-period (P=95.3 days) transiting giant planets with a measured mass known to date. We present a new analysis of the CoRoT-9 system based on five years of radial-velocity (RV) monitoring with HARPS and three new space-based transits observed with CoRoT and Spitzer. Combining our new data with already published measurements, we redetermine the CoRoT-9 system parameters and find good agreement with the published values. We uncover a higher significance for the small but non-zero eccentricity of CoRoT-9b ($e=0.133^{+0.042}_{-0.037}$) and find no evidence for additional planets in the system. We use simulations of planet-planet scattering to show that the eccentricity of CoRoT-9b may have been generated by an instability in which a $\sim 50~M_\oplus$ planet was ejected from the system. This scattering would not have produced a spin-orbit misalignment, so we predict that CoRoT-9b's orbit should lie within a few degrees of the initial plane of the protoplanetary disk. As a consequence, any significant stellar obliquity would indicate that the disk was primordially tilted.
0
1
0
0
0
0
16,575
Wave propagation characteristics of Parareal
The paper derives and analyses the (semi-)discrete dispersion relation of the Parareal parallel-in-time integration method. It investigates Parareal's wave propagation characteristics with the aim to better understand what causes the well-documented stability problems for hyperbolic equations. The analysis shows that the instability is caused by convergence of the amplification factor to the exact value from above for medium to high wave numbers. Phase errors in the coarse propagator are identified as the culprit, which suggests that specifically tailored coarse level methods could provide a remedy.
1
0
1
0
0
0
16,576
Benchmark of Deep Learning Models on Large Healthcare MIMIC Datasets
Deep learning models (aka Deep Neural Networks) have revolutionized many fields including computer vision, natural language processing, and speech recognition, and are increasingly being used in clinical healthcare applications. However, few works exist which have benchmarked the performance of deep learning models with respect to state-of-the-art machine learning models and prognostic scoring systems on publicly available healthcare datasets. In this paper, we present benchmarking results for several clinical prediction tasks such as mortality prediction, length-of-stay prediction, and ICD-9 code group prediction using deep learning models, an ensemble of machine learning models (the Super Learner algorithm), and the SAPS II and SOFA scores. We used the Medical Information Mart for Intensive Care III (MIMIC-III) (v1.4) publicly available dataset, which includes all patients admitted to an ICU at the Beth Israel Deaconess Medical Center from 2001 to 2012, for the benchmarking tasks. Our results show that deep learning models consistently outperform all the other approaches, especially when the `raw' clinical time series data is used as input features to the models.
1
0
0
1
0
0
16,577
Modeling of Persistent Homology
Topological Data Analysis (TDA) is a novel statistical technique, particularly powerful for the analysis of large and high dimensional data sets. Much of TDA is based on the tool of persistent homology, represented visually via persistence diagrams. In an earlier paper we proposed a parametric representation for the probability distributions of persistence diagrams, and based on it provided a method for their replication. Since the typical situation for big data is that only one persistence diagram is available, these replications allow for conventional statistical inference, which, by its very nature, requires some form of replication. In the current paper we continue this analysis, and further develop its practical statistical methodology, by investigating a wider class of examples than treated previously.
0
0
0
1
0
0
16,578
Auto-Encoding Sequential Monte Carlo
We build on auto-encoding sequential Monte Carlo (AESMC): a method for model and proposal learning based on maximizing the lower bound to the log marginal likelihood in a broad family of structured probabilistic models. Our approach relies on the efficiency of sequential Monte Carlo (SMC) for performing inference in structured probabilistic models and the flexibility of deep neural networks to model complex conditional probability distributions. We develop additional theoretical insights and introduce a new training procedure which improves both model and proposal learning. We demonstrate that our approach provides a fast, easy-to-implement and scalable means for simultaneous model learning and proposal adaptation in deep generative models.
0
0
0
1
0
0
16,579
Einstein's accelerated reference systems and Fermi-Walker coordinates
We show that the uniformly accelerated reference systems proposed by Einstein when introducing acceleration in the theory of relativity are Fermi-Walker coordinate systems. We then consider more general accelerated motions and, on the one hand we obtain Thomas precession and, on the other, we prove that the only accelerated reference systems that at any time admit an instantaneously comoving inertial system belong necessarily to the Fermi-Walker class.
0
1
0
0
0
0
16,580
Premise Selection for Theorem Proving by Deep Graph Embedding
We propose a deep learning-based approach to the problem of premise selection: selecting mathematical statements relevant for proving a given conjecture. We represent a higher-order logic formula as a graph that is invariant to variable renaming but still fully preserves syntactic and semantic information. We then embed the graph into a vector via a novel embedding method that preserves the information of edge ordering. Our approach achieves state-of-the-art results on the HolStep dataset, improving the classification accuracy from 83% to 90.3%.
1
0
0
0
0
0
16,581
Goldbach Representations in Arithmetic Progressions and zeros of Dirichlet L-functions
Assuming a conjecture on distinct zeros of Dirichlet L-functions, we get asymptotic results on the average number of representations of an integer as the sum of two primes in arithmetic progression. On the other hand, the existence of good error terms gives information on the location of zeros of L-functions and possible Siegel zeros. Similar results are obtained for an integer in a congruence class expressed as the sum of two primes.
0
0
1
0
0
0
16,582
Estimable group effects for strongly correlated variables in linear models
It is well known that parameters for strongly correlated predictor variables in a linear model cannot be accurately estimated. We look for linear combinations of these parameters that can be. Under a uniform model, we find such linear combinations in a neighborhood of a simple variability weighted average of these parameters. Surprisingly, this variability weighted average is more accurately estimated when the variables are more strongly correlated, and it is the only linear combination with this property. It can be easily computed for strongly correlated predictor variables in all linear models and has applications in inference and estimation concerning parameters of such variables.
0
0
1
1
0
0
16,583
Online Structure Learning for Sum-Product Networks with Gaussian Leaves
Sum-product networks have recently emerged as an attractive representation due to their dual view as a special type of deep neural network with clear semantics and a special type of probabilistic graphical model for which inference is always tractable. Those properties follow from some conditions (i.e., completeness and decomposability) that must be respected by the structure of the network. As a result, it is not easy to specify a valid sum-product network by hand and therefore structure learning techniques are typically used in practice. This paper describes the first online structure learning technique for continuous SPNs with Gaussian leaves. We also introduce an accompanying new parameter learning technique.
1
0
0
1
0
0
16,584
Rapid Design of Wide-Area Heterogeneous Electromagnetic Metasurfaces beyond the Unit-Cell Approximation
We propose a novel numerical approach for the optimal design of wide-area heterogeneous electromagnetic metasurfaces beyond the conventionally used unit-cell approximation. The proposed method exploits the combination of Rigorous Coupled Wave Analysis (RCWA) and global optimization techniques; two evolutionary algorithms are considered, namely the Genetic Algorithm (GA) and a modified form of the Artificial Bee Colony method (ABC with memetic search phase). As a specific example, we consider the design of beam deflectors using all-dielectric nanoantennae for operation in the visible wavelength region; beam deflectors can serve as building blocks for other more complicated devices like metalenses. Compared to previous reports using local optimization approaches, our approach improves device efficiency; transmission efficiency is especially improved for wide deflection angle beam deflectors. The ABC method with memetic search phase is also an improvement over the more commonly used GA, as it reaches similar efficiency levels with up to a 35% reduction in computation time. The method described here is of interest for the rapid design of a wide variety of electromagnetic metasurfaces irrespective of their operational wavelength.
0
1
0
0
0
0
16,585
Multimodal Trajectory Predictions for Autonomous Driving using Deep Convolutional Networks
Autonomous driving presents one of the largest problems that the robotics and artificial intelligence communities are facing at the moment, both in terms of difficulty and potential societal impact. Self-driving vehicles (SDVs) are expected to prevent road accidents and save millions of lives while improving the livelihood and life quality of many more. However, despite large interest and a number of industry players working in the autonomous domain, there is still more to be done in order to develop a system capable of operating at a level comparable to the best human drivers. One reason for this is the high uncertainty of traffic behavior and the large number of situations that an SDV may encounter on the roads, making it very difficult to create a fully generalizable system. To ensure safe and efficient operation, an autonomous vehicle is required to account for this uncertainty and to anticipate a multitude of possible behaviors of traffic actors in its surroundings. In this work, we address this critical problem and present a method to predict multiple possible trajectories of actors while also estimating their probabilities. The method encodes each actor's surrounding context into a raster image, which is used as input by deep convolutional networks to automatically derive relevant features for the task. Following extensive offline evaluation and comparison to state-of-the-art baselines, as well as closed-course tests, the method was successfully deployed to a fleet of SDVs.
1
0
0
0
0
0
16,586
Phase-Resolved Two-Dimensional Spectroscopy of Electronic Wavepackets by Laser-Induced XUV Free Induction Decay
We present a novel time- and phase-resolved, background-free scheme to study the extreme ultraviolet dipole emission of a bound electronic wavepacket, without the use of any extreme ultraviolet exciting pulse. Using multiphoton transitions, we populate a superposition of quantum states which coherently emit extreme ultraviolet radiation through free induction decay. This emission is probed and controlled, both in amplitude and phase, by a time-delayed infrared femtosecond pulse. We directly measure the laser-induced dephasing of the emission by using a simple heterodyne detection scheme based on two-source interferometry. This technique provides rich information about the interplay between the laser field and the Coulombic potential on the excited electron dynamics. Its background-free nature enables us to use a large range of gas pressures and to reveal the influence of collisions in the relaxation process.
0
1
0
0
0
0
16,587
Optimized Spatial Partitioning via Minimal Swarm Intelligence
Optimized spatial partitioning algorithms are the cornerstone of many successful experimental designs and statistical methods. Of these algorithms, the Centroidal Voronoi Tessellation (CVT) is the most widely utilized. CVT-based methods require global knowledge of spatial boundaries, do not readily allow for weighted regions, have challenging implementations, and are inefficiently extended to high-dimensional spaces. We describe two simple partitioning schemes based on nearest and next-nearest neighbor locations which easily incorporate these features at the slight expense of optimal placement. Several novel qualitative techniques which assess these partitioning schemes are also included. The feasibility of autonomous uninformed sensor networks utilizing these algorithms is considered. Some improvements in particle swarm optimizer results on multimodal test functions from partitioned initial positions in two-dimensional space are also illustrated. Pseudocode for all of the novel algorithms depicted herein is available in the supplementary information of this manuscript.
1
0
0
1
0
0
16,588
Learning Correspondence Structures for Person Re-identification
This paper addresses the problem of handling spatial misalignments due to camera-view changes or human-pose variations in person re-identification. We first introduce a boosting-based approach to learn a correspondence structure which indicates the patch-wise matching probabilities between images from a target camera pair. The learned correspondence structure can not only capture the spatial correspondence pattern between cameras but also handle the viewpoint or human-pose variation in individual images. We further introduce a global constraint-based matching process. It integrates a global matching constraint over the learned correspondence structure to exclude cross-view misalignments during the image patch matching process, hence achieving a more reliable matching score between images. Finally, we also extend our approach by introducing a multi-structure scheme, which learns a set of local correspondence structures to capture the spatial correspondence sub-patterns between a camera pair, so as to handle the spatial misalignments between individual images in a more precise way. Experimental results on various datasets demonstrate the effectiveness of our approach.
1
0
0
0
0
0
16,589
Inner Product and Set Disjointness: Beyond Logarithmically Many Parties
A basic goal in complexity theory is to understand the communication complexity of number-on-the-forehead problems $f\colon(\{0,1\}^n)^{k}\to\{0,1\}$ with $k\gg\log n$ parties. We study the problems of inner product and set disjointness and determine their randomized communication complexity for every $k\geq\log n$, showing in both cases that $\Theta(1+\lceil\log n\rceil/\log\lceil1+k/\log n\rceil)$ bits are necessary and sufficient. In particular, these problems admit constant-cost protocols if and only if the number of parties is $k\geq n^{\epsilon}$ for some constant $\epsilon>0.$
1
0
0
0
0
0
16,590
Topological classification of time-asymmetry in unitary quantum processes
Effective gauge fields have allowed the emulation of matter under strong magnetic fields, leading to the realization of the Harper-Hofstadter and Haldane models, and to demonstrations of one-way waveguides and topologically protected edge states. Central to these discoveries is the chirality induced by time-symmetry breaking. Following the discovery of quantum search algorithms based on walks on graphs, recent work has uncovered new implications of time-reversal symmetry breaking for the transport of quantum states and has brought with it a host of new experimental implementations. We provide a full classification of the unitary operators defining quantum processes which break time-reversal symmetry in their induced transition properties between basis elements in a preferred site-basis. Our results are furthermore proven in terms of the geometry of the corresponding Hamiltonian support graph and hence provide a topological classification. A quantum process of this type is necessarily time-symmetric for any choice of time-independent Hamiltonian if and only if the underlying support graph is bipartite. Moreover, for non-bipartite support, there exists a time-independent Hamiltonian with necessarily complex edge weights that induces time-asymmetric transition probabilities between edge(s). We further prove that certain bipartite graphs give rise to transition probability suppression, but not broken time-reversal symmetry. These results fill an important gap in understanding the role this omnipresent effect has in quantum physics. Furthermore, through the development of our general framework, along the way to our results we completely characterize gauge potentials on combinatorial graphs.
0
1
0
0
0
0
16,591
A Light Modality for Recursion
We investigate a modality for controlling the behaviour of recursive functional programs on infinite structures which is completely silent in the syntax. The latter means that programs do not contain "marks" showing the application of the introduction and elimination rules for the modality. This shifts the burden of controlling recursion from the programmer to the compiler. To do this, we introduce a typed lambda calculus à la Curry with a silent modality and guarded recursive types. The typing discipline guarantees normalisation and can be transformed into an algorithm which infers the type of a program.
1
0
0
0
0
0
16,592
Counting the number of metastable states in the modularity landscape: Algorithmic detectability limit of greedy algorithms in community detection
Modularity maximization using greedy algorithms continues to be a popular approach toward community detection in graphs, even after various better-performing algorithms have been proposed. Apart from its clear mechanism and ease of implementation, this approach is persistently popular because, presumably, its risk of algorithmic failure is not well understood. This Rapid Communication provides insight into this issue by estimating the algorithmic performance limit of modularity maximization. This is achieved by counting the number of metastable states under a local update rule. Our results offer a quantitative insight into the level of sparsity at which a greedy algorithm typically fails.
1
0
0
0
0
0
16,593
Infinite rank surface cluster algebras
We generalise surface cluster algebras to the case of infinite surfaces where the surface contains finitely many accumulation points of boundary marked points. To connect different triangulations of an infinite surface, we consider infinite mutation sequences. We show transitivity of infinite mutation sequences on triangulations of an infinite surface and examine different types of mutation sequences. Moreover, we use a hyperbolic structure on an infinite surface to extend the notion of surface cluster algebras to infinite rank by giving cluster variables as lambda lengths of arcs. Furthermore, we study the structural properties of infinite rank surface cluster algebras in combinatorial terms, namely we extend "snake graph combinatorics" to give an expansion formula for cluster variables. We also show skein relations for infinite rank surface cluster algebras.
0
0
1
0
0
0
16,594
Machine learning prediction errors better than DFT accuracy
We investigate the impact of choosing regressors and molecular representations for the construction of fast machine learning (ML) models of thirteen electronic ground-state properties of organic molecules. The performance of each regressor/representation/property combination is assessed using learning curves which report out-of-sample errors as a function of training set size with up to $\sim$117k distinct molecules. Molecular structures and properties at the hybrid density functional theory (DFT) level of theory used for training and testing come from the QM9 database [Ramakrishnan et al, {\em Scientific Data} {\bf 1} 140022 (2014)] and include dipole moment, polarizability, HOMO/LUMO energies and gap, electronic spatial extent, zero point vibrational energy, enthalpies and free energies of atomization, heat capacity and the highest fundamental vibrational frequency. Various representations from the literature have been studied (Coulomb matrix, bag of bonds, BAML and ECFP4, molecular graphs (MG)), as well as newly developed distribution based variants including histograms of distances (HD), and angles (HDA/MARAD), and dihedrals (HDAD). Regressors include linear models (Bayesian ridge regression (BR) and linear regression with elastic net regularization (EN)), random forest (RF), kernel ridge regression (KRR) and two types of neural networks, graph convolutions (GC) and gated graph networks (GG). We present numerical evidence that ML model predictions deviate from DFT less than DFT deviates from experiment for all properties. Furthermore, our out-of-sample prediction errors with respect to the hybrid DFT reference are on par with, or close to, chemical accuracy. Our findings suggest that ML models could be more accurate than hybrid DFT if explicitly electron correlated quantum (or experimental) data were available.
0
1
0
0
0
0
16,595
Invariant tori for the Nosé Thermostat near the High-Temperature Limit
Let H(q,p) = p^2/2 + V(q) be a 1-degree of freedom mechanical Hamiltonian with a C^n periodic potential V where n>4. The Nosé-thermostated system associated to H is shown to have invariant tori near the infinite temperature limit. This is shown to be true for all thermostats similar to Nosé's. These results complement the result of Legoll, Luskin and Moeckel who proved the existence of such tori near the decoupling limit.
0
0
1
0
0
0
16,596
Dyadic Green's function formalism for photo-induced forces in tip-sample nanojunctions
A comprehensive theoretical analysis of photo-induced forces in an illuminated nanojunction, formed between an atomic force microscopy tip and a sample, is presented. The formalism is valid within the dipolar approximation and includes multiple scattering effects between the tip, sample and a planar substrate through a dyadic Green's function approach. This physically intuitive description allows a detailed look at the quantitative contribution of multiple scattering effects to the measured photo-induced force, effects that are typically unaccounted for in simpler analytical models. Our findings show that the presence of the planar substrate and anisotropy of the tip have a substantial effect on the magnitude and the spectral response of the photo-induced force exerted on the tip. Unlike previous models, our calculations predict photo-induced forces that are within range of experimentally measured values in photo-induced force microscopy (PiFM) experiments.
0
1
0
0
0
0
16,597
Semi-supervised Embedding in Attributed Networks with Outliers
In this paper, we propose a novel framework, called Semi-supervised Embedding in Attributed Networks with Outliers (SEANO), to learn a low-dimensional vector representation that systematically captures the topological proximity, attribute affinity and label similarity of vertices in a partially labeled attributed network (PLAN). Our method is designed to work in both transductive and inductive settings while explicitly alleviating noise effects from outliers. Experimental results on various datasets drawn from the web, text and image domains demonstrate the advantages of SEANO over state-of-the-art methods in semi-supervised classification under transductive as well as inductive settings. We also show that a subset of parameters in SEANO is interpretable as outlier score and can significantly outperform baseline methods when applied for detecting network outliers. Finally, we present the use of SEANO in a challenging real-world setting -- flood mapping of satellite images and show that it is able to outperform modern remote sensing algorithms for this task.
1
0
0
0
0
0
16,598
Kernel partial least squares for stationary data
We consider the kernel partial least squares algorithm for non-parametric regression with stationary dependent data. Probabilistic convergence rates of the kernel partial least squares estimator to the true regression function are established under a source and an effective dimensionality condition. It is shown both theoretically and in simulations that long range dependence results in slower convergence rates. A protein dynamics example shows high predictive power of kernel partial least squares.
0
0
1
1
0
0
16,599
Doubled Khovanov Homology
We define a homology theory of virtual links built out of the direct sum of the standard Khovanov complex with itself, motivating the name doubled Khovanov homology. We demonstrate that it can be used to show that some virtual links are non-classical, and that it yields a condition on a virtual knot being the connected sum of two unknots. Further, we show that doubled Khovanov homology possesses a perturbation analogous to that defined by Lee in the classical case and define a doubled Rasmussen invariant. This invariant is used to obtain various cobordism obstructions; in particular it is an obstruction to sliceness. Finally, we show that the doubled Rasmussen invariant contains the odd writhe of a virtual knot, and use this to show that knots with non-zero odd writhe are not slice.
0
0
1
0
0
0
16,600
On the multi-dimensional elephant random walk
The purpose of this paper is to investigate the asymptotic behavior of the multi-dimensional elephant random walk (MERW). It is a non-Markovian random walk which has a complete memory of its entire history. A wide range of literature is available on the one-dimensional ERW. Surprisingly, no references are available on the MERW. The goal of this paper is to fill the gap by extending the results on the one-dimensional ERW to the MERW. In the diffusive and critical regimes, we establish the almost sure convergence, the law of iterated logarithm and the quadratic strong law for the MERW. The asymptotic normality of the MERW, properly normalized, is also provided. In the superdiffusive regime, we prove the almost sure convergence as well as the mean square convergence of the MERW. All our analysis relies on asymptotic results for multi-dimensional martingales.
0
0
1
1
0
0