ID | TITLE | ABSTRACT | Computer Science | Physics | Mathematics | Statistics | Quantitative Biology | Quantitative Finance
---|---|---|---|---|---|---|---|---
16,601 | Acceleration and Averaging in Stochastic Mirror Descent Dynamics | We formulate and study a general family of (continuous-time) stochastic
dynamics for accelerated first-order minimization of smooth convex functions.
Building on an averaging formulation of accelerated mirror descent, we propose
a stochastic variant in which the gradient is contaminated by noise, and study
the resulting stochastic differential equation. We prove a bound on the rate of
change of an energy function associated with the problem, then use it to derive
estimates of convergence rates of the function values (a.s. and in
expectation), both for persistent and asymptotically vanishing noise. We discuss
the interaction between the parameters of the dynamics (learning rate and
averaging weights) and the covariation of the noise process, and show, in
particular, how the asymptotic rate of covariation affects the choice of
parameters and, ultimately, the convergence rate.
| 0 | 0 | 1 | 1 | 0 | 0 |
16,602 | Electrothermal Feedback in Kinetic Inductance Detectors | In Kinetic Inductance Detectors (KIDs) and other similar applications of
superconducting microresonators, both the large- and small-signal behaviour of
the device may be affected by electrothermal feedback. Microwave power applied
to read out the device is absorbed by and heats the superconductor
quasiparticles, changing the superconductor conductivity and hence the readout
power absorbed in a positive or negative feedback loop. In this work, we
explore numerically the implications of an extensible theoretical model of a
generic superconducting microresonator device for a typical KID, incorporating
recent work on the power flow between superconductor quasiparticles and
phonons. This model calculates the large-signal (changes in operating point)
and small-signal behaviour of a device, allowing us to determine the effect of
electrothermal feedback on device responsivity and noise characteristics under
various operating conditions. We also investigate how thermally isolating the
device from the bath, for example by designing the device on a membrane only
connected to the bulk substrate by thin legs, affects device performance. We
find that at a typical device operating point, positive electrothermal feedback
reduces the effective thermal conductance from the superconductor
quasiparticles to the bath, and so increases responsivity to signal
(pair-breaking) power, increases noise from temperature fluctuations, and
decreases the Noise Equivalent Power (NEP). Similarly, increasing the thermal
isolation of the device while keeping the quasiparticle temperature constant
decreases the NEP, but also decreases the device response bandwidth.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,603 | Comparison of Modified Kneser-Ney and Witten-Bell Smoothing Techniques in Statistical Language Model of Bahasa Indonesia | Smoothing is one technique to overcome data sparsity in statistical language
model. Although in its mathematical definition there is no explicit dependency
upon specific natural language, different natures of natural languages result
in different effects of smoothing techniques. This is true for the Russian
language, as shown by Whittaker (1998). In this paper, we compare the Modified
Kneser-Ney and Witten-Bell smoothing techniques in a statistical language
model of Bahasa Indonesia. We used training sets totalling 22M words extracted
from the Indonesian version of Wikipedia. As far as we know, this is the
largest training set used to build a statistical language model for Bahasa
Indonesia. Experiments with 3-gram, 5-gram, and 7-gram models showed that
Modified Kneser-Ney consistently outperforms Witten-Bell smoothing in terms of
perplexity. Interestingly, our experiments showed that the 5-gram model with
Modified Kneser-Ney smoothing outperforms the 7-gram model, whereas
Witten-Bell smoothing improves consistently as the n-gram order increases.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,604 | Social evolution of structural discrimination | Structural discrimination appears to be a persistent phenomenon in social
systems. We here outline the hypothesis that it can result from the
evolutionary dynamics of the social system itself. We study the evolutionary
dynamics of agents with neutral badges in a simple social game and find that
the badges are readily discriminated by the system despite not being tied to
the payoff matrix of the game. The sole property of being distinguishable leads
to the subsequent discrimination, therefore providing a model for the emergence
and freezing of social prejudice.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,605 | Decentralized Control of a Hexapod Robot Using a Wireless Time Synchronized Network | Robots and control systems rely upon precise timing of sensors and actuators
in order to operate intelligently. We present a functioning hexapod robot that
walks with a dual tripod gait; each tripod is actuated using its own local
controller running on a separate wireless node. We compare and report the
results of operating the robot using two different decentralized control
schemes. With the first scheme, each controller relies on its own local clock
to generate control signals for the tripod it controls. With the second scheme,
each controller relies on a variable that is local to itself but that is
necessarily the same across controllers as a by-product of their host nodes
being part of a time synchronized IEEE802.15.4e network. The gait
synchronization error (time difference between what both controllers believe is
the start of the gait period) grows linearly when the controllers use their
local clocks, but remains bounded to within 112 microseconds when the
controllers use their nodes' time synchronized local variable.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,606 | Domain wall motion by localized temperature gradients | Magnetic domain wall (DW) motion induced by a localized Gaussian temperature
profile is studied in a Permalloy nanostrip within the framework of the
stochastic Landau-Lifshitz-Bloch equation. The different contributions to
thermally induced DW motion, entropic torque and magnonic spin transfer torque,
are isolated and compared. The analysis of magnonic spin transfer torque
includes a description of thermally excited magnons in the sample. A third
driving force due to a thermally induced dipolar field is found and described.
Finally, thermally induced DW motion is studied under realistic conditions by
taking into account the edge roughness. The results give quantitative insights
into the different mechanisms responsible for domain wall motion in temperature
gradients and allow for comparison with experimental results.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,607 | Generation of High Dynamic Range Illumination from a Single Image for the Enhancement of Undesirably Illuminated Images | This paper presents an algorithm that enhances undesirably illuminated images
by generating and fusing multi-level illuminations from a single image. The
input image is first decomposed into illumination and reflectance components by
using an edge-preserving smoothing filter. Then the reflectance component is
scaled up to improve the image details in bright areas. The illumination
component is scaled up and down to generate several illumination images that
correspond to certain camera exposure values different from the original. The
virtual multi-exposure illuminations are blended into an enhanced illumination,
where we also propose a method to generate appropriate weight maps for the tone
fusion. Finally, an enhanced image is obtained by multiplying the equalized
illumination and enhanced reflectance. Experiments show that the proposed
algorithm produces visually pleasing output and also yields comparable
objective results to the conventional enhancement methods, while requiring
modest computational loads.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,608 | Replace or Retrieve Keywords In Documents at Scale | In this paper we introduce the FlashText algorithm for replacing keywords or
finding keywords in a given text. FlashText can search or replace keywords in
one pass over a document. The time complexity of this algorithm is not
dependent on the number of terms being searched or replaced. For a document of
size N (characters) and a dictionary of M keywords, the time complexity will be
O(N). This algorithm is much faster than regex, whose time complexity is
O(M×N). It also differs from the Aho-Corasick algorithm in that it doesn't
match substrings: FlashText is designed to match only complete words (words
with boundary characters on both sides). For an input dictionary of {Apple},
the algorithm won't match 'I like Pineapple'. The algorithm is also designed
to take the longest match first: for an input dictionary {Machine, Learning,
Machine learning} on the string 'I like Machine learning', it will only
consider the longest match, which is 'Machine learning'. We have made a Python
implementation of this algorithm available as open source on GitHub, released
under the permissive MIT License.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,609 | A Multi-Objective Reliable Location-Inventory Capacitated Disruption Facility Problem with Penalty Cost Solved with Efficient Metaheuristic Algorithms | A logistics network is expected to have its opened facilities work
continuously over a long time horizon without failure, but in real-world
problems facilities may face disruptions. This paper studies a reliable joint
inventory-location problem that optimizes facility location, customer
assignment, and inventory management decisions when facilities face failure
risks and may stop working. In our model we assume that when a facility fails,
its customers may be reassigned to other operational facilities; otherwise
they must endure high penalty costs associated with losing service. To bring
the model closer to real-world problems, it is formulated on the basis of the
p-median problem, and the facilities are considered to have limited
capacities. We define a new binary variable indicating that a customer is not
assigned to any facility. The problem is a bi-objective model: the first
objective minimizes the sum of facility construction costs and expected
inventory holding costs, while the second minimizes the maximum expected
customer costs under normal and failure scenarios. To solve this model, the
NSGA-II and MOSS algorithms are applied to find the Pareto archive solution.
Response Surface Methodology (RSM) is also applied to optimize the NSGA-II
algorithm parameters. We compare the performance of the two algorithms with
three metrics, and the results show that NSGA-II is more suitable for our model.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,610 | Formation of Galactic Prominence in Galactic Central Region | We carried out 2.5-dimensional resistive MHD simulations to study the
formation mechanism of the molecular loops observed by Fukui et al. (2006) in
the Galactic central region. Since it is hard to form molecular loops by uplifting
dense molecular gas, we study the formation mechanism of molecular gas in
rising magnetic arcades. This model is based on the in-situ formation model of
solar prominences, in which prominences are formed by cooling instability in
helical magnetic flux ropes formed by imposing converging and shearing motion
at footpoints of the magnetic arch anchored to the solar surface. We extended
this model to the Galactic center scale (a few hundred pc). Numerical results
indicate that magnetic reconnection taking place in the current sheet formed
inside the rising magnetic arcade creates dense blobs confined by the rising
helical magnetic flux ropes. Thermal instability taking place in the flux ropes
forms dense molecular filaments floating at high Galactic latitude. The mass of
the filament increases with time, and can exceed 10^5 solar mass.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,611 | Joint Probabilistic Linear Discriminant Analysis | Standard probabilistic linear discriminant analysis (PLDA) for speaker
recognition assumes that the sample's features (usually, i-vectors) are given
by a sum of three terms: a term that depends on the speaker identity, a term
that models the within-speaker variability and is assumed independent across
samples, and a final term that models any remaining variability and is also
independent across samples. In this work, we propose a generalization of this
model where the within-speaker variability is not necessarily assumed
independent across samples but dependent on another discrete variable. This
variable, which we call the channel variable as in the standard PLDA approach,
could be, for example, a discrete category for the channel characteristics, the
language spoken by the speaker, the type of speech in the sample
(conversational, monologue, read), etc. The value of this variable is assumed
to be known during training but not during testing. Scoring is performed, as in
standard PLDA, by computing a likelihood ratio between the null hypothesis that
the two sides of a trial belong to the same speaker versus the alternative
hypothesis that the two sides belong to different speakers. The two likelihoods
are computed by marginalizing over two hypotheses about the channels in both
sides of a trial: that they are the same and that they are different. This way,
we expect that the new model will be better at coping with same-channel versus
different-channel trials than standard PLDA, since knowledge about the channel
(or language, or speech style) is used during training and implicitly
considered during scoring.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,612 | Adversarial Attack on Graph Structured Data | Deep learning on graph structures has shown exciting results in various
applications. However, little attention has been paid to the robustness of
such models, in contrast to the numerous research works on image and text
adversarial attack and defense. In this paper, we focus on adversarial attacks
that fool the model by modifying the combinatorial structure of data. We first
propose a reinforcement learning based attack method that learns the
generalizable attack policy, while only requiring prediction labels from the
target classifier. Also, variants of genetic algorithms and gradient methods
are presented in the scenario where prediction confidence or gradients are
available. We use both synthetic and real-world data to show that a family of
Graph Neural Network models are vulnerable to these attacks, in both
graph-level and node-level classification tasks. We also show such attacks can
be used to diagnose the learned classifiers.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,613 | Uncertainty quantification for kinetic models in socio-economic and life sciences | Kinetic equations play a major role in modeling large systems of interacting
particles. Recently the legacy of classical kinetic theory found novel
applications in socio-economic and life sciences, where processes characterized
by large groups of agents exhibit spontaneous emergence of social structures.
Well-known examples are the formation of clusters in opinion dynamics, the
appearance of inequalities in wealth distributions, flocking and milling
behaviors in swarming models, synchronization phenomena in biological systems
and lane formation in pedestrian traffic. The construction of kinetic models
describing the above processes, however, has to face the difficulty of the lack
of fundamental principles since physical forces are replaced by empirical
social forces. These empirical forces are typically constructed with the aim to
reproduce qualitatively the observed system behaviors, like the emergence of
social structures, and are at best known in terms of statistical information of
the modeling parameters. For this reason the presence of random inputs
characterizing the parameters uncertainty should be considered as an essential
feature in the modeling process. In this survey we introduce several examples
of such kinetic models, that are mathematically described by nonlinear Vlasov
and Fokker--Planck equations, and present different numerical approaches for
uncertainty quantification which preserve the main features of the kinetic
solution.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,614 | How Much Chemistry Does a Deep Neural Network Need to Know to Make Accurate Predictions? | The meteoric rise of deep learning models in computer vision research, having
achieved human-level accuracy in image recognition tasks, is firm evidence of
the impact of representation learning of deep neural networks. In the chemistry
domain, recent advances have also led to the development of similar CNN models,
such as Chemception, which are trained to predict chemical properties using
images of molecular drawings. In this work, we investigate the effects of
systematically removing and adding localized domain-specific information to the
image channels of the training data. By augmenting the images with only three
additional pieces of basic information, and without introducing any architectural
changes, we demonstrate that an augmented Chemception (AugChemception)
outperforms the original model in the prediction of toxicity, activity, and
solvation free energy. Then, by altering the information content in the images,
and examining the resulting model's performance, we also identify two distinct
learning patterns in predicting toxicity/activity as compared to solvation free
energy. These patterns suggest that Chemception is learning about its tasks in
a manner that is consistent with established knowledge. Thus, our work
demonstrates that advanced chemical knowledge is not a pre-requisite for deep
learning models to accurately predict complex chemical properties.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,615 | Accelerated Computing in Magnetic Resonance Imaging -- Real-Time Imaging Using Non-Linear Inverse Reconstruction | Purpose: To develop generic optimization strategies for image reconstruction
using graphical processing units (GPUs) in magnetic resonance imaging (MRI) and
to exemplarily report about our experience with a highly accelerated
implementation of the non-linear inversion algorithm (NLINV) for dynamic MRI
with high frame rates. Methods: The NLINV algorithm is optimized and ported to
run on a multi-GPU single-node server. The algorithm is mapped to multiple
GPUs by decomposing the data domain along the channel dimension. Furthermore,
the algorithm is decomposed along the temporal domain by relaxing a temporal
regularization constraint, allowing the algorithm to work on multiple frames in
parallel. Finally, an autotuning method is presented that is capable of
combining different decomposition variants to achieve optimal algorithm
performance in different imaging scenarios. Results: The algorithm is
successfully ported to a multi-GPU system and allows online image
reconstruction with high frame rates. Real-time reconstruction with low latency
and frame rates up to 30 frames per second is demonstrated. Conclusion: Novel
parallel decomposition methods are presented which are applicable to many
iterative algorithms for dynamic MRI. Using these methods to parallelize the
NLINV algorithm on multiple GPUs it is possible to achieve online image
reconstruction with high frame rates.
| 1 | 1 | 0 | 0 | 0 | 0 |
16,616 | Exploiting Color Name Space for Salient Object Detection | In this paper, we investigate the contribution of color names to
salient object detection. Each input image is first converted to the color name
space, which consists of 11 probabilistic channels. By exploring the
topological structure relationship between the figure and the ground, we obtain
a saliency map through a linear combination of a set of sequential attention
maps. To overcome the limitation of only exploiting the surroundedness cue, two
global cues with respect to color names are invoked for guiding the computation
of another weighted saliency map. Finally, we integrate the two saliency maps
into a unified framework to infer the saliency result. In addition, an improved
post-processing procedure is introduced to effectively suppress the background
while uniformly highlighting the salient objects. Experimental results show that
the proposed model produces more accurate saliency maps and performs well
against 23 saliency models in terms of three evaluation metrics on three public
datasets.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,617 | Possibility to realize spin-orbit-induced correlated physics in iridium fluorides | Recent theoretical predictions of "unprecedented proximity" of the electronic
ground state of iridium fluorides to the SU(2) symmetric $j_{\mathrm{eff}}=1/2$
limit, relevant for superconductivity in iridates, motivated us to investigate
their crystal and electronic structure. To this aim, we performed
high-resolution x-ray powder diffraction, Ir L$_3$-edge resonant inelastic
x-ray scattering, and quantum chemical calculations on Rb$_2$[IrF$_6$] and
other iridium fluorides. Our results are consistent with the Mott insulating
scenario predicted by Birol and Haule [Phys. Rev. Lett. 114, 096403 (2015)],
but we observe a sizable deviation of the $j_{\mathrm{eff}}=1/2$ state from the
SU(2) symmetric limit. Interactions beyond the first coordination shell of
iridium are negligible, hence the iridium fluorides do not show any magnetic
ordering down to at least 20 K. A larger spin-orbit coupling in iridium
fluorides compared to oxides is ascribed to a reduction of the degree of
covalency, with consequences for the possibility of realizing spin-orbit-induced
strongly correlated physics in iridium fluorides.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,618 | Numerical Investigation of Unsteady Aerodynamic Effects on Thick Flatback Airfoils | The unsteady characteristics of the flow over thick flatback airfoils have
been investigated by means of CFD calculations. Sandia airfoils which have 35%
maximum thickness with three different trailing edge thicknesses were selected.
The calculations provided good results compared with available experimental
data with regard to the lift curve and the impact of trailing edge thickness.
Unsteady CFD simulations revealed that the Strouhal number is independent of
the lift coefficient before stall and increases with the trailing edge
thickness. The present work shows the dependency of the Strouhal number and
the wake development on the trailing edge thickness. A recommendation of the
Strouhal number definition is given for flatback airfoils by considering the
trailing edge separation at low angles of attack. The detailed unsteady
characteristics of thick flatback airfoils are discussed further in the present
paper.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,619 | Development of a Passive Rehabilitation Robot for the Wrist Joint through the Implementation of an Arduino UNO Microcontroller | In this research, an Arduino UNO R3 microcontroller
was used to control the movements of a functional robotic prototype developed
to perform rehabilitation exercises for the wrist joint. This device can
assist the physiatrist in rehabilitating tendinitis, synovitis, and rheumatoid
arthritis, and in pre-operative and post-operative therapy for this joint.
During the design stage of the functional prototype, the industrial design
process methodology was used from a concurrent engineering approach, through
which anthropometric studies related to the dimensions and movement angles of
the wrist joint in the Venezuelan population could be performed. From the
information collected, the design proposal was elaborated, and the different
forms, geometries, and materials of the rehabilitation device's components
were defined using CAD programs; these were later analyzed with the finite
element method, using CAE programs, to determine the stress state and safety
factors. In addition, software was developed for the acquisition, recording,
reproduction, and execution of the different movements produced during
rehabilitation therapy. Through this research, a device was designed that will
help rehabilitate the wrist joint, allowing the combination of dorsal-palmar
flexion and ulnar-radial movements to recover joint function for various
pathologies present in the Venezuelan population.
| 1 | 1 | 0 | 0 | 0 | 0 |
16,620 | PHOEG Helps Obtaining Extremal Graphs | Extremal Graph Theory aims to determine bounds for graph invariants as well
as the graphs attaining those bounds.
We are currently developing PHOEG, an ecosystem of tools designed to help
researchers in Extremal Graph Theory.
It uses a big relational database of undirected graphs and works with the
convex hull of the graphs as points in the invariants space in order to exactly
obtain the extremal graphs and optimal bounds on the invariants for some fixed
parameters. The results obtained on the restricted finite class of graphs can
later be used to infer conjectures. This database also allows us to make
queries on those graphs. Once the conjecture is defined, PHOEG goes one step
further by helping in the process of designing a proof guided by successive
applications of transformations from any graph to an extremal graph. To this
aim, we use a second database based on a graph data model.
The paper presents ideas and techniques used in PHOEG to assist the study of
Extremal Graph Theory.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,621 | Crime Prediction by Data-Driven Green's Function method | We develop an algorithm that forecasts cascading events, by employing a
Green's function scheme on the basis of the self-exciting point process model.
This method is applied to open data on 10 types of crimes that occurred in
Chicago. It shows prediction accuracy superior or comparable to that of the
standard methods, namely the expectation-maximization method and the
prospective hotspot maps method. We find a cascade influence of the crimes that has a long-time,
logarithmic tail; this result is consistent with an earlier study on
burglaries. This long-tail feature cannot be reproduced by the other standard
methods. In addition, a merit of the Green's function method is its low
computational cost in the case of a high density of events and/or a large
amount of training data.
| 1 | 1 | 0 | 1 | 0 | 0 |
16,622 | DZ Cha: a bona fide photoevaporating disc | DZ Cha is a weak-lined T Tauri star (WTTS) surrounded by a bright
protoplanetary disc with evidence of inner disc clearing. Its narrow $\Ha$ line
and infrared spectral energy distribution suggest that DZ Cha may be a
photoevaporating disc. We aim to analyse the DZ Cha star + disc system to
identify the mechanism driving the evolution of this object. We have analysed
three epochs of high resolution optical spectroscopy, photometry from the UV up
to the sub-mm regime, infrared spectroscopy, and J-band imaging polarimetry
observations of DZ Cha. Combining our analysis with previous studies we find no
signatures of accretion in the $\Ha$ line profile in nine epochs covering a
time baseline of $\sim20$ years. The optical spectra are dominated by
chromospheric emission lines, but they also show emission from the forbidden
lines [SII] 4068 and [OI] 6300$\,\AA$ that indicate a disc outflow. The
polarized images reveal a dust depleted cavity of $\sim7$ au in radius and two
spiral-like features, and we derive a disc dust mass limit of
$M_\mathrm{dust}<3\MEarth$ from the sub-mm photometry. No stellar ($M_\star >
80 \MJup$) companions are detected down to $0\farcs07$ ($\sim 8$ au,
projected). The negligible accretion rate, small cavity, and forbidden line
emission strongly suggest that DZ Cha is currently at the initial stages of
disc clearing by photoevaporation. At this point the inner disc has drained and
the inner wall of the truncated outer disc is directly exposed to the stellar
radiation. We argue that other mechanisms like planet formation or binarity
cannot explain the observed properties of DZ Cha. The scarcity of objects like
this one is in line with the dispersal timescale ($\lesssim 10^5$ yr) predicted
by this theory. DZ Cha is therefore an ideal target to study the initial stages
of photoevaporation.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,623 | A Bayesian Filtering Algorithm for Gaussian Mixture Models | A Bayesian filtering algorithm is developed for a class of state-space
systems that can be modelled via Gaussian mixtures. In general, the exact
solution to this filtering problem involves an exponential growth in the number
of mixture terms and this is handled here by utilising a Gaussian mixture
reduction step after both the time and measurement updates. In addition, a
square-root implementation of the unified algorithm is presented and this
algorithm is profiled on several simulated systems. This includes the state
estimation for two non-linear systems that are strictly outside the class
considered in this paper.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,624 | Efficient Use of Limited-Memory Accelerators for Linear Learning on Heterogeneous Systems | We propose a generic algorithmic building block to accelerate training of
machine learning models on heterogeneous compute systems. Our scheme allows us to
efficiently employ compute accelerators such as GPUs and FPGAs for the training
of large-scale machine learning models, when the training data exceeds their
memory capacity. Also, it provides adaptivity to any system's memory hierarchy
in terms of size and processing speed. Our technique is built upon novel
theoretical insights regarding primal-dual coordinate methods, and uses duality
gap information to dynamically decide which part of the data should be made
available for fast processing. To illustrate the power of our approach we
demonstrate its performance for training of generalized linear models on a
large-scale dataset exceeding the memory size of a modern GPU, showing an
order-of-magnitude speedup over existing approaches.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,625 | Fast Autonomous Flight in Warehouses for Inventory Applications | The past years have shown a remarkable growth in use-cases for micro aerial
vehicles (MAVs). Conceivable indoor applications require highly robust
environment perception, fast reaction to changing situations, and stable
navigation, but reliable sources of absolute positioning like GNSS or compass
measurements are unavailable during indoor flights. We present a
high-performance autonomous inventory MAV for operation inside warehouses. The
MAV navigates along warehouse aisles and detects the stock placed in the
shelves along its path with a multimodal sensor setup containing an RFID
reader and two high-resolution cameras. We describe in detail the SLAM pipeline
based on a 3D lidar, the setup for stock recognition, the mission planning and
trajectory generation, as well as a low-level routine for avoidance of
dynamic or previously unobserved obstacles. Experiments were performed in an
operative warehouse of a logistics provider, in which an external warehouse
management system provided the MAV with high-level inspection missions that are
executed fully autonomously.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,626 | Cosmological solutions in generalized hybrid metric-Palatini gravity | We construct exact solutions representing a
Friedmann-Lemaître-Robertson-Walker (FLRW) universe in a generalized hybrid
metric-Palatini theory. By writing the gravitational action in a scalar-tensor
representation, the new solutions are obtained by either making an ansatz on
the scale factor or on the effective potential. Among other relevant results,
we show that it is possible to obtain exponentially expanding solutions for
flat universes even when the cosmology is not purely vacuum. We then derive the
classes of actions for the original theory which generate these solutions.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,627 | Ultimate Boundedness for Switched Systems with Multiple Equilibria Under Disturbances | In this paper, we investigate the robustness to external disturbances of
switched discrete and continuous systems with multiple equilibria. It is shown
that if each subsystem of the switched system is Input-to-State Stable (ISS),
then under switching signals that satisfy an average dwell-time bound, the
solutions are ultimately bounded within a compact set. Furthermore, the size of
this set varies monotonically with the supremum norm of the disturbance signal.
It is observed that when the subsystems share a common equilibrium, ISS is
recovered for solutions of the corresponding switched system; hence, the
results in this paper are a natural generalization of classical results in
switched systems that exhibit a common equilibrium. Additionally, we provide a
method to analytically compute the average dwell time if each subsystem
possesses a quadratic ISS-Lyapunov function. Our motivation for studying this
class of switched systems arises from certain motion planning problems in
robotics, where primitive motions, each corresponding to an equilibrium point
of a dynamical system, must be composed to realize a task. However, the results
are relevant to a much broader class of applications, in which composition of
different modes of behavior is required.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,628 | The two-to-infinity norm and singular subspace geometry with applications to high-dimensional statistics | The singular value matrix decomposition plays a ubiquitous role throughout
statistics and related fields. Myriad applications including clustering,
classification, and dimensionality reduction involve studying and exploiting
the geometric structure of singular values and singular vectors.
This paper provides a novel collection of technical and theoretical tools for
studying the geometry of singular subspaces using the two-to-infinity norm.
Motivated by preliminary deterministic Procrustes analysis, we consider a
general matrix perturbation setting in which we derive a new Procrustean matrix
decomposition. Together with flexible machinery developed for the
two-to-infinity norm, this allows us to conduct a refined analysis of the
induced perturbation geometry with respect to the underlying singular vectors
even in the presence of singular value multiplicity. Our analysis yields
singular vector entrywise perturbation bounds for a range of popular matrix
noise models, each of which has a meaningful associated statistical inference
task. In addition, we demonstrate how the two-to-infinity norm is the preferred
norm in certain statistical settings. Specific applications discussed in this
paper include covariance estimation, singular subspace recovery, and multiple
graph inference.
Both our Procrustean matrix decomposition and the technical machinery
developed for the two-to-infinity norm may be of independent interest.
| 0 | 0 | 1 | 1 | 0 | 0 |
16,629 | Variational Bayesian Complex Network Reconstruction | Complex network reconstruction is a hot topic in many fields. A popular
data-driven reconstruction framework is based on lasso. However, it is found
that, in the presence of noise, lasso may be inefficient at determining
the network topology. This paper builds a new framework to cope with this
problem. The key idea is to employ a series of linear regression problems to
model the relationship between network nodes, and then to use an efficient
variational Bayesian method to infer the unknown coefficients. Based on the
obtained information, the network is finally reconstructed by determining
whether two nodes connect with each other or not. The numerical experiments
conducted with both synthetic and real data demonstrate that the new method
outperforms lasso with regard to both reconstruction accuracy and running
speed.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,630 | Renormalization of the two-dimensional stochastic nonlinear wave equations | We study the two-dimensional stochastic nonlinear wave equations (SNLW) with
an additive space-time white noise forcing. In particular, we introduce a
time-dependent renormalization and prove that SNLW is pathwise locally
well-posed. As an application of the local well-posedness argument, we also
establish a weak universality result for the renormalized SNLW.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,631 | Generation of High-Purity Millimeter-Wave Orbital Angular Momentum Modes Using Horn Antenna: Theory and Implementation | Twisted electromagnetic waves, whose helical phase front characterizes
their orbital angular momentum (OAM), have recently been explored for quantum
information, high-speed communication, and radar detection. In this context,
generating high-purity waves carrying OAM is of great significance and
challenge, from the low-frequency band to the optical regime. Here, a novel
mode-combination strategy is proposed to generate twisted waves with an
arbitrary OAM index. A higher-order mode of a circular horn antenna is used to
generate twisted waves with very high purity. The proposed strategy is
generate the twisted waves with quite high purity. The proposed strategy is
verified with theoretical analysis, numerical simulation and experiments. A
circular horn antenna operating at millimeter wave band is designed,
fabricated, and measured. Two twisted waves with OAM index of l=+1 and l=-1
with a mode purity as high as 87% are obtained. Compared with other OAM
antennas, the antenna proposed here offers a high antenna gain (over 12 dBi) and
wide operating bandwidth (over 15%). The high mode purity, high antenna gain
and wide operating band make the antenna suitable for the twisted-wave
applications, not only in the microwave and millimeter wave band, but also in
the terahertz band.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,632 | CFT: A Cluster-based File Transfer Scheme for Highway | Effective file transfer between vehicles is fundamental to many emerging
vehicular infotainment applications in highway Vehicular Ad Hoc Networks
(VANETs), such as content distribution and social networking. However, due to
fast mobility, the connection between vehicles tends to be short-lived and
lossy, which makes intact file transfer extremely challenging. To tackle this
problem, we present a novel Cluster-based File Transfer (CFT) scheme for
highway VANETs in this paper. With CFT, when a vehicle requests a file, the
transmission capacity between the resource vehicle and the destination vehicle
is evaluated. If the requested file can be successfully transferred over the
direct Vehicular-to-Vehicular (V2V) connection, the file transfer will be
completed by the resource and the destination themselves. Otherwise, a cluster
will be formed to help the file transfer. As a fully-distributed scheme that
relies on the collaboration of cluster members, CFT does not require any
assistance from roadside units or access points. Our experimental results
indicate that CFT outperforms the existing file transfer schemes for highway
VANETs.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,633 | Empirical priors and posterior concentration rates for a monotone density | In a Bayesian context, prior specification for inference on monotone
densities is conceptually straightforward, but proving posterior convergence
theorems is complicated by the fact that desirable prior concentration
properties often are not satisfied. In this paper, I first develop a new prior
designed specifically to satisfy an empirical version of the prior
concentration property, and then I give sufficient conditions on the prior
inputs such that the corresponding empirical Bayes posterior concentrates
around the true monotone density at nearly the optimal minimax rate. Numerical
illustrations also reveal the practical benefits of the proposed empirical
Bayes approach compared to Dirichlet process mixtures.
| 0 | 0 | 1 | 1 | 0 | 0 |
16,634 | PeerReview4All: Fair and Accurate Reviewer Assignment in Peer Review | We consider the problem of automated assignment of papers to reviewers in
conference peer review, with a focus on fairness and statistical accuracy. Our
fairness objective is to maximize the review quality of the most disadvantaged
paper, in contrast to the commonly used objective of maximizing the total
quality over all papers. We design an assignment algorithm based on an
incremental max-flow procedure that we prove is near-optimally fair. Our
statistical accuracy objective is to ensure correct recovery of the papers that
should be accepted. We provide a sharp minimax analysis of the accuracy of the
peer-review process for a popular objective-score model as well as for a novel
subjective-score model that we propose in the paper. Our analysis proves that
our proposed assignment algorithm also leads to a near-optimal statistical
accuracy. Finally, we design a novel experiment that allows for an objective
comparison of various assignment algorithms, and overcomes the inherent
difficulty posed by the absence of a ground truth in experiments on
peer-review. The results of this experiment corroborate the theoretical
guarantees of our algorithm.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,635 | CT Image Reconstruction in a Low Dimensional Manifold | Regularization methods are commonly used in X-ray CT image reconstruction.
Different regularization methods encode different kinds of
prior knowledge about images. In a recent work, a new regularization method called
a low-dimensional manifold model (LDMM) is investigated to characterize the
low-dimensional patch manifold structure of natural images, where the manifold
dimensionality characterizes structural information of an image. In this paper,
we propose a CT image reconstruction method based on the prior knowledge of the
low-dimensional manifold of CT images. Using clinical raw projection data
from a GE clinic, we conduct comparisons for CT image reconstruction among
the proposed method, the simultaneous algebraic reconstruction technique (SART)
with the total variation (TV) regularization, and the filtered back projection
(FBP) method. Results show that the proposed method can successfully recover
structural details of an imaging object, and achieve higher spatial and
contrast resolution of the reconstructed image than its FBP and SART-TV
counterparts.
| 1 | 1 | 0 | 0 | 0 | 0 |
16,636 | Deep Unsupervised Clustering Using Mixture of Autoencoders | Unsupervised clustering is one of the most fundamental challenges in machine
learning. A popular hypothesis is that data are generated from a union of
low-dimensional nonlinear manifolds; thus an approach to clustering is
identifying and separating these manifolds. In this paper, we present a novel
approach to solve this problem by using a mixture of autoencoders. Our model
consists of two parts: 1) a collection of autoencoders where each autoencoder
learns the underlying manifold of a group of similar objects, and 2) a mixture
assignment neural network, which takes the concatenated latent vectors from the
autoencoders as input and infers the distribution over clusters. By jointly
optimizing the two parts, we simultaneously assign data to clusters and learn
the underlying manifolds of each cluster.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,637 | Enabling a Pepper Robot to provide Automated and Interactive Tours of a Robotics Laboratory | The Pepper robot has become a widely recognised face for the perceived
potential of social robots to enter our homes and businesses. However, to date,
commercial and research applications of the Pepper have been largely restricted
to roles in which the robot is able to remain stationary. This restriction is
the result of a number of technical limitations, including limited sensing
capabilities, which have, as a result, reduced the number of roles in which use of
the robot can be explored. In this paper, we present our approach to solving
these problems, with the intention of opening up new research applications for
the robot. To demonstrate the applicability of our approach, we have framed
this work within the context of providing interactive tours of an open-plan
robotics laboratory.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,638 | The Ensemble Kalman Filter: A Signal Processing Perspective | The ensemble Kalman filter (EnKF) is a Monte Carlo based implementation of
the Kalman filter (KF) for extremely high-dimensional, possibly nonlinear and
non-Gaussian state estimation problems. Its ability to handle state dimensions
in the order of millions has made the EnKF a popular algorithm in different
geoscientific disciplines. Despite a similarly vital need for scalable
algorithms in signal processing, e.g., to make sense of the ever-increasing
amount of sensor data, the EnKF is hardly discussed in our field.
This self-contained review paper is aimed at signal processing researchers
and provides all the knowledge to get started with the EnKF. The algorithm is
derived in a KF framework, without the often encountered geoscientific
terminology. Algorithmic challenges and required extensions of the EnKF are
provided, as well as relations to sigma-point KF and particle filters. The
relevant EnKF literature is summarized in an extensive survey and unique
simulation examples, including popular benchmark problems, complement the
theory with practical insights. The signal processing perspective highlights
new directions of research and facilitates the exchange of potentially
beneficial ideas, both for the EnKF and high-dimensional nonlinear and
non-Gaussian filtering in general.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,639 | Simultaneous Transmit and Receive Operation in Next Generation IEEE 802.11 WLANs: A MAC Protocol Design Approach | Full-duplex (FD) technology is likely to be adopted in various legacy
communications standards. The IEEE 802.11ax working group has been considering
a simultaneous transmit and receive (STR) mode for the next generation wireless
local area networks (WLANs). Enabling STR mode (FD communication mode) in
802.11 networks creates bi-directional FD (BFD) and uni-directional FD (UFD)
links. The key challenge is to integrate STR mode with minimal protocol
modifications, while considering the co-existence of FD and legacy half-duplex
(HD) stations (STAs) and backwards compatibility. This paper proposes a simple
and practical approach to enable STR mode in 802.11 networks with co-existing
FD and HD STAs. The protocol explicitly accounts for the peculiarities of FD
environments and backwards compatibility. Key aspects of the proposed solution
include FD capability discovery, handshake mechanism for channel access, node
selection for UFD transmission, adaptive acknowledgement (ACK) timeout for STAs
engaged in BFD or UFD transmission, and mitigation of contention unfairness.
Performance evaluation demonstrates the effectiveness of the proposed solution
in realizing the gains of FD technology for next generation WLANs.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,640 | Transfer Learning-Based Crack Detection by Autonomous UAVs | Unmanned Aerial Vehicles (UAVs) have recently shown great performance
collecting visual data through autonomous exploration and mapping in building
inspection. Yet, few studies have considered the post-processing of such data
and its integration with autonomous UAVs, which would enable major steps
toward the full automation of building inspection.
regard, this work presents a decision making tool for revisiting tasks in
visual building inspection by autonomous UAVs. The tool is an implementation of
fine-tuning a pretrained Convolutional Neural Network (CNN) for surface crack
detection. It offers an optional mechanism for task planning of revisiting
pinpoint locations during inspection. It is integrated to a quadrotor UAV
system that can autonomously navigate in GPS-denied environments. The UAV is
equipped with onboard sensors and computers for autonomous localization,
mapping and motion planning. The integrated system is tested through
simulations and real-world experiments. The results show that the system
achieves crack detection and autonomous navigation in GPS-denied environments
for building inspection.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,641 | A Separation Between Run-Length SLPs and LZ77 | In this paper we give an infinite family of strings for which the length of
the Lempel-Ziv'77 parse is a factor $\Omega(\log n/\log\log n)$ smaller than
the smallest run-length grammar.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,642 | A Model that Predicts the Material Recognition Performance of Thermal Tactile Sensing | Tactile sensing can enable a robot to infer properties of its surroundings,
such as the material of an object. Heat transfer based sensing can be used for
material recognition due to differences in the thermal properties of materials.
While data-driven methods have shown promise for this recognition problem, many
factors can influence performance, including sensor noise, the initial
temperatures of the sensor and the object, the thermal effusivities of the
materials, and the duration of contact. We present a physics-based mathematical
model that predicts material recognition performance given these factors. Our
model uses semi-infinite solids and a statistical method to calculate an F1
score for the binary material recognition. We evaluated our method using
simulated contact with 69 materials and data collected by a real robot with 12
materials. Our model predicted the material recognition performance of support
vector machine (SVM) with 96% accuracy for the simulated data, with 92%
accuracy for real-world data with constant initial sensor temperatures, and
with 91% accuracy for real-world data with varied initial sensor temperatures.
Using our model, we also provide insight into the roles of various factors on
recognition performance, such as the temperature difference between the sensor
and the object. Overall, our results suggest that our model could be used to
help design better thermal sensors for robots and enable robots to use them
more effectively.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,643 | The Value of Inferring the Internal State of Traffic Participants for Autonomous Freeway Driving | Safe interaction with human drivers is one of the primary challenges for
autonomous vehicles. In order to plan driving maneuvers effectively, the
vehicle's control system must infer and predict how humans will behave based on
their latent internal state (e.g., intentions and aggressiveness). This
research uses a simple model for human behavior with unknown parameters that
make up the internal states of the traffic participants and presents a method
for quantifying the value of estimating these states and planning with their
uncertainty explicitly modeled. An upper performance bound is established by an
omniscient Monte Carlo Tree Search (MCTS) planner that has perfect knowledge of
the internal states. A baseline lower bound is established by planning with
MCTS assuming that all drivers have the same internal state. MCTS variants are
then used to solve a partially observable Markov decision process (POMDP) that
models the internal state uncertainty to determine whether inferring the
internal state offers an advantage over the baseline. Applying this method to a
freeway lane changing scenario reveals that there is a significant performance
gap between the upper bound and baseline. POMDP planning techniques come close
to closing this gap, especially when important hidden model parameters are
correlated with measurable parameters.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,644 | Long range scattering for nonlinear Schrödinger equations with critical homogeneous nonlinearity in three space dimensions | In this paper, we consider the final state problem for the nonlinear
Schrödinger equation with a homogeneous nonlinearity of the critical order
which is not necessarily a polynomial. In [10], the first and the second
authors considered the one- and two-dimensional cases and gave a sufficient
condition on the nonlinearity for the corresponding equation to admit a solution that
behaves like a free solution with or without a logarithmic phase correction.
The present paper is devoted to the study of the three-dimensional case, in
which it is required that a solution converges to a given asymptotic profile in
a faster rate than in the lower dimensional cases. To obtain the necessary
convergence rate, we employ the end-point Strichartz estimate and modify a
time-dependent regularizing operator, introduced in [10]. Moreover, we present
a candidate of the second asymptotic profile to the solution.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,645 | SegMap: 3D Segment Mapping using Data-Driven Descriptors | When performing localization and mapping, working at the level of structure
can be advantageous in terms of robustness to environmental changes and
differences in illumination. This paper presents SegMap: a map representation
solution to the localization and mapping problem based on the extraction of
segments in 3D point clouds. In addition to facilitating the computationally
intensive task of processing 3D point clouds, working at the level of segments
addresses the data compression requirements of real-time single- and
multi-robot systems. While current methods extract descriptors for the single
task of localization, SegMap leverages a data-driven descriptor in order to
extract meaningful features that can also be used for reconstructing a dense 3D
map of the environment and for extracting semantic information. This is
particularly interesting for navigation tasks and for providing visual feedback
to end-users such as robot operators, for example in search and rescue
scenarios. These capabilities are demonstrated in multiple urban driving and
search and rescue experiments. Our method leads to an increase in area under
the ROC curve of 28.3% over the current state of the art, which uses
eigenvalue-based features. We also obtain reconstruction capabilities very
similar to those of a model
specifically trained for this task. The SegMap implementation will be made
available open-source along with easy to run demonstrations at
www.github.com/ethz-asl/segmap. A video demonstration is available at
this https URL.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,646 | The Hesse curve of a Lefschetz pencil of plane curves | We prove that for a generic Lefschetz pencil of plane curves of degree $d\geq
3$ there exists a curve $H$ (called the Hesse curve of the pencil) of degree
$6(d-1)$ and genus $3(4d^2-13d+8)+1$, and such that: $(i)$ $H$ has $d^2$
singular points of multiplicity three at the base points of the pencil and
$3(d-1)^2$ ordinary nodes at the singular points of the degenerate members of
the pencil; $(ii)$ for each member of the pencil the intersection of $H$ with
this fibre consists of the inflection points of this member and the base points
of the pencil.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,647 | Markov State Models from short non-Equilibrium Simulations - Analysis and Correction of Estimation Bias | Many state of the art methods for the thermodynamic and kinetic
characterization of large and complex biomolecular systems by simulation rely
on ensemble approaches, where data from large numbers of relatively short
trajectories are integrated. In this context, Markov state models (MSMs) are
extremely popular because they can be used to compute stationary quantities and
long-time kinetics from ensembles of short simulations, provided that these
short simulations are in "local equilibrium" within the MSM states. However, in
the more than 15 years since the inception of MSMs, it has remained an open
and contested question how deviations from local equilibrium can
be detected, whether these deviations induce a practical bias in MSM
estimation, and how to correct for them. In this paper, we address these
issues: We systematically analyze the estimation of Markov state models (MSMs)
from short non-equilibrium simulations, and we provide an expression for the
error between unbiased transition probabilities and the expected estimate from
many short simulations. We show that the unbiased MSM estimate can be obtained
even from relatively short non-equilibrium simulations in the limit of long lag
times and good discretization. Further, we exploit observable operator model
(OOM) theory to derive an unbiased estimator for the MSM transition matrix that
corrects for the effect of starting out of equilibrium, even when short lag
times are used. Finally, we show how the OOM framework can be used to estimate
the exact eigenvalues or relaxation timescales of the system without estimating
an MSM transition matrix, which allows us to practically assess the
discretization quality of the MSM. Applications to model systems and molecular
dynamics simulation data of alanine dipeptide are included for illustration.
The improved MSM estimator is implemented in PyEMMA as of version 2.3.
| 0 | 1 | 1 | 0 | 0 | 0 |
16,648 | Principal series for general linear groups over finite commutative rings | We construct, for any finite commutative ring $R$, a family of
representations of the general linear group $\mathrm{GL}_n(R)$ whose
intertwining properties mirror those of the principal series for
$\mathrm{GL}_n$ over a finite field.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,649 | Constraining Radon Backgrounds in LZ | The LZ dark matter detector, like many other rare-event searches, will suffer
from backgrounds due to the radioactive decay of radon daughters. In order to
achieve its science goals, the concentration of radon within the xenon should
not exceed $2\mu$Bq/kg, or 20 mBq total within its 10 tonnes. The LZ
collaboration is in the midst of a program to screen all significant components
in contact with the xenon. The four institutions involved in this effort have
begun sharing two cross-calibration sources to ensure consistent measurement
results across multiple distinct devices. We present here five preliminary
screening results, some mitigation strategies that will reduce the amount of
radon produced by the most problematic components, and a summary of the current
estimate of radon emanation throughout the detector. This best estimate totals
$<17.3$ mBq, sufficiently low to meet the detector's science goals.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,650 | Single-particle dispersion in stably stratified turbulence | We present models for single-particle dispersion in vertical and horizontal
directions of stably stratified flows. The model in the vertical direction is
based on the observed Lagrangian spectrum of the vertical velocity, while the
model in the horizontal direction is a combination of a continuous-time
eddy-constrained random walk process with a contribution to transport from
horizontal winds. Transport at times larger than the Lagrangian turnover time
is not universal and dependent on these winds. The models yield results in good
agreement with direct numerical simulations of stratified turbulence, for which
single-particle dispersion differs from the well-studied case of homogeneous
and isotropic turbulence.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,651 | Pili mediated intercellular forces shape heterogeneous bacterial microcolonies prior to multicellular differentiation | Microcolonies are aggregates of a few dozen to a few thousand cells exhibited
by many bacteria. The formation of microcolonies is a crucial step towards the
formation of more mature bacterial communities known as biofilms, but also
marks a significant change in bacterial physiology. Within a microcolony,
bacteria forgo a single cell lifestyle for a communal lifestyle hallmarked by
high cell density and physical interactions between cells potentially altering
their behaviour. It is thus crucial to understand how initially identical
single cells start to behave differently while assembling in these tight
communities. Here we show that cells in the microcolonies formed by the human
pathogen Neisseria gonorrhoeae (Ng) present differential motility behaviors
within an hour upon colony formation. Observation of merging microcolonies and
tracking of single cells within microcolonies reveal a heterogeneous motility
behavior: cells close to the surface of the microcolony exhibit a much higher
motility compared to cells towards the center. Numerical simulations of a
biophysical model for the microcolonies at the single cell level suggest that
the emergence of differential behavior within a multicellular microcolony of
otherwise identical cells is of mechanical origin. It could suggest a route
toward further bacterial differentiation and ultimately mature biofilms.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,652 | Accurate Inference for Adaptive Linear Models | Estimators computed from adaptively collected data do not behave like their
non-adaptive brethren. Rather, the sequential dependence of the collection
policy can lead to severe distributional biases that persist even in the
infinite data limit. We develop a general method -- $\mathbf{W}$-decorrelation
-- for transforming the bias of adaptive linear regression estimators into
variance. The method uses only coarse-grained information about the data
collection policy and does not need access to propensity scores or exact
knowledge of the policy. We bound the finite-sample bias and variance of the
$\mathbf{W}$-estimator and develop asymptotically correct confidence intervals
based on a novel martingale central limit theorem. We then demonstrate the
empirical benefits of the generic $\mathbf{W}$-decorrelation procedure in two
different adaptive data settings: the multi-armed bandit and the autoregressive
time series.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,653 | Distribution on Warp Maps for Alignment of Open and Closed Curves | Alignment of curve data is an integral part of their statistical analysis,
and can be achieved using model- or optimization-based approaches. The
parameter space is usually the set of monotone, continuous warp maps of a
domain. Infinite-dimensional nature of the parameter space encourages sampling
based approaches, which require a distribution on the set of warp maps.
Moreover, the distribution should also enable sampling in the presence of
important landmark information on the curves which constrain the warp maps. For
alignment of closed and open curves in $\mathbb{R}^d, d=1,2,3$, possibly with
landmark information, we provide a constructive, point-process based definition
of a distribution on the set of warp maps of $[0,1]$ and the unit circle
$\mathbb{S}^1$ that is (1) simple to sample from, and (2) possesses the
desiderata for decomposition of the alignment problem with landmark constraints
into multiple unconstrained ones. For warp maps on $[0,1]$, the distribution is
related to the Dirichlet process. We demonstrate its utility by using it as a
prior distribution on warp maps in a Bayesian model for alignment of two
univariate curves, and as a proposal distribution in a stochastic algorithm
that optimizes a suitable alignment functional for higher-dimensional curves.
Several examples from simulated and real datasets are provided.
| 0 | 0 | 1 | 1 | 0 | 0 |
16,654 | Unbiased and Consistent Nested Sampling via Sequential Monte Carlo | We introduce a new class of sequential Monte Carlo methods called Nested
Sampling via Sequential Monte Carlo (NS-SMC), which reframes the Nested
Sampling method of Skilling (2006) in terms of sequential Monte Carlo
techniques. This new framework allows convergence results to be obtained in the
setting when Markov chain Monte Carlo (MCMC) is used to produce new samples. An
additional benefit is that marginal likelihood estimates are unbiased. In
contrast to NS, the analysis of NS-SMC does not require the (unrealistic)
assumption that the simulated samples be independent. As the original NS
algorithm is a special case of NS-SMC, this provides insights as to why NS
seems to produce accurate estimates despite a typical violation of its
assumptions. For applications of NS-SMC, we give advice on tuning MCMC kernels
in an automated manner via a preliminary pilot run, and present a new method
for appropriately choosing the number of MCMC repeats at each iteration.
Finally, a numerical study is conducted where the performance of NS-SMC and
temperature-annealed SMC is compared on several challenging and realistic
problems. MATLAB code for our experiments is made available at
this https URL .
| 0 | 0 | 0 | 1 | 0 | 0 |
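A minimal sketch of the classical Nested Sampling baseline (Skilling 2006) that the abstract above reframes, on a toy one-dimensional problem. The uniform prior on [0, 1], the Gaussian-bump likelihood, and all parameter values are illustrative assumptions, not taken from the paper; the rejection-sampling step produces exactly independent constrained samples, i.e. the idealized setting that MCMC-based NS only approximates:

```python
import math
import random

def nested_sampling(loglike, n_live=100, iters=600, seed=0):
    """Textbook Nested Sampling with a uniform prior on [0, 1].

    New constrained samples are drawn by rejection from the prior, so
    they are truly independent -- the assumption that real MCMC-based
    NS violates, as discussed in the abstract above."""
    rng = random.Random(seed)
    live = [rng.random() for _ in range(n_live)]
    live_l = [loglike(x) for x in live]
    z, x_prev = 0.0, 1.0
    for i in range(1, iters + 1):
        worst = min(range(n_live), key=lambda j: live_l[j])
        l_star = live_l[worst]
        x_i = math.exp(-i / n_live)   # deterministic prior-volume shrinkage
        z += math.exp(l_star) * (x_prev - x_i)
        x_prev = x_i
        # Replace the worst live point with a prior draw above the threshold.
        while True:
            cand = rng.random()
            cl = loglike(cand)
            if cl > l_star:
                live[worst], live_l[worst] = cand, cl
                break
    # The remaining live points fill in the final prior volume.
    z += x_prev * sum(math.exp(l) for l in live_l) / n_live
    return z

# Toy problem: Gaussian bump likelihood; the true evidence is about 0.2507.
sigma = 0.1
evidence = nested_sampling(lambda x: -((x - 0.5) ** 2) / (2 * sigma ** 2))
```

The marginal-likelihood estimate lands near the analytic value 0.2507; the stochastic error comes from the random sequence of likelihood thresholds, while the shrinkage factors are taken at their expected values.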
16,655 | Cuntz semigroups of compact-type Hopf C*-algebras | The classical Cuntz semigroup has an important role in the study of
C*-algebras, being one of the main invariants used to classify recalcitrant
C*-algebras up to isomorphism. We consider C*-algebras that have Hopf algebra
structure, and find additional structure in their Cuntz semigroups, thus
generalizing the equivariant Cuntz semigroup. We develop various aspects of the
theory of such semigroups, and in particular, we give general results allowing
classification results of the Elliott classification program to be extended to
classification results for C*-algebraic quantum groups.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,656 | City-Scale Intelligent Systems and Platforms | As of 2014, 54% of the earth's population resides in urban areas, and it is
steadily increasing, expected to reach 66% by 2050. Urban areas range from
small cities with tens of thousands of people to megacities with greater than
10 million people. Roughly 12% of the global population today lives in 28
megacities, and at least 40 are projected by 2030. At these scales, the urban
infrastructure such as roads, buildings, and utility networks will cover areas
as large as New England. This steady urbanization and the resulting expansion
of infrastructure, combined with renewal of aging urban infrastructure,
represent tens of trillions of dollars in new urban infrastructure investment
over the coming decades. These investments must balance factors including
impact on clean air and water, energy and maintenance costs, and the
productivity and health of city dwellers. Moreover, cost-effective management
and sustainability of these growing urban areas will be one of the most
critical challenges to our society, motivating the concept of science- and
data-driven urban design, retrofit, and operation; that is, "Smart Cities".
| 1 | 0 | 0 | 0 | 0 | 0 |
16,657 | A simulated comparison between profile and areal surface parameters: $R_a$ as an estimate of $S_a$ | Direct comparison of areal and profile roughness measurement values is not
advisable due to fundamental differences in the measurement techniques. However,
researchers may wish to compare between laboratories with differing equipment,
or against literature values. This paper investigates how well the profile
arithmetic mean average roughness, $R_a$, approximates its areal equivalent
$S_a$. Simulated rough surfaces and samples from the ETOPO1 global relief model
were used. The mean of up to 20 $R_a$ profiles from the surface was compared
with surface $S_a$ for 100 repeats. Differences between $\bar{R_a}$ and $S_a$
fell as the number of $R_a$ values averaged increased. For simulated surfaces
mean % difference between $\bar{R_a}$ and $S_a$ was in the range 16.06% to
3.47% when only one $R_a$ profile was taken. By averaging 20 $R_a$ values mean
% difference fell to 6.60% to 0.81%. By not considering $R_a$ profiles parallel
to the main feature direction (identified visually), mean % difference was
further reduced. For ETOPO1 global relief surfaces mean % difference was in the
range 52.09% to 22.60% when only one $R_a$ value was used, and was 33.22% to
9.90% when 20 $R_a$ values were averaged. Where a surface feature direction
could be identified, accounting for it reduced the difference between $\bar{R_a}$
and $S_a$ by approximately 5 percentage points. The results suggest that taking the mean
of between 3 and 5 $R_a$ values will give a good estimate of $S_a$ on regular
or simple surfaces. However, for some complex real-world surfaces the discrepancy
between $\bar{R_a}$ and $S_a$ is high. Caveats, including the use of filters
for areal and profile measurements, and profile alignment are discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
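The comparison in the abstract above can be sketched in a few lines. This is a simplified illustration, not the paper's pipeline: the mean plane is taken as the grand mean, no filtering is applied, and profiles are just evenly spaced rows (the paper additionally excludes profiles parallel to a dominant feature direction):

```python
import random

def r_a(profile):
    """Profile arithmetic mean roughness R_a of a single line trace."""
    m = sum(profile) / len(profile)
    return sum(abs(h - m) for h in profile) / len(profile)

def s_a(surface):
    """Areal arithmetic mean roughness S_a over all heights (mean plane
    simplified to the grand mean; real measurements filter first)."""
    heights = [h for row in surface for h in row]
    m = sum(heights) / len(heights)
    return sum(abs(h - m) for h in heights) / len(heights)

def mean_r_a(surface, n_profiles):
    """Average R_a over n_profiles evenly spaced rows -- the estimate
    of S_a whose accuracy the paper studies."""
    step = max(1, len(surface) // n_profiles)
    rows = surface[::step][:n_profiles]
    return sum(r_a(row) for row in rows) / len(rows)

# Demo on a simulated random surface.
rng = random.Random(42)
surf = [[rng.random() for _ in range(64)] for _ in range(64)]
estimate_1 = mean_r_a(surf, 1)
estimate_20 = mean_r_a(surf, 20)
areal = s_a(surf)
```

On isotropic random surfaces like this one, averaging more profiles typically pulls the estimate toward the areal value, mirroring the trend reported in the abstract.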
16,658 | The VLA-COSMOS 3 GHz Large Project: Continuum data and source catalog release | We present the VLA-COSMOS 3 GHz Large Project based on 384 hours of
observations with the Karl G. Jansky Very Large Array (VLA) at 3 GHz (10 cm)
toward the two square degree Cosmic Evolution Survey (COSMOS) field. The final
mosaic reaches a median rms of 2.3 uJy/beam over the two square degrees at an
angular resolution of 0.75". To fully account for the spectral shape and
resolution variations across the broad (2 GHz) band, we image all data with a
multiscale, multifrequency synthesis algorithm. We present a catalog of 10,830
radio sources down to 5 sigma, out of which 67 are combined from multiple
components. Comparing the positions of our 3 GHz sources with those from the
Very Long Baseline Array (VLBA)-COSMOS survey, we estimate that the astrometry
is accurate to 0.01" at the bright end (signal-to-noise ratio, S/N_3GHz > 20).
Survival analysis on our data combined with the VLA-COSMOS 1.4 GHz Joint
Project catalog yields an expected median radio spectral index of alpha=-0.7.
We compute completeness corrections via Monte Carlo simulations to derive the
corrected 3 GHz source counts. Our counts are in agreement with previously
derived 3 GHz counts based on single-pointing (0.087 square degrees) VLA data.
In summary, the VLA-COSMOS 3 GHz Large Project simultaneously provides the
largest and deepest radio continuum survey at high (0.75") angular resolution
to date, bridging the gap between last-generation and next-generation surveys.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,659 | DeepSource: Point Source Detection using Deep Learning | Point source detection at low signal-to-noise is challenging for astronomical
surveys, particularly in radio interferometry images where the noise is
correlated. Machine learning is a promising solution, allowing the development
of algorithms tailored to specific telescope arrays and science cases. We
present DeepSource - a deep learning solution - that uses convolutional neural
networks to achieve these goals. DeepSource enhances the Signal-to-Noise Ratio
(SNR) of the original map and then uses dynamic blob detection to detect
sources. Trained and tested on two sets of 500 simulated 1 deg x 1 deg MeerKAT
images with a total of 300,000 sources, DeepSource is essentially perfect in
both purity and completeness down to SNR = 4 and outperforms PyBDSF in all
metrics. For uniformly-weighted images it achieves a Purity x Completeness (PC)
score at SNR = 3 of 0.73, compared to 0.31 for the best PyBDSF model. For
natural-weighting we find a smaller improvement of ~40% in the PC score at SNR
= 3. If instead we ask where either of the purity or completeness first drop to
90%, we find that DeepSource reaches this value at SNR = 3.6 compared to the
4.3 of PyBDSF (natural-weighting). A key advantage of DeepSource is that it can
learn to optimally trade off purity and completeness for any science case under
consideration. Our results show that deep learning is a promising approach to
point source detection in astronomical images.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,660 | Co-design of aperiodic sampled-data min-jumping rules for linear impulsive, switched impulsive and sampled-data systems | Co-design conditions for the design of a jumping-rule and a sampled-data
control law for impulsive and impulsive switched systems subject to aperiodic
sampled-data measurements are provided. Semi-infinite discrete-time
Lyapunov-Metzler conditions are first obtained. As these conditions are
difficult to check and generalize to more complex systems, an equivalent
formulation is provided in terms of clock-dependent (infinite-dimensional)
matrix inequalities. These conditions are then, in turn, approximated by a
finite-dimensional optimization problem using a sum of squares based
relaxation. It is proven that the sum of squares relaxation is nonconservative
provided that the degree of the polynomials is sufficiently large. It is
emphasized that acceptable results are obtained for low polynomial degrees in
the considered examples.
| 1 | 0 | 1 | 0 | 0 | 0 |
16,661 | Coalescence of Two Impurities in a Trapped One-dimensional Bose Gas | We study the ground state of a one-dimensional (1D) trapped Bose gas with two
mobile impurity particles. To investigate this set-up, we develop a variational
procedure in which the coordinates of the impurity particles are slow-like
variables. We validate our method using the exact results obtained for small
systems. Then, we discuss energies and pair densities for systems that contain
on the order of one hundred atoms. We show that bosonic non-interacting
impurities cluster. To explain this clustering, we calculate and discuss
induced impurity-impurity potentials in a harmonic trap. Further, we compute
the force between static impurities in a ring (à la the Casimir
force), and contrast the two effective potentials: the one obtained from the
mean-field approximation, and the one due to the one-phonon exchange. Our
formalism and findings are important for understanding (beyond the polaron
model) the physics of modern 1D cold-atom systems with more than one impurity.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,662 | Rust Distilled: An Expressive Tower of Languages | Rust represents a major advancement in production programming languages
because of its success in bridging the gap between high-level application
programming and low-level systems programming. At the heart of its design lies
a novel approach to ownership that remains highly programmable.
In this talk, we will describe our ongoing work on designing a formal
semantics for Rust that captures ownership and borrowing without the details of
lifetime analysis. This semantics models a high-level understanding of
ownership and as a result is close to source-level Rust (but with full type
annotations) which differs from the recent RustBelt effort that essentially
models MIR, a CPS-style IR used in the Rust compiler. Further, while RustBelt
aims to verify the safety of unsafe code in Rust's standard library, we model
standard library APIs as primitives, which is sufficient to reason about their
behavior. This yields a simpler model of Rust and its type system that we think
researchers will find easier to use as a starting point for investigating Rust
extensions. Unlike RustBelt, we aim to prove type soundness using progress and
preservation instead of a Kripke logical relation. Finally, our semantics is a
family of languages of increasing expressive power, where subsequent levels
have features that are impossible to define in previous levels. Following
Felleisen, expressive power is defined in terms of observational equivalence.
Separating the language into different levels of expressive power should
provide a framework for future work on Rust verification and compiler
optimization.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,663 | Game Theory for Multi-Access Edge Computing: Survey, Use Cases, and Future Trends | Game Theory (GT) has been used with significant success to formulate, and
either design or optimize, the operation of many representative communications
and networking scenarios. The games in these scenarios involve, as usual,
diverse players with conflicting goals. This paper primarily surveys the
literature that has applied theoretical games to wireless networks, emphasizing
use cases of upcoming Multi-Access Edge Computing (MEC). MEC is relatively new
and offers cloud services at the network periphery, aiming to reduce service
latency backhaul load, and enhance relevant operational aspects such as Quality
of Experience or security. Our presentation of GT is focused on the major
challenges imposed by MEC services over the wireless resources. The survey is
divided into classical and evolutionary games. Then, our discussion proceeds to
more specific aspects which have a considerable impact on the game usefulness,
namely: rational vs. evolving strategies, cooperation among players, available
game information, the way the game is played (single turn, repeated), the game
model evaluation, and how the model results can be applied for both optimizing
constrained network resources and balancing diverse trade-offs in real edge
networking scenarios. Finally, we reflect on lessons learned, highlighting
future trends and research directions for applying theoretical model games in
upcoming MEC services, considering both network design issues and usage
scenarios.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,664 | General description of spin motion in storage rings in presence of oscillating horizontal fields | The general theoretical description of the influence of oscillating
horizontal magnetic and quasimagnetic fields on the spin evolution in storage
rings is presented. Previous results are generalized to the case when both of
the horizontal components of the oscillating field are nonzero and the vector
of this field circumscribes an ellipse. General equations describing a behavior
of all components of the polarization vector are derived and the case of an
arbitrary initial polarization is considered. The derivation is carried out for
the case when the oscillation frequency is nonresonant. The general spin
evolution in storage rings conditioned by vertical betatron oscillations is
calculated as an example.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,665 | Simultaneous active parameter estimation and control using sampling-based Bayesian reinforcement learning | Robots performing manipulation tasks must operate under uncertainty about
both their pose and the dynamics of the system. In order to remain robust to
modeling error and shifts in payload dynamics, agents must simultaneously
perform estimation and control tasks. However, the optimal estimation actions
are often not the optimal actions for accomplishing the control tasks, and thus
agents trade between exploration and exploitation. This work frames the problem
as a Bayes-adaptive Markov decision process and solves it online using Monte
Carlo tree search and an extended Kalman filter to handle Gaussian process
noise and parameter uncertainty in a continuous space. MCTS selects control
actions to reduce model uncertainty and reach the goal state nearly optimally.
Certainty equivalent model predictive control is used as a benchmark to compare
performance in simulations with varying process noise and parameter
uncertainty.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,666 | An exploratory factor analysis model for slum severity index in Mexico City | In Mexico, 25 per cent of the urban population now lives in informal
settlements with varying degrees of deprivation. Although some informal
neighbourhoods have contributed to the upward mobility of the inhabitants, the
majority still lack basic services. Mexico City and the conurbation around it,
form a mega city of 21 million people that has been growing in a manner
qualified as "highly unproductive, (that) deepens inequality, raises pollution
levels" and contains the largest slum in the world, Neza-Chalco-Izta. Urban
reforms are now aiming to better the conditions in these slums and therefore it
is very important to have reliable measurement tools to assess the changes that
are under way. In this paper, we use exploratory factor analysis to define an
index of deprivation in Mexico City, namely the Slum Severity Index (SSI), based
on UN-HABITAT's definition of a slum. We apply this novel approach to the
Census survey of Mexico and measure housing deprivation levels and types from
1990 - 2010. The analysis highlights high variability in housing conditions
within Mexico City. We find that the SSI decreased significantly between 1990 -
2000 due to several policy reforms, but increased between 2000 - 2010. We also
show correlations of the SSI with other social factors such as education,
health and migration. We present a validation of the SSI using Grey Level
Co-occurrence Matrix (GLCM) features extracted from Very-High Resolution (VHR)
remote-sensed satellite images. Finally, we show that the SSI can present a
cardinally meaningful assessment of the extent of the difference in deprivation
as compared to a similar index defined by CONEVAL, a government institution
that studies poverty in Mexico.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,667 | An Estimate of the First Eigenvalue of a Schrödinger Operator on Closed Surfaces | Based on the work of Schoen-Yau, we derive an estimate of the first
eigenvalue of a Schrödinger operator (the Jacobi operator of minimal surfaces
in flat 3-spaces) on surfaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,668 | Ease.ml: Towards Multi-tenant Resource Sharing for Machine Learning Workloads | We present ease.ml, a declarative machine learning service platform we built
to support more than ten research groups outside the computer science
departments at ETH Zurich for their machine learning needs. With ease.ml, a
user defines the high-level schema of a machine learning application and
submits the task via a Web interface. The system automatically deals with the
rest, such as model selection and data movement. In this paper, we describe the
ease.ml architecture and focus on a novel technical problem introduced by
ease.ml regarding resource allocation. We ask, as a "service provider" that
manages a shared cluster of machines among all our users running machine
learning workloads, what is the resource allocation strategy that maximizes the
global satisfaction of all our users?
Resource allocation is a critical yet subtle issue in this multi-tenant
scenario, as we have to balance between efficiency and fairness. We first
formalize the problem that we call multi-tenant model selection, aiming for
minimizing the total regret of all users running automatic model selection
tasks. We then develop a novel algorithm that combines multi-armed bandits with
Bayesian optimization and prove a regret bound under the multi-tenant setting.
Finally, we report our evaluation of ease.ml on synthetic data and on one
service we are providing to our users, namely, image classification with deep
neural networks. Our experimental evaluation results show that our proposed
solution can be up to 9.8x faster in achieving the same global quality for all
users as the two popular heuristics used by our users before ease.ml.
| 1 | 0 | 0 | 1 | 0 | 0 |
16,669 | Computation on Encrypted Data using Data Flow Authentication | Encrypting data before sending it to the cloud protects it against hackers
and malicious insiders, but requires the cloud to compute on encrypted data.
Trusted (hardware) modules, e.g., secure enclaves like Intel's SGX, can very
efficiently run entire programs in encrypted memory. However, it has already
been demonstrated that software vulnerabilities give an attacker ample
opportunity to insert arbitrary code into the program. This code can then
modify the data flow of the program and leak any secret in the program to an
observer in the cloud via SGX side-channels. Since any larger program is rife
with software vulnerabilities, it is not a good idea to outsource entire
programs to an SGX enclave. A secure alternative with a small trusted code base
would be fully homomorphic encryption (FHE) -- the holy grail of encrypted
computation. However, due to its high computational complexity it is unlikely
to be adopted in the near future. As a result, researchers have made several
proposals for transforming programs to perform encrypted computations on less
powerful encryption schemes. Yet, current approaches fail on programs that make
control-flow decisions based on encrypted data. In this paper, we introduce the
concept of data flow authentication (DFAuth). DFAuth prevents an adversary from
arbitrarily deviating from the data flow of a program. Hence, an attacker
cannot perform an attack as outlined before on SGX. This enables that all
programs, even those including operations on control-flow decision variables,
can be computed on encrypted data. We implemented DFAuth using a novel
authenticated homomorphic encryption scheme, a Java bytecode-to-bytecode
compiler producing fully executable programs, and SGX enclaves. A transformed
neural network that performs machine learning on sensitive medical data can be
evaluated on encrypted inputs and encrypted weights in 0.86 seconds.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,670 | Analytic and Numerical Analysis of Singular Cauchy integrals with exponential-type weights | Let $I=(c,d)$, $c < 0 < d$, $Q\in C^1: I\rightarrow[0,\infty)$ be a function
with given regularity behavior on $I$. Write $w:=\exp(-Q)$ on $I$ and assume
that $\int_I x^nw^2(x)dx<\infty$ for all $n=0,1,2,\ldots$. For $x\in I$, we
consider the problem of the analytic and numerical approximation of the Cauchy
principal value integral: \begin{equation*} I[f;x]:=\lim_{\varepsilon \to 0+}
\left( \int_{c}^{x-\varepsilon} w^2(t)\frac{f(t)}{t-x}dt+
\int_{x+\varepsilon}^{d} w^2(t)\frac{f(t)}{t-x}dt \right) \end{equation*} for
a class of functions $f: I\rightarrow \mathbb{R^+}$ for which $I[f;x]$ is
finite. In [1-4], the first two authors studied this problem and some of its
applications for even exponential weights $w$ on $(-\infty,\infty)$ of smooth
polynomial decay at $\pm \infty$ and given regularity.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,671 | Observation of a new field-induced phase transition and its concomitant quantum critical fluctuations in CeCo(In$_{1-x}$Zn$_x$)$_5$ | We demonstrate a close connection between observed field-induced
antiferromagnetic (AFM) order and quantum critical fluctuation (QCF) in the
Zn7%-doped heavy-fermion superconductor CeCoIn5. Magnetization, specific heat,
and electrical resistivity at low temperatures all show the presence of new
field-induced AFM order under the magnetic field B of 5-10 T, whose order
parameter is clearly distinguished from the low-field AFM phase observed for B
< 5 T and the superconducting phase for B < 3 T. The 4f electronic specific
heat divided by the temperature, C_e/T, exhibits -lnT dependence at B~10 T (=
B_0), and furthermore, the C_e/T data for B >= B_0 are well scaled by the
logarithmic function of B and T: ln[(B-B_0)/T^{2.7}]. These features are quite
similar to the scaling behavior found in pure CeCoIn5, strongly suggesting that
the field-induced QCF in pure CeCoIn5 originates from the hidden AFM order
parameter equivalent to high-field AFM order in Zn7%-doped CeCoIn5.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,672 | Two-dimensional magneto-optical trap as a source for cold strontium atoms | We report on the realization of a transversely loaded two-dimensional
magneto-optical trap serving as a source for cold strontium atoms. We analyze
the dependence of the source's properties on various parameters, in particular
the intensity of a pushing beam accelerating the atoms out of the source. An
atomic flux exceeding $10^9\,\mathrm{atoms/s}$ at a rather moderate oven
temperature of $500\,^\circ\mathrm{C}$ is achieved. The longitudinal velocity
of the atomic beam can be tuned over several tens of m/s by adjusting the power
of the pushing laser beam. The beam divergence is around $60$ mrad, determined
by the transverse velocity distribution of the cold atoms. The slow atom source
is used to load a three-dimensional magneto-optical trap realizing loading
rates up to $10^9\,\mathrm{atoms/s}$ without indication of saturation of the
loading rate for increasing oven temperature. The compact setup avoids
undesired effects found in alternative sources like, e.g., Zeeman slowers, such
as vacuum contamination and black-body radiation due to the hot strontium oven.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,673 | Loading a linear Paul trap to saturation from a magneto-optical trap | We present experimental measurements of the steady-state ion number in a
linear Paul trap (LPT) as a function of the ion-loading rate. These
measurements, taken with (a) constant Paul trap stability parameter $q$, (b)
constant radio-frequency (rf) amplitude, or (c) constant rf frequency, show
nonlinear behavior. At the loading rates achieved in this experiment, a plot of
the steady-state ion number as a function of loading rate has two regions: a
monotonic rise (region I) followed by a plateau (region II). Also described are
simulations and analytical theory which match the experimental results. Region
I is caused by rf heating and is fundamentally due to the time dependence of
the rf Paul-trap forces. We show that the time-independent pseudopotential,
frequently used in the analytical investigation of trapping experiments, cannot
explain region I, but explains the plateau in region II and can be used to
predict the steady-state ion number in that region. An important feature of our
experimental LPT is the existence of a radial cut-off $\hat R_{\rm cut}$ that
limits the ion capacity of our LPT and features prominently in the analytical
and numerical analysis of our LPT-loading results. We explain the dynamical
origin of $\hat R_{\rm cut}$ and relate it to the chaos border of the fractal
of non-escaping trajectories in our LPT. We also present an improved model of
LPT ion-loading as a function of time.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,674 | Early warning signals in plant disease outbreaks | Summary
1. Infectious disease outbreaks in plants threaten ecosystems, agricultural
crops and food trade. Currently, several fungal diseases are affecting forests
worldwide, posing a major risk to tree species and habitats and, consequently,
leading to ecosystem decay. Prediction and control of disease spread are difficult, mainly
due to the complexity of the interaction between individual components
involved.
2. In this work, we introduce a lattice-based epidemic model coupled with a
stochastic process that mimics, in a very simplified way, the interaction
between the hosts and pathogen. We studied the disease spread by measuring the
propagation velocity of the pathogen on the susceptible hosts. Quantitative
results indicate the occurrence of a critical transition between two stable
phases: local confinement and an extended epiphytotic outbreak that depends on
the density of the susceptible individuals.
3. Quantitative predictions of epiphytotics are performed using the framework of
early-warning indicators for impending regime shifts, widely applied to
dynamical systems. These signals successfully forecast the outcome of the
critical shift between the two stable phases before the system enters the
epiphytotic regime.
4. Synthesis: Our study demonstrates that early-warning indicators could be
useful for the prediction of forest disease epidemics through mathematical and
computational models suited to more specific pathogen-host-environmental
interactions.
| 0 | 0 | 0 | 0 | 1 | 0 |
16,675 | Predicting the Gender of Indonesian Names | We investigated a way to predict the gender of a name using character-level
Long-Short Term Memory (char-LSTM). We compared our method with some
conventional machine learning methods, namely Naive Bayes, logistic regression,
and XGBoost with n-grams as the features. We evaluated the models on a dataset
consisting of the names of Indonesian people. It is not common to use a family
name as the surname in Indonesian culture, except in some ethnic groups.
Therefore, we inferred the gender from both full names and first names. The
results show that we can achieve 92.25% accuracy from full names, while using
first names only yields 90.65% accuracy. These results are better than the ones
from applying the classical machine learning algorithms to n-grams.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,676 | Exploring Students Blended Learning Through Social Media | Information technology (IT) has been used widely in many aspects of our daily
life. Having discussed politics-related aspects in earlier articles, the author
here discusses social media as a learning environment for students. Social
media, as a leading application on the internet, has changed many aspects of
life and made them more globalized. This article discusses the use of social
media to support learning activities for students in a faculty of computer
science. The author uses Facebook and WordPress as an alternative to electronic
learning: 1) an online attendance tool, 2) storage and dissemination of course
materials, and 3) event scheduling for lectures. Social media have succeeded in
changing modern learning styles and environments. The results of this study are
learning activities such as: 1) Preparation, 2) Weekly Meeting Activities, 3)
Course Page, 4) Social Media as an Online Attendance Tool, 5) Social Media as a
Learning Repository and Dissemination Channel, and 6) Social Media as Online
Event Scheduling.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,677 | Thompson Sampling for a Fatigue-aware Online Recommendation System | In this paper we consider an online recommendation setting, where a platform
recommends a sequence of items to its users at every time period. The users
respond by selecting one of the items recommended or abandon the platform due
to fatigue from seeing less useful items. Assuming a parametric stochastic
model of user behavior, which captures positional effects of these items as
well as the abandoning behavior of users, the platform's goal is to recommend
sequences of items that are competitive to the single best sequence of items in
hindsight, without knowing the true user model a priori. Naively applying a
stochastic bandit algorithm in this setting leads to an exponential dependence
on the number of items. We propose a new Thompson sampling based algorithm with
expected regret that is polynomial in the number of items in this combinatorial
setting, and performs extremely well in practice. We also show a contextual
version of our solution.
| 1 | 0 | 0 | 1 | 0 | 0 |
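For context on the abstract above, here is the classical Beta-Bernoulli Thompson sampling primitive on which such recommendation algorithms build. This is only the standard textbook version, not the paper's method: the paper samples over sequences of items with positional effects and user abandonment, which is considerably more involved; item probabilities and the horizon below are illustrative:

```python
import random

def thompson_bernoulli(true_probs, horizon, seed=0):
    """Beta-Bernoulli Thompson sampling over k items.

    Maintains a Beta(wins + 1, losses + 1) posterior per item, draws one
    sample per item each round, and recommends the argmax."""
    rng = random.Random(seed)
    k = len(true_probs)
    wins, losses, pulls = [0] * k, [0] * k, [0] * k
    for _ in range(horizon):
        # One posterior sample of the acceptance probability per item.
        samples = [rng.betavariate(wins[i] + 1, losses[i] + 1) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        pulls[arm] += 1
        if rng.random() < true_probs[arm]:   # simulated user response
            wins[arm] += 1
        else:
            losses[arm] += 1
    return pulls

counts = thompson_bernoulli([0.2, 0.8], 2000, seed=1)
```

With two items of acceptance probabilities 0.2 and 0.8, the better item rapidly dominates the recommendation counts, which is the exploration-exploitation behavior the paper's sequence-level algorithm generalizes.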
16,678 | Finding Large Primes | In this paper we present and expand upon procedures for obtaining a large
d-digit prime number to an arbitrary probability. We use a layered approach. The
first step is to limit the pool of random numbers to exclude numbers that are
obviously composite. We first remove any number not ending in 1, 3, 7, or 9. We
then exclude numbers whose digital root is 3, 6, or 9 (i.e., numbers divisible
by 3). This sharply reduces the
probability of the random number being composite. We then use the Prime Number
Theorem to find the probability that the selected number n is prime and use
primality tests to increase the probability to an arbitrarily high degree that
n is prime. We apply primality tests including Euler's test, based on Fermat's
little theorem, and the Miller-Rabin test. We computed these conditional
probabilities and implemented them using the GNU GMP library.
| 0 | 0 | 1 | 0 | 0 | 0 |
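The layered procedure described in the abstract above can be sketched as follows. The function names are illustrative (the paper's implementation uses the GNU GMP library); the pre-filter keeps candidates ending in 1, 3, 7, or 9 whose digital root is not 3, 6, or 9, and the Miller-Rabin test then drives the error probability arbitrarily low:

```python
import random

def plausible(n):
    """Cheap pre-filter: keep only candidates ending in 1, 3, 7, or 9
    whose digital root is not 3, 6, or 9.  A digital root of 3, 6, or 9
    is equivalent to divisibility by 3, so n % 3 != 0 captures it."""
    return n % 10 in (1, 3, 7, 9) and n % 3 != 0

def miller_rabin(n, rounds=20):
    """Miller-Rabin primality test: False means certainly composite,
    True means prime with error probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):    # dispose of small factors first
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                 # write n - 1 = 2**s * d, d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)              # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # a witnesses that n is composite
    return True
```

Random d-digit candidates passing `plausible` are fed to `miller_rabin` until one survives; each extra round quarters the residual error probability.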
16,679 | D-optimal Designs for Multinomial Logistic Models | We consider optimal designs for general multinomial logistic models, which
cover baseline-category, cumulative, adjacent-categories, and
continuation-ratio logit models, with proportional odds, non-proportional odds,
or partial proportional odds assumption. We derive the corresponding Fisher
information matrices in three different forms to facilitate their calculations,
determine the conditions for their positive definiteness, and search for
optimal designs. We conclude that, unlike the designs for binary responses, a
feasible design for a multinomial logistic model may contain fewer experimental
settings than parameters, which is of practical significance. We also conclude
that even for a minimally supported design, a uniform allocation, which is
typically used in practice, is not optimal in general for a multinomial
logistic model. We develop efficient algorithms for searching D-optimal
designs. Using examples based on real experiments, we show that the efficiency
of an experiment can be significantly improved if our designs are adopted.
| 0 | 0 | 1 | 1 | 0 | 0 |
16,680 | Rover Descent: Learning to optimize by learning to navigate on prototypical loss surfaces | Learning to optimize - the idea that we can learn from data algorithms that
optimize a numerical criterion - has recently been at the heart of a growing
number of research efforts. One of the most challenging issues within this
approach is to learn a policy that is able to optimize over classes of
functions that are fairly different from the ones that it was trained on. We
propose a novel way of framing learning to optimize as a problem of learning a
good navigation policy on a partially observable loss surface. To this end, we
develop Rover Descent, a solution that allows us to learn a fairly broad
optimization policy by training on a small set of prototypical
two-dimensional surfaces that encompasses classically hard cases such as
valleys, plateaus, cliffs, and saddles, using strictly zero-order
information. We show that, without having access to gradient or curvature
information, we achieve state-of-the-art convergence speed on optimization
problems not presented at training time such as the Rosenbrock function and
other hard cases in two dimensions. We extend our framework to optimize over
high dimensional landscapes, while still handling only two-dimensional local
landscape information and show good preliminary results.
| 0 | 0 | 0 | 1 | 0 | 0 |
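For contrast with the learned policy above, a classical hand-designed zero-order method on the Rosenbrock benchmark can be sketched as follows; this compass search is not the authors' method, just a baseline of the kind such learned optimizers are typically measured against:

```python
def rosenbrock(x, y):
    """The classic banana-valley benchmark, minimized at (1, 1)."""
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def compass_search(f, x, y, step=0.5, tol=1e-8, max_iter=200000):
    """Derivative-free compass search: probe the four axis directions,
    accept the first improving probe, halve the step when all probes fail."""
    fx = f(x, y)
    for _ in range(max_iter):
        if step < tol:
            break
        for dx, dy in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
            fp = f(x + dx, y + dy)
            if fp < fx:
                x, y, fx = x + dx, y + dy, fp
                break
        else:
            step *= 0.5  # stuck: no axis direction improves at this scale
    return x, y, fx

x_min, y_min, f_min = compass_search(rosenbrock, -1.5, 2.0)
```

The search crawls slowly along the curved valley, which is exactly the behavior that motivates learning a smarter navigation policy.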
16,681 | Reordering of the Logistic Map with a Nonlinear Growth Rate | In the well known logistic map, the parameter of interest is weighted by a
coefficient that decreases linearly when this parameter increases. Since such a
linear decrease forms a specific case, we consider the more general case where
this coefficient decreases nonlinearly as in a hyperbolic tangent relaxation of
a system toward equilibrium. We show that, in this latter case, the asymptotic
values obtained via iteration of the logistic map are considerably modified. We
demonstrate that both the steepness of the nonlinear decrease as well as its
upper and lower boundaries significantly alter the bifurcation diagram. New
period doubling features and transitions to chaos appear, possibly leading to
regimes with small periods. Computations with a variety of parameter values
show that the logistic map can be significantly reordered in the case of a
nonlinear growth rate.
| 0 | 1 | 1 | 0 | 0 | 0 |
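The iteration itself is easy to reproduce. The sketch below compares the classical linear coefficient with a hypothetical tanh-shaped decreasing coefficient; the steepness factor 4.0 and the centering at 0.5 are illustrative choices, not the paper's exact functional form:

```python
import math

def iterate_map(r, coeff, x0=0.2, transient=500, keep=50):
    """Iterate x_{n+1} = r * x_n * coeff(x_n), discard a transient, and
    return the distinct asymptotic values (a sample of the attractor)."""
    x = x0
    for _ in range(transient):
        x = r * x * coeff(x)
    orbit = []
    for _ in range(keep):
        x = r * x * coeff(x)
        orbit.append(round(x, 6))
    return sorted(set(orbit))

linear = lambda x: 1.0 - x  # classical logistic map coefficient
# Hypothetical nonlinear decrease: a hyperbolic-tangent relaxation.
tanh_decay = lambda x: 0.5 * (1.0 - math.tanh(4.0 * (x - 0.5)))

cycle = iterate_map(3.2, linear)         # period-2 cycle of the classical map
nonlinear = iterate_map(3.2, tanh_decay)  # modified asymptotic values
```

Sweeping r over a range and plotting the returned asymptotic values for each coefficient would reproduce the two bifurcation diagrams being compared.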
16,682 | Tensor network factorizations: Relationships between brain structural connectomes and traits | Advanced brain imaging techniques make it possible to measure individuals'
structural connectomes in large cohort studies non-invasively. The structural
connectome is initially shaped by genetics and subsequently refined by the
environment. It is extremely interesting to study relationships between
structural connectomes and environmental factors or human traits, such as
substance use and cognition. Due to limitations in structural connectome
recovery, previous studies largely focus on functional connectomes. Questions
remain about how well structural connectomes can explain variance in different
human traits. Using a state-of-the-art structural connectome processing
pipeline and a novel dimensionality reduction technique applied to data from
the Human Connectome Project (HCP), we show strong relationships between
structural connectomes and various human traits. Our dimensionality reduction
approach uses a tensor characterization of the connectome and relies on a
generalization of principal components analysis. We analyze over 1100 scans for
1076 subjects from the HCP and the Sherbrooke test-retest data set, as well as
175 human traits that measure domains including cognition, substance use,
motor, sensory and emotion. We find that structural connectomes are associated
with many traits. Specifically, fluid intelligence, language comprehension, and
motor skills are associated with increased cortical-cortical brain structural
connectivity, while the use of alcohol, tobacco, and marijuana are associated
with decreased cortical-cortical connectivity.
| 0 | 0 | 0 | 1 | 0 | 0 |
16,683 | Model-free prediction of noisy chaotic time series by deep learning | We present a deep neural network for a model-free prediction of a chaotic
dynamical system from noisy observations. The proposed deep learning model aims
to predict the conditional probability distribution of a state variable. The
Long Short-Term Memory network (LSTM) is employed to model the nonlinear
dynamics and a softmax layer is used to approximate a probability distribution.
The LSTM model is trained by minimizing a regularized cross-entropy function.
The LSTM model is validated against delay-time chaotic dynamical systems,
Mackey-Glass and Ikeda equations. It is shown that the present LSTM makes a
good prediction of the nonlinear dynamics by effectively filtering out the
noise. It is found that the prediction uncertainty of a multiple-step forecast
of the LSTM model is not a monotonic function of time; the predicted standard
deviation may increase or decrease dynamically in time.
| 1 | 1 | 0 | 0 | 0 | 0 |
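The Mackey-Glass benchmark used above can be generated with a simple Euler scheme; the parameter values below (tau = 17, the standard chaotic regime) and the integration step are illustrative assumptions, since the abstract does not specify the exact setup:

```python
def mackey_glass(n_steps=2000, dt=0.1, beta=0.2, gamma=0.1, n=10,
                 tau=17.0, x0=1.2):
    """Euler integration of the Mackey-Glass delay-differential equation
    dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t),
    using a constant initial history x(t) = x0 for t <= 0."""
    delay = int(round(tau / dt))
    history = [x0] * (delay + 1)
    for _ in range(n_steps):
        x_tau = history[-(delay + 1)]  # delayed state x(t - tau)
        x = history[-1]
        dx = beta * x_tau / (1.0 + x_tau ** n) - gamma * x
        history.append(x + dt * dx)
    return history[delay + 1:]

series = mackey_glass()  # 2000 samples of the chaotic time series
```

Adding observation noise to `series` would produce training data of the kind the LSTM model is meant to filter.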
16,684 | Robust XVA | We introduce an arbitrage-free framework for robust valuation adjustments. An
investor trades a credit default swap portfolio with a risky counterparty, and
hedges credit risk by taking a position in the counterparty bond. The investor
does not know the expected rate of return of the counterparty bond, but he is
confident that it lies within an uncertainty interval. We derive both upper and
lower bounds for the XVA process of the portfolio, and show that these bounds
may be recovered as solutions of nonlinear ordinary differential equations. The
presence of collateralization and closeout payoffs leads to fundamental
differences with respect to classical credit risk valuation. The value of the
super-replicating portfolio cannot be directly obtained by plugging one of the
extremes of the uncertainty interval in the valuation equation, but rather
depends on the relation between the XVA replicating portfolio and the close-out
value throughout the life of the transaction.
| 0 | 0 | 0 | 0 | 0 | 1 |
16,685 | Rarefaction Waves for the Toda Equation via Nonlinear Steepest Descent | We apply the method of nonlinear steepest descent to compute the long-time
asymptotics of the Toda lattice with steplike initial data corresponding to a
rarefaction wave.
| 0 | 1 | 1 | 0 | 0 | 0 |
16,686 | IRA codes derived from Gruenbaum graph | In this paper, we consider coding of short data frames (192 bits) by IRA
codes. A new interleaver for the IRA codes based on a Gruenbaum graph is
proposed. The difference of the proposed algorithm from known methods consists
in the following: permutation is performed by using a much smaller interleaver
which is derived from the Gruenbaum graph by finding in this graph a
Hamiltonian path, enumerating the passed vertices in ascending order and
passing them again in the permuted order through the edges which are not
included in the Hamiltonian path. For the IRA code the obtained interleaver
provides a 0.7-0.8 dB gain over a convolutional code decoded by the Viterbi
algorithm.
| 1 | 0 | 1 | 0 | 0 | 0 |
16,687 | Three- and four-electron integrals involving Gaussian geminals: fundamental integrals, upper bounds and recurrence relations | We report the three main ingredients to calculate three- and four-electron
integrals over Gaussian basis functions involving Gaussian geminal operators:
fundamental integrals, upper bounds, and recurrence relations. In particular,
we consider the three- and four-electron integrals that may arise in
explicitly-correlated F12 methods. A straightforward method to obtain the
fundamental integrals is given. We derive vertical, transfer and horizontal
recurrence relations to build up angular momentum over the centers. Strong,
simple and scaling-consistent upper bounds are also reported. This last
ingredient allows one to compute only the $O(N^2)$ significant three- and
four-electron integrals, avoiding the computation of the very large number of
negligible integrals.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,688 | HARP: Hierarchical Representation Learning for Networks | We present HARP, a novel method for learning low dimensional embeddings of a
graph's nodes which preserves higher-order structural features. Our proposed
method achieves this by compressing the input graph prior to embedding it,
effectively avoiding troublesome embedding configurations (i.e. local minima)
which can pose problems to non-convex optimization. HARP works by finding a
smaller graph which approximates the global structure of its input. This
simplified graph is used to learn a set of initial representations, which serve
as good initializations for learning representations in the original, detailed
graph. We inductively extend this idea, by decomposing a graph in a series of
levels, and then embed the hierarchy of graphs from the coarsest one to the
original graph. HARP is a general meta-strategy to improve all of the
state-of-the-art neural algorithms for embedding graphs, including DeepWalk,
LINE, and Node2vec. Indeed, we demonstrate that applying HARP's hierarchical
paradigm yields improved implementations for all three of these methods, as
evaluated on both classification tasks on real-world graphs such as DBLP,
BlogCatalog, CiteSeer, and Arxiv, where we achieve a performance gain over the
original implementations by up to 14% Macro F1.
| 1 | 0 | 0 | 0 | 0 | 0 |
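The coarsening step at the heart of HARP can be illustrated with a toy edge-collapsing pass (HARP itself combines edge collapsing with star collapsing and repeats the process over several levels; this fragment shows a single level only):

```python
def coarsen(edges):
    """One level of graph coarsening by edge collapsing: greedily match
    edges whose endpoints are both unmatched, then contract each matched
    edge into a single supernode."""
    merged = {}
    for u, v in edges:
        if u not in merged and v not in merged:
            merged[u] = u  # u becomes the supernode
            merged[v] = u  # v collapses into u
    rep = lambda node: merged.get(node, node)
    coarse = {(min(rep(u), rep(v)), max(rep(u), rep(v)))
              for u, v in edges if rep(u) != rep(v)}
    return sorted(coarse)

cycle6 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
print(coarsen(cycle6))  # [(0, 2), (0, 4), (2, 4)]: a 6-cycle becomes a triangle
```

Embeddings learned on the triangle would then initialize the embedding of the 6-cycle, which is the hierarchical idea the method builds on.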
16,689 | Mixed penalization in convolutive nonnegative matrix factorization for blind speech dereverberation | When a signal is recorded in an enclosed room, it typically gets affected by
reverberation. This degradation represents a problem when dealing with audio
signals, particularly in the field of speech signal processing, such as
automatic speech recognition. Although there are some approaches to deal with
this issue that are quite satisfactory under certain conditions, constructing a
method that works well in a general context still poses a significant
challenge. In this article, we propose a method based on convolutive
nonnegative matrix factorization that mixes two penalizers in order to impose
certain characteristics over the time-frequency components of the restored
signal and the reverberant components. An algorithm for implementing the method
is described and tested. Comparisons of the results against those obtained with
state-of-the-art methods are presented, showing significant improvement.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,690 | Using Contour Trees in the Analysis and Visualization of Radio Astronomy Data Cubes | The current generation of radio and millimeter telescopes, particularly the
Atacama Large Millimeter Array (ALMA), offers enormous advances in observing
capabilities. While these advances represent an unprecedented opportunity to
advance scientific understanding, the increased complexity in the spatial and
spectral structure of even a single spectral line is hard to interpret. The
complexity present in current ALMA data cubes therefore challenges not only the
existing tools for fundamental analysis of these datasets, but also users'
ability to explore and visualize their data. We have performed a feasibility
study for applying forms of topological data analysis and visualization never
before tested by the ALMA community. Through contour tree-based data analysis,
we seek to improve upon existing data cube analysis and visualization
workflows, in the forms of improved accuracy and speed in extracting features.
In this paper, we review our design process in building effective analysis and
visualization capabilities for the astrophysicist users. We summarize effective
design practices, in particular, we identify domain-specific needs of
simplicity, integrability and reproducibility, in order to best target and
service the large astrophysics community.
| 1 | 1 | 0 | 0 | 0 | 0 |
16,691 | The Price of BitCoin: GARCH Evidence from High Frequency Data | This is the first paper that estimates the price determinants of BitCoin in a
Generalised Autoregressive Conditional Heteroscedasticity framework using high
frequency data. Derived from a theoretical model, we estimate BitCoin
transaction demand and speculative demand equations in a GARCH framework using
hourly data for the period 2013-2018. In line with the theoretical model, our
empirical results confirm that both the BitCoin transaction demand and
speculative demand have a statistically significant impact on the BitCoin price
formation. The BitCoin price responds negatively to the BitCoin velocity,
whereas positive shocks to the BitCoin stock, interest rate and the size of the
BitCoin economy exercise an upward pressure on the BitCoin price.
| 0 | 0 | 0 | 0 | 0 | 1 |
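The GARCH machinery referenced above is standard; a minimal GARCH(1,1) simulator (the paper estimates rather than simulates, and the parameter values below are generic illustrations, not estimates from the BitCoin data) looks like this:

```python
import random

def simulate_garch(n, omega=1e-6, alpha=0.1, beta=0.85, seed=1):
    """Simulate returns r_t = sigma_t * z_t with z_t ~ N(0, 1) and the
    GARCH(1,1) conditional variance recursion
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = random.Random(seed)
    var = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    returns = []
    for _ in range(n):
        r = (var ** 0.5) * rng.gauss(0.0, 1.0)
        returns.append(r)
        var = omega + alpha * r * r + beta * var
    return returns

rets = simulate_garch(5000)
```

The recursion captures the volatility clustering that makes a GARCH framework appropriate for hourly BitCoin returns.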
16,692 | Application of a unified Kenmotsu-type formula for surfaces in Euclidean or Lorentzian three-space | Kenmotsu's formula describes surfaces in Euclidean 3-space by their mean
curvature functions and Gauss maps. In Lorentzian 3-space,
Akutagawa-Nishikawa's formula and Magid's formula are Kenmotsu-type formulas
for spacelike surfaces and for timelike surfaces, respectively. We apply them
to a few problems concerning rotational or helicoidal surfaces with constant
mean curvature. Before that, we show that the three formulas above can be
written in a unified single equation.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,693 | A Vector Matroid-Theoretic Approach in the Study of Structural Controllability Over F(z) | In this paper, the structural controllability of the systems over F(z) is
studied using a new mathematical tool: matroids. Firstly, a vector matroid is
defined over F(z). Secondly, the full rank conditions of [sI-A|B] are derived
in terms of the concept related to matroid theory, such as rank, base and
union. Then the sufficient condition for the linear system and composite system
over F(z) to be structurally controllable is obtained. Finally, this paper
gives several examples to demonstrate that the matroid-theoretic approach is
simpler than other existing approaches.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,694 | From cold Fermi fluids to the hot QGP | Strongly coupled quantum fluids are found in different forms, including
ultracold Fermi gases or tiny droplets of extremely hot Quark-Gluon Plasma.
Although the systems differ in temperature by many orders of magnitude, they
exhibit a similar almost inviscid fluid dynamical behavior. In this work, we
summarize some of the recent theoretical developments toward better
understanding this property in cold Fermi gases at and near unitarity.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,695 | Decomposition method related to saturated hyperball packings | In this paper we study the problem of hyperball (hypersphere) packings in
$3$-dimensional hyperbolic space. We introduce a new definition of the
non-compact saturated ball packings and describe, for each saturated hyperball
packing, a new procedure to obtain a decomposition of 3-dimensional hyperbolic
space $\HYP$ into truncated tetrahedra. Therefore, in order to get a density
upper bound for hyperball packings, it is sufficient to determine the density
upper bound of hyperball packings in truncated simplices.
| 0 | 0 | 1 | 0 | 0 | 0 |
16,696 | Dynamic interdependence and competition in multilayer networks | From critical infrastructure, to physiology and the human brain, complex
systems rarely occur in isolation. Instead, the functioning of nodes in one
system often promotes or suppresses the functioning of nodes in another.
Despite advances in structural interdependence, modeling interdependence and
other interactions between dynamic systems has proven elusive. Here we define a
broadly applicable dynamic dependency link and develop a general framework for
interdependent and competitive interactions between general dynamic systems. We
apply our framework to studying interdependent and competitive synchronization
in multi-layer oscillator networks and cooperative/competitive contagions in an
epidemic model. Using a mean-field theory which we verify numerically, we find
explosive transitions and rich behavior which is absent in percolation models
including hysteresis, multi-stability and chaos. The framework presented here
provides a powerful new way to model and understand many of the interacting
complex systems which surround us.
| 0 | 1 | 0 | 0 | 0 | 0 |
16,697 | Applications of L systems to group theory | L systems generalise context-free grammars by incorporating parallel
rewriting, and generate languages such as EDT0L and ET0L that are strictly
contained in the class of indexed languages. In this paper we show that many of
the languages naturally appearing in group theory, and that were known to be
indexed or context-sensitive, are in fact ET0L and in many cases EDT0L. For
instance, the language of primitives in the free group on two generators, the
Bridson-Gilman normal forms for the fundamental groups of 3-manifolds or
orbifolds, and the co-word problem of Grigorchuk's group can be generated by L
systems. To complement the result on primitives in free groups, we show that
the language of primitives, and primitive sets, in free groups of rank higher
than two is context-sensitive. We also show the existence of EDT0L and ET0L
languages of intermediate growth.
| 1 | 0 | 1 | 0 | 0 | 0 |
16,698 | 'Senator, We Sell Ads': Analysis of the 2016 Russian Facebook Ads Campaign | One of the key aspects of the United States democracy is free and fair
elections that allow for a peaceful transfer of power from one President to the
next. The 2016 US presidential election stands out due to suspected foreign
influence before, during, and after the election. A significant portion of that
suspected influence was carried out via social media. In this paper, we look
specifically at 3,500 Facebook ads allegedly purchased by the Russian
government. These ads were released on May 10, 2018 by the US Congress House
Intelligence Committee. We analyzed the ads using natural language processing
techniques to determine textual and semantic features associated with the most
effective ones. We clustered the ads over time into the various campaigns and
the labeled parties associated with them. We also studied the effectiveness of
ads on an individual, campaign, and party basis. The most effective ads tend to
have less positive sentiment, focus on past events and are more specific and
personalized in nature. The more effective campaigns also show such similar
characteristics. The campaigns' duration and promotion of the ads suggest a
desire to sow division rather than sway the election.
| 1 | 0 | 0 | 0 | 0 | 0 |
16,699 | On the k-Means/Median Cost Function | In this work, we study the $k$-means cost function. The (Euclidean) $k$-means
problem can be described as follows: given a dataset $X \subseteq \mathbb{R}^d$
and a positive integer $k$, find a set of $k$ centers $C \subseteq
\mathbb{R}^d$ such that $\Phi(C, X) \stackrel{def}{=} \sum_{x \in X} \min_{c
\in C} ||x - c||^2$ is minimized. Let $\Delta_k(X) \stackrel{def}{=} \min_{C
\subseteq \mathbb{R}^d} \Phi(C, X)$ denote the cost of the optimal $k$-means
solution. It is simple to observe that for any dataset $X$, $\Delta_k(X)$
decreases as $k$ increases. We try to understand this behaviour more precisely.
For any dataset $X \subseteq \mathbb{R}^d$, integer $k \geq 1$, and a small
precision parameter $\varepsilon > 0$, let $\mathcal{L}_{X}^{k, \varepsilon}$
denote the smallest integer such that $\Delta_{\mathcal{L}_{X}^{k,
\varepsilon}}(X) \leq \varepsilon \cdot \Delta_{k}(X)$. We show upper and lower
bounds on this quantity. Our techniques generalize for the metric $k$-median
problem in arbitrary metrics and we give bounds in terms of the doubling
dimension of the metric. Finally, we observe that for any dataset $X$, we can
compute a set $S$ of size $O \left(\mathcal{L}_{X}^{k, \frac{\varepsilon}{c}}
\right)$ such that $\Delta_{S}(X) \leq \varepsilon \cdot \Delta_k(X)$ using the
$D^2$-sampling algorithm which is also known as the $k$-means++ seeding
procedure. In the previous statement, $c$ is some fixed constant. We also
discuss some applications of our bounds.
| 1 | 0 | 0 | 0 | 0 | 0 |
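The $D^2$-sampling procedure mentioned above is short enough to sketch directly; this is a minimal pure-Python version for small point sets (illustrative only, not the paper's implementation):

```python
import random

def d2_sampling(points, k, seed=0):
    """k-means++ seeding (D^2-sampling): pick the first center uniformly at
    random, then pick each further center with probability proportional to
    its squared distance to the nearest center chosen so far."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        # Squared distance from each point to its nearest current center.
        d2 = [min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
              for p in points]
        r = rng.uniform(0.0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

def kmeans_cost(points, centers):
    """Phi(C, X): sum of squared distances to the nearest center."""
    return sum(min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
               for p in points)

# Three well-separated pairs of points; D^2-sampling almost always picks
# one center per pair, driving the cost close to the 3-means optimum.
pts = [(0.0, 0.0), (0.1, 0.0), (10.0, 0.0), (10.1, 0.0), (0.0, 10.0), (0.1, 10.0)]
centers = d2_sampling(pts, 3)
```

Because far-away points carry most of the sampling weight, the chosen set $S$ tends to cover every cluster, which is what underlies the $\Delta_{S}(X) \leq \varepsilon \cdot \Delta_k(X)$ guarantee discussed above.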
16,700 | Secure and Reconfigurable Network Design for Critical Information Dissemination in the Internet of Battlefield Things (IoBT) | The Internet of things (IoT) is revolutionizing the management and control of
automated systems, leading to a paradigm shift in areas such as smart homes,
smart cities, health care, transportation, etc. The IoT technology is also
envisioned to play an important role in improving the effectiveness of military
operations in battlefields. The interconnection of combat equipment and other
battlefield resources for coordinated automated decisions is referred to as the
Internet of battlefield things (IoBT). IoBT networks are significantly
different from traditional IoT networks due to the battlefield specific
challenges such as the absence of communication infrastructure, and the
susceptibility of devices to cyber and physical attacks. The combat efficiency
and coordinated decision-making in war scenarios depends highly on real-time
data collection, which in turn relies on the connectivity of the network and
the information dissemination in the presence of adversaries. This work aims to
build the theoretical foundations of designing secure and reconfigurable IoBT
networks. Leveraging the theories of stochastic geometry and mathematical
epidemiology, we develop an integrated framework to study the communication of
mission-critical data among different types of network devices and consequently
design the network in a cost effective manner.
| 1 | 0 | 0 | 0 | 0 | 0 |